Compare commits


145 Commits

Author SHA1 Message Date
Péter Szilágyi
dae82f0985 params, swarm: release Geth v1.8.19 and Swarm v0.3.7 2018-11-28 14:42:37 +02:00
Péter Szilágyi
8fdbbef72d Merge pull request #18196 from karalabe/downloader-cht-fix
eth/downloader: fix light client cht binary search issue
2018-11-28 14:40:32 +02:00
Péter Szilágyi
8696986547 Merge pull request #18197 from karalabe/v1.8.19-chts
params: update CHTs for the v1.8.19 release
2018-11-28 13:54:32 +02:00
Péter Szilágyi
d606a7a46a params: update CHTs for the v1.8.19 release 2018-11-28 13:53:33 +02:00
Péter Szilágyi
174083c3ae eth/downloader: fix light client cht binary search issue 2018-11-28 13:46:13 +02:00
Martin Holst Swende
bfed28a421 core: more detailed metrics for block processing (#18119) 2018-11-28 10:29:05 +02:00
Péter Szilágyi
edc39aaedf p2p/discv5: gofmt 2018-11-27 14:50:47 +02:00
ANOTHEL
89fe24bbcc p2p/discv5: minor code simplification (#18188)
* Update net.go

simpler

* Update net.go
2018-11-27 14:00:57 +02:00
Liang Ma
8b9f469419 p2p/protocols: fix minor comments typo (#18185) 2018-11-27 12:52:30 +02:00
holisticode
695a5cce1e Increase bzz version (#18184)
* swarm/network/stream/: added stream protocol version match tests

* Increase BZZ version due to streamer version change; version tests

* swarm/network: increased hive and test protocol version
2018-11-26 19:34:40 +01:00
Janoš Guljaš
c207edf2a3 swarm: add database abstractions (shed package) (#18183) 2018-11-26 18:49:01 +01:00
Javier Peletier
4f0d978eaa cmd/swarm: update should error on manifest mismatch (#18047)
* cmd/swarm: fix ethersphere/go-ethereum#979:

update should error on manifest mismatch

* cmd/swarm: fixed comments and remove sprintf from log.Info

* cmd/swarm: remove unnecessary comment
2018-11-26 17:37:59 +01:00
lash
1cd007ecae swarm/network: Correct neighborhood depth (#18066) 2018-11-26 17:13:59 +01:00
holisticode
bba5fd8192 Accounting metrics reporter (#18136) 2018-11-26 17:05:18 +01:00
Javier Peletier
2714e8f091 Remove multihash from Swarm bzz:// for Feeds (#18175) 2018-11-26 16:10:22 +01:00
Paweł Bylica
0699287440 tests: Add flag to use EVMC for state tests (#18084) 2018-11-26 16:09:32 +01:00
lash
197d609b9a swarm/pss: Message handler refactor (#18169) 2018-11-26 13:52:04 +01:00
Sheldon
ca228569e4 light: odrTrie tryUpdate should use update (#18107)
TryUpdate did not call t.trie.TryUpdate(key, value); it called t.trie.TryDelete
instead. The update operation therefore simply deleted the corresponding entry,
which could still be retrieved later via odr, but only at the cost of further
network overhead.
2018-11-26 13:27:49 +01:00
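
A self-contained analogue of the fix in the light package's odrTrie (the
trieBackend interface and the map-based trie below are invented for the demo;
only the one-line redirect mirrors the actual change):

package main

import "fmt"

// trieBackend stands in for the underlying trie.
type trieBackend interface {
	TryUpdate(key, value []byte) error
	TryDelete(key []byte) error
}

// odrTrie mirrors the wrapper described in the commit message.
type odrTrie struct{ trie trieBackend }

// TryUpdate must forward to the backend's TryUpdate; before the fix it
// called TryDelete, silently turning every update into a deletion.
func (t *odrTrie) TryUpdate(key, value []byte) error {
	return t.trie.TryUpdate(key, value) // was: t.trie.TryDelete(key)
}

type mapTrie map[string]string

func (m mapTrie) TryUpdate(k, v []byte) error { m[string(k)] = string(v); return nil }
func (m mapTrie) TryDelete(k []byte) error    { delete(m, string(k)); return nil }

func main() {
	m := mapTrie{}
	t := &odrTrie{trie: m}
	_ = t.TryUpdate([]byte("key"), []byte("value"))
	fmt.Println(m["key"]) // "value"; the old code path would have deleted it
}
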
Elad
f5e6634fd2 swarm/api: improve not found error msg (#18171) 2018-11-26 12:53:15 +01:00
Janoš Guljaš
93854bbad4 swarm/network/simulation: fix New function for-loop scope (#18161) 2018-11-26 12:39:38 +01:00
Felföldi Zsolt
f0515800e6 les: fix fetcher syncing logic (#18072) 2018-11-26 13:34:33 +02:00
Péter Szilágyi
bb29d20828 Merge pull request #18179 from holiman/fix_tests
config: add constantinople block to testchainconfig
2018-11-26 11:33:09 +02:00
Jaynti Kanani
38592a13a3 fix mixHash/nonce for parity compatible network (#18166) 2018-11-26 09:59:04 +01:00
Martin Holst Swende
a5898ba621 config: add constantinople block to testchainconfig 2018-11-26 09:55:45 +01:00
mr_franklin
2a113f6d72 core: return error if repair block failed (#18126)
* core: return error if repair block failed

* make error a bit shorter
2018-11-23 11:16:14 +02:00
Felix Lange
b24ef5e05d eth: increase timeout in TestBroadcastBlock (#18064) 2018-11-23 11:14:09 +02:00
Ferenc Szabo
76f5f662cc cmd/swarm: FUSE do not require --ipcpath (#18112)
- Have `${DataDir}/bzzd.ipc` as IPC path default.
- Respect the `--datadir` flag.
- Keep only the global `--ipcpath` flag and drop the local `--ipcpath` flag,
  as the flags might overwrite each other. (Note: previously the global
  `--ipcpath` was ignored even if it was set.)

fixes ethersphere#795
2018-11-23 01:32:34 +01:00
Anton Evangelatov
6b2cc8950e travis: increase open file limits (#18155) 2018-11-22 16:32:50 +02:00
Martin Holst Swende
2843001ac2 trie: fix overflow in write cache parent tracking (#18165)
trie/database: fix overflow in parent tracking
2018-11-22 15:14:31 +02:00
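
The bug class is easy to demonstrate in isolation: a narrow unsigned counter
tracking how many parents reference a cached node wraps around, after which a
live node looks unreferenced. A minimal illustration (the 16-bit width is
assumed for the demo, not taken from the patched field):

package main

import "fmt"

func main() {
	// Reference count of parents pointing at a cached trie node.
	var parents uint16
	for i := 0; i < 65536; i++ {
		parents++
	}
	// The counter has wrapped: the node now appears safe to evict even
	// though 65536 parents still reference it.
	fmt.Println(parents) // 0
}
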
Enrique Fynn
9d5e3e0637 params: add Constantinople block to AllXYZProtocolChanges (#18162)
* params: Add Constantinople block to AllCliqueProtocolChanges

* params: Add Constantinople block to AllEthashProtocolChanges
2018-11-22 15:03:50 +02:00
Péter Szilágyi
3ba0418a9a Merge pull request #17973 from holiman/splitter2
core: better side-chain importing
2018-11-22 15:01:10 +02:00
Martin Holst Swende
e0d091e090 core: better printout of receipts in bad block reports (#18156)
* core/blockchain: better printout of receipts in bad block reports

* fix spelling
2018-11-22 11:00:16 +02:00
Janoš Guljaš
070caec4bd swarm/network/stream: use swarm/mock/mem as mock global store (#18157) 2018-11-21 20:49:13 +01:00
Anton Evangelatov
4c181e4fb9 swarm/state: refactor InmemoryStore (#18143) 2018-11-21 14:36:56 +01:00
Péter Szilágyi
333b5fb123 core: polish side chain importer a bit 2018-11-21 13:19:56 +02:00
mr_franklin
3fd87f2193 core: fix comment typo (#18144) 2018-11-21 12:52:02 +02:00
a-sklyarov
c7e522fd17 Update minimum required Go version in README.md (#18151) 2018-11-21 12:38:49 +02:00
Guillaume Ballet
5d80a1b665 whisper/mailserver: reduce the max number of opened files (#18142)
This should reduce the occurrences of Travis failures on macOS

Also fix some linter warnings
2018-11-20 20:14:37 +01:00
Martin Holst Swende
493903eede core: better side-chain importing 2018-11-20 12:28:43 +02:00
Anton Evangelatov
3d997b6dec whisper: log errors on failed tests (#18134)
Debug traces to investigate a travis issue on MacOS
2018-11-20 10:08:02 +01:00
Ferenc Szabo
d876f214e5 swarm/storage: move 'running migrations for' log line (#18120)
So that we only see the log message when we actually have to migrate.
2018-11-20 08:30:38 +01:00
Javier Peletier
7bf7bd2f50 internal/cmdtest: Expose process exit status and errors (#18046) 2018-11-20 08:23:43 +01:00
Anton Evangelatov
d31f1f4fdb cmd/swarm/swarm-smoke: update smoke tests to fit the new scheme for the k8s cluster (#18104) 2018-11-19 14:58:10 +01:00
Anton Evangelatov
6b6c4d1c27 cmd/swarm: speed up tests - use global cluster (#18129) 2018-11-19 14:57:22 +01:00
Anton Evangelatov
3333fe660f swarm/storage: speed up garbage collection and rpc tests (#18128) 2018-11-19 12:26:45 +01:00
Elad
51e2e78d26 swarm/api/http: change request served msg log level (#18127) 2018-11-18 10:54:03 +01:00
Péter Szilágyi
d136e985e8 trie: go fmt package 2018-11-16 16:35:39 +02:00
Péter Szilágyi
91c66d47ef Merge pull request #18085 from holiman/downloader_span
downloader: different sync strategy
2018-11-16 16:34:30 +02:00
Péter Szilágyi
accc0fab4f core, eth/downloader: fix ancestor lookup for fast sync 2018-11-16 13:21:20 +02:00
Martin Holst Swende
51b2f1620c downloader: different sync strategy 2018-11-16 11:54:36 +02:00
Łukasz Kurowski
68be45e5f8 trie: return hasher to pool (#18116)
* trie: return hasher to pool

* trie: minor code formatting fix
2018-11-16 11:50:48 +02:00
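
The underlying pattern as a generic, self-contained sketch (the trie package
has its own newHasher/returnHasherToPool helpers; this version only
illustrates why the missing Put mattered): an object taken from a sync.Pool
must be returned on every exit path, or the pool degenerates into plain
allocation.

package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

type hasher struct{ sum [32]byte }

var hasherPool = sync.Pool{New: func() interface{} { return new(hasher) }}

func hashNode(data []byte) [32]byte {
	h := hasherPool.Get().(*hasher)
	defer hasherPool.Put(h) // the "return hasher to pool" this commit adds
	h.sum = sha256.Sum256(data)
	return h.sum
}

func main() {
	fmt.Printf("%x\n", hashNode([]byte("node")))
}
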
holisticode
ffe2fc3bc4 Swarm accounting (#18050)
* swarm: completed 1st phase of swap accounting

* swarm: swap accounting for swarm with p2p accounting

* swarm/swap: addressed PR comments

* swarm/swap: ignore ErrNotFound on stateStore.Get()

* swarm/swap: GetPeerBalance test; add TODO for chequebook API check

* swarm/network/stream: fix NewRegistry calls with new arguments

* swarm/swap: address @justelad's PR comments
2018-11-15 23:41:19 +01:00
Janoš Guljaš
324027640b swarm/network/simulation: use simulations.Event instead p2p.PeerEvent (#18098) 2018-11-15 21:06:27 +01:00
mr_franklin
b91766fe6d eth: fix comment typo (#18114)
* consensus/clique: fix comment typo

* eth,eth/downloader: fix comment typo
2018-11-15 16:31:24 +02:00
lash
a6942b9f25 swarm/storage: Batched database migration (#18113) 2018-11-15 14:57:03 +01:00
Péter Szilágyi
17d67c5834 Merge pull request #18087 from karalabe/trie-read-cacher
cmd, core, eth, light, trie: add trie read caching layer
2018-11-15 14:42:19 +02:00
Péter Szilágyi
434dd5bc00 cmd, core, eth, light, trie: add trie read caching layer 2018-11-15 12:22:13 +02:00
Kenso Trabing
14346e4ef9 internal: fix typo in comments (#18106)
Changed "signTransactions" to "signTransaction"
2018-11-15 11:11:14 +02:00
Sheldon
b8a2ac3fcf les: fix pubkey index typo (#18093) 2018-11-15 11:10:45 +02:00
mr_franklin
9a000601c6 consensus/clique: fix comment typo (#18103) 2018-11-14 14:50:30 +02:00
Kenso Trabing
23de6197f9 rpc: fix package doc typo (#18101)
Changed "send" to "send," in two places
2018-11-14 12:21:52 +02:00
Kenso Trabing
698843b45f rpc: fix example typo (#18100)
whishes --> wishes
2018-11-14 12:21:10 +02:00
Péter Szilágyi
48b4e8069c params, swarm: begin Geth v1.8.19 and Swarm v0.3.7 cycle 2018-11-14 10:27:30 +02:00
Péter Szilágyi
58632d4402 params, swarm: release Geth v1.8.18 and Swarm v0.3.6 2018-11-14 10:25:19 +02:00
Alexey Sharov
eb8fa3cc89 cmd/swarm, swarm/api/http, swarm/bmt, swarm/fuse, swarm/network/stream, swarm/storage, swarm/storage/encryption, swarm/testutil: use pseudo-random instead of crypto-random for test files content generation (#18083)
- Replace "crypto/rand" with "math/rand" for file content generation
- Remove swarm/network_test.go.Shuffle and swarm/btm/btm_test.go.Shuffle - because go1.9 support was dropped (see https://github.com/ethereum/go-ethereum/pull/17807 and comments to swarm/network_test.go.Shuffle)
2018-11-14 09:21:14 +01:00
Péter Szilágyi
cff97119a7 Merge pull request #18097 from karalabe/update-chts-2
params: update CHTs
2018-11-14 10:20:51 +02:00
Péter Szilágyi
cef7ed53bd params: update CHTs 2018-11-14 10:16:28 +02:00
Ferenc Szabo
c41e1bd1eb swarm/storage: fix garbage collector index skew (#18080)
On file access, LDBStore's tryAccessIdx() function created a faulty
GC Index Data entry because it did not index the ikey correctly.
That caused the chunk addresses/hashes to start with '00' and dropped
the last two digits, yielding incorrect chunk addresses.

Besides the fix, the commit also contains a schema change which will
run the CleanGCIndex() function to clean the GC index from erroneous
entries.

Note: CleanGCIndex() rebuilds the index from scratch, which can take
a very long time with a huge DB (possibly an hour).
2018-11-13 15:22:53 +01:00
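
A sketch of the migration mechanics described above (all names are invented;
only the shape follows the commit message): a schema marker gates the
expensive rebuild so it runs exactly once per store.

package main

import "fmt"

const cleanedSchema = "gc-index-cleaned" // hypothetical schema name

type ldbStore struct{ schema string }

func (s *ldbStore) CleanGCIndex() error {
	fmt.Println("rebuilding GC index from scratch (possibly an hour)...")
	return nil
}

// migrate runs CleanGCIndex once for stores created before the fix, then
// records the new schema so the pass never reruns.
func (s *ldbStore) migrate() error {
	if s.schema == cleanedSchema {
		return nil
	}
	if err := s.CleanGCIndex(); err != nil {
		return err
	}
	s.schema = cleanedSchema
	return nil
}

func main() {
	s := &ldbStore{schema: "old"}
	_ = s.migrate() // rebuilds the index
	_ = s.migrate() // no-op afterwards
}
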
mr_franklin
4fecc7a3b1 eth: fix minor grammar issue in comment (#18091) 2018-11-13 11:57:46 +02:00
mr_franklin
588aa88121 github: format code owners file (#18090)
replace tabs with spaces in the code owners file
2018-11-13 11:02:04 +02:00
Ferenc Szabo
8080265f3f swarm/storage: fix access count on dbstore after cache hit (#17978)
The access count was not incremented when a chunk was retrieved
from the cache, so the garbage collector might have deleted the most
frequently accessed chunks from disk.

Co-authored-by: Ferenc Szabo <ferenc.szabo@ethereum.org>
2018-11-13 07:41:01 +01:00
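
A minimal sketch of the accounting fix (types invented): retrieval has to bump
the access count on the cache path as well, otherwise the garbage collector
ranks the hottest chunks as the least used and evicts them first.

package main

import "fmt"

type dbStore struct {
	cache  map[string][]byte
	access map[string]uint64 // per-chunk access count consulted by the GC
}

func (s *dbStore) Get(key string) ([]byte, bool) {
	if chunk, ok := s.cache[key]; ok {
		s.access[key]++ // previously missing on cache hits
		return chunk, true
	}
	// ...fall back to disk, where the count was already incremented...
	return nil, false
}

func main() {
	s := &dbStore{cache: map[string][]byte{"c1": {1}}, access: map[string]uint64{}}
	s.Get("c1")
	fmt.Println(s.access["c1"]) // 1
}
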
gary rong
1212c7b844 core: fix default trie cache limit (#17860) 2018-11-12 18:06:34 +02:00
lash
201a0bf181 p2p/simulations, swarm/network: Custom services in snapshot (#17991)
* p2p/simulations: Add custom services to simnodes + remove sim down conn objs

* p2p/simulation, swarm/network: Add selective services to discovery sim

* p2p/simulations, swarm/network: Remove useless comments

* p2p/simulations, swarm/network: Clean up mess from rebase

* p2p/simulation: Add sleep to prevent connect flakiness in http test

* p2p/simulations: added concurrent goroutines to prevent sleeps on simulation connect/disconnect

* p2p/simulations, swarm/network/simulations: address pr comments

* reinstated dummy service

* fixed http snapshot test
2018-11-12 14:57:17 +01:00
Andrew Chiw
a0876f7433 Imply that SwarmApiFlag is the API endpoint to connect to, not to listen on (#18071) 2018-11-12 13:04:13 +01:00
Corey Lin
1ff152f3a4 rawdb: remove unused parameter for WritePreimages func (#18059)
* rawdb: remove unused parameter for WritePreimages func and fix a
spelling mistake

* rawdb: update the doc for function WritePreimages
2018-11-09 12:51:07 +02:00
Kurkó Mihály
f574c4e74b metrics, p2p: add ephemeral registry (#18067)
* metrics, p2p: add ephemeral registry

* metrics: fix linter issue
2018-11-09 10:20:51 +01:00
Felix Lange
870efeef01 core/state: remove lock (#18065)
The lock in StateDB is useless. It's only held in Copy, but Copy is safe
for concurrent use because all it does is read.
2018-11-08 21:37:19 +01:00
gary rong
144c1c6c52 consensus: extend getWork API with block number (#18038) 2018-11-08 17:08:57 +02:00
tamirms
b16cc501a8 ethclient: include block hash from FilterQuery (#17996)
ethereum/go-ethereum#16734 introduced BlockHash to the FilterQuery
struct. However, ethclient was not updated to include BlockHash in the actual
RPC request.
2018-11-08 13:26:47 +01:00
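
The fix boils down to forwarding the new field when building the RPC argument.
An illustrative rebuild (the real helper is the unexported toFilterArg in
ethclient; treat the exact error behaviour as an assumption):

package main

import (
	"errors"
	"fmt"

	ethereum "github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
)

// toFilterArg marshals a FilterQuery into the JSON-RPC argument; a set
// BlockHash must be sent instead of a block range, not silently dropped.
func toFilterArg(q ethereum.FilterQuery) (map[string]interface{}, error) {
	arg := map[string]interface{}{
		"address": q.Addresses,
		"topics":  q.Topics,
	}
	if q.BlockHash != nil {
		if q.FromBlock != nil || q.ToBlock != nil {
			return nil, errors.New("cannot specify both BlockHash and FromBlock/ToBlock")
		}
		arg["blockHash"] = *q.BlockHash // the field ethclient used to drop
	}
	return arg, nil
}

func main() {
	h := common.HexToHash("0x01")
	arg, _ := toFilterArg(ethereum.FilterQuery{BlockHash: &h})
	fmt.Println(arg["blockHash"])
}
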
Felix Lange
9313fa63f9 event/filter: delete unused package (#18063) 2018-11-08 14:26:29 +02:00
Péter Szilágyi
d0675e9d9c Merge pull request #17982 from holiman/polish_contantinople_extcodehash
core/vm: check empty in extcodehash
2018-11-08 14:26:04 +02:00
Ryan Schneider
bd519ab8ae internal/web3ext: add eth.getProof (#18052) 2018-11-08 13:18:38 +01:00
JoranHonig
1064e3283d common/compiler: capture runtime code and source maps (#18020) 2018-11-08 13:09:25 +01:00
Corey Lin
a5dc087845 core/vm, eth/tracers: use pointer receiver for GetRefund (#18018) 2018-11-08 13:07:15 +01:00
Corey Lin
212bf266c5 eth, p2p: fix comment typos (#18014) 2018-11-08 12:25:14 +01:00
Liang Ma
c71e4fc4d5 p2p: fix comment typo (#18027) 2018-11-08 12:22:28 +01:00
Corey Lin
968f6019d0 event, event/filter: minor code cleanup (#18061) 2018-11-08 12:17:01 +01:00
Kurkó Mihály
503993c819 p2p: use enode.ID type in metered connection (#17933)
Change the type of the metered connection's id field from string to enode.ID.
2018-11-08 12:11:20 +01:00
Anton Evangelatov
cf3b187bde swarm, cmd/swarm: address ineffectual assignments (#18048)
* swarm, cmd/swarm: address ineffectual assignments

* swarm/network: remove unused vars from testHandshake

* swarm/storage/feed: revert cursor changes
2018-11-07 20:39:08 +01:00
Mark Vujevits
81533deae5 swarm/network: light nodes are not dialed, saved and requested from (#17975)
* RequestFromPeers does not use peers marked as lightnode

* fix warning about variable name

* write tests for RequestFromPeers

* lightnodes should be omitted from the addressbook

* resolve pr comments regarding logging, formatting and comments

* resolve pr comments regarding comments and added a missing newline

* add assertions to check peers in live connections
2018-11-07 20:33:36 +01:00
Felix Lange
0bcff8f525 eth/downloader: speed up tests by generating chain only once (#17916)
* core: speed up GenerateChain

Use a mock implementation of ChainReader instead of creating
and destroying a BlockChain object for each generated block.

* eth/downloader: speed up tests by generating chain only once

This change reworks the downloader tests so they share a common test
blockchain instead of generating a chain in every test. The tests are
roughly twice as fast now.
2018-11-07 15:07:43 +01:00
Javier Peletier
36ca85fa1c swarm/api: Fix #18007, missing signature should return HTTP 400 (#18008) 2018-11-07 14:49:42 +01:00
Wenbiao Zheng
b35165555d eth/downloader: remove the expired id directly (#17963) 2018-11-07 15:30:19 +02:00
Martin Holst Swende
5b74bb6445 signer: remove ineffectual assignments (#18049)
* signer: remove ineffectual assignments

* signer: remove ineffectual assignments
2018-11-07 14:55:54 +02:00
Martin Holst Swende
eea3ae42a3 core, eth/downloader: fix validation flaw, fix downloader printout flaw (#17974) 2018-11-07 14:47:11 +02:00
Martin Holst Swende
dc6648bb58 downloader: measure successful deliveries, not failed (#17983)
* downloader: measure successful deliveries, not failed

* downloader: fix typos
2018-11-07 14:18:07 +02:00
Corey Lin
0fe0b8f7b9 p2p/protocols: use keyed fields for struct instantiation (#18017) 2018-11-07 13:22:31 +02:00
Samuel Marks
80e2f3aca4 travis, appveyor: bump to Go 1.11.2 (#18031) 2018-11-07 13:17:41 +02:00
gary rong
e2640a96d4 miner: fix miner stress test (#18039) 2018-11-07 10:55:56 +02:00
holisticode
79c7a69ac8 swarm: Better syncing and retrieval option definition (#17986)
* swarm: Better syncing and retrieval option definition

* swarm/network/stream: better comments

* swarm/network/stream: addressed PR comments
2018-11-07 00:04:18 +01:00
Anton Evangelatov
53eb4e0b0f swarm/api: unexport Respond methods (#18037) 2018-11-06 12:34:34 +01:00
KimMachineGun
baee850471 swarm: modify context key (#17925)
* swarm: modify context key

* gofmt sctx.go
2018-11-06 09:54:19 +01:00
Elad
126dfde6c9 cmd/swarm: auto resolve default path according to env flag (#17960) 2018-11-04 07:59:58 +01:00
Elad
f08f596a37 all: updated code owners file (#17987) 2018-10-31 23:36:33 +01:00
Roc Yu
3e1cfbae93 cmd/swarm/swarm-smoke: fix issue that loop variable capture in func (#17992) 2018-10-29 10:00:00 +01:00
Ferenc Szabo
54f650a3be swarm: clean up unused private types and functions (#17989)
* swarm: clean up unused private types and functions

Those were identified by a code inspection tool.

* swarm/storage: move/add Proximity GoDoc from deleted private function

The mentioned proximity() private function was deleted in:
1ca8fc1e6f
2018-10-27 16:18:42 +02:00
Martin Holst Swende
1b6fd032e3 core/vm: check empty in extcodehash 2018-10-26 08:56:37 +02:00
holisticode
8ed4739176 p2p accounting (#17951)
* p2p/protocols: introduced protocol accounting

* p2p/protocols: added TestExchange simulation

* p2p/protocols: add accounting simulation

* p2p/protocols: remove unnecessary tests

* p2p/protocols: comments for accounting simulation

* p2p/protocols: addressed PR comments

* p2p/protocols: finalized accounting implementation

* p2p/protocols: removed unused code

* p2p/protocols: addressed @nonsense PR comments
2018-10-26 00:26:31 +02:00
Johns Beharry
80d3907767 cmd/clef: replace password arg with prompt (#17897)
* cmd/clef: replace password arg with prompt (#17829)

Entering passwords on the command line is not secure as it is easy to recover from bash_history or the process table.
1. The clef command addpw was renamed to setpw to better describe the functionality
2. The <password> argument was removed and replaced with an interactive prompt

* cmd/clef: remove undeclared variable
2018-10-25 21:45:56 +02:00
Wenbiao Zheng
6810933640 eth/downloader: SetBlocksIdle is not used (#17962)
__
 <(o )___
  ( ._> /
   `---'
2018-10-24 01:27:49 +02:00
Felix Lange
7f22b59f87 core/state: simplify proof methods (#17965)
This fixes the import cycle build error in core/vm tests.
There is no need to refer to core/vm for a type definition.
2018-10-23 21:51:41 +02:00
Martin Holst Swende
4c0883e20d core/vm: adds refund as part of the json standard trace (#17910)
This adds the global accumulated refund counter to the standard
json output as a numeric json value. Previously this was not very
interesting since it was not used much, but with the new sstore
gas changes the value is a lot more interesting from a consensus
investigation perspective.
2018-10-23 16:28:18 +02:00
Wenbiao Zheng
3088c122d8 eth/downloader: fix comment typos (#17956) 2018-10-23 13:21:16 +02:00
holisticode
88b41a9e68 swarm/network/stream: disambiguate chunk delivery messages (retrieval… (#17920)
* swarm/network/stream: disambiguate chunk delivery messages (retrieval vs syncing)

* swarm/network/stream: addressed PR comments

* swarm/network/stream: stream protocol version change due to new message types in this PR
2018-10-21 09:30:41 +02:00
Elad
66debd91d9 swarm/api/http: remove ModTime=now for direct and multipart uploads (#17945) 2018-10-19 16:02:44 +02:00
Felix Lange
75060ef96e cmd/bootnode: fix -writeaddress output (#17932) 2018-10-19 16:41:27 +03:00
Wenbiao Zheng
6ff97bf2e5 accounts: wallet derivation path comment is mistaken (#17934) 2018-10-19 16:40:10 +03:00
Wuxiang
d98c45f70f core: fix a typo (#17941) 2018-10-19 16:33:27 +03:00
Elad
aeb733623e swarm/network: disallow historical retrieval requests (#17936) 2018-10-19 10:50:25 +02:00
Simon Jentzsch
97fb08342d EIP-1186 eth_getProof (#17737)
* first impl of eth_getProof

* fixed docu

* added comments and refactored based on comments from holiman

* created structs

* handle errors correctly

* change Value to *hexutil.Big in order to have the same output as parity

* use ProofList as return type
2018-10-18 21:41:22 +02:00
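
A hedged usage sketch: in this release the endpoint is exposed over JSON-RPC
only, so the call goes through the raw rpc client. The node URL is an
assumption and the result struct lists only a subset of the EIP-1186 fields.

package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545") // assumed local node
	if err != nil {
		log.Fatal(err)
	}
	var proof struct {
		AccountProof []string `json:"accountProof"` // Merkle path to the account
		Balance      string   `json:"balance"`
		Nonce        string   `json:"nonce"`
		StorageHash  string   `json:"storageHash"`
	}
	err = client.Call(&proof, "eth_getProof",
		"0x7ef5a6135f1fd6a02593eedc869c6d41d934aef8", // account to prove
		[]string{"0x0"}, // storage slots to prove
		"latest")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(proof.AccountProof), "account proof nodes")
}
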
Attila Gazso
cdf5982cfc swarm: Lightnode mode: disable sync, retrieve, subscription (#17899)
* swarm: Lightnode mode: disable sync, retrieve, subscription

* swarm/network/stream: assign error and check in one line

* swarm: restructured RegistryOption initializing

* swarm: empty commit to retrigger CI build

* swarm/network/stream: Added comments explaining RegistryOptions
2018-10-17 19:22:37 +02:00
Anton Evangelatov
4e693ad5a6 swarm/tracing: disable stdout logging for opentracing (#17931) 2018-10-17 14:46:59 +02:00
holisticode
4466c7b971 metrics: added NewCounterForced (#17919) 2018-10-16 16:22:51 +02:00
Smilenator
2868acd80b core/types: fix comment for func SignatureValues (#17921) 2018-10-16 12:45:28 +02:00
Wenbiao Zheng
6c313fff7b cmd/geth: don't set GOMAXPROCS by default (#17148)
This is no longer needed because Go uses all CPUs
by default. The change allows setting GOMAXPROCS in the environment if needed.
2018-10-16 02:02:53 +02:00
Martin Holst Swende
a352de6a08 core/vm: add shortcuts for trivial exp cases (#16851) 2018-10-16 00:51:39 +02:00
Dmitrij Koniajev
6a7695e367 ethdb, rpc: support building on js/wasm (#17709)
The changes allow building WebAssembly applications which use ethclient.Client.
2018-10-16 00:47:25 +02:00
Kurkó Mihály
16e4d0e005 p2p: meter peer traffic, emit metered peer events (#17695)
This change extends the peer metrics collection:

- traces the life-cycle of the peers
- meters the peer traffic separately for every peer
- creates event feed for the peer events
- emits the peer events
2018-10-16 00:40:51 +02:00
Evgeny
331fa6d307 accounts/usbwallet: simplify code using -= operator (#17904) 2018-10-16 00:34:50 +02:00
Grachev Mikhail
3e92c853fb cmd/clef: fix typos in README (#17908) 2018-10-16 00:33:09 +02:00
Martin Holst Swende
60827dc50f tests: update tests, implement no-pow blocks (#17902)
This commit updates our tests with the latest and greatest from ethereum/tests.
It also contains implementation of NoProof for blockchain tests.
2018-10-16 00:26:47 +02:00
Felix Lange
2e98631c5e rpc: fix client shutdown hang when Close races with Unsubscribe (#17894)
Fixes #17837
2018-10-15 10:56:04 +02:00
Viktor Trón
6566a0a3b8 swarm/network/stream: generalise setting of next batch (#17818)
* swarm/network/stream: generalize SetNextBatch and add Server SessionIndex

* swarm/network/stream: fix a typo in comment

* swarm/network/stream: remove live argument from NewSwarmSyncerServer
2018-10-12 16:26:16 +02:00
lash
dc3c3fb1e1 swarm/storage: Add accessCnt for GC (#17845) 2018-10-12 16:25:38 +02:00
lash
862d6f2fbf cmd/swarm: Smoke test for Swarm Feed (#17892) 2018-10-12 16:24:00 +02:00
Elad
4868964bb9 cmd/swarm: split flags and cli command declarations to the relevant files (#17896) 2018-10-12 14:51:38 +02:00
Felix Lange
6f607de5d5 p2p, p2p/discover: add signed ENR generation (#17753)
This PR adds enode.LocalNode and integrates it into the p2p
subsystem. This new object is the keeper of the local node
record. For now, a new version of the record is produced every
time the client restarts. We'll make it smarter to avoid that in
the future.

There are a couple of other changes in this commit: discovery now
waits for all of its goroutines at shutdown and the p2p server
now closes the node database after discovery has shut down. This
fixes a leveldb crash in tests. p2p server startup is faster
because it doesn't need to wait for the external IP query
anymore.
2018-10-12 11:47:24 +02:00
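
The new wiring, adapted into a self-contained sketch from the cmd/bootnode
hunk further down: discover.ListenUDP now takes the enode.LocalNode that keeps
the signed record.

package main

import (
	"log"
	"net"

	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/discover"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

func main() {
	nodeKey, _ := crypto.GenerateKey()
	conn, err := net.ListenUDP("udp", &net.UDPAddr{})
	if err != nil {
		log.Fatal(err)
	}
	db, _ := enode.OpenDB("")             // empty path = in-memory node DB
	ln := enode.NewLocalNode(db, nodeKey) // keeper of the local node record
	cfg := discover.Config{PrivateKey: nodeKey}
	if _, err := discover.ListenUDP(conn, ln, cfg); err != nil {
		log.Fatal(err)
	}
}
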
Felix Lange
dcae0d348b p2p/simulations: fix a deadlock and clean up adapters (#17891)
This fixes a rare deadlock with the inproc adapter:

- A node is stopped, which acquires Network.lock.
- The protocol code being simulated (swarm/network in my case)
  waits for its goroutines to shut down.
- One of those goroutines calls into the simulation to add a peer,
  which waits for Network.lock.

The fix for the deadlock is really simple, just release the lock
before stopping the simulation node.

Other changes in this PR clean up the exec adapter so it reports
node startup errors better and remove the docker adapter because
it just adds overhead.

In the exec adapter, node information is now posted to a one-shot
server. This avoids log parsing and allows reporting startup
errors to the simulation host.

A small change in package node was needed because simulation
nodes use port zero. Node.{HTTP,WS}Endpoint now return the live
endpoints after startup by checking the TCP listener.
2018-10-11 20:32:14 +02:00
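
The deadlock reduces to a classic shape, reproduced here as a self-contained
toy (names invented): never hold the registry lock while waiting for
goroutines that may call back into the registry.

package main

import (
	"fmt"
	"sync"
)

type node struct{ done chan struct{} }

type network struct {
	mu    sync.Mutex
	nodes map[string]*node
}

// addPeer is the kind of callback a stopping node's goroutine may invoke;
// it needs the same lock that stop used to hold while waiting.
func (nw *network) addPeer() {
	nw.mu.Lock()
	defer nw.mu.Unlock()
}

// stop applies the fix: look the node up under the lock, then release the
// lock before waiting for the node's goroutines to wind down.
func (nw *network) stop(id string) {
	nw.mu.Lock()
	nd := nw.nodes[id]
	nw.mu.Unlock() // with the old ordering, the wait below deadlocked
	<-nd.done
}

func main() {
	nd := &node{done: make(chan struct{})}
	nw := &network{nodes: map[string]*node{"a": nd}}
	go func() { nw.addPeer(); close(nd.done) }()
	nw.stop("a")
	fmt.Println("clean shutdown")
}
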
Péter Szilágyi
f951e23fb5 Merge pull request #17887 from karalabe/warn-failed-account-access
internal/ethapi: warn on failed account accesses
2018-10-10 13:22:43 +03:00
Péter Szilágyi
aff421e78c internal/ethapi: warn on failed account accesses 2018-10-10 12:29:05 +03:00
Felix Lange
4e474c74dc rpc: fix subscription corner case and speed up tests (#17874)
Notifier tracks whether subscriptions are 'active'. A subscription
becomes active when the subscription ID has been sent to the client. If
the client sends notifications in the request handler before the
subscription becomes active, they are dropped. The tests tried to work
around this problem by always waiting 5s before sending the first
notification.

Fix it by buffering notifications until the subscription becomes active.
This speeds up all subscription tests.

Also fix TestSubscriptionMultipleNamespaces to wait for three messages
per subscription instead of six. The test now finishes just after all
notifications have been received and doesn't hit the 30s timeout anymore.
2018-10-09 16:34:24 +02:00
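
The buffering fix as a simplified, self-contained sketch (string payloads
stand in for the real JSON notifications):

package main

import (
	"fmt"
	"sync"
)

type notifier struct {
	mu      sync.Mutex
	active  bool
	pending []string
}

func (n *notifier) send(msg string) { fmt.Println("->", msg) }

// Notify buffers messages produced before the subscription ID has reached
// the client instead of dropping them.
func (n *notifier) Notify(msg string) {
	n.mu.Lock()
	defer n.mu.Unlock()
	if !n.active {
		n.pending = append(n.pending, msg)
		return
	}
	n.send(msg)
}

// activate is called once the subscription ID has been sent to the client;
// it flushes everything buffered in the meantime.
func (n *notifier) activate() {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.active = true
	for _, msg := range n.pending {
		n.send(msg)
	}
	n.pending = nil
}

func main() {
	n := &notifier{}
	n.Notify("early") // sent from the request handler, pre-activation
	n.activate()      // "early" is delivered here instead of being lost
}
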
Elad
da290e9707 cmd/swarm: speed up tests (#17878)
These minor changes already shaved off around 30s.
2018-10-09 14:08:40 +02:00
Anton Evangelatov
0fe9a372b3 swarm, swarm/storage: lower constants for faster tests (#17876)
* swarm/storage: lower constants for faster tests

* swarm: reduce test size for TestLocalStoreAndRetrieve

* swarm: reduce nodes for dec_inc_node_count
2018-10-09 11:45:42 +02:00
Martin Holst Swende
d5c7a6056a cmd/clef: encrypt the master seed on disk (#17704)
* cmd/clef: encrypt master seed of clef

Signed-off-by: YaoZengzeng <yaozengzeng@zju.edu.cn>

* keystore: refactor for external use of encryption

* clef: utilize keystore encryption, check flags correctly

* clef: validate master password

* clef: add json wrapping around encrypted master seed
2018-10-09 11:05:41 +02:00
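
The exported helpers that make this reuse possible appear in the
accounts/keystore diff below; a round-trip usage sketch (light scrypt
parameters keep the demo fast, while clef itself uses the standard ones unless
--lightkdf is given):

package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/accounts/keystore"
)

func main() {
	seed := []byte("super secret master seed")
	c, err := keystore.EncryptDataV3(seed, []byte("password"),
		keystore.LightScryptN, keystore.LightScryptP)
	if err != nil {
		log.Fatal(err)
	}
	plain, err := keystore.DecryptDataV3(c, "password")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(plain)) // the original seed, recovered
}
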
Péter Szilágyi
ff5538ad4c params, swarm: begin Geth v1.8.18, Swarm v0.3.6 cycle 2018-10-09 10:37:49 +03:00
289 changed files with 13800 additions and 4613 deletions

.github/CODEOWNERS

@@ -9,23 +9,26 @@ les/ @zsfelfoldi
light/ @zsfelfoldi
mobile/ @karalabe
p2p/ @fjl @zsfelfoldi
p2p/simulations @lmars
p2p/protocols @zelig
swarm/api/http @justelad
swarm/bmt @zelig
swarm/dev @lmars
swarm/fuse @jmozah @holisticode
swarm/grafana_dashboards @nonsense
swarm/metrics @nonsense @holisticode
swarm/multihash @nolash
swarm/network/bitvector @zelig @janos @gbalint
swarm/network/priorityqueue @zelig @janos @gbalint
swarm/network/simulations @zelig
swarm/network/stream @janos @zelig @gbalint @holisticode @justelad
swarm/network/bitvector @zelig @janos
swarm/network/priorityqueue @zelig @janos
swarm/network/simulations @zelig @janos
swarm/network/stream @janos @zelig @holisticode @justelad
swarm/network/stream/intervals @janos
swarm/network/stream/testing @zelig
swarm/pot @zelig
swarm/pss @nolash @zelig @nonsense
swarm/services @zelig
swarm/state @justelad
swarm/storage/encryption @gbalint @zelig @nagydani
swarm/storage/encryption @zelig @nagydani
swarm/storage/mock @janos
swarm/storage/feed @nolash @jpeletier
swarm/testutil @lmars


@@ -29,6 +29,14 @@ matrix:
- os: osx
go: 1.11.x
script:
- echo "Increase the maximum number of open file descriptors on macOS"
- NOFILE=20480
- sudo sysctl -w kern.maxfiles=$NOFILE
- sudo sysctl -w kern.maxfilesperproc=$NOFILE
- sudo launchctl limit maxfiles $NOFILE $NOFILE
- sudo launchctl limit maxfiles
- ulimit -S -n $NOFILE
- ulimit -n
- unset -f cd # workaround for https://github.com/travis-ci/travis-ci/issues/8703
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
@@ -148,7 +156,7 @@ matrix:
git:
submodules: false # avoid cloning ethereum/tests
before_install:
- curl https://storage.googleapis.com/golang/go1.11.1.linux-amd64.tar.gz | tar -xz
- curl https://storage.googleapis.com/golang/go1.11.2.linux-amd64.tar.gz | tar -xz
- export PATH=`pwd`/go/bin:$PATH
- export GOROOT=`pwd`/go
- export GOPATH=$HOME/go


@@ -18,7 +18,7 @@ For prerequisites and detailed build instructions please read the
[Installation Instructions](https://github.com/ethereum/go-ethereum/wiki/Building-Ethereum)
on the wiki.
Building geth requires both a Go (version 1.7 or later) and a C compiler.
Building geth requires both a Go (version 1.9 or later) and a C compiler.
You can install them using your favourite package manager.
Once the dependencies are installed, run


@@ -30,8 +30,8 @@ import (
var DefaultRootDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}
// DefaultBaseDerivationPath is the base path from which custom derivation endpoints
// are incremented. As such, the first account will be at m/44'/60'/0'/0, the second
// at m/44'/60'/0'/1, etc.
// are incremented. As such, the first account will be at m/44'/60'/0'/0/0, the second
// at m/44'/60'/0'/0/1, etc.
var DefaultBaseDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0}
// DefaultLedgerBaseDerivationPath is the base path from which custom derivation endpoints
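
The corrected comment documents that accounts increment the last path
component: m/44'/60'/0'/0/0, then m/44'/60'/0'/0/1, and so on. A small sketch
making that explicit (the accountPath helper is illustrative, not part of the
package):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts"
)

// accountPath derives the path of account index i from the base path by
// bumping the final component, per the fixed comment's convention.
func accountPath(base accounts.DerivationPath, i uint32) accounts.DerivationPath {
	path := make(accounts.DerivationPath, len(base))
	copy(path, base)
	path[len(path)-1] += i
	return path
}

func main() {
	fmt.Println(accountPath(accounts.DefaultBaseDerivationPath, 1)) // second account
}
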


@@ -66,19 +66,19 @@ type plainKeyJSON struct {
type encryptedKeyJSONV3 struct {
Address string `json:"address"`
Crypto cryptoJSON `json:"crypto"`
Crypto CryptoJSON `json:"crypto"`
Id string `json:"id"`
Version int `json:"version"`
}
type encryptedKeyJSONV1 struct {
Address string `json:"address"`
Crypto cryptoJSON `json:"crypto"`
Crypto CryptoJSON `json:"crypto"`
Id string `json:"id"`
Version string `json:"version"`
}
type cryptoJSON struct {
type CryptoJSON struct {
Cipher string `json:"cipher"`
CipherText string `json:"ciphertext"`
CipherParams cipherparamsJSON `json:"cipherparams"`


@@ -135,29 +135,26 @@ func (ks keyStorePassphrase) JoinPath(filename string) string {
return filepath.Join(ks.keysDirPath, filename)
}
// EncryptKey encrypts a key using the specified scrypt parameters into a json
// blob that can be decrypted later on.
func EncryptKey(key *Key, auth string, scryptN, scryptP int) ([]byte, error) {
authArray := []byte(auth)
// Encryptdata encrypts the data given as 'data' with the password 'auth'.
func EncryptDataV3(data, auth []byte, scryptN, scryptP int) (CryptoJSON, error) {
salt := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, salt); err != nil {
panic("reading from crypto/rand failed: " + err.Error())
}
derivedKey, err := scrypt.Key(authArray, salt, scryptN, scryptR, scryptP, scryptDKLen)
derivedKey, err := scrypt.Key(auth, salt, scryptN, scryptR, scryptP, scryptDKLen)
if err != nil {
return nil, err
return CryptoJSON{}, err
}
encryptKey := derivedKey[:16]
keyBytes := math.PaddedBigBytes(key.PrivateKey.D, 32)
iv := make([]byte, aes.BlockSize) // 16
if _, err := io.ReadFull(rand.Reader, iv); err != nil {
panic("reading from crypto/rand failed: " + err.Error())
}
cipherText, err := aesCTRXOR(encryptKey, keyBytes, iv)
cipherText, err := aesCTRXOR(encryptKey, data, iv)
if err != nil {
return nil, err
return CryptoJSON{}, err
}
mac := crypto.Keccak256(derivedKey[16:32], cipherText)
@@ -167,12 +164,11 @@ func EncryptKey(key *Key, auth string, scryptN, scryptP int) ([]byte, error) {
scryptParamsJSON["p"] = scryptP
scryptParamsJSON["dklen"] = scryptDKLen
scryptParamsJSON["salt"] = hex.EncodeToString(salt)
cipherParamsJSON := cipherparamsJSON{
IV: hex.EncodeToString(iv),
}
cryptoStruct := cryptoJSON{
cryptoStruct := CryptoJSON{
Cipher: "aes-128-ctr",
CipherText: hex.EncodeToString(cipherText),
CipherParams: cipherParamsJSON,
@@ -180,6 +176,17 @@ func EncryptKey(key *Key, auth string, scryptN, scryptP int) ([]byte, error) {
KDFParams: scryptParamsJSON,
MAC: hex.EncodeToString(mac),
}
return cryptoStruct, nil
}
// EncryptKey encrypts a key using the specified scrypt parameters into a json
// blob that can be decrypted later on.
func EncryptKey(key *Key, auth string, scryptN, scryptP int) ([]byte, error) {
keyBytes := math.PaddedBigBytes(key.PrivateKey.D, 32)
cryptoStruct, err := EncryptDataV3(keyBytes, []byte(auth), scryptN, scryptP)
if err != nil {
return nil, err
}
encryptedKeyJSONV3 := encryptedKeyJSONV3{
hex.EncodeToString(key.Address[:]),
cryptoStruct,
@@ -226,43 +233,48 @@ func DecryptKey(keyjson []byte, auth string) (*Key, error) {
PrivateKey: key,
}, nil
}
func DecryptDataV3(cryptoJson CryptoJSON, auth string) ([]byte, error) {
if cryptoJson.Cipher != "aes-128-ctr" {
return nil, fmt.Errorf("Cipher not supported: %v", cryptoJson.Cipher)
}
mac, err := hex.DecodeString(cryptoJson.MAC)
if err != nil {
return nil, err
}
iv, err := hex.DecodeString(cryptoJson.CipherParams.IV)
if err != nil {
return nil, err
}
cipherText, err := hex.DecodeString(cryptoJson.CipherText)
if err != nil {
return nil, err
}
derivedKey, err := getKDFKey(cryptoJson, auth)
if err != nil {
return nil, err
}
calculatedMAC := crypto.Keccak256(derivedKey[16:32], cipherText)
if !bytes.Equal(calculatedMAC, mac) {
return nil, ErrDecrypt
}
plainText, err := aesCTRXOR(derivedKey[:16], cipherText, iv)
if err != nil {
return nil, err
}
return plainText, err
}
func decryptKeyV3(keyProtected *encryptedKeyJSONV3, auth string) (keyBytes []byte, keyId []byte, err error) {
if keyProtected.Version != version {
return nil, nil, fmt.Errorf("Version not supported: %v", keyProtected.Version)
}
if keyProtected.Crypto.Cipher != "aes-128-ctr" {
return nil, nil, fmt.Errorf("Cipher not supported: %v", keyProtected.Crypto.Cipher)
}
keyId = uuid.Parse(keyProtected.Id)
mac, err := hex.DecodeString(keyProtected.Crypto.MAC)
if err != nil {
return nil, nil, err
}
iv, err := hex.DecodeString(keyProtected.Crypto.CipherParams.IV)
if err != nil {
return nil, nil, err
}
cipherText, err := hex.DecodeString(keyProtected.Crypto.CipherText)
if err != nil {
return nil, nil, err
}
derivedKey, err := getKDFKey(keyProtected.Crypto, auth)
if err != nil {
return nil, nil, err
}
calculatedMAC := crypto.Keccak256(derivedKey[16:32], cipherText)
if !bytes.Equal(calculatedMAC, mac) {
return nil, nil, ErrDecrypt
}
plainText, err := aesCTRXOR(derivedKey[:16], cipherText, iv)
plainText, err := DecryptDataV3(keyProtected.Crypto, auth)
if err != nil {
return nil, nil, err
}
@@ -303,7 +315,7 @@ func decryptKeyV1(keyProtected *encryptedKeyJSONV1, auth string) (keyBytes []byt
return plainText, keyId, err
}
func getKDFKey(cryptoJSON cryptoJSON, auth string) ([]byte, error) {
func getKDFKey(cryptoJSON CryptoJSON, auth string) ([]byte, error) {
authArray := []byte(auth)
salt, err := hex.DecodeString(cryptoJSON.KDFParams["salt"].(string))
if err != nil {


@@ -350,7 +350,7 @@ func (w *ledgerDriver) ledgerSign(derivationPath []uint32, tx *types.Transaction
signer = new(types.HomesteadSigner)
} else {
signer = types.NewEIP155Signer(chainID)
signature[64] = signature[64] - byte(chainID.Uint64()*2+35)
signature[64] -= byte(chainID.Uint64()*2 + 35)
}
signed, err := tx.WithSignature(signer, signature)
if err != nil {


@@ -221,7 +221,7 @@ func (w *trezorDriver) trezorSign(derivationPath []uint32, tx *types.Transaction
signer = new(types.HomesteadSigner)
} else {
signer = types.NewEIP155Signer(chainID)
signature[64] = signature[64] - byte(chainID.Uint64()*2+35)
signature[64] -= byte(chainID.Uint64()*2 + 35)
}
// Inject the final signature into the transaction and sanity check the sender
signed, err := tx.WithSignature(signer, signature)
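
Both hunks above apply the same arithmetic: EIP-155 encodes the recovery id
into v as v = recid + chainID*2 + 35, the device returns that encoded v, and
WithSignature expects the bare 0/1 recovery id in signature[64], hence the
subtraction. A tiny worked example:

package main

import "fmt"

// eip155RecID undoes the EIP-155 encoding v = recid + chainID*2 + 35.
func eip155RecID(v byte, chainID uint64) byte {
	return v - byte(chainID*2+35)
}

func main() {
	fmt.Println(eip155RecID(38, 1)) // mainnet (chainID 1): v=38 -> recovery id 1
}
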


@@ -23,8 +23,8 @@ environment:
install:
- git submodule update --init
- rmdir C:\go /s /q
- appveyor DownloadFile https://storage.googleapis.com/golang/go1.11.1.windows-%GETH_ARCH%.zip
- 7z x go1.11.1.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- appveyor DownloadFile https://storage.googleapis.com/golang/go1.11.2.windows-%GETH_ARCH%.zip
- 7z x go1.11.2.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- go version
- gcc --version


@@ -38,7 +38,7 @@ func main() {
var (
listenAddr = flag.String("addr", ":30301", "listen address")
genKey = flag.String("genkey", "", "generate a node key")
writeAddr = flag.Bool("writeaddress", false, "write out the node's pubkey hash and quit")
writeAddr = flag.Bool("writeaddress", false, "write out the node's public key and quit")
nodeKeyFile = flag.String("nodekey", "", "private key filename")
nodeKeyHex = flag.String("nodekeyhex", "", "private key as hex (for testing)")
natdesc = flag.String("nat", "none", "port mapping mechanism (any|none|upnp|pmp|extip:<IP>)")
@@ -86,7 +86,7 @@ func main() {
}
if *writeAddr {
fmt.Printf("%v\n", enode.PubkeyToIDV4(&nodeKey.PublicKey))
fmt.Printf("%x\n", crypto.FromECDSAPub(&nodeKey.PublicKey)[1:])
os.Exit(0)
}
@@ -119,16 +119,17 @@ func main() {
}
if *runv5 {
if _, err := discv5.ListenUDP(nodeKey, conn, realaddr, "", restrictList); err != nil {
if _, err := discv5.ListenUDP(nodeKey, conn, "", restrictList); err != nil {
utils.Fatalf("%v", err)
}
} else {
db, _ := enode.OpenDB("")
ln := enode.NewLocalNode(db, nodeKey)
cfg := discover.Config{
PrivateKey: nodeKey,
AnnounceAddr: realaddr,
NetRestrict: restrictList,
PrivateKey: nodeKey,
NetRestrict: restrictList,
}
if _, err := discover.ListenUDP(conn, cfg); err != nil {
if _, err := discover.ListenUDP(conn, ln, cfg); err != nil {
utils.Fatalf("%v", err)
}
}


@@ -91,7 +91,7 @@ invoking methods with the following info:
* [x] Version info about the signer
* [x] Address of API (http/ipc)
* [ ] List of known accounts
* [ ] Have a default timeout on signing operations, so that if the user has not answered withing e.g. 60 seconds, the request is rejected.
* [ ] Have a default timeout on signing operations, so that if the user has not answered within e.g. 60 seconds, the request is rejected.
* [ ] `account_signRawTransaction`
* [ ] `account_bulkSignTransactions([] transactions)` should
* only exist if enabled via config/flag
@@ -129,7 +129,7 @@ The signer listens to HTTP requests on `rpcaddr`:`rpcport`, with the same JSONRP
expected to be JSON [jsonrpc 2.0 standard](http://www.jsonrpc.org/specification).
Some of these call can require user interaction. Clients must be aware that responses
may be delayed significanlty or may never be received if a users decides to ignore the confirmation request.
may be delayed significantly or may never be received if a users decides to ignore the confirmation request.
The External API is **untrusted** : it does not accept credentials over this api, nor does it expect
that requests have any authority.
@@ -862,7 +862,7 @@ A UI should conform to the following rules.
* A UI SHOULD inform the user about the `SHA256` or `MD5` hash of the binary being executed
* A UI SHOULD NOT maintain a secondary storage of data, e.g. list of accounts
* The signer provides accounts
* A UI SHOULD, to the best extent possible, use static linking / bundling, so that requried libraries are bundled
* A UI SHOULD, to the best extent possible, use static linking / bundling, so that required libraries are bundled
along with the UI.


@@ -1,5 +1,9 @@
### Changelog for internal API (ui-api)
### 3.0.0
* Make use of `OnInputRequired(info UserInputRequest)` for obtaining master password during startup
### 2.1.0
* Add `OnInputRequired(info UserInputRequest)` to internal API. This method is used when Clef needs user input, e.g. passwords.
@@ -14,7 +18,6 @@ The following structures are used:
UserInputResponse struct {
Text string `json:"text"`
}
```
### 2.0.0


@@ -35,8 +35,10 @@ import (
"runtime"
"strings"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/console"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
@@ -48,10 +50,10 @@ import (
)
// ExternalAPIVersion -- see extapi_changelog.md
const ExternalAPIVersion = "3.0.0"
const ExternalAPIVersion = "4.0.0"
// InternalAPIVersion -- see intapi_changelog.md
const InternalAPIVersion = "2.0.0"
const InternalAPIVersion = "3.0.0"
const legalWarning = `
WARNING!
@@ -91,7 +93,7 @@ var (
}
signerSecretFlag = cli.StringFlag{
Name: "signersecret",
Usage: "A file containing the password used to encrypt Clef credentials, e.g. keystore credentials and ruleset hash",
Usage: "A file containing the (encrypted) master seed to encrypt Clef data, e.g. keystore credentials and ruleset hash",
}
dBFlag = cli.StringFlag{
Name: "4bytedb",
@@ -155,18 +157,18 @@ Whenever you make an edit to the rule file, you need to use attestation to tell
Clef that the file is 'safe' to execute.`,
}
addCredentialCommand = cli.Command{
Action: utils.MigrateFlags(addCredential),
Name: "addpw",
setCredentialCommand = cli.Command{
Action: utils.MigrateFlags(setCredential),
Name: "setpw",
Usage: "Store a credential for a keystore file",
ArgsUsage: "<address> <password>",
ArgsUsage: "<address>",
Flags: []cli.Flag{
logLevelFlag,
configdirFlag,
signerSecretFlag,
},
Description: `
The addpw command stores a password for a given address (keyfile). If you invoke it with only one parameter, it will
The setpw command stores a password for a given address (keyfile). If you enter a blank passphrase, it will
remove any stored credential for that address (keyfile)
`,
}
@@ -198,7 +200,7 @@ func init() {
advancedMode,
}
app.Action = signer
app.Commands = []cli.Command{initCommand, attestCommand, addCredentialCommand}
app.Commands = []cli.Command{initCommand, attestCommand, setCredentialCommand}
}
func main() {
@@ -212,25 +214,45 @@ func initializeSecrets(c *cli.Context) error {
if err := initialize(c); err != nil {
return err
}
configDir := c.String(configdirFlag.Name)
configDir := c.GlobalString(configdirFlag.Name)
masterSeed := make([]byte, 256)
n, err := io.ReadFull(rand.Reader, masterSeed)
num, err := io.ReadFull(rand.Reader, masterSeed)
if err != nil {
return err
}
if n != len(masterSeed) {
if num != len(masterSeed) {
return fmt.Errorf("failed to read enough random")
}
n, p := keystore.StandardScryptN, keystore.StandardScryptP
if c.GlobalBool(utils.LightKDFFlag.Name) {
n, p = keystore.LightScryptN, keystore.LightScryptP
}
text := "The master seed of clef is locked with a password. Please give a password. Do not forget this password."
var password string
for {
password = getPassPhrase(text, true)
if err := core.ValidatePasswordFormat(password); err != nil {
fmt.Printf("invalid password: %v\n", err)
} else {
break
}
}
cipherSeed, err := encryptSeed(masterSeed, []byte(password), n, p)
if err != nil {
return fmt.Errorf("failed to encrypt master seed: %v", err)
}
err = os.Mkdir(configDir, 0700)
if err != nil && !os.IsExist(err) {
return err
}
location := filepath.Join(configDir, "secrets.dat")
location := filepath.Join(configDir, "masterseed.json")
if _, err := os.Stat(location); err == nil {
return fmt.Errorf("file %v already exists, will not overwrite", location)
}
err = ioutil.WriteFile(location, masterSeed, 0400)
err = ioutil.WriteFile(location, cipherSeed, 0400)
if err != nil {
return err
}
@@ -255,11 +277,11 @@ func attestFile(ctx *cli.Context) error {
return err
}
stretchedKey, err := readMasterKey(ctx)
stretchedKey, err := readMasterKey(ctx, nil)
if err != nil {
utils.Fatalf(err.Error())
}
configDir := ctx.String(configdirFlag.Name)
configDir := ctx.GlobalString(configdirFlag.Name)
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), stretchedKey)[:10]))
confKey := crypto.Keccak256([]byte("config"), stretchedKey)
@@ -271,38 +293,36 @@ func attestFile(ctx *cli.Context) error {
return nil
}
func addCredential(ctx *cli.Context) error {
func setCredential(ctx *cli.Context) error {
if len(ctx.Args()) < 1 {
utils.Fatalf("This command requires at leaste one argument.")
utils.Fatalf("This command requires an address to be passed as an argument.")
}
if err := initialize(ctx); err != nil {
return err
}
stretchedKey, err := readMasterKey(ctx)
address := ctx.Args().First()
password := getPassPhrase("Enter a passphrase to store with this address.", true)
stretchedKey, err := readMasterKey(ctx, nil)
if err != nil {
utils.Fatalf(err.Error())
}
configDir := ctx.String(configdirFlag.Name)
configDir := ctx.GlobalString(configdirFlag.Name)
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), stretchedKey)[:10]))
pwkey := crypto.Keccak256([]byte("credentials"), stretchedKey)
// Initialize the encrypted storages
pwStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "credentials.json"), pwkey)
key := ctx.Args().First()
value := ""
if len(ctx.Args()) > 1 {
value = ctx.Args().Get(1)
}
pwStorage.Put(key, value)
log.Info("Credential store updated", "key", key)
pwStorage.Put(address, password)
log.Info("Credential store updated", "key", address)
return nil
}
func initialize(c *cli.Context) error {
// Set up the logger to print everything
logOutput := os.Stdout
if c.Bool(stdiouiFlag.Name) {
if c.GlobalBool(stdiouiFlag.Name) {
logOutput = os.Stderr
// If using the stdioui, we can't do the 'confirm'-flow
fmt.Fprintf(logOutput, legalWarning)
@@ -323,26 +343,28 @@ func signer(c *cli.Context) error {
var (
ui core.SignerUI
)
if c.Bool(stdiouiFlag.Name) {
if c.GlobalBool(stdiouiFlag.Name) {
log.Info("Using stdin/stdout as UI-channel")
ui = core.NewStdIOUI()
} else {
log.Info("Using CLI as UI-channel")
ui = core.NewCommandlineUI()
}
db, err := core.NewAbiDBFromFiles(c.String(dBFlag.Name), c.String(customDBFlag.Name))
fourByteDb := c.GlobalString(dBFlag.Name)
fourByteLocal := c.GlobalString(customDBFlag.Name)
db, err := core.NewAbiDBFromFiles(fourByteDb, fourByteLocal)
if err != nil {
utils.Fatalf(err.Error())
}
log.Info("Loaded 4byte db", "signatures", db.Size(), "file", c.String("4bytedb"))
log.Info("Loaded 4byte db", "signatures", db.Size(), "file", fourByteDb, "local", fourByteLocal)
var (
api core.ExternalAPI
)
configDir := c.String(configdirFlag.Name)
if stretchedKey, err := readMasterKey(c); err != nil {
log.Info("No master seed provided, rules disabled")
configDir := c.GlobalString(configdirFlag.Name)
if stretchedKey, err := readMasterKey(c, ui); err != nil {
log.Info("No master seed provided, rules disabled", "error", err)
} else {
if err != nil {
@@ -361,7 +383,7 @@ func signer(c *cli.Context) error {
configStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "config.json"), confkey)
//Do we have a rule-file?
ruleJS, err := ioutil.ReadFile(c.String(ruleFlag.Name))
ruleJS, err := ioutil.ReadFile(c.GlobalString(ruleFlag.Name))
if err != nil {
log.Info("Could not load rulefile, rules not enabled", "file", "rulefile")
} else {
@@ -385,17 +407,15 @@ func signer(c *cli.Context) error {
}
apiImpl := core.NewSignerAPI(
c.Int64(utils.NetworkIdFlag.Name),
c.String(keystoreFlag.Name),
c.Bool(utils.NoUSBFlag.Name),
c.GlobalInt64(utils.NetworkIdFlag.Name),
c.GlobalString(keystoreFlag.Name),
c.GlobalBool(utils.NoUSBFlag.Name),
ui, db,
c.Bool(utils.LightKDFFlag.Name),
c.Bool(advancedMode.Name))
c.GlobalBool(utils.LightKDFFlag.Name),
c.GlobalBool(advancedMode.Name))
api = apiImpl
// Audit logging
if logfile := c.String(auditLogFlag.Name); logfile != "" {
if logfile := c.GlobalString(auditLogFlag.Name); logfile != "" {
api, err = core.NewAuditLogger(logfile, api)
if err != nil {
utils.Fatalf(err.Error())
@@ -414,13 +434,13 @@ func signer(c *cli.Context) error {
Service: api,
Version: "1.0"},
}
if c.Bool(utils.RPCEnabledFlag.Name) {
if c.GlobalBool(utils.RPCEnabledFlag.Name) {
vhosts := splitAndTrim(c.GlobalString(utils.RPCVirtualHostsFlag.Name))
cors := splitAndTrim(c.GlobalString(utils.RPCCORSDomainFlag.Name))
// start http server
httpEndpoint := fmt.Sprintf("%s:%d", c.String(utils.RPCListenAddrFlag.Name), c.Int(rpcPortFlag.Name))
httpEndpoint := fmt.Sprintf("%s:%d", c.GlobalString(utils.RPCListenAddrFlag.Name), c.Int(rpcPortFlag.Name))
listener, _, err := rpc.StartHTTPEndpoint(httpEndpoint, rpcAPI, []string{"account"}, cors, vhosts, rpc.DefaultHTTPTimeouts)
if err != nil {
utils.Fatalf("Could not start RPC api: %v", err)
@@ -434,9 +454,9 @@ func signer(c *cli.Context) error {
}()
}
if !c.Bool(utils.IPCDisabledFlag.Name) {
if !c.GlobalBool(utils.IPCDisabledFlag.Name) {
if c.IsSet(utils.IPCPathFlag.Name) {
ipcapiURL = c.String(utils.IPCPathFlag.Name)
ipcapiURL = c.GlobalString(utils.IPCPathFlag.Name)
} else {
ipcapiURL = filepath.Join(configDir, "clef.ipc")
}
@@ -453,7 +473,7 @@ func signer(c *cli.Context) error {
}
if c.Bool(testFlag.Name) {
if c.GlobalBool(testFlag.Name) {
log.Info("Performing UI test")
go testExternalUI(apiImpl)
}
@@ -512,36 +532,52 @@ func homeDir() string {
}
return ""
}
func readMasterKey(ctx *cli.Context) ([]byte, error) {
func readMasterKey(ctx *cli.Context, ui core.SignerUI) ([]byte, error) {
var (
file string
configDir = ctx.String(configdirFlag.Name)
configDir = ctx.GlobalString(configdirFlag.Name)
)
if ctx.IsSet(signerSecretFlag.Name) {
file = ctx.String(signerSecretFlag.Name)
if ctx.GlobalIsSet(signerSecretFlag.Name) {
file = ctx.GlobalString(signerSecretFlag.Name)
} else {
file = filepath.Join(configDir, "secrets.dat")
file = filepath.Join(configDir, "masterseed.json")
}
if err := checkFile(file); err != nil {
return nil, err
}
masterKey, err := ioutil.ReadFile(file)
cipherKey, err := ioutil.ReadFile(file)
if err != nil {
return nil, err
}
if len(masterKey) < 256 {
return nil, fmt.Errorf("master key of insufficient length, expected >255 bytes, got %d", len(masterKey))
var password string
// If ui is not nil, get the password from ui.
if ui != nil {
resp, err := ui.OnInputRequired(core.UserInputRequest{
Title: "Master Password",
Prompt: "Please enter the password to decrypt the master seed",
IsPassword: true})
if err != nil {
return nil, err
}
password = resp.Text
} else {
password = getPassPhrase("Decrypt master seed of clef", false)
}
masterSeed, err := decryptSeed(cipherKey, password)
if err != nil {
return nil, fmt.Errorf("failed to decrypt the master seed of clef")
}
if len(masterSeed) < 256 {
return nil, fmt.Errorf("master seed of insufficient length, expected >255 bytes, got %d", len(masterSeed))
}
// Create vault location
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), masterKey)[:10]))
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), masterSeed)[:10]))
err = os.Mkdir(vaultLocation, 0700)
if err != nil && !os.IsExist(err) {
return nil, err
}
//!TODO, use KDF to stretch the master key
// stretched_key := stretch_key(master_key)
return masterKey, nil
return masterSeed, nil
}
// checkFile is a convenience function to check if a file
@@ -619,6 +655,59 @@ func testExternalUI(api *core.SignerAPI) {
}
// getPassPhrase retrieves the password associated with clef, either fetched
// from a list of preloaded passphrases, or requested interactively from the user.
// TODO: there are many `getPassPhrase` functions, it will be better to abstract them into one.
func getPassPhrase(prompt string, confirmation bool) string {
fmt.Println(prompt)
password, err := console.Stdin.PromptPassword("Passphrase: ")
if err != nil {
utils.Fatalf("Failed to read passphrase: %v", err)
}
if confirmation {
confirm, err := console.Stdin.PromptPassword("Repeat passphrase: ")
if err != nil {
utils.Fatalf("Failed to read passphrase confirmation: %v", err)
}
if password != confirm {
utils.Fatalf("Passphrases do not match")
}
}
return password
}
type encryptedSeedStorage struct {
Description string `json:"description"`
Version int `json:"version"`
Params keystore.CryptoJSON `json:"params"`
}
// encryptSeed uses a similar scheme as the keystore uses, but with a different wrapping,
// to encrypt the master seed
func encryptSeed(seed []byte, auth []byte, scryptN, scryptP int) ([]byte, error) {
cryptoStruct, err := keystore.EncryptDataV3(seed, auth, scryptN, scryptP)
if err != nil {
return nil, err
}
return json.Marshal(&encryptedSeedStorage{"Clef seed", 1, cryptoStruct})
}
// decryptSeed decrypts the master seed
func decryptSeed(keyjson []byte, auth string) ([]byte, error) {
var encSeed encryptedSeedStorage
if err := json.Unmarshal(keyjson, &encSeed); err != nil {
return nil, err
}
if encSeed.Version != 1 {
log.Warn(fmt.Sprintf("unsupported encryption format of seed: %d, operation will likely fail", encSeed.Version))
}
seed, err := keystore.DecryptDataV3(encSeed.Params, auth)
if err != nil {
return nil, err
}
return seed, err
}
/**
//Create Account


@@ -45,14 +45,15 @@ func (l *JSONLogger) CaptureStart(from common.Address, to common.Address, create
// CaptureState outputs state information on the logger.
func (l *JSONLogger) CaptureState(env *vm.EVM, pc uint64, op vm.OpCode, gas, cost uint64, memory *vm.Memory, stack *vm.Stack, contract *vm.Contract, depth int, err error) error {
log := vm.StructLog{
Pc: pc,
Op: op,
Gas: gas,
GasCost: cost,
MemorySize: memory.Len(),
Storage: nil,
Depth: depth,
Err: err,
Pc: pc,
Op: op,
Gas: gas,
GasCost: cost,
MemorySize: memory.Len(),
Storage: nil,
Depth: depth,
RefundCounter: env.StateDB.GetRefund(),
Err: err,
}
if !l.cfg.DisableMemory {
log.Memory = memory.Data()


@@ -21,7 +21,6 @@ import (
"fmt"
"math"
"os"
"runtime"
godebug "runtime/debug"
"sort"
"strconv"
@@ -90,6 +89,7 @@ var (
utils.LightKDFFlag,
utils.CacheFlag,
utils.CacheDatabaseFlag,
utils.CacheTrieFlag,
utils.CacheGCFlag,
utils.TrieCacheGenFlag,
utils.ListenPortFlag,
@@ -209,8 +209,6 @@ func init() {
app.Flags = append(app.Flags, metricsFlags...)
app.Before = func(ctx *cli.Context) error {
runtime.GOMAXPROCS(runtime.NumCPU())
logdir := ""
if ctx.GlobalBool(utils.DashboardEnabledFlag.Name) {
logdir = (&node.Config{DataDir: utils.MakeDataDir(ctx)}).ResolvePath("logs")


@@ -132,6 +132,7 @@ var AppHelpFlagGroups = []flagGroup{
Flags: []cli.Flag{
utils.CacheFlag,
utils.CacheDatabaseFlag,
utils.CacheTrieFlag,
utils.CacheGCFlag,
utils.TrieCacheGenFlag,
},


@@ -29,7 +29,65 @@ import (
"gopkg.in/urfave/cli.v1"
)
var salt = make([]byte, 32)
var (
salt = make([]byte, 32)
accessCommand = cli.Command{
CustomHelpTemplate: helpTemplate,
Name: "access",
Usage: "encrypts a reference and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root manifest",
Subcommands: []cli.Command{
{
CustomHelpTemplate: helpTemplate,
Name: "new",
Usage: "encrypts a reference and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
Subcommands: []cli.Command{
{
Action: accessNewPass,
CustomHelpTemplate: helpTemplate,
Flags: []cli.Flag{
utils.PasswordFileFlag,
SwarmDryRunFlag,
},
Name: "pass",
Usage: "encrypts a reference with a password and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
},
{
Action: accessNewPK,
CustomHelpTemplate: helpTemplate,
Flags: []cli.Flag{
utils.PasswordFileFlag,
SwarmDryRunFlag,
SwarmAccessGrantKeyFlag,
},
Name: "pk",
Usage: "encrypts a reference with the node's private key and a given grantee's public key and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
},
{
Action: accessNewACT,
CustomHelpTemplate: helpTemplate,
Flags: []cli.Flag{
SwarmAccessGrantKeysFlag,
SwarmDryRunFlag,
utils.PasswordFileFlag,
},
Name: "act",
Usage: "encrypts a reference with the node's private key and a given grantee's public key and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
},
},
},
},
}
)
func init() {
if _, err := io.ReadFull(rand.Reader, salt); err != nil {
@@ -56,6 +114,9 @@ func accessNewPass(ctx *cli.Context) {
utils.Fatalf("error getting session key: %v", err)
}
m, err := api.GenerateAccessControlManifest(ctx, ref, accessKey, ae)
if err != nil {
utils.Fatalf("had an error generating the manifest: %v", err)
}
if dryRun {
err = printManifests(m, nil)
if err != nil {
@@ -89,6 +150,9 @@ func accessNewPK(ctx *cli.Context) {
utils.Fatalf("error getting session key: %v", err)
}
m, err := api.GenerateAccessControlManifest(ctx, ref, sessionKey, ae)
if err != nil {
utils.Fatalf("had an error generating the manifest: %v", err)
}
if dryRun {
err = printManifests(m, nil)
if err != nil {

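For orientation, here is a hypothetical way to drive the new access command tree by shelling out to the swarm binary, in the same style the smoke tests use elsewhere in this changeset; the reference hash, password file, and endpoint are placeholders.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ref := "<encrypted-reference-hash>" // placeholder
	out, err := exec.Command("swarm",
		"--bzzapi", "http://127.0.0.1:8500",
		"access", "new", "pass",
		"--password", "password.txt", // file holding the access password
		"--dry-run",                  // print the manifest instead of uploading it
		ref,
	).CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // the resulting root access manifest
}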

@@ -14,8 +14,6 @@
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// +build !windows
package main
import (
@@ -28,6 +26,7 @@ import (
gorand "math/rand"
"net/http"
"os"
"runtime"
"strings"
"testing"
"time"
@@ -37,7 +36,7 @@ import (
"github.com/ethereum/go-ethereum/crypto/sha3"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/swarm/api"
swarm "github.com/ethereum/go-ethereum/swarm/api/client"
swarmapi "github.com/ethereum/go-ethereum/swarm/api/client"
"github.com/ethereum/go-ethereum/swarm/testutil"
)
@@ -48,23 +47,41 @@ const (
var DefaultCurve = crypto.S256()
// TestACT groups the ACT manifest test cases (password, PK, with and without bogus entries) and runs them against a shared cluster.
func TestACT(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
initCluster(t)
cases := []struct {
name string
f func(t *testing.T)
}{
{"Password", testPassword},
{"PK", testPK},
{"ACTWithoutBogus", testACTWithoutBogus},
{"ACTWithBogus", testACTWithBogus},
}
for _, tc := range cases {
t.Run(tc.name, tc.f)
}
}
// testPassword tests for the correct creation of an ACT manifest protected by a password.
// The test creates bogus content, uploads it encrypted, then creates the wrapping manifest with the Access entry.
// The participating parties: a node (the publisher) uploads to a second node, then disappears. The uploaded
// content is then fetched through the second node. Since the tested code is not key-aware, we can simply
// fetch from the second node using HTTP Basic Auth.
func TestAccessPassword(t *testing.T) {
cluster := newTestCluster(t, 1)
defer cluster.Shutdown()
proxyNode := cluster.Nodes[0]
func testPassword(t *testing.T) {
dataFilename := testutil.TempFileWithContent(t, data)
defer os.RemoveAll(dataFilename)
// upload the file with 'swarm up' and expect a hash
up := runSwarm(t,
"--bzzapi",
proxyNode.URL, //it doesn't matter through which node we upload content
cluster.Nodes[0].URL,
"up",
"--encrypt",
dataFilename)
@@ -138,16 +155,17 @@ func TestAccessPassword(t *testing.T) {
if a.Publisher != "" {
t.Fatal("should be empty")
}
client := swarm.NewClient(cluster.Nodes[0].URL)
client := swarmapi.NewClient(cluster.Nodes[0].URL)
hash, err := client.UploadManifest(&m, false)
if err != nil {
t.Fatal(err)
}
httpClient := &http.Client{}
url := cluster.Nodes[0].URL + "/" + "bzz:/" + hash
httpClient := &http.Client{}
response, err := httpClient.Get(url)
if err != nil {
t.Fatal(err)
@@ -189,7 +207,7 @@ func TestAccessPassword(t *testing.T) {
//download file with 'swarm down' with wrong password
up = runSwarm(t,
"--bzzapi",
proxyNode.URL,
cluster.Nodes[0].URL,
"down",
"bzz:/"+hash,
tmp,
@@ -203,16 +221,12 @@ func TestAccessPassword(t *testing.T) {
up.ExpectExit()
}
// TestAccessPK tests for the correct creation of an ACT manifest between two parties (publisher and grantee).
// testPK tests for the correct creation of an ACT manifest between two parties (publisher and grantee).
// The test creates bogus content, uploads it encrypted, then creates the wrapping manifest with the Access entry.
// The participating parties: a node (the publisher) uploads to a second node (which is also the grantee), then disappears.
// The uploaded content is then fetched through the grantee's http proxy. Since the tested code is private-key aware,
// the test will fail if the proxy's given private key is not granted on the ACT.
func TestAccessPK(t *testing.T) {
// Setup Swarm and upload a test file to it
cluster := newTestCluster(t, 2)
defer cluster.Shutdown()
func testPK(t *testing.T) {
dataFilename := testutil.TempFileWithContent(t, data)
defer os.RemoveAll(dataFilename)
@@ -318,7 +332,7 @@ func TestAccessPK(t *testing.T) {
if a.Publisher != pkComp {
t.Fatal("publisher key did not match")
}
client := swarm.NewClient(cluster.Nodes[0].URL)
client := swarmapi.NewClient(cluster.Nodes[0].URL)
hash, err := client.UploadManifest(&m, false)
if err != nil {
@@ -344,29 +358,24 @@ func TestAccessPK(t *testing.T) {
}
}
// TestAccessACT tests the creation of the ACT manifest end-to-end, without any bogus entries (i.e. default scenario = 3 nodes 1 unauthorized)
func TestAccessACT(t *testing.T) {
testAccessACT(t, 0)
// testACTWithoutBogus tests the creation of the ACT manifest end-to-end, without any bogus entries (i.e. default scenario = 3 nodes 1 unauthorized)
func testACTWithoutBogus(t *testing.T) {
testACT(t, 0)
}
// TestAccessACTScale tests the creation of the ACT manifest end-to-end, with 1000 bogus entries (i.e. 1000 EC keys + default scenario = 3 nodes 1 unauthorized = 1003 keys in the ACT manifest)
func TestAccessACTScale(t *testing.T) {
testAccessACT(t, 1000)
// testACTWithBogus tests the creation of the ACT manifest end-to-end, with 100 bogus entries (i.e. 100 EC keys + default scenario = 3 nodes 1 unauthorized = 103 keys in the ACT manifest)
func testACTWithBogus(t *testing.T) {
testACT(t, 100)
}
// TestAccessACT tests the e2e creation, uploading and downloading of an ACT access control with both EC keys AND password protection
// testACT tests the e2e creation, uploading and downloading of an ACT access control with both EC keys AND password protection.
// The test fires up a 3-node cluster, then randomly picks 2 nodes which will act as grantees to the data
// set, and also protects the ACT with a password. The third node should fail to decode the reference, as it is not granted access.
// The third node then tries to download using the correct password (and succeeds), then uses a wrong password and fails.
// The publisher uploads through one of the nodes, then disappears.
func testAccessACT(t *testing.T, bogusEntries int) {
// Setup Swarm and upload a test file to it
const clusterSize = 3
cluster := newTestCluster(t, clusterSize)
defer cluster.Shutdown()
func testACT(t *testing.T, bogusEntries int) {
var uploadThroughNode = cluster.Nodes[0]
client := swarm.NewClient(uploadThroughNode.URL)
client := swarmapi.NewClient(uploadThroughNode.URL)
r1 := gorand.New(gorand.NewSource(time.Now().UnixNano()))
nodeToSkip := r1.Intn(clusterSize) // a number between 0 and 2 (node indices in `cluster`)

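As the testPassword comment above notes, the server side is not key-aware for password-protected references, so content can be fetched with plain HTTP Basic Auth. A minimal sketch; the empty username is an assumption about the proxy's auth scheme, mirroring the password-only flow the tests describe.

package main

import "net/http"

// fetchProtected retrieves password-protected content through a node's
// HTTP proxy using Basic Auth.
func fetchProtected(url, password string) (*http.Response, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.SetBasicAuth("", password) // username left empty (assumption)
	return http.DefaultClient.Do(req)
}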

@@ -80,6 +80,7 @@ const (
SWARM_ENV_STORE_CAPACITY = "SWARM_STORE_CAPACITY"
SWARM_ENV_STORE_CACHE_CAPACITY = "SWARM_STORE_CACHE_CAPACITY"
SWARM_ACCESS_PASSWORD = "SWARM_ACCESS_PASSWORD"
SWARM_AUTO_DEFAULTPATH = "SWARM_AUTO_DEFAULTPATH"
GETH_ENV_DATADIR = "GETH_DATADIR"
)


@@ -26,14 +26,14 @@ import (
"testing"
"time"
"github.com/docker/docker/pkg/reexec"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/swarm"
"github.com/ethereum/go-ethereum/swarm/api"
"github.com/docker/docker/pkg/reexec"
)
func TestDumpConfig(t *testing.T) {
func TestConfigDump(t *testing.T) {
swarm := runSwarm(t, "dumpconfig")
defaultConf := api.NewConfig()
out, err := tomlSettings.Marshal(&defaultConf)
@@ -91,8 +91,8 @@ func TestConfigCmdLineOverrides(t *testing.T) {
fmt.Sprintf("--%s", SwarmAccountFlag.Name), account.Address.String(),
fmt.Sprintf("--%s", SwarmDeliverySkipCheckFlag.Name),
fmt.Sprintf("--%s", EnsAPIFlag.Name), "",
"--datadir", dir,
"--ipcpath", conf.IPCPath,
fmt.Sprintf("--%s", utils.DataDirFlag.Name), dir,
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), conf.IPCPath,
}
node.Cmd = runSwarm(t, flags...)
node.Cmd.InputLine(testPassphrase)
@@ -189,9 +189,9 @@ func TestConfigFileOverrides(t *testing.T) {
flags := []string{
fmt.Sprintf("--%s", SwarmTomlConfigPathFlag.Name), f.Name(),
fmt.Sprintf("--%s", SwarmAccountFlag.Name), account.Address.String(),
"--ens-api", "",
"--ipcpath", conf.IPCPath,
"--datadir", dir,
fmt.Sprintf("--%s", EnsAPIFlag.Name), "",
fmt.Sprintf("--%s", utils.DataDirFlag.Name), dir,
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), conf.IPCPath,
}
node.Cmd = runSwarm(t, flags...)
node.Cmd.InputLine(testPassphrase)
@@ -407,9 +407,9 @@ func TestConfigCmdLineOverridesFile(t *testing.T) {
fmt.Sprintf("--%s", SwarmSyncDisabledFlag.Name),
fmt.Sprintf("--%s", SwarmTomlConfigPathFlag.Name), f.Name(),
fmt.Sprintf("--%s", SwarmAccountFlag.Name), account.Address.String(),
"--ens-api", "",
"--datadir", dir,
"--ipcpath", conf.IPCPath,
fmt.Sprintf("--%s", EnsAPIFlag.Name), "",
fmt.Sprintf("--%s", utils.DataDirFlag.Name), dir,
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), conf.IPCPath,
}
node.Cmd = runSwarm(t, flags...)
node.Cmd.InputLine(testPassphrase)
@@ -466,7 +466,7 @@ func TestConfigCmdLineOverridesFile(t *testing.T) {
node.Shutdown()
}
func TestValidateConfig(t *testing.T) {
func TestConfigValidate(t *testing.T) {
for _, c := range []struct {
cfg *api.Config
err string


@@ -29,6 +29,48 @@ import (
"gopkg.in/urfave/cli.v1"
)
var dbCommand = cli.Command{
Name: "db",
CustomHelpTemplate: helpTemplate,
Usage: "manage the local chunk database",
ArgsUsage: "db COMMAND",
Description: "Manage the local chunk database",
Subcommands: []cli.Command{
{
Action: dbExport,
CustomHelpTemplate: helpTemplate,
Name: "export",
Usage: "export a local chunk database as a tar archive (use - to send to stdout)",
ArgsUsage: "<chunkdb> <file>",
Description: `
Export a local chunk database as a tar archive (use - to send to stdout).
swarm db export ~/.ethereum/swarm/bzz-KEY/chunks chunks.tar
The export may be quite large; consider piping the output through the Unix
pv(1) tool to get a progress bar:
swarm db export ~/.ethereum/swarm/bzz-KEY/chunks - | pv > chunks.tar
`,
},
{
Action: dbImport,
CustomHelpTemplate: helpTemplate,
Name: "import",
Usage: "import chunks from a tar archive into a local chunk database (use - to read from stdin)",
ArgsUsage: "<chunkdb> <file>",
Description: `Import chunks from a tar archive into a local chunk database (use - to read from stdin).
swarm db import ~/.ethereum/swarm/bzz-KEY/chunks chunks.tar
The import may be quite large; consider piping the input through the Unix
pv(1) tool to get a progress bar:
pv chunks.tar | swarm db import ~/.ethereum/swarm/bzz-KEY/chunks -`,
},
},
}
func dbExport(ctx *cli.Context) {
args := ctx.Args()
if len(args) != 3 {

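Since both subcommands accept "-" for stdout/stdin, an export can be streamed straight into an import without an intermediate tar file. A sketch under one assumption: the len(args) != 3 check visible in dbExport above implies a third, undocumented argument, taken here to be the node's base key for both directions.

package main

import "os/exec"

// migrateChunks pipes "swarm db export" into "swarm db import".
func migrateChunks(src, dst, basekey string) error {
	exp := exec.Command("swarm", "db", "export", src, "-", basekey) // "-" = stdout
	imp := exec.Command("swarm", "db", "import", dst, "-", basekey) // "-" = stdin

	pipe, err := exp.StdoutPipe()
	if err != nil {
		return err
	}
	imp.Stdin = pipe

	if err := exp.Start(); err != nil {
		return err
	}
	if err := imp.Start(); err != nil {
		return err
	}
	if err := exp.Wait(); err != nil {
		return err
	}
	return imp.Wait()
}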

@@ -28,6 +28,15 @@ import (
"gopkg.in/urfave/cli.v1"
)
var downloadCommand = cli.Command{
Action: download,
Name: "down",
Flags: []cli.Flag{SwarmRecursiveFlag, SwarmAccessPasswordFlag},
Usage: "downloads a swarm manifest or a file inside a manifest",
ArgsUsage: " <uri> [<dir>]",
Description: `Downloads a swarm bzz uri to the given dir. When no dir is provided, the working directory is assumed. The --recursive flag is expected when downloading a manifest with multiple entries.`,
}
func download(ctx *cli.Context) {
log.Debug("downloading content using swarm down")
args := ctx.Args()


@@ -19,9 +19,7 @@ package main
import (
"bytes"
"crypto/md5"
"crypto/rand"
"io"
"io/ioutil"
"net/http"
"os"
"runtime"
@@ -29,6 +27,7 @@ import (
"testing"
"github.com/ethereum/go-ethereum/swarm"
"github.com/ethereum/go-ethereum/swarm/testutil"
)
// TestCLISwarmExportImport perform the following test:
@@ -44,12 +43,13 @@ func TestCLISwarmExportImport(t *testing.T) {
}
cluster := newTestCluster(t, 1)
// generate random 10mb file
f, cleanup := generateRandomFile(t, 10000000)
defer cleanup()
// generate random 1mb file
content := testutil.RandomBytes(1, 1000000)
fileName := testutil.TempFileWithContent(t, string(content))
defer os.Remove(fileName)
// upload the file with 'swarm up' and expect a hash
up := runSwarm(t, "--bzzapi", cluster.Nodes[0].URL, "up", f.Name())
up := runSwarm(t, "--bzzapi", cluster.Nodes[0].URL, "up", fileName)
_, matches := up.ExpectRegexp(`[a-f\d]{64}`)
up.ExpectExit()
hash := matches[0]
@@ -96,7 +96,7 @@ func TestCLISwarmExportImport(t *testing.T) {
}
// compare downloaded file with the generated random file
mustEqualFiles(t, f, res.Body)
mustEqualFiles(t, bytes.NewReader(content), res.Body)
}
func mustEqualFiles(t *testing.T, up io.Reader, down io.Reader) {
@@ -117,27 +117,3 @@ func mustEqualFiles(t *testing.T, up io.Reader, down io.Reader) {
t.Fatalf("downloaded imported file md5=%x (length %v) is not the same as the generated one md5=%x (length %v)", downHash, downLen, upHash, upLen)
}
}
func generateRandomFile(t *testing.T, size int) (f *os.File, teardown func()) {
// create a tmp file
tmp, err := ioutil.TempFile("", "swarm-test")
if err != nil {
t.Fatal(err)
}
// callback for tmp file cleanup
teardown = func() {
tmp.Close()
os.Remove(tmp.Name())
}
// write 10mb random data to file
buf := make([]byte, 10000000)
_, err = rand.Read(buf)
if err != nil {
t.Fatal(err)
}
ioutil.WriteFile(tmp.Name(), buf, 0755)
return tmp, teardown
}


@@ -31,6 +31,68 @@ import (
"gopkg.in/urfave/cli.v1"
)
var feedCommand = cli.Command{
CustomHelpTemplate: helpTemplate,
Name: "feed",
Usage: "(Advanced) Create and update Swarm Feeds",
ArgsUsage: "<create|update|info>",
Description: "Works with Swarm Feeds",
Subcommands: []cli.Command{
{
Action: feedCreateManifest,
CustomHelpTemplate: helpTemplate,
Name: "create",
Usage: "creates and publishes a new feed manifest",
Description: `creates and publishes a new feed manifest pointing to a specified user's updates about a particular topic.
The feed topic can be built in the following ways:
* use --topic to set the topic to an arbitrary binary hex string.
* use --name to set the topic to a human-readable name.
For example, --name could be set to "profile-picture", meaning this feed can be used to get this user's current profile picture.
* use both --topic and --name to create named subtopics.
For example, --topic could be set to an Ethereum contract address and --name could be set to "comments", meaning
this feed tracks a discussion about that contract.
The --user flag allows this manifest to refer to a user other than yourself. If not specified,
it defaults to your local account (--bzzaccount)`,
Flags: []cli.Flag{SwarmFeedNameFlag, SwarmFeedTopicFlag, SwarmFeedUserFlag},
},
{
Action: feedUpdate,
CustomHelpTemplate: helpTemplate,
Name: "update",
Usage: "updates the content of an existing Swarm Feed",
ArgsUsage: "<0x Hex data>",
Description: `publishes a new update on the specified topic
The feed topic can be built in the following ways:
* use --topic to set the topic to an arbitrary binary hex string.
* use --name to set the topic to a human-readable name.
For example, --name could be set to "profile-picture", meaning this feed can be used to get this user's current profile picture.
* use both --topic and --name to create named subtopics.
For example, --topic could be set to an Ethereum contract address and --name could be set to "comments", meaning
this feed tracks a discussion about that contract.
If you have a manifest, you can specify it with --manifest to refer to the feed,
instead of using --topic / --name
`,
Flags: []cli.Flag{SwarmFeedManifestFlag, SwarmFeedNameFlag, SwarmFeedTopicFlag},
},
{
Action: feedInfo,
CustomHelpTemplate: helpTemplate,
Name: "info",
Usage: "obtains information about an existing Swarm feed",
Description: `obtains information about an existing Swarm feed
The topic can be specified directly with the --topic flag as a hex string.
If no topic is specified, the default topic (zero) will be used.
The --name flag can be used to specify subtopics with a specific name.
The --user flag allows referring to a user other than yourself. If not specified,
it defaults to your local account (--bzzaccount).
If you have a manifest, you can specify it with --manifest instead of --topic / --name / --user
to refer to the feed`,
Flags: []cli.Flag{SwarmFeedManifestFlag, SwarmFeedNameFlag, SwarmFeedTopicFlag, SwarmFeedUserFlag},
},
},
}
func NewGenericSigner(ctx *cli.Context) feed.Signer {
return feed.NewGenericSigner(getPrivKey(ctx))
}
@@ -107,7 +169,6 @@ func feedUpdate(ctx *cli.Context) {
query = new(feed.Query)
query.User = signer.Address()
query.Topic = getTopic(ctx)
}
// Retrieve a feed update request
@@ -116,6 +177,11 @@ func feedUpdate(ctx *cli.Context) {
utils.Fatalf("Error retrieving feed status: %s", err.Error())
}
// Check that the provided signer matches the request to sign
if updateRequest.User != signer.Address() {
utils.Fatalf("Signer address does not match the update request")
}
// set the new data
updateRequest.SetData(data)

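The create/update descriptions above compose a feed topic from --topic and --name. Below is a sketch of resolving a named subtopic programmatically with feed.NewTopic, the same helper the feed smoke test at the end of this changeset uses; that the name is merged over the related-topic bytes is an assumption about the helper's semantics.

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/swarm/storage/feed"
)

func main() {
	// Placeholder "related content", e.g. a zero-padded contract address.
	raw := make([]byte, feed.TopicLength)
	merged, err := feed.NewTopic("comments", raw) // named subtopic of raw
	if err != nil {
		panic(err)
	}
	fmt.Println("merged topic:", hexutil.Encode(merged[:]))
}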

@@ -19,50 +19,35 @@ package main
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"testing"
"github.com/ethereum/go-ethereum/swarm/api"
"github.com/ethereum/go-ethereum/swarm/storage/feed/lookup"
"github.com/ethereum/go-ethereum/swarm/testutil"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/swarm/storage/feed"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/swarm/api"
swarm "github.com/ethereum/go-ethereum/swarm/api/client"
swarmhttp "github.com/ethereum/go-ethereum/swarm/api/http"
"github.com/ethereum/go-ethereum/swarm/storage/feed"
"github.com/ethereum/go-ethereum/swarm/storage/feed/lookup"
"github.com/ethereum/go-ethereum/swarm/testutil"
)
func TestCLIFeedUpdate(t *testing.T) {
srv := testutil.NewTestSwarmServer(t, func(api *api.API) testutil.TestServer {
srv := swarmhttp.NewTestSwarmServer(t, func(api *api.API) swarmhttp.TestServer {
return swarmhttp.NewServer(api, "")
}, nil)
log.Info("starting a test swarm server")
defer srv.Close()
// create a private key file for signing
pkfile, err := ioutil.TempFile("", "swarm-test")
if err != nil {
t.Fatal(err)
}
defer pkfile.Close()
defer os.Remove(pkfile.Name())
privkeyHex := "0000000000000000000000000000000000000000000000000000000000001979"
privKey, _ := crypto.HexToECDSA(privkeyHex)
address := crypto.PubkeyToAddress(privKey.PublicKey)
// save the private key to a file
_, err = io.WriteString(pkfile, privkeyHex)
if err != nil {
t.Fatal(err)
}
pkFileName := testutil.TempFileWithContent(t, privkeyHex)
defer os.Remove(pkFileName)
// compose a topic. We'll be doing quotes about Miguel de Cervantes
var topic feed.Topic
@@ -76,26 +61,23 @@ func TestCLIFeedUpdate(t *testing.T) {
flags := []string{
"--bzzapi", srv.URL,
"--bzzaccount", pkfile.Name(),
"--bzzaccount", pkFileName,
"feed", "update",
"--topic", topic.Hex(),
"--name", name,
hexData}
// create an update and expect an exit without errors
log.Info(fmt.Sprintf("updating a feed with 'swarm feed update'"))
log.Info("updating a feed with 'swarm feed update'")
cmd := runSwarm(t, flags...)
cmd.ExpectExit()
// now try to get the update using the client
client := swarm.NewClient(srv.URL)
if err != nil {
t.Fatal(err)
}
// build the same topic as before, this time
// we use NewTopic to create a topic automatically.
topic, err = feed.NewTopic(name, subject)
topic, err := feed.NewTopic(name, subject)
if err != nil {
t.Fatal(err)
}
@@ -133,7 +115,7 @@ func TestCLIFeedUpdate(t *testing.T) {
"--user", address.Hex(),
}
log.Info(fmt.Sprintf("getting feed info with 'swarm feed info'"))
log.Info("getting feed info with 'swarm feed info'")
cmd = runSwarm(t, flags...)
_, matches := cmd.ExpectRegexp(`.*`) // regex hack to extract stdout
cmd.ExpectExit()
@@ -153,14 +135,14 @@ func TestCLIFeedUpdate(t *testing.T) {
// test publishing a manifest
flags = []string{
"--bzzapi", srv.URL,
"--bzzaccount", pkfile.Name(),
"--bzzaccount", pkFileName,
"feed", "create",
"--topic", topic.Hex(),
}
log.Info(fmt.Sprintf("Publishing manifest with 'swarm feed create'"))
log.Info("Publishing manifest with 'swarm feed create'")
cmd = runSwarm(t, flags...)
_, matches = cmd.ExpectRegexp(`[a-f\d]{64}`) // regex hack to extract stdout
_, matches = cmd.ExpectRegexp(`[a-f\d]{64}`)
cmd.ExpectExit()
manifestAddress := matches[0] // read the received feed manifest
@@ -179,4 +161,36 @@ func TestCLIFeedUpdate(t *testing.T) {
if !bytes.Equal(data, retrieved) {
t.Fatalf("Received %s, expected %s", retrieved, data)
}
// test publishing a manifest for a different user
flags = []string{
"--bzzapi", srv.URL,
"feed", "create",
"--topic", topic.Hex(),
"--user", "0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", // different user
}
log.Info("Publishing manifest with 'swarm feed create' for a different user")
cmd = runSwarm(t, flags...)
_, matches = cmd.ExpectRegexp(`[a-f\d]{64}`)
cmd.ExpectExit()
manifestAddress = matches[0] // read the received feed manifest
// now let's try to update that user's manifest which we don't have the private key for
flags = []string{
"--bzzapi", srv.URL,
"--bzzaccount", pkFileName,
"feed", "update",
"--manifest", manifestAddress,
hexData}
// create an update and expect an error given there is a user mismatch
log.Info("updating a feed with 'swarm feed update'")
cmd = runSwarm(t, flags...)
cmd.ExpectRegexp("Fatal:.*") // best way so far to detect a failure.
cmd.ExpectExit()
if cmd.ExitStatus() == 0 {
t.Fatal("Expected nonzero exit code when updating a manifest with the wrong user. Got 0.")
}
}

cmd/swarm/flags.go

@@ -0,0 +1,179 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// This file defines the command-line flags used by the swarm command
package main
import cli "gopkg.in/urfave/cli.v1"
var (
ChequebookAddrFlag = cli.StringFlag{
Name: "chequebook",
Usage: "chequebook contract address",
EnvVar: SWARM_ENV_CHEQUEBOOK_ADDR,
}
SwarmAccountFlag = cli.StringFlag{
Name: "bzzaccount",
Usage: "Swarm account key file",
EnvVar: SWARM_ENV_ACCOUNT,
}
SwarmListenAddrFlag = cli.StringFlag{
Name: "httpaddr",
Usage: "Swarm HTTP API listening interface",
EnvVar: SWARM_ENV_LISTEN_ADDR,
}
SwarmPortFlag = cli.StringFlag{
Name: "bzzport",
Usage: "Swarm local http api port",
EnvVar: SWARM_ENV_PORT,
}
SwarmNetworkIdFlag = cli.IntFlag{
Name: "bzznetworkid",
Usage: "Network identifier (integer, default 3=swarm testnet)",
EnvVar: SWARM_ENV_NETWORK_ID,
}
SwarmSwapEnabledFlag = cli.BoolFlag{
Name: "swap",
Usage: "Swarm SWAP enabled (default false)",
EnvVar: SWARM_ENV_SWAP_ENABLE,
}
SwarmSwapAPIFlag = cli.StringFlag{
Name: "swap-api",
Usage: "URL of the Ethereum API provider to use to settle SWAP payments",
EnvVar: SWARM_ENV_SWAP_API,
}
SwarmSyncDisabledFlag = cli.BoolTFlag{
Name: "nosync",
Usage: "Disable swarm syncing",
EnvVar: SWARM_ENV_SYNC_DISABLE,
}
SwarmSyncUpdateDelay = cli.DurationFlag{
Name: "sync-update-delay",
Usage: "Duration for sync subscriptions update after no new peers are added (default 15s)",
EnvVar: SWARM_ENV_SYNC_UPDATE_DELAY,
}
SwarmMaxStreamPeerServersFlag = cli.IntFlag{
Name: "max-stream-peer-servers",
Usage: "Limit of Stream peer servers, 0 denotes unlimited",
EnvVar: SWARM_ENV_MAX_STREAM_PEER_SERVERS,
Value: 10000, // A very large default value is possible as stream servers have very small memory footprint
}
SwarmLightNodeEnabled = cli.BoolFlag{
Name: "lightnode",
Usage: "Enable Swarm LightNode (default false)",
EnvVar: SWARM_ENV_LIGHT_NODE_ENABLE,
}
SwarmDeliverySkipCheckFlag = cli.BoolFlag{
Name: "delivery-skip-check",
Usage: "Skip chunk delivery check (default false)",
EnvVar: SWARM_ENV_DELIVERY_SKIP_CHECK,
}
EnsAPIFlag = cli.StringSliceFlag{
Name: "ens-api",
Usage: "ENS API endpoint for a TLD and with contract address, can be repeated, format [tld:][contract-addr@]url",
EnvVar: SWARM_ENV_ENS_API,
}
SwarmApiFlag = cli.StringFlag{
Name: "bzzapi",
Usage: "Specifies the Swarm HTTP endpoint to connect to",
Value: "http://127.0.0.1:8500",
}
SwarmRecursiveFlag = cli.BoolFlag{
Name: "recursive",
Usage: "Upload directories recursively",
}
SwarmWantManifestFlag = cli.BoolTFlag{
Name: "manifest",
Usage: "Automatic manifest upload (default true)",
}
SwarmUploadDefaultPath = cli.StringFlag{
Name: "defaultpath",
Usage: "path to file served for empty url path (none)",
}
SwarmAccessGrantKeyFlag = cli.StringFlag{
Name: "grant-key",
Usage: "grants a given public key access to an ACT",
}
SwarmAccessGrantKeysFlag = cli.StringFlag{
Name: "grant-keys",
Usage: "grants a given list of public keys in the following file (separated by line breaks) access to an ACT",
}
SwarmUpFromStdinFlag = cli.BoolFlag{
Name: "stdin",
Usage: "reads data to be uploaded from stdin",
}
SwarmUploadMimeType = cli.StringFlag{
Name: "mime",
Usage: "Manually specify MIME type",
}
SwarmEncryptedFlag = cli.BoolFlag{
Name: "encrypt",
Usage: "use encrypted upload",
}
SwarmAccessPasswordFlag = cli.StringFlag{
Name: "password",
Usage: "Password",
EnvVar: SWARM_ACCESS_PASSWORD,
}
SwarmDryRunFlag = cli.BoolFlag{
Name: "dry-run",
Usage: "dry-run",
}
CorsStringFlag = cli.StringFlag{
Name: "corsdomain",
Usage: "Domain on which to send Access-Control-Allow-Origin header (multiple domains can be supplied separated by a ',')",
EnvVar: SWARM_ENV_CORS,
}
SwarmStorePath = cli.StringFlag{
Name: "store.path",
Usage: "Path to leveldb chunk DB (default <$GETH_ENV_DIR>/swarm/bzz-<$BZZ_KEY>/chunks)",
EnvVar: SWARM_ENV_STORE_PATH,
}
SwarmStoreCapacity = cli.Uint64Flag{
Name: "store.size",
Usage: "Number of chunks (5M is roughly 20-25GB) (default 5000000)",
EnvVar: SWARM_ENV_STORE_CAPACITY,
}
SwarmStoreCacheCapacity = cli.UintFlag{
Name: "store.cache.size",
Usage: "Number of recent chunks cached in memory (default 5000)",
EnvVar: SWARM_ENV_STORE_CACHE_CAPACITY,
}
SwarmCompressedFlag = cli.BoolFlag{
Name: "compressed",
Usage: "Prints encryption keys in compressed form",
}
SwarmFeedNameFlag = cli.StringFlag{
Name: "name",
Usage: "User-defined name for the new feed, limited to 32 characters. If combined with topic, it will refer to a subtopic with this name",
}
SwarmFeedTopicFlag = cli.StringFlag{
Name: "topic",
Usage: "User-defined topic this feed is tracking, hex encoded. Limited to 64 hexadecimal characters",
}
SwarmFeedDataOnCreateFlag = cli.StringFlag{
Name: "data",
Usage: "Initializes the feed with the given hex-encoded data. Data must be prefixed by 0x",
}
SwarmFeedManifestFlag = cli.StringFlag{
Name: "manifest",
Usage: "Refers to the feed through a manifest",
}
SwarmFeedUserFlag = cli.StringFlag{
Name: "user",
Usage: "Indicates the user who updates the feed",
}
)

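Each flag above that declares an EnvVar can be configured through the environment instead of the command line. A small sketch using SWARM_ACCESS_PASSWORD to supply the ACT password to a download; the hash, password, and destination are placeholders.

package main

import (
	"os"
	"os/exec"
)

// downloadProtected runs "swarm down" with the access password passed via
// the SWARM_ACCESS_PASSWORD environment variable declared on
// SwarmAccessPasswordFlag.
func downloadProtected(hash, password, dest string) error {
	cmd := exec.Command("swarm", "down", "bzz:/"+hash, dest)
	cmd.Env = append(os.Environ(), "SWARM_ACCESS_PASSWORD="+password)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}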

@@ -24,16 +24,50 @@ import (
"time"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/swarm/fuse"
"gopkg.in/urfave/cli.v1"
)
var fsCommand = cli.Command{
Name: "fs",
CustomHelpTemplate: helpTemplate,
Usage: "perform FUSE operations",
ArgsUsage: "fs COMMAND",
Description: "Performs FUSE operations by mounting/unmounting/listing mount points. This assumes you already have a Swarm node running locally. For all operation you must reference the correct path to bzzd.ipc in order to communicate with the node",
Subcommands: []cli.Command{
{
Action: mount,
CustomHelpTemplate: helpTemplate,
Name: "mount",
Usage: "mount a swarm hash to a mount point",
ArgsUsage: "swarm fs mount <manifest hash> <mount point>",
Description: "Mounts a Swarm manifest hash to a given mount point. This assumes you already have a Swarm node running locally. You must reference the correct path to your bzzd.ipc file",
},
{
Action: unmount,
CustomHelpTemplate: helpTemplate,
Name: "unmount",
Usage: "unmount a swarmfs mount",
ArgsUsage: "swarm fs unmount <mount point>",
Description: "Unmounts a swarmfs mount residing at <mount point>. This assumes you already have a Swarm node running locally. You must reference the correct path to your bzzd.ipc file",
},
{
Action: listMounts,
CustomHelpTemplate: helpTemplate,
Name: "list",
Usage: "list swarmfs mounts",
ArgsUsage: "swarm fs list",
Description: "Lists all mounted swarmfs volumes. This assumes you already have a Swarm node running locally. You must reference the correct path to your bzzd.ipc file",
},
},
}
func mount(cliContext *cli.Context) {
args := cliContext.Args()
if len(args) < 2 {
utils.Fatalf("Usage: swarm fs mount --ipcpath <path to bzzd.ipc> <manifestHash> <file name>")
utils.Fatalf("Usage: swarm fs mount <manifestHash> <file name>")
}
client, err := dialRPC(cliContext)
@@ -60,7 +94,7 @@ func unmount(cliContext *cli.Context) {
args := cliContext.Args()
if len(args) < 1 {
utils.Fatalf("Usage: swarm fs unmount --ipcpath <path to bzzd.ipc> <mount path>")
utils.Fatalf("Usage: swarm fs unmount <mount path>")
}
client, err := dialRPC(cliContext)
if err != nil {
@@ -108,20 +142,21 @@ func listMounts(cliContext *cli.Context) {
}
func dialRPC(ctx *cli.Context) (*rpc.Client, error) {
var endpoint string
endpoint := getIPCEndpoint(ctx)
log.Info("IPC endpoint", "path", endpoint)
return rpc.Dial(endpoint)
}
if ctx.IsSet(utils.IPCPathFlag.Name) {
endpoint = ctx.String(utils.IPCPathFlag.Name)
} else {
utils.Fatalf("swarm ipc endpoint not specified")
}
func getIPCEndpoint(ctx *cli.Context) string {
cfg := defaultNodeConfig
utils.SetNodeConfig(ctx, &cfg)
if endpoint == "" {
endpoint = node.DefaultIPCEndpoint(clientIdentifier)
} else if strings.HasPrefix(endpoint, "rpc:") || strings.HasPrefix(endpoint, "ipc:") {
endpoint := cfg.IPCEndpoint()
if strings.HasPrefix(endpoint, "rpc:") || strings.HasPrefix(endpoint, "ipc:") {
// Backwards compatibility with geth < 1.5 which required
// these prefixes.
endpoint = endpoint[4:]
}
return rpc.Dial(endpoint)
return endpoint
}

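With --ipcpath now resolved as a global flag through getIPCEndpoint, it precedes the fs subcommand on the command line, which is exactly how the updated fs tests below invoke it. A sketch with placeholder paths:

package main

import "os/exec"

// mountManifest mounts a manifest hash at mountPoint; note the global
// --ipcpath comes before the "fs mount" subcommand.
func mountManifest(ipcPath, manifestHash, mountPoint string) error {
	return exec.Command("swarm",
		"--ipcpath", ipcPath,
		"fs", "mount",
		manifestHash, mountPoint,
	).Run()
}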

@@ -20,6 +20,7 @@ package main
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
@@ -28,20 +29,35 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/log"
colorable "github.com/mattn/go-colorable"
)
func init() {
log.PrintOrigins(true)
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(*loglevel), log.StreamHandler(colorable.NewColorableStderr(), log.TerminalFormat(true))))
}
type testFile struct {
filePath string
content string
}
// TestCLISwarmFsDefaultIPCPath tests if the most basic fs command, i.e., list
// can find and correctly connect to a running Swarm node on the default
// IPCPath.
func TestCLISwarmFsDefaultIPCPath(t *testing.T) {
cluster := newTestCluster(t, 1)
defer cluster.Shutdown()
handlingNode := cluster.Nodes[0]
list := runSwarm(t, []string{
"--datadir", handlingNode.Dir,
"fs",
"list",
}...)
list.WaitExit()
if list.Err != nil {
t.Fatal(list.Err)
}
}
// TestCLISwarmFs is a high-level test of swarmfs
//
// This test fails on travis for macOS as this executable exits with code 1
@@ -65,9 +81,9 @@ func TestCLISwarmFs(t *testing.T) {
log.Debug("swarmfs cli test: mounting first run", "ipc path", filepath.Join(handlingNode.Dir, handlingNode.IpcPath))
mount := runSwarm(t, []string{
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
"fs",
"mount",
"--ipcpath", filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
mhash,
mountPoint,
}...)
@@ -80,6 +96,9 @@ func TestCLISwarmFs(t *testing.T) {
t.Fatal(err)
}
dirPath2, err := createDirInDir(dirPath, "AnotherTestSubDir")
if err != nil {
t.Fatal(err)
}
dummyContent := "somerandomtestcontentthatshouldbeasserted"
dirs := []string{
@@ -104,9 +123,9 @@ func TestCLISwarmFs(t *testing.T) {
log.Debug("swarmfs cli test: unmounting first run...", "ipc path", filepath.Join(handlingNode.Dir, handlingNode.IpcPath))
unmount := runSwarm(t, []string{
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
"fs",
"unmount",
"--ipcpath", filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
mountPoint,
}...)
_, matches := unmount.ExpectRegexp(hashRegexp)
@@ -139,9 +158,9 @@ func TestCLISwarmFs(t *testing.T) {
//remount, check files
newMount := runSwarm(t, []string{
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
"fs",
"mount",
"--ipcpath", filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
hash, // the latest hash
secondMountPoint,
}...)
@@ -175,9 +194,9 @@ func TestCLISwarmFs(t *testing.T) {
log.Debug("swarmfs cli test: unmounting second run", "ipc path", filepath.Join(handlingNode.Dir, handlingNode.IpcPath))
unmountSec := runSwarm(t, []string{
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
"fs",
"unmount",
"--ipcpath", filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
secondMountPoint,
}...)


@@ -27,6 +27,15 @@ import (
"gopkg.in/urfave/cli.v1"
)
var hashCommand = cli.Command{
Action: hash,
CustomHelpTemplate: helpTemplate,
Name: "hash",
Usage: "print the swarm hash of a file or directory",
ArgsUsage: "<file>",
Description: "Prints the swarm hash of file or directory",
}
func hash(ctx *cli.Context) {
args := ctx.Args()
if len(args) < 1 {


@@ -27,6 +27,15 @@ import (
"gopkg.in/urfave/cli.v1"
)
var listCommand = cli.Command{
Action: list,
CustomHelpTemplate: helpTemplate,
Name: "ls",
Usage: "list files and directories contained in a manifest",
ArgsUsage: "<manifest> [<prefix>]",
Description: "Lists files and directories contained in a manifest",
}
func list(ctx *cli.Context) {
args := ctx.Args()


@@ -70,165 +70,6 @@ var (
gitCommit string // Git SHA1 commit hash of the release (set via linker flags)
)
var (
ChequebookAddrFlag = cli.StringFlag{
Name: "chequebook",
Usage: "chequebook contract address",
EnvVar: SWARM_ENV_CHEQUEBOOK_ADDR,
}
SwarmAccountFlag = cli.StringFlag{
Name: "bzzaccount",
Usage: "Swarm account key file",
EnvVar: SWARM_ENV_ACCOUNT,
}
SwarmListenAddrFlag = cli.StringFlag{
Name: "httpaddr",
Usage: "Swarm HTTP API listening interface",
EnvVar: SWARM_ENV_LISTEN_ADDR,
}
SwarmPortFlag = cli.StringFlag{
Name: "bzzport",
Usage: "Swarm local http api port",
EnvVar: SWARM_ENV_PORT,
}
SwarmNetworkIdFlag = cli.IntFlag{
Name: "bzznetworkid",
Usage: "Network identifier (integer, default 3=swarm testnet)",
EnvVar: SWARM_ENV_NETWORK_ID,
}
SwarmSwapEnabledFlag = cli.BoolFlag{
Name: "swap",
Usage: "Swarm SWAP enabled (default false)",
EnvVar: SWARM_ENV_SWAP_ENABLE,
}
SwarmSwapAPIFlag = cli.StringFlag{
Name: "swap-api",
Usage: "URL of the Ethereum API provider to use to settle SWAP payments",
EnvVar: SWARM_ENV_SWAP_API,
}
SwarmSyncDisabledFlag = cli.BoolTFlag{
Name: "nosync",
Usage: "Disable swarm syncing",
EnvVar: SWARM_ENV_SYNC_DISABLE,
}
SwarmSyncUpdateDelay = cli.DurationFlag{
Name: "sync-update-delay",
Usage: "Duration for sync subscriptions update after no new peers are added (default 15s)",
EnvVar: SWARM_ENV_SYNC_UPDATE_DELAY,
}
SwarmMaxStreamPeerServersFlag = cli.IntFlag{
Name: "max-stream-peer-servers",
Usage: "Limit of Stream peer servers, 0 denotes unlimited",
EnvVar: SWARM_ENV_MAX_STREAM_PEER_SERVERS,
Value: 10000, // A very large default value is possible as stream servers have very small memory footprint
}
SwarmLightNodeEnabled = cli.BoolFlag{
Name: "lightnode",
Usage: "Enable Swarm LightNode (default false)",
EnvVar: SWARM_ENV_LIGHT_NODE_ENABLE,
}
SwarmDeliverySkipCheckFlag = cli.BoolFlag{
Name: "delivery-skip-check",
Usage: "Skip chunk delivery check (default false)",
EnvVar: SWARM_ENV_DELIVERY_SKIP_CHECK,
}
EnsAPIFlag = cli.StringSliceFlag{
Name: "ens-api",
Usage: "ENS API endpoint for a TLD and with contract address, can be repeated, format [tld:][contract-addr@]url",
EnvVar: SWARM_ENV_ENS_API,
}
SwarmApiFlag = cli.StringFlag{
Name: "bzzapi",
Usage: "Swarm HTTP endpoint",
Value: "http://127.0.0.1:8500",
}
SwarmRecursiveFlag = cli.BoolFlag{
Name: "recursive",
Usage: "Upload directories recursively",
}
SwarmWantManifestFlag = cli.BoolTFlag{
Name: "manifest",
Usage: "Automatic manifest upload (default true)",
}
SwarmUploadDefaultPath = cli.StringFlag{
Name: "defaultpath",
Usage: "path to file served for empty url path (none)",
}
SwarmAccessGrantKeyFlag = cli.StringFlag{
Name: "grant-key",
Usage: "grants a given public key access to an ACT",
}
SwarmAccessGrantKeysFlag = cli.StringFlag{
Name: "grant-keys",
Usage: "grants a given list of public keys in the following file (separated by line breaks) access to an ACT",
}
SwarmUpFromStdinFlag = cli.BoolFlag{
Name: "stdin",
Usage: "reads data to be uploaded from stdin",
}
SwarmUploadMimeType = cli.StringFlag{
Name: "mime",
Usage: "Manually specify MIME type",
}
SwarmEncryptedFlag = cli.BoolFlag{
Name: "encrypt",
Usage: "use encrypted upload",
}
SwarmAccessPasswordFlag = cli.StringFlag{
Name: "password",
Usage: "Password",
EnvVar: SWARM_ACCESS_PASSWORD,
}
SwarmDryRunFlag = cli.BoolFlag{
Name: "dry-run",
Usage: "dry-run",
}
CorsStringFlag = cli.StringFlag{
Name: "corsdomain",
Usage: "Domain on which to send Access-Control-Allow-Origin header (multiple domains can be supplied separated by a ',')",
EnvVar: SWARM_ENV_CORS,
}
SwarmStorePath = cli.StringFlag{
Name: "store.path",
Usage: "Path to leveldb chunk DB (default <$GETH_ENV_DIR>/swarm/bzz-<$BZZ_KEY>/chunks)",
EnvVar: SWARM_ENV_STORE_PATH,
}
SwarmStoreCapacity = cli.Uint64Flag{
Name: "store.size",
Usage: "Number of chunks (5M is roughly 20-25GB) (default 5000000)",
EnvVar: SWARM_ENV_STORE_CAPACITY,
}
SwarmStoreCacheCapacity = cli.UintFlag{
Name: "store.cache.size",
Usage: "Number of recent chunks cached in memory (default 5000)",
EnvVar: SWARM_ENV_STORE_CACHE_CAPACITY,
}
SwarmCompressedFlag = cli.BoolFlag{
Name: "compressed",
Usage: "Prints encryption keys in compressed form",
}
SwarmFeedNameFlag = cli.StringFlag{
Name: "name",
Usage: "User-defined name for the new feed, limited to 32 characters. If combined with topic, it will refer to a subtopic with this name",
}
SwarmFeedTopicFlag = cli.StringFlag{
Name: "topic",
Usage: "User-defined topic this feed is tracking, hex encoded. Limited to 64 hexadecimal characters",
}
SwarmFeedDataOnCreateFlag = cli.StringFlag{
Name: "data",
Usage: "Initializes the feed with the given hex-encoded data. Data must be prefixed by 0x",
}
SwarmFeedManifestFlag = cli.StringFlag{
Name: "manifest",
Usage: "Refers to the feed through a manifest",
}
SwarmFeedUserFlag = cli.StringFlag{
Name: "user",
Usage: "Indicates the user who updates the feed",
}
)
// declare a few constant error messages, useful for later error check comparisons in tests
var (
SWARM_ERR_NO_BZZACCOUNT = "bzzaccount option is required but not set; check your config file, command line or environment variables"
@@ -279,267 +120,24 @@ func init() {
Usage: "Print public key information",
Description: "The output of this command is supposed to be machine-readable",
},
{
Action: upload,
CustomHelpTemplate: helpTemplate,
Name: "up",
Usage: "uploads a file or directory to swarm using the HTTP API",
ArgsUsage: "<file>",
Flags: []cli.Flag{SwarmEncryptedFlag},
Description: "uploads a file or directory to swarm using the HTTP API and prints the root hash",
},
{
CustomHelpTemplate: helpTemplate,
Name: "access",
Usage: "encrypts a reference and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root manifest",
Subcommands: []cli.Command{
{
CustomHelpTemplate: helpTemplate,
Name: "new",
Usage: "encrypts a reference and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
Subcommands: []cli.Command{
{
Action: accessNewPass,
CustomHelpTemplate: helpTemplate,
Flags: []cli.Flag{
utils.PasswordFileFlag,
SwarmDryRunFlag,
},
Name: "pass",
Usage: "encrypts a reference with a password and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
},
{
Action: accessNewPK,
CustomHelpTemplate: helpTemplate,
Flags: []cli.Flag{
utils.PasswordFileFlag,
SwarmDryRunFlag,
SwarmAccessGrantKeyFlag,
},
Name: "pk",
Usage: "encrypts a reference with the node's private key and a given grantee's public key and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
},
{
Action: accessNewACT,
CustomHelpTemplate: helpTemplate,
Flags: []cli.Flag{
SwarmAccessGrantKeysFlag,
SwarmDryRunFlag,
utils.PasswordFileFlag,
},
Name: "act",
Usage: "encrypts a reference with the node's private key and a given grantee's public key and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
},
},
},
},
},
{
CustomHelpTemplate: helpTemplate,
Name: "feed",
Usage: "(Advanced) Create and update Swarm Feeds",
ArgsUsage: "<create|update|info>",
Description: "Works with Swarm Feeds",
Subcommands: []cli.Command{
{
Action: feedCreateManifest,
CustomHelpTemplate: helpTemplate,
Name: "create",
Usage: "creates and publishes a new feed manifest",
Description: `creates and publishes a new feed manifest pointing to a specified user's updates about a particular topic.
The feed topic can be built in the following ways:
* use --topic to set the topic to an arbitrary binary hex string.
* use --name to set the topic to a human-readable name.
For example --name could be set to "profile-picture", meaning this feed allows to get this user's current profile picture.
* use both --topic and --name to create named subtopics.
For example, --topic could be set to an Ethereum contract address and --name could be set to "comments", meaning
this feed tracks a discussion about that contract.
The --user flag allows to have this manifest refer to a user other than yourself. If not specified,
it will then default to your local account (--bzzaccount)`,
Flags: []cli.Flag{SwarmFeedNameFlag, SwarmFeedTopicFlag, SwarmFeedUserFlag},
},
{
Action: feedUpdate,
CustomHelpTemplate: helpTemplate,
Name: "update",
Usage: "updates the content of an existing Swarm Feed",
ArgsUsage: "<0x Hex data>",
Description: `publishes a new update on the specified topic
The feed topic can be built in the following ways:
* use --topic to set the topic to an arbitrary binary hex string.
* use --name to set the topic to a human-readable name.
For example --name could be set to "profile-picture", meaning this feed allows to get this user's current profile picture.
* use both --topic and --name to create named subtopics.
For example, --topic could be set to an Ethereum contract address and --name could be set to "comments", meaning
this feed tracks a discussion about that contract.
If you have a manifest, you can specify it with --manifest to refer to the feed,
instead of using --topic / --name
`,
Flags: []cli.Flag{SwarmFeedManifestFlag, SwarmFeedNameFlag, SwarmFeedTopicFlag},
},
{
Action: feedInfo,
CustomHelpTemplate: helpTemplate,
Name: "info",
Usage: "obtains information about an existing Swarm feed",
Description: `obtains information about an existing Swarm feed
The topic can be specified directly with the --topic flag as an hex string
If no topic is specified, the default topic (zero) will be used
The --name flag can be used to specify subtopics with a specific name.
The --user flag allows to refer to a user other than yourself. If not specified,
it will then default to your local account (--bzzaccount)
If you have a manifest, you can specify it with --manifest instead of --topic / --name / ---user
to refer to the feed`,
Flags: []cli.Flag{SwarmFeedManifestFlag, SwarmFeedNameFlag, SwarmFeedTopicFlag, SwarmFeedUserFlag},
},
},
},
{
Action: list,
CustomHelpTemplate: helpTemplate,
Name: "ls",
Usage: "list files and directories contained in a manifest",
ArgsUsage: "<manifest> [<prefix>]",
Description: "Lists files and directories contained in a manifest",
},
{
Action: hash,
CustomHelpTemplate: helpTemplate,
Name: "hash",
Usage: "print the swarm hash of a file or directory",
ArgsUsage: "<file>",
Description: "Prints the swarm hash of file or directory",
},
{
Action: download,
Name: "down",
Flags: []cli.Flag{SwarmRecursiveFlag, SwarmAccessPasswordFlag},
Usage: "downloads a swarm manifest or a file inside a manifest",
ArgsUsage: " <uri> [<dir>]",
Description: `Downloads a swarm bzz uri to the given dir. When no dir is provided, working directory is assumed. --recursive flag is expected when downloading a manifest with multiple entries.`,
},
{
Name: "manifest",
CustomHelpTemplate: helpTemplate,
Usage: "perform operations on swarm manifests",
ArgsUsage: "COMMAND",
Description: "Updates a MANIFEST by adding/removing/updating the hash of a path.\nCOMMAND could be: add, update, remove",
Subcommands: []cli.Command{
{
Action: manifestAdd,
CustomHelpTemplate: helpTemplate,
Name: "add",
Usage: "add a new path to the manifest",
ArgsUsage: "<MANIFEST> <path> <hash>",
Description: "Adds a new path to the manifest",
},
{
Action: manifestUpdate,
CustomHelpTemplate: helpTemplate,
Name: "update",
Usage: "update the hash for an already existing path in the manifest",
ArgsUsage: "<MANIFEST> <path> <newhash>",
Description: "Update the hash for an already existing path in the manifest",
},
{
Action: manifestRemove,
CustomHelpTemplate: helpTemplate,
Name: "remove",
Usage: "removes a path from the manifest",
ArgsUsage: "<MANIFEST> <path>",
Description: "Removes a path from the manifest",
},
},
},
{
Name: "fs",
CustomHelpTemplate: helpTemplate,
Usage: "perform FUSE operations",
ArgsUsage: "fs COMMAND",
Description: "Performs FUSE operations by mounting/unmounting/listing mount points. This assumes you already have a Swarm node running locally. For all operation you must reference the correct path to bzzd.ipc in order to communicate with the node",
Subcommands: []cli.Command{
{
Action: mount,
CustomHelpTemplate: helpTemplate,
Name: "mount",
Flags: []cli.Flag{utils.IPCPathFlag},
Usage: "mount a swarm hash to a mount point",
ArgsUsage: "swarm fs mount --ipcpath <path to bzzd.ipc> <manifest hash> <mount point>",
Description: "Mounts a Swarm manifest hash to a given mount point. This assumes you already have a Swarm node running locally. You must reference the correct path to your bzzd.ipc file",
},
{
Action: unmount,
CustomHelpTemplate: helpTemplate,
Name: "unmount",
Flags: []cli.Flag{utils.IPCPathFlag},
Usage: "unmount a swarmfs mount",
ArgsUsage: "swarm fs unmount --ipcpath <path to bzzd.ipc> <mount point>",
Description: "Unmounts a swarmfs mount residing at <mount point>. This assumes you already have a Swarm node running locally. You must reference the correct path to your bzzd.ipc file",
},
{
Action: listMounts,
CustomHelpTemplate: helpTemplate,
Name: "list",
Flags: []cli.Flag{utils.IPCPathFlag},
Usage: "list swarmfs mounts",
ArgsUsage: "swarm fs list --ipcpath <path to bzzd.ipc>",
Description: "Lists all mounted swarmfs volumes. This assumes you already have a Swarm node running locally. You must reference the correct path to your bzzd.ipc file",
},
},
},
{
Name: "db",
CustomHelpTemplate: helpTemplate,
Usage: "manage the local chunk database",
ArgsUsage: "db COMMAND",
Description: "Manage the local chunk database",
Subcommands: []cli.Command{
{
Action: dbExport,
CustomHelpTemplate: helpTemplate,
Name: "export",
Usage: "export a local chunk database as a tar archive (use - to send to stdout)",
ArgsUsage: "<chunkdb> <file>",
Description: `
Export a local chunk database as a tar archive (use - to send to stdout).
swarm db export ~/.ethereum/swarm/bzz-KEY/chunks chunks.tar
The export may be quite large, consider piping the output through the Unix
pv(1) tool to get a progress bar:
swarm db export ~/.ethereum/swarm/bzz-KEY/chunks - | pv > chunks.tar
`,
},
{
Action: dbImport,
CustomHelpTemplate: helpTemplate,
Name: "import",
Usage: "import chunks from a tar archive into a local chunk database (use - to read from stdin)",
ArgsUsage: "<chunkdb> <file>",
Description: `Import chunks from a tar archive into a local chunk database (use - to read from stdin).
swarm db import ~/.ethereum/swarm/bzz-KEY/chunks chunks.tar
The import may be quite large, consider piping the input through the Unix
pv(1) tool to get a progress bar:
pv chunks.tar | swarm db import ~/.ethereum/swarm/bzz-KEY/chunks -`,
},
},
},
// See upload.go
upCommand,
// See access.go
accessCommand,
// See feeds.go
feedCommand,
// See list.go
listCommand,
// See hash.go
hashCommand,
// See download.go
downloadCommand,
// See manifest.go
manifestCommand,
// See fs.go
fsCommand,
// See db.go
dbCommand,
// See config.go
DumpConfigCommand,
}


@@ -28,6 +28,40 @@ import (
"gopkg.in/urfave/cli.v1"
)
var manifestCommand = cli.Command{
Name: "manifest",
CustomHelpTemplate: helpTemplate,
Usage: "perform operations on swarm manifests",
ArgsUsage: "COMMAND",
Description: "Updates a MANIFEST by adding/removing/updating the hash of a path.\nCOMMAND could be: add, update, remove",
Subcommands: []cli.Command{
{
Action: manifestAdd,
CustomHelpTemplate: helpTemplate,
Name: "add",
Usage: "add a new path to the manifest",
ArgsUsage: "<MANIFEST> <path> <hash>",
Description: "Adds a new path to the manifest",
},
{
Action: manifestUpdate,
CustomHelpTemplate: helpTemplate,
Name: "update",
Usage: "update the hash for an already existing path in the manifest",
ArgsUsage: "<MANIFEST> <path> <newhash>",
Description: "Update the hash for an already existing path in the manifest",
},
{
Action: manifestRemove,
CustomHelpTemplate: helpTemplate,
Name: "remove",
Usage: "removes a path from the manifest",
ArgsUsage: "<MANIFEST> <path>",
Description: "Removes a path from the manifest",
},
},
}
// manifestAdd adds a new entry to the manifest at the given path.
// The new entry's hash, the last argument, must be the hash of a manifest
// with only one entry, whose metadata will be added to the original manifest.

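For completeness, a sketch of adding an entry through the manifest command, mirroring the argument order the manifest tests below use; all hashes are placeholders.

package main

import "os/exec"

// addToManifest runs "swarm manifest add <MANIFEST> <path> <hash>" and
// returns the new root manifest hash printed on stdout.
func addToManifest(apiURL, manifest, path, entryHash string) ([]byte, error) {
	return exec.Command("swarm",
		"--bzzapi", apiURL,
		"manifest", "add",
		manifest, path, entryHash,
	).Output()
}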

@@ -26,6 +26,7 @@ import (
"github.com/ethereum/go-ethereum/swarm/api"
swarm "github.com/ethereum/go-ethereum/swarm/api/client"
swarmhttp "github.com/ethereum/go-ethereum/swarm/api/http"
)
// TestManifestChange tests manifest add, update and remove
@@ -57,8 +58,8 @@ func TestManifestChangeEncrypted(t *testing.T) {
// Argument encrypt controls whether to use encryption or not.
func testManifestChange(t *testing.T, encrypt bool) {
t.Parallel()
cluster := newTestCluster(t, 1)
defer cluster.Shutdown()
srv := swarmhttp.NewTestSwarmServer(t, serverFunc, nil)
defer srv.Close()
tmp, err := ioutil.TempDir("", "swarm-manifest-test")
if err != nil {
@@ -94,7 +95,7 @@ func testManifestChange(t *testing.T, encrypt bool) {
args := []string{
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"--recursive",
"--defaultpath",
indexDataFilename,
@@ -109,7 +110,7 @@ func testManifestChange(t *testing.T, encrypt bool) {
checkHashLength(t, origManifestHash, encrypt)
client := swarm.NewClient(cluster.Nodes[0].URL)
client := swarm.NewClient(srv.URL)
// upload a new file and use its manifest to add it the original manifest.
t.Run("add", func(t *testing.T) {
@@ -122,14 +123,14 @@ func testManifestChange(t *testing.T, encrypt bool) {
humansManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"up",
humansDataFilename,
)
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"manifest",
"add",
origManifestHash,
@@ -177,14 +178,14 @@ func testManifestChange(t *testing.T, encrypt bool) {
robotsManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"up",
robotsDataFilename,
)
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"manifest",
"add",
origManifestHash,
@@ -237,14 +238,14 @@ func testManifestChange(t *testing.T, encrypt bool) {
indexManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"up",
indexDataFilename,
)
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"manifest",
"update",
origManifestHash,
@@ -295,14 +296,14 @@ func testManifestChange(t *testing.T, encrypt bool) {
humansManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"up",
robotsDataFilename,
)
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"manifest",
"update",
origManifestHash,
@@ -348,7 +349,7 @@ func testManifestChange(t *testing.T, encrypt bool) {
t.Run("remove", func(t *testing.T) {
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"manifest",
"remove",
origManifestHash,
@@ -376,7 +377,7 @@ func testManifestChange(t *testing.T, encrypt bool) {
t.Run("remove nested", func(t *testing.T) {
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"manifest",
"remove",
origManifestHash,
@@ -429,8 +430,8 @@ func TestNestedDefaultEntryUpdateEncrypted(t *testing.T) {
func testNestedDefaultEntryUpdate(t *testing.T, encrypt bool) {
t.Parallel()
cluster := newTestCluster(t, 1)
defer cluster.Shutdown()
srv := swarmhttp.NewTestSwarmServer(t, serverFunc, nil)
defer srv.Close()
tmp, err := ioutil.TempDir("", "swarm-manifest-test")
if err != nil {
@@ -458,7 +459,7 @@ func testNestedDefaultEntryUpdate(t *testing.T, encrypt bool) {
args := []string{
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"--recursive",
"--defaultpath",
indexDataFilename,
@@ -473,7 +474,7 @@ func testNestedDefaultEntryUpdate(t *testing.T, encrypt bool) {
checkHashLength(t, origManifestHash, encrypt)
client := swarm.NewClient(cluster.Nodes[0].URL)
client := swarm.NewClient(srv.URL)
newIndexData := []byte("<h1>Ethereum Swarm</h1>")
newIndexDataFilename := filepath.Join(tmp, "index.html")
@@ -484,14 +485,14 @@ func testNestedDefaultEntryUpdate(t *testing.T, encrypt bool) {
newIndexManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"up",
newIndexDataFilename,
)
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
cluster.Nodes[0].URL,
srv.URL,
"manifest",
"update",
origManifestHash,


@@ -40,6 +40,8 @@ import (
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/swarm"
"github.com/ethereum/go-ethereum/swarm/api"
swarmhttp "github.com/ethereum/go-ethereum/swarm/api/http"
)
var loglevel = flag.Int("loglevel", 3, "verbosity of logs")
@@ -55,6 +57,20 @@ func init() {
})
}
const clusterSize = 3
var clusteronce sync.Once
var cluster *testCluster
func initCluster(t *testing.T) {
clusteronce.Do(func() {
cluster = newTestCluster(t, clusterSize)
})
}
func serverFunc(api *api.API) swarmhttp.TestServer {
return swarmhttp.NewServer(api, "")
}
func TestMain(m *testing.M) {
// check if we have been reexec'd
if reexec.Init() {


@@ -0,0 +1,318 @@
package main
import (
"bytes"
"crypto/md5"
"fmt"
"io"
"io/ioutil"
"net/http"
"os"
"os/exec"
"strings"
"sync"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/swarm/storage/feed"
colorable "github.com/mattn/go-colorable"
"github.com/pborman/uuid"
cli "gopkg.in/urfave/cli.v1"
)
const (
feedRandomDataLength = 8
)
// TODO: retrieve with manifest + extract repeating code
func cliFeedUploadAndSync(c *cli.Context) error {
log.Root().SetHandler(log.CallerFileHandler(log.LvlFilterHandler(log.Lvl(verbosity), log.StreamHandler(colorable.NewColorableStderr(), log.TerminalFormat(true)))))
defer func(now time.Time) { log.Info("total time", "time", time.Since(now), "size (kb)", filesize) }(time.Now())
generateEndpoints(scheme, cluster, from, to)
log.Info("generating and uploading feeds to " + endpoints[0] + " and syncing")
// create a random private key to sign updates with and derive the address
pkFile, err := ioutil.TempFile("", "swarm-feed-smoke-test")
if err != nil {
return err
}
defer pkFile.Close()
defer os.Remove(pkFile.Name())
privkeyHex := "0000000000000000000000000000000000000000000000000000000000001976"
privKey, err := crypto.HexToECDSA(privkeyHex)
if err != nil {
return err
}
user := crypto.PubkeyToAddress(privKey.PublicKey)
userHex := hexutil.Encode(user.Bytes())
// save the private key to a file
_, err = io.WriteString(pkFile, privkeyHex)
if err != nil {
return err
}
// keep hex strings for topic and subtopic
var topicHex string
var subTopicHex string
// and create combination hex topics for bzz-feed retrieval
// xor'ed with topic (zero-value topic if no topic)
var subTopicOnlyHex string
var mergedSubTopicHex string
// generate random topic and subtopic and put a hex on them
topicBytes, err := generateRandomData(feed.TopicLength)
if err != nil {
return err
}
topicHex = hexutil.Encode(topicBytes)
subTopicBytes, err := generateRandomData(8)
if err != nil {
return err
}
subTopicHex = hexutil.Encode(subTopicBytes)
mergedSubTopic, err := feed.NewTopic(subTopicHex, topicBytes)
if err != nil {
return err
}
mergedSubTopicHex = hexutil.Encode(mergedSubTopic[:])
subTopicOnlyBytes, err := feed.NewTopic(subTopicHex, nil)
if err != nil {
return err
}
subTopicOnlyHex = hexutil.Encode(subTopicOnlyBytes[:])
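The merged topic above is built by XOR-ing the subtopic name over the topic bytes (a zero-valued topic when only a name is given). A minimal standalone sketch of that merge, assuming a 32-byte topic length and eliding the length-error handling of the real feed.NewTopic:

package main

import (
	"encoding/hex"
	"fmt"
)

const topicLength = 32 // assumption: mirrors feed.TopicLength

// mergeTopic XORs the name bytes over the related-content bytes, the same
// shape of merge the smoke test relies on (sketch, not the real API).
func mergeTopic(name string, related []byte) [topicLength]byte {
	var topic [topicLength]byte
	copy(topic[:], related) // zero-padded when related is shorter
	for i, b := range []byte(name) {
		if i >= topicLength {
			break
		}
		topic[i] ^= b
	}
	return topic
}

func main() {
	merged := mergeTopic("mysubtopic", []byte{0xde, 0xad, 0xbe, 0xef})
	fmt.Println(hex.EncodeToString(merged[:]))
}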
// create feed manifest, topic only
var out bytes.Buffer
cmd := exec.Command("swarm", "--bzzapi", endpoints[0], "feed", "create", "--topic", topicHex, "--user", userHex)
cmd.Stdout = &out
log.Debug("create feed manifest topic cmd", "cmd", cmd)
err = cmd.Run()
if err != nil {
return err
}
manifestWithTopic := strings.TrimRight(out.String(), string([]byte{0x0a}))
if len(manifestWithTopic) != 64 {
return fmt.Errorf("unknown feed create manifest hash format (topic): (%d) %s", len(out.String()), manifestWithTopic)
}
log.Debug("create topic feed", "manifest", manifestWithTopic)
out.Reset()
// create feed manifest, subtopic only
cmd = exec.Command("swarm", "--bzzapi", endpoints[0], "feed", "create", "--name", subTopicHex, "--user", userHex)
cmd.Stdout = &out
log.Debug("create feed manifest subtopic cmd", "cmd", cmd)
err = cmd.Run()
if err != nil {
return err
}
manifestWithSubTopic := strings.TrimRight(out.String(), string([]byte{0x0a}))
if len(manifestWithSubTopic) != 64 {
return fmt.Errorf("unknown feed create manifest hash format (subtopic): (%d) %s", len(out.String()), manifestWithSubTopic)
}
log.Debug("create subtopic feed", "manifest", manifestWithTopic)
out.Reset()
// create feed manifest, merged topic
cmd = exec.Command("swarm", "--bzzapi", endpoints[0], "feed", "create", "--topic", topicHex, "--name", subTopicHex, "--user", userHex)
cmd.Stdout = &out
log.Debug("create feed manifest mergetopic cmd", "cmd", cmd)
err = cmd.Run()
if err != nil {
log.Error(err.Error())
return err
}
manifestWithMergedTopic := strings.TrimRight(out.String(), string([]byte{0x0a}))
if len(manifestWithMergedTopic) != 64 {
return fmt.Errorf("unknown feed create manifest hash format (mergedtopic): (%d) %s", len(out.String()), manifestWithMergedTopic)
}
log.Debug("create mergedtopic feed", "manifest", manifestWithMergedTopic)
out.Reset()
// create test data
data, err := generateRandomData(feedRandomDataLength)
if err != nil {
return err
}
h := md5.New()
h.Write(data)
dataHash := h.Sum(nil)
dataHex := hexutil.Encode(data)
// update with topic
cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--topic", topicHex, dataHex)
cmd.Stdout = &out
log.Debug("update feed manifest topic cmd", "cmd", cmd)
err = cmd.Run()
if err != nil {
return err
}
log.Debug("feed update topic", "out", out)
out.Reset()
// update with subtopic
cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--name", subTopicHex, dataHex)
cmd.Stdout = &out
log.Debug("update feed manifest subtopic cmd", "cmd", cmd)
err = cmd.Run()
if err != nil {
return err
}
log.Debug("feed update subtopic", "out", out)
out.Reset()
// update with merged topic
cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--topic", topicHex, "--name", subTopicHex, dataHex)
cmd.Stdout = &out
log.Debug("update feed manifest merged topic cmd", "cmd", cmd)
err = cmd.Run()
if err != nil {
return err
}
log.Debug("feed update mergedtopic", "out", out)
out.Reset()
time.Sleep(3 * time.Second)
// retrieve the data
wg := sync.WaitGroup{}
for _, endpoint := range endpoints {
// raw retrieve, topic only
for _, hex := range []string{topicHex, subTopicOnlyHex, mergedSubTopicHex} {
wg.Add(1)
ruid := uuid.New()[:8]
go func(hex string, endpoint string, ruid string) {
for {
err := fetchFeed(hex, userHex, endpoint, dataHash, ruid)
if err != nil {
continue
}
wg.Done()
return
}
}(hex, endpoint, ruid)
}
}
wg.Wait()
log.Info("all endpoints synced random data successfully")
// upload test file
log.Info("uploading to " + endpoints[0] + " and syncing")
f, cleanup := generateRandomFile(filesize * 1000)
defer cleanup()
hash, err := upload(f, endpoints[0])
if err != nil {
return err
}
hashBytes, err := hexutil.Decode("0x" + hash)
if err != nil {
return err
}
multihashHex := hexutil.Encode(hashBytes)
fileHash, err := digest(f)
if err != nil {
return err
}
log.Info("uploaded successfully", "hash", hash, "digest", fmt.Sprintf("%x", fileHash))
// update file with topic
cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--topic", topicHex, multihashHex)
cmd.Stdout = &out
err = cmd.Run()
if err != nil {
return err
}
log.Debug("feed update topic", "out", out)
out.Reset()
// update file with subtopic
cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--name", subTopicHex, multihashHex)
cmd.Stdout = &out
err = cmd.Run()
if err != nil {
return err
}
log.Debug("feed update subtopic", "out", out)
out.Reset()
// update file with merged topic
cmd = exec.Command("swarm", "--bzzaccount", pkFile.Name(), "--bzzapi", endpoints[0], "feed", "update", "--topic", topicHex, "--name", subTopicHex, multihashHex)
cmd.Stdout = &out
err = cmd.Run()
if err != nil {
return err
}
log.Debug("feed update mergedtopic", "out", out)
out.Reset()
time.Sleep(3 * time.Second)
for _, endpoint := range endpoints {
// manifest retrieve, topic only
for _, url := range []string{manifestWithTopic, manifestWithSubTopic, manifestWithMergedTopic} {
wg.Add(1)
ruid := uuid.New()[:8]
go func(url string, endpoint string, ruid string) {
for {
err := fetch(url, endpoint, fileHash, ruid)
if err != nil {
continue
}
wg.Done()
return
}
}(url, endpoint, ruid)
}
}
wg.Wait()
log.Info("all endpoints synced random file successfully")
return nil
}
func fetchFeed(topic string, user string, endpoint string, original []byte, ruid string) error {
log.Trace("sleeping", "ruid", ruid)
time.Sleep(3 * time.Second)
log.Trace("http get request (feed)", "ruid", ruid, "api", endpoint, "topic", topic, "user", user)
res, err := http.Get(endpoint + "/bzz-feed:/?topic=" + topic + "&user=" + user)
if err != nil {
return err
}
log.Trace("http get response (feed)", "ruid", ruid, "api", endpoint, "topic", topic, "user", user, "code", res.StatusCode, "len", res.ContentLength)
defer res.Body.Close()
if res.StatusCode != 200 {
return fmt.Errorf("expected status code %d, got %v (ruid %v)", 200, res.StatusCode, ruid)
}
rdigest, err := digest(res.Body)
if err != nil {
log.Warn(err.Error(), "ruid", ruid)
return err
}
if !bytes.Equal(rdigest, original) {
err := fmt.Errorf("downloaded imported file md5=%x is not the same as the generated one=%x", rdigest, original)
log.Warn(err.Error(), "ruid", ruid)
return err
}
log.Trace("downloaded file matches random file", "ruid", ruid, "len", res.ContentLength)
return nil
}

View File

@@ -21,7 +21,6 @@ import (
"sort"
"github.com/ethereum/go-ethereum/log"
colorable "github.com/mattn/go-colorable"
cli "gopkg.in/urfave/cli.v1"
)
@@ -34,11 +33,10 @@ var (
filesize int
from int
to int
verbosity int
)
func main() {
log.PrintOrigins(true)
log.Root().SetHandler(log.LvlFilterHandler(log.LvlTrace, log.StreamHandler(colorable.NewColorableStderr(), log.TerminalFormat(true))))
app := cli.NewApp()
app.Name = "smoke-test"
@@ -47,8 +45,8 @@ func main() {
app.Flags = []cli.Flag{
cli.StringFlag{
Name: "cluster-endpoint",
Value: "testing",
Usage: "cluster to point to (local, open or testing)",
Value: "prod",
Usage: "cluster to point to (prod or a given namespace)",
Destination: &cluster,
},
cli.IntFlag{
@@ -80,6 +78,12 @@ func main() {
Usage: "file size for generated random file in KB",
Destination: &filesize,
},
cli.IntFlag{
Name: "verbosity",
Value: 1,
Usage: "verbosity",
Destination: &verbosity,
},
}
app.Commands = []cli.Command{
@@ -89,6 +93,12 @@ func main() {
Usage: "upload and sync",
Action: cliUploadAndSync,
},
{
Name: "feed_sync",
Aliases: []string{"f"},
Usage: "feed update generate, upload and sync",
Action: cliFeedUploadAndSync,
},
}
sort.Sort(cli.FlagsByName(app.Flags))

View File

@@ -19,7 +19,9 @@ package main
import (
"bytes"
"crypto/md5"
"crypto/rand"
crand "crypto/rand"
"crypto/tls"
"errors"
"fmt"
"io"
"io/ioutil"
@@ -31,6 +33,7 @@ import (
"time"
"github.com/ethereum/go-ethereum/log"
colorable "github.com/mattn/go-colorable"
"github.com/pborman/uuid"
cli "gopkg.in/urfave/cli.v1"
@@ -38,18 +41,13 @@ import (
func generateEndpoints(scheme string, cluster string, from int, to int) {
if cluster == "prod" {
cluster = ""
} else if cluster == "local" {
for port := from; port <= to; port++ {
endpoints = append(endpoints, fmt.Sprintf("%s://localhost:%v", scheme, port))
endpoints = append(endpoints, fmt.Sprintf("%s://%v.swarm-gateways.net", scheme, port))
}
return
} else {
cluster = cluster + "."
}
for port := from; port <= to; port++ {
endpoints = append(endpoints, fmt.Sprintf("%s://%v.%sswarm-gateways.net", scheme, port, cluster))
for port := from; port <= to; port++ {
endpoints = append(endpoints, fmt.Sprintf("%s://swarm-%v-%s.stg.swarm-gateways.net", scheme, port, cluster))
}
}
if includeLocalhost {
@@ -58,6 +56,9 @@ func generateEndpoints(scheme string, cluster string, from int, to int) {
}
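For orientation, a sketch of the endpoint shapes the rewritten generateEndpoints produces; the gateway domains are copied from the hunk above, the port range is illustrative, and which branch each format belongs to is indicative only given the stripped diff markers:

package main

import "fmt"

func main() {
	scheme, from, to := "https", 8501, 8503
	// "prod" collapses to the bare gateway domain
	for port := from; port <= to; port++ {
		fmt.Printf("%s://%v.swarm-gateways.net\n", scheme, port)
	}
	// any other value is treated as a namespace on the staging domain
	cluster := "nightly" // hypothetical namespace
	for port := from; port <= to; port++ {
		fmt.Printf("%s://swarm-%v-%s.stg.swarm-gateways.net\n", scheme, port, cluster)
	}
}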
func cliUploadAndSync(c *cli.Context) error {
log.PrintOrigins(true)
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(verbosity), log.StreamHandler(colorable.NewColorableStderr(), log.TerminalFormat(true))))
defer func(now time.Time) { log.Info("total time", "time", time.Since(now), "size (kb)", filesize) }(time.Now())
generateEndpoints(scheme, cluster, from, to)
@@ -85,7 +86,6 @@ func cliUploadAndSync(c *cli.Context) error {
wg := sync.WaitGroup{}
for _, endpoint := range endpoints {
endpoint := endpoint
ruid := uuid.New()[:8]
wg.Add(1)
go func(endpoint string, ruid string) {
@@ -112,7 +112,10 @@ func fetch(hash string, endpoint string, original []byte, ruid string) error {
time.Sleep(3 * time.Second)
log.Trace("http get request", "ruid", ruid, "api", endpoint, "hash", hash)
res, err := http.Get(endpoint + "/bzz:/" + hash + "/")
client := &http.Client{Transport: &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}}
res, err := client.Get(endpoint + "/bzz:/" + hash + "/")
if err != nil {
log.Warn(err.Error(), "ruid", ruid)
return err
@@ -166,6 +169,18 @@ func digest(r io.Reader) ([]byte, error) {
return h.Sum(nil), nil
}
// generateRandomData generates random data in a heap buffer
func generateRandomData(datasize int) ([]byte, error) {
b := make([]byte, datasize)
c, err := crand.Read(b)
if err != nil {
return nil, err
} else if c != datasize {
return nil, errors.New("short read")
}
return b, nil
}
// generateRandomFile creates a temporary file with the requested byte size
func generateRandomFile(size int) (f *os.File, teardown func()) {
// create a tmp file
@@ -181,7 +196,7 @@ func generateRandomFile(size int) (f *os.File, teardown func()) {
}
buf := make([]byte, size)
_, err = rand.Read(buf)
_, err = crand.Read(buf)
if err != nil {
panic(err)
}

View File

@@ -26,29 +26,47 @@ import (
"os/user"
"path"
"path/filepath"
"strconv"
"strings"
"github.com/ethereum/go-ethereum/log"
swarm "github.com/ethereum/go-ethereum/swarm/api/client"
"github.com/ethereum/go-ethereum/cmd/utils"
"gopkg.in/urfave/cli.v1"
)
func upload(ctx *cli.Context) {
var upCommand = cli.Command{
Action: upload,
CustomHelpTemplate: helpTemplate,
Name: "up",
Usage: "uploads a file or directory to swarm using the HTTP API",
ArgsUsage: "<file>",
Flags: []cli.Flag{SwarmEncryptedFlag},
Description: "uploads a file or directory to swarm using the HTTP API and prints the root hash",
}
func upload(ctx *cli.Context) {
args := ctx.Args()
var (
bzzapi = strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
recursive = ctx.GlobalBool(SwarmRecursiveFlag.Name)
wantManifest = ctx.GlobalBoolT(SwarmWantManifestFlag.Name)
defaultPath = ctx.GlobalString(SwarmUploadDefaultPath.Name)
fromStdin = ctx.GlobalBool(SwarmUpFromStdinFlag.Name)
mimeType = ctx.GlobalString(SwarmUploadMimeType.Name)
client = swarm.NewClient(bzzapi)
toEncrypt = ctx.Bool(SwarmEncryptedFlag.Name)
file string
bzzapi = strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
recursive = ctx.GlobalBool(SwarmRecursiveFlag.Name)
wantManifest = ctx.GlobalBoolT(SwarmWantManifestFlag.Name)
defaultPath = ctx.GlobalString(SwarmUploadDefaultPath.Name)
fromStdin = ctx.GlobalBool(SwarmUpFromStdinFlag.Name)
mimeType = ctx.GlobalString(SwarmUploadMimeType.Name)
client = swarm.NewClient(bzzapi)
toEncrypt = ctx.Bool(SwarmEncryptedFlag.Name)
autoDefaultPath = false
file string
)
if autoDefaultPathString := os.Getenv(SWARM_AUTO_DEFAULTPATH); autoDefaultPathString != "" {
b, err := strconv.ParseBool(autoDefaultPathString)
if err != nil {
utils.Fatalf("invalid environment variable %s: %v", SWARM_AUTO_DEFAULTPATH, err)
}
autoDefaultPath = b
}
if len(args) != 1 {
if fromStdin {
tmp, err := ioutil.TempFile("", "swarm-stdin")
@@ -97,6 +115,15 @@ func upload(ctx *cli.Context) {
if !recursive {
return "", errors.New("Argument is a directory and recursive upload is disabled")
}
if autoDefaultPath && defaultPath == "" {
defaultEntryCandidate := path.Join(file, "index.html")
log.Debug("trying to find default path", "path", defaultEntryCandidate)
defaultEntryStat, err := os.Stat(defaultEntryCandidate)
if err == nil && !defaultEntryStat.IsDir() {
log.Debug("setting auto detected default path", "path", defaultEntryCandidate)
defaultPath = defaultEntryCandidate
}
}
if defaultPath != "" {
// construct absolute default path
absDefaultPath, _ := filepath.Abs(defaultPath)

View File

@@ -31,7 +31,8 @@ import (
"time"
"github.com/ethereum/go-ethereum/log"
swarm "github.com/ethereum/go-ethereum/swarm/api/client"
swarmapi "github.com/ethereum/go-ethereum/swarm/api/client"
"github.com/ethereum/go-ethereum/swarm/testutil"
"github.com/mattn/go-colorable"
)
@@ -40,69 +41,66 @@ func init() {
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(*loglevel), log.StreamHandler(colorable.NewColorableStderr(), log.TerminalFormat(true))))
}
// TestCLISwarmUp tests that running 'swarm up' makes the resulting file
func TestSwarmUp(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
initCluster(t)
cases := []struct {
name string
f func(t *testing.T)
}{
{"NoEncryption", testNoEncryption},
{"Encrypted", testEncrypted},
{"RecursiveNoEncryption", testRecursiveNoEncryption},
{"RecursiveEncrypted", testRecursiveEncrypted},
{"DefaultPathAll", testDefaultPathAll},
}
for _, tc := range cases {
t.Run(tc.name, tc.f)
}
}
// testNoEncryption tests that running 'swarm up' makes the resulting file
// available from all nodes via the HTTP API
func TestCLISwarmUp(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
testCLISwarmUp(false, t)
}
func TestCLISwarmUpRecursive(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
testCLISwarmUpRecursive(false, t)
func testNoEncryption(t *testing.T) {
testDefault(false, t)
}
// TestCLISwarmUpEncrypted tests that running 'swarm encrypted-up' makes the resulting file
// testEncrypted tests that running 'swarm up --encrypted' makes the resulting file
// available from all nodes via the HTTP API
func TestCLISwarmUpEncrypted(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
testCLISwarmUp(true, t)
}
func TestCLISwarmUpEncryptedRecursive(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
testCLISwarmUpRecursive(true, t)
func testEncrypted(t *testing.T) {
testDefault(true, t)
}
func testCLISwarmUp(toEncrypt bool, t *testing.T) {
log.Info("starting 3 node cluster")
cluster := newTestCluster(t, 3)
defer cluster.Shutdown()
func testRecursiveNoEncryption(t *testing.T) {
testRecursive(false, t)
}
// create a tmp file
tmp, err := ioutil.TempFile("", "swarm-test")
if err != nil {
t.Fatal(err)
}
defer tmp.Close()
defer os.Remove(tmp.Name())
func testRecursiveEncrypted(t *testing.T) {
testRecursive(true, t)
}
func testDefault(toEncrypt bool, t *testing.T) {
tmpFileName := testutil.TempFileWithContent(t, data)
defer os.Remove(tmpFileName)
// write data to file
data := "notsorandomdata"
_, err = io.WriteString(tmp, data)
if err != nil {
t.Fatal(err)
}
hashRegexp := `[a-f\d]{64}`
flags := []string{
"--bzzapi", cluster.Nodes[0].URL,
"up",
tmp.Name()}
tmpFileName}
if toEncrypt {
hashRegexp = `[a-f\d]{128}`
flags = []string{
"--bzzapi", cluster.Nodes[0].URL,
"up",
"--encrypt",
tmp.Name()}
tmpFileName}
}
// upload the file with 'swarm up' and expect a hash
log.Info(fmt.Sprintf("uploading file with 'swarm up'"))
@@ -191,18 +189,13 @@ func testCLISwarmUp(toEncrypt bool, t *testing.T) {
}
}
func testCLISwarmUpRecursive(toEncrypt bool, t *testing.T) {
fmt.Println("starting 3 node cluster")
cluster := newTestCluster(t, 3)
defer cluster.Shutdown()
func testRecursive(toEncrypt bool, t *testing.T) {
tmpUploadDir, err := ioutil.TempDir("", "swarm-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpUploadDir)
// create tmp files
data := "notsorandomdata"
for _, path := range []string{"tmp1", "tmp2"} {
if err := ioutil.WriteFile(filepath.Join(tmpUploadDir, path), bytes.NewBufferString(data).Bytes(), 0644); err != nil {
t.Fatal(err)
@@ -242,8 +235,7 @@ func testCLISwarmUpRecursive(toEncrypt bool, t *testing.T) {
}
defer os.RemoveAll(tmpDownload)
bzzLocator := "bzz:/" + hash
flagss := []string{}
flagss = []string{
flagss := []string{
"--bzzapi", cluster.Nodes[0].URL,
"down",
"--recursive",
@@ -264,7 +256,7 @@ func testCLISwarmUpRecursive(toEncrypt bool, t *testing.T) {
switch mode := fi.Mode(); {
case mode.IsRegular():
if file, err := swarm.Open(path.Join(tmpDownload, v.Name())); err != nil {
if file, err := swarmapi.Open(path.Join(tmpDownload, v.Name())); err != nil {
t.Fatalf("encountered an error opening the file returned from the CLI: %v", err)
} else {
ff := make([]byte, len(data))
@@ -285,22 +277,16 @@ func testCLISwarmUpRecursive(toEncrypt bool, t *testing.T) {
}
}
// TestCLISwarmUpDefaultPath tests swarm recursive upload with relative and absolute
// testDefaultPathAll tests swarm recursive upload with relative and absolute
// default paths and with encryption.
func TestCLISwarmUpDefaultPath(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
testCLISwarmUpDefaultPath(false, false, t)
testCLISwarmUpDefaultPath(false, true, t)
testCLISwarmUpDefaultPath(true, false, t)
testCLISwarmUpDefaultPath(true, true, t)
func testDefaultPathAll(t *testing.T) {
testDefaultPath(false, false, t)
testDefaultPath(false, true, t)
testDefaultPath(true, false, t)
testDefaultPath(true, true, t)
}
func testCLISwarmUpDefaultPath(toEncrypt bool, absDefaultPath bool, t *testing.T) {
cluster := newTestCluster(t, 1)
defer cluster.Shutdown()
func testDefaultPath(toEncrypt bool, absDefaultPath bool, t *testing.T) {
tmp, err := ioutil.TempDir("", "swarm-defaultpath-test")
if err != nil {
t.Fatal(err)
@@ -340,7 +326,7 @@ func testCLISwarmUpDefaultPath(toEncrypt bool, absDefaultPath bool, t *testing.T
up.ExpectExit()
hash := matches[0]
client := swarm.NewClient(cluster.Nodes[0].URL)
client := swarmapi.NewClient(cluster.Nodes[0].URL)
m, isEncrypted, err := client.DownloadManifest(hash)
if err != nil {

View File

@@ -272,13 +272,13 @@ func ImportPreimages(db *ethdb.LDBDatabase, fn string) error {
// Accumulate the preimages and flush when enough was gathered
preimages[crypto.Keccak256Hash(blob)] = common.CopyBytes(blob)
if len(preimages) > 1024 {
rawdb.WritePreimages(db, 0, preimages)
rawdb.WritePreimages(db, preimages)
preimages = make(map[common.Hash][]byte)
}
}
// Flush the last batch preimage data
if len(preimages) > 0 {
rawdb.WritePreimages(db, 0, preimages)
rawdb.WritePreimages(db, preimages)
}
return nil
}
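The import loop bounds memory by flushing once more than 1024 preimages accumulate, then flushing the final partial batch. The same flush-every-N pattern in isolation, with a stand-in writer instead of rawdb.WritePreimages:

package main

import "fmt"

// batchImport accumulates entries and writes them out whenever the batch
// grows past limit, flushing whatever remains at the end.
func batchImport(items []string, limit int, write func(batch []string)) {
	batch := make([]string, 0, limit)
	for _, item := range items {
		batch = append(batch, item)
		if len(batch) > limit {
			write(batch)
			batch = batch[:0]
		}
	}
	if len(batch) > 0 {
		write(batch)
	}
}

func main() {
	batchImport([]string{"a", "b", "c", "d", "e"}, 2, func(batch []string) {
		fmt.Println("flushing", len(batch), "entries")
	})
}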

View File

@@ -295,7 +295,12 @@ var (
CacheDatabaseFlag = cli.IntFlag{
Name: "cache.database",
Usage: "Percentage of cache memory allowance to use for database io",
Value: 75,
Value: 50,
}
CacheTrieFlag = cli.IntFlag{
Name: "cache.trie",
Usage: "Percentage of cache memory allowance to use for trie caching",
Value: 25,
}
CacheGCFlag = cli.IntFlag{
Name: "cache.gc",
@@ -973,16 +978,7 @@ func SetNodeConfig(ctx *cli.Context, cfg *node.Config) {
setWS(ctx, cfg)
setNodeUserIdent(ctx, cfg)
switch {
case ctx.GlobalIsSet(DataDirFlag.Name):
cfg.DataDir = ctx.GlobalString(DataDirFlag.Name)
case ctx.GlobalBool(DeveloperFlag.Name):
cfg.DataDir = "" // unless explicitly requested, use memory databases
case ctx.GlobalBool(TestnetFlag.Name):
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "testnet")
case ctx.GlobalBool(RinkebyFlag.Name):
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "rinkeby")
}
setDataDir(ctx, cfg)
if ctx.GlobalIsSet(KeyStoreDirFlag.Name) {
cfg.KeyStoreDir = ctx.GlobalString(KeyStoreDirFlag.Name)
@@ -995,6 +991,19 @@ func SetNodeConfig(ctx *cli.Context, cfg *node.Config) {
}
}
func setDataDir(ctx *cli.Context, cfg *node.Config) {
switch {
case ctx.GlobalIsSet(DataDirFlag.Name):
cfg.DataDir = ctx.GlobalString(DataDirFlag.Name)
case ctx.GlobalBool(DeveloperFlag.Name):
cfg.DataDir = "" // unless explicitly requested, use memory databases
case ctx.GlobalBool(TestnetFlag.Name):
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "testnet")
case ctx.GlobalBool(RinkebyFlag.Name):
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "rinkeby")
}
}
func setGPO(ctx *cli.Context, cfg *gasprice.Config) {
if ctx.GlobalIsSet(GpoBlocksFlag.Name) {
cfg.Blocks = ctx.GlobalInt(GpoBlocksFlag.Name)
@@ -1157,8 +1166,11 @@ func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *eth.Config) {
}
cfg.NoPruning = ctx.GlobalString(GCModeFlag.Name) == "archive"
if ctx.GlobalIsSet(CacheFlag.Name) || ctx.GlobalIsSet(CacheTrieFlag.Name) {
cfg.TrieCleanCache = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheTrieFlag.Name) / 100
}
if ctx.GlobalIsSet(CacheFlag.Name) || ctx.GlobalIsSet(CacheGCFlag.Name) {
cfg.TrieCache = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheGCFlag.Name) / 100
cfg.TrieDirtyCache = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheGCFlag.Name) / 100
}
if ctx.GlobalIsSet(MinerNotifyFlag.Name) {
cfg.MinerNotify = strings.Split(ctx.GlobalString(MinerNotifyFlag.Name), ",")
@@ -1393,12 +1405,16 @@ func MakeChain(ctx *cli.Context, stack *node.Node) (chain *core.BlockChain, chai
Fatalf("--%s must be either 'full' or 'archive'", GCModeFlag.Name)
}
cache := &core.CacheConfig{
Disabled: ctx.GlobalString(GCModeFlag.Name) == "archive",
TrieNodeLimit: eth.DefaultConfig.TrieCache,
TrieTimeLimit: eth.DefaultConfig.TrieTimeout,
Disabled: ctx.GlobalString(GCModeFlag.Name) == "archive",
TrieCleanLimit: eth.DefaultConfig.TrieCleanCache,
TrieDirtyLimit: eth.DefaultConfig.TrieDirtyCache,
TrieTimeLimit: eth.DefaultConfig.TrieTimeout,
}
if ctx.GlobalIsSet(CacheFlag.Name) || ctx.GlobalIsSet(CacheTrieFlag.Name) {
cache.TrieCleanLimit = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheTrieFlag.Name) / 100
}
if ctx.GlobalIsSet(CacheFlag.Name) || ctx.GlobalIsSet(CacheGCFlag.Name) {
cache.TrieNodeLimit = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheGCFlag.Name) / 100
cache.TrieDirtyLimit = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheGCFlag.Name) / 100
}
vmcfg := vm.Config{EnablePreimageRecording: ctx.GlobalBool(VMEnableDebugFlag.Name)}
chain, err = core.NewBlockChain(chainDb, cache, config, engine, vmcfg, nil)
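With this change a single --cache budget is split across the database, clean-trie and dirty-trie caches. A worked example using the defaults visible here (database 50, trie 25) plus a 25 percent GC share, which is an assumption not shown in this hunk:

package main

import "fmt"

func main() {
	cache := 1024 // hypothetical --cache value in MB
	dbShare, trieShare, gcShare := 50, 25, 25
	fmt.Println("database io:", cache*dbShare/100, "MB")   // 512 MB
	fmt.Println("clean trie: ", cache*trieShare/100, "MB") // 256 MB
	fmt.Println("dirty trie: ", cache*gcShare/100, "MB")   // 256 MB
}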

View File

@@ -31,6 +31,15 @@ func ToHex(b []byte) string {
return "0x" + hex
}
// ToHexArray creates an array of hex-strings based on [][]byte
func ToHexArray(b [][]byte) []string {
r := make([]string, len(b))
for i := range b {
r[i] = ToHex(b[i])
}
return r
}
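A usage sketch for the new helper:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	// Each byte slice becomes a 0x-prefixed hex string.
	fmt.Println(common.ToHexArray([][]byte{{0x01}, {0xca, 0xfe}}))
	// Output: [0x01 0xcafe]
}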
// FromHex returns the bytes represented by the hexadecimal string s.
// s may be prefixed with "0x".
func FromHex(s string) []byte {

View File

@@ -31,14 +31,15 @@ import (
var versionRegexp = regexp.MustCompile(`([0-9]+)\.([0-9]+)\.([0-9]+)`)
// Contract contains information about a compiled contract, alongside its code.
// Contract contains information about a compiled contract, alongside its code and runtime code.
type Contract struct {
Code string `json:"code"`
Info ContractInfo `json:"info"`
Code string `json:"code"`
RuntimeCode string `json:"runtime-code"`
Info ContractInfo `json:"info"`
}
// ContractInfo contains information about a compiled contract, including access
// to the ABI definition, user and developer docs, and metadata.
// to the ABI definition, source mapping, user and developer docs, and metadata.
//
// Depending on the source, language version, compiler version, and compiler
// options will provide information about how the contract was compiled.
@@ -48,6 +49,8 @@ type ContractInfo struct {
LanguageVersion string `json:"languageVersion"`
CompilerVersion string `json:"compilerVersion"`
CompilerOptions string `json:"compilerOptions"`
SrcMap string `json:"srcMap"`
SrcMapRuntime string `json:"srcMapRuntime"`
AbiDefinition interface{} `json:"abiDefinition"`
UserDoc interface{} `json:"userDoc"`
DeveloperDoc interface{} `json:"developerDoc"`
@@ -63,14 +66,16 @@ type Solidity struct {
// --combined-output format
type solcOutput struct {
Contracts map[string]struct {
Bin, Abi, Devdoc, Userdoc, Metadata string
BinRuntime string `json:"bin-runtime"`
SrcMapRuntime string `json:"srcmap-runtime"`
Bin, SrcMap, Abi, Devdoc, Userdoc, Metadata string
}
Version string
}
func (s *Solidity) makeArgs() []string {
p := []string{
"--combined-json", "bin,abi,userdoc,devdoc",
"--combined-json", "bin,bin-runtime,srcmap,srcmap-runtime,abi,userdoc,devdoc",
"--optimize", // code optimizer switched on
}
if s.Major > 0 || s.Minor > 4 || s.Patch > 6 {
@@ -157,7 +162,7 @@ func (s *Solidity) run(cmd *exec.Cmd, source string) (map[string]*Contract, erro
// provided source, language and compiler version, and compiler options are all
// passed through into the Contract structs.
//
// The solc output is expected to contain ABI, user docs, and dev docs.
// The solc output is expected to contain ABI, source mapping, user docs, and dev docs.
//
// Returns an error if the JSON is malformed or missing data, or if the JSON
// embedded within the JSON is malformed.
@@ -184,13 +189,16 @@ func ParseCombinedJSON(combinedJSON []byte, source string, languageVersion strin
return nil, fmt.Errorf("solc: error reading dev doc: %v", err)
}
contracts[name] = &Contract{
Code: "0x" + info.Bin,
Code: "0x" + info.Bin,
RuntimeCode: "0x" + info.BinRuntime,
Info: ContractInfo{
Source: source,
Language: "Solidity",
LanguageVersion: languageVersion,
CompilerVersion: compilerVersion,
CompilerOptions: compilerOptions,
SrcMap: info.SrcMap,
SrcMapRuntime: info.SrcMapRuntime,
AbiDefinition: abi,
UserDoc: userdoc,
DeveloperDoc: devdoc,
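For reference, with the extended selector the compiler ends up being invoked roughly as below; the source path is illustrative and version-dependent extra flags are elided:

package main

import "os/exec"

func main() {
	// Mirrors the argument list makeArgs assembles after this change.
	cmd := exec.Command("solc",
		"--combined-json", "bin,bin-runtime,srcmap,srcmap-runtime,abi,userdoc,devdoc",
		"--optimize",
		"contract.sol", // illustrative input file
	)
	_ = cmd // execution omitted; the sketch only shows the flag shape
}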

View File

@@ -696,7 +696,7 @@ func (c *Clique) SealHash(header *types.Header) common.Hash {
return sigHash(header)
}
// Close implements consensus.Engine. It's a noop for clique as there is are no background threads.
// Close implements consensus.Engine. It's a noop for clique as there are no background threads.
func (c *Clique) Close() error {
return nil
}

View File

@@ -37,27 +37,28 @@ type API struct {
// result[0] - 32 bytes hex encoded current block header pow-hash
// result[1] - 32 bytes hex encoded seed hash used for DAG
// result[2] - 32 bytes hex encoded boundary condition ("target"), 2^256/difficulty
func (api *API) GetWork() ([3]string, error) {
// result[3] - hex encoded block number
func (api *API) GetWork() ([4]string, error) {
if api.ethash.config.PowMode != ModeNormal && api.ethash.config.PowMode != ModeTest {
return [3]string{}, errors.New("not supported")
return [4]string{}, errors.New("not supported")
}
var (
workCh = make(chan [3]string, 1)
workCh = make(chan [4]string, 1)
errc = make(chan error, 1)
)
select {
case api.ethash.fetchWorkCh <- &sealWork{errc: errc, res: workCh}:
case <-api.ethash.exitCh:
return [3]string{}, errEthashStopped
return [4]string{}, errEthashStopped
}
select {
case work := <-workCh:
return work, nil
case err := <-errc:
return [3]string{}, err
return [4]string{}, err
}
}
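Remote sealers can now recover the block height from the fourth element. A consumer-side sketch with dummy hash/seed/target values:

package main

import (
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	work := [4]string{
		"0x" + strings.Repeat("11", 32), // result[0]: pow-hash (dummy)
		"0x" + strings.Repeat("22", 32), // result[1]: seed hash (dummy)
		"0x" + strings.Repeat("ff", 32), // result[2]: boundary condition (dummy)
		"0x66cf3e",                      // result[3]: hex encoded block number
	}
	number, err := hexutil.DecodeBig(work[3])
	if err != nil {
		panic(err)
	}
	fmt.Println("sealing work for block", number) // 6737726
}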

View File

@@ -432,7 +432,7 @@ type hashrate struct {
// sealWork wraps a seal work package for remote sealer.
type sealWork struct {
errc chan error
res chan [3]string
res chan [4]string
}
// Ethash is a consensus engine based on proof-of-work implementing the ethash

View File

@@ -107,7 +107,7 @@ func TestRemoteSealer(t *testing.T) {
ethash.Seal(nil, block, results, nil)
var (
work [3]string
work [4]string
err error
)
if work, err = api.GetWork(); err != nil || work[0] != sealhash.Hex() {

View File

@@ -30,6 +30,7 @@ import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
@@ -193,7 +194,7 @@ func (ethash *Ethash) remote(notify []string, noverify bool) {
results chan<- *types.Block
currentBlock *types.Block
currentWork [3]string
currentWork [4]string
notifyTransport = &http.Transport{}
notifyClient = &http.Client{
@@ -234,12 +235,14 @@ func (ethash *Ethash) remote(notify []string, noverify bool) {
// result[0], 32 bytes hex encoded current block header pow-hash
// result[1], 32 bytes hex encoded seed hash used for DAG
// result[2], 32 bytes hex encoded boundary condition ("target"), 2^256/difficulty
// result[3], hex encoded block number
makeWork := func(block *types.Block) {
hash := ethash.SealHash(block.Header())
currentWork[0] = hash.Hex()
currentWork[1] = common.BytesToHash(SeedHash(block.NumberU64())).Hex()
currentWork[2] = common.BytesToHash(new(big.Int).Div(two256, block.Difficulty()).Bytes()).Hex()
currentWork[3] = hexutil.EncodeBig(block.Number())
// Trace the seal work fetched by remote sealer.
currentBlock = block

View File

@@ -53,12 +53,6 @@ func (v *BlockValidator) ValidateBody(block *types.Block) error {
if v.bc.HasBlockAndState(block.Hash(), block.NumberU64()) {
return ErrKnownBlock
}
if !v.bc.HasBlockAndState(block.ParentHash(), block.NumberU64()-1) {
if !v.bc.HasBlock(block.ParentHash(), block.NumberU64()-1) {
return consensus.ErrUnknownAncestor
}
return consensus.ErrPrunedAncestor
}
// Header validity is known at this point, check the uncles and transactions
header := block.Header()
if err := v.engine.VerifyUncles(v.bc, block); err != nil {
@@ -70,6 +64,12 @@ func (v *BlockValidator) ValidateBody(block *types.Block) error {
if hash := types.DeriveSha(block.Transactions()); hash != header.TxHash {
return fmt.Errorf("transaction root hash mismatch: have %x, want %x", hash, header.TxHash)
}
if !v.bc.HasBlockAndState(block.ParentHash(), block.NumberU64()-1) {
if !v.bc.HasBlock(block.ParentHash(), block.NumberU64()-1) {
return consensus.ErrUnknownAncestor
}
return consensus.ErrPrunedAncestor
}
return nil
}
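The reorder means the header-derived checks run before the ancestor lookups, so a bad uncle hash or transaction root fails fast. A condensed standalone sketch of the resulting control flow (all names hypothetical):

package main

import (
	"errors"
	"fmt"
)

var (
	errKnownBlock      = errors.New("block already known")
	errBadUncles       = errors.New("uncle verification failed")
	errBadTxRoot       = errors.New("transaction root hash mismatch")
	errUnknownAncestor = errors.New("unknown ancestor")
	errPrunedAncestor  = errors.New("pruned ancestor")
)

// validateBody mirrors the reordered checks: cheap structural checks first,
// parent block/state availability last.
func validateBody(known, unclesOK, txRootOK, haveParentState, haveParentBlock bool) error {
	if known {
		return errKnownBlock
	}
	if !unclesOK {
		return errBadUncles
	}
	if !txRootOK {
		return errBadTxRoot
	}
	if !haveParentState {
		if !haveParentBlock {
			return errUnknownAncestor
		}
		return errPrunedAncestor
	}
	return nil
}

func main() {
	// A structurally valid block whose parent state was pruned now reaches
	// the ancestor check instead of failing before the cheap checks ran.
	fmt.Println(validateBody(false, true, true, false, true))
}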

View File

@@ -47,7 +47,10 @@ import (
)
var (
blockInsertTimer = metrics.NewRegisteredTimer("chain/inserts", nil)
blockInsertTimer = metrics.NewRegisteredTimer("chain/inserts", nil)
blockValidationTimer = metrics.NewRegisteredTimer("chain/validation", nil)
blockExecutionTimer = metrics.NewRegisteredTimer("chain/execution", nil)
blockWriteTimer = metrics.NewRegisteredTimer("chain/write", nil)
ErrNoGenesis = errors.New("Genesis not found in chain")
)
@@ -68,9 +71,10 @@ const (
// CacheConfig contains the configuration values for the trie caching/pruning
// that's resident in a blockchain.
type CacheConfig struct {
Disabled bool // Whether to disable trie write caching (archive node)
TrieNodeLimit int // Memory limit (MB) at which to flush the current in-memory trie to disk
TrieTimeLimit time.Duration // Time limit after which to flush the current in-memory trie to disk
Disabled bool // Whether to disable trie write caching (archive node)
TrieCleanLimit int // Memory allowance (MB) to use for caching trie nodes in memory
TrieDirtyLimit int // Memory limit (MB) at which to start flushing dirty trie nodes to disk
TrieTimeLimit time.Duration // Time limit after which to flush the current in-memory trie to disk
}
// BlockChain represents the canonical chain given a database with a genesis
@@ -140,8 +144,9 @@ type BlockChain struct {
func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, chainConfig *params.ChainConfig, engine consensus.Engine, vmConfig vm.Config, shouldPreserve func(block *types.Block) bool) (*BlockChain, error) {
if cacheConfig == nil {
cacheConfig = &CacheConfig{
TrieNodeLimit: 256 * 1024 * 1024,
TrieTimeLimit: 5 * time.Minute,
TrieCleanLimit: 256,
TrieDirtyLimit: 256,
TrieTimeLimit: 5 * time.Minute,
}
}
bodyCache, _ := lru.New(bodyCacheLimit)
@@ -156,7 +161,7 @@ func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, chainConfig *par
cacheConfig: cacheConfig,
db: db,
triegc: prque.New(nil),
stateCache: state.NewDatabase(db),
stateCache: state.NewDatabaseWithCache(db, cacheConfig.TrieCleanLimit),
quit: make(chan struct{}),
shouldPreserve: shouldPreserve,
bodyCache: bodyCache,
@@ -393,6 +398,11 @@ func (bc *BlockChain) StateAt(root common.Hash) (*state.StateDB, error) {
return state.New(root, bc.stateCache)
}
// StateCache returns the caching database underpinning the blockchain instance.
func (bc *BlockChain) StateCache() state.Database {
return bc.stateCache
}
// Reset purges the entire blockchain, restoring it to its genesis state.
func (bc *BlockChain) Reset() error {
return bc.ResetWithGenesisBlock(bc.genesisBlock)
@@ -438,7 +448,11 @@ func (bc *BlockChain) repair(head **types.Block) error {
return nil
}
// Otherwise rewind one block and recheck state availability there
(*head) = bc.GetBlock((*head).ParentHash(), (*head).NumberU64()-1)
block := bc.GetBlock((*head).ParentHash(), (*head).NumberU64()-1)
if block == nil {
return fmt.Errorf("missing block %d [%x]", (*head).NumberU64()-1, (*head).ParentHash())
}
(*head) = block
}
}
@@ -554,6 +568,17 @@ func (bc *BlockChain) HasBlock(hash common.Hash, number uint64) bool {
return rawdb.HasBody(bc.db, hash, number)
}
// HasFastBlock checks if a fast block is fully present in the database or not.
func (bc *BlockChain) HasFastBlock(hash common.Hash, number uint64) bool {
if !bc.HasBlock(hash, number) {
return false
}
if bc.receiptsCache.Contains(hash) {
return true
}
return rawdb.HasReceipts(bc.db, hash, number)
}
// HasState checks if state trie is fully present in the database or not.
func (bc *BlockChain) HasState(hash common.Hash) bool {
_, err := bc.stateCache.OpenTrie(hash)
@@ -611,12 +636,10 @@ func (bc *BlockChain) GetReceiptsByHash(hash common.Hash) types.Receipts {
if receipts, ok := bc.receiptsCache.Get(hash); ok {
return receipts.(types.Receipts)
}
number := rawdb.ReadHeaderNumber(bc.db, hash)
if number == nil {
return nil
}
receipts := rawdb.ReadReceipts(bc.db, hash, *number)
bc.receiptsCache.Add(hash, receipts)
return receipts
@@ -938,7 +961,7 @@ func (bc *BlockChain) WriteBlockWithState(block *types.Block, receipts []*types.
// If we exceeded our memory allowance, flush matured singleton nodes to disk
var (
nodes, imgs = triedb.Size()
limit = common.StorageSize(bc.cacheConfig.TrieNodeLimit) * 1024 * 1024
limit = common.StorageSize(bc.cacheConfig.TrieDirtyLimit) * 1024 * 1024
)
if nodes > limit || imgs > 4*1024*1024 {
triedb.Cap(limit - ethdb.IdealBatchSize)
@@ -1002,7 +1025,7 @@ func (bc *BlockChain) WriteBlockWithState(block *types.Block, receipts []*types.
}
// Write the positional metadata for transaction/receipt lookups and preimages
rawdb.WriteTxLookupEntries(batch, block)
rawdb.WritePreimages(batch, block.NumberU64(), state.Preimages())
rawdb.WritePreimages(batch, state.Preimages())
status = CanonStatTy
} else {
@@ -1020,6 +1043,18 @@ func (bc *BlockChain) WriteBlockWithState(block *types.Block, receipts []*types.
return status, nil
}
// addFutureBlock checks if the block is within the max allowed window to get
// accepted for future processing, and returns an error if the block is too far
// ahead and was not added.
func (bc *BlockChain) addFutureBlock(block *types.Block) error {
max := big.NewInt(time.Now().Unix() + maxTimeFutureBlocks)
if block.Time().Cmp(max) > 0 {
return fmt.Errorf("future block timestamp %v > allowed %v", block.Time(), max)
}
bc.futureBlocks.Add(block.Hash(), block)
return nil
}
// InsertChain attempts to insert the given batch of blocks in to the canonical
// chain or, otherwise, create a fork. If an error is returned it will return
// the index number of the failing block as well an error describing what went
@@ -1027,18 +1062,9 @@ func (bc *BlockChain) WriteBlockWithState(block *types.Block, receipts []*types.
//
// After insertion is done, all accumulated events will be fired.
func (bc *BlockChain) InsertChain(chain types.Blocks) (int, error) {
n, events, logs, err := bc.insertChain(chain)
bc.PostChainEvents(events, logs)
return n, err
}
// insertChain will execute the actual chain insertion and event aggregation. The
// only reason this method exists as a separate one is to make locking cleaner
// with deferred statements.
func (bc *BlockChain) insertChain(chain types.Blocks) (int, []interface{}, []*types.Log, error) {
// Sanity check that we have something meaningful to import
if len(chain) == 0 {
return 0, nil, nil, nil
return 0, nil
}
// Do a sanity check that the provided chain is actually ordered and linked
for i := 1; i < len(chain); i++ {
@@ -1047,16 +1073,36 @@ func (bc *BlockChain) insertChain(chain types.Blocks) (int, []interface{}, []*ty
log.Error("Non contiguous block insert", "number", chain[i].Number(), "hash", chain[i].Hash(),
"parent", chain[i].ParentHash(), "prevnumber", chain[i-1].Number(), "prevhash", chain[i-1].Hash())
return 0, nil, nil, fmt.Errorf("non contiguous insert: item %d is #%d [%x…], item %d is #%d [%x…] (parent [%x…])", i-1, chain[i-1].NumberU64(),
return 0, fmt.Errorf("non contiguous insert: item %d is #%d [%x…], item %d is #%d [%x…] (parent [%x…])", i-1, chain[i-1].NumberU64(),
chain[i-1].Hash().Bytes()[:4], i, chain[i].NumberU64(), chain[i].Hash().Bytes()[:4], chain[i].ParentHash().Bytes()[:4])
}
}
// Pre-checks passed, start the full block imports
bc.wg.Add(1)
defer bc.wg.Done()
bc.chainmu.Lock()
defer bc.chainmu.Unlock()
n, events, logs, err := bc.insertChain(chain, true)
bc.chainmu.Unlock()
bc.wg.Done()
bc.PostChainEvents(events, logs)
return n, err
}
// insertChain is the internal implementation of InsertChain, which assumes that
// 1) chains are contiguous, and 2) The chain mutex is held.
//
// This method is split out so that import batches that require re-injecting
// historical blocks can do so without releasing the lock, which could lead to
// racy behaviour. If a sidechain import is in progress, and the historic state
// is imported, but then new canon-head is added before the actual sidechain
// completes, then the historic state could be pruned again
func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals bool) (int, []interface{}, []*types.Log, error) {
// If the chain is terminating, don't even bother starting up
if atomic.LoadInt32(&bc.procInterrupt) == 1 {
return 0, nil, nil, nil
}
// Start a parallel signature recovery (signer will fluke on fork transition, minimal perf loss)
senderCacher.recoverFromBlocks(types.MakeSigner(bc.chainConfig, chain[0].Number()), chain)
// A queued approach to delivering events. This is generally
// faster than direct delivery and requires much less mutex
@@ -1073,16 +1119,56 @@ func (bc *BlockChain) insertChain(chain types.Blocks) (int, []interface{}, []*ty
for i, block := range chain {
headers[i] = block.Header()
seals[i] = true
seals[i] = verifySeals
}
abort, results := bc.engine.VerifyHeaders(bc, headers, seals)
defer close(abort)
// Start a parallel signature recovery (signer will fluke on fork transition, minimal perf loss)
senderCacher.recoverFromBlocks(types.MakeSigner(bc.chainConfig, chain[0].Number()), chain)
// Peek the error for the first block to decide the direction of the import logic
it := newInsertIterator(chain, results, bc.Validator())
// Iterate over the blocks and insert when the verifier permits
for i, block := range chain {
block, err := it.next()
switch {
// First block is pruned, insert as sidechain and reorg only if TD grows enough
case err == consensus.ErrPrunedAncestor:
return bc.insertSidechain(it)
// First block is future, shove it (and all children) to the future queue (unknown ancestor)
case err == consensus.ErrFutureBlock || (err == consensus.ErrUnknownAncestor && bc.futureBlocks.Contains(it.first().ParentHash())):
for block != nil && (it.index == 0 || err == consensus.ErrUnknownAncestor) {
if err := bc.addFutureBlock(block); err != nil {
return it.index, events, coalescedLogs, err
}
block, err = it.next()
}
stats.queued += it.processed()
stats.ignored += it.remaining()
// If there are any still remaining, mark as ignored
return it.index, events, coalescedLogs, err
// First block (and state) is known
// 1. We did a roll-back, and should now do a re-import
// 2. The block is stored as a sidechain, and is lying about its stateroot, and passes a stateroot
// from the canonical chain, which has not been verified.
case err == ErrKnownBlock:
// Skip all known blocks that are behind us
current := bc.CurrentBlock().NumberU64()
for block != nil && err == ErrKnownBlock && current >= block.NumberU64() {
stats.ignored++
block, err = it.next()
}
// Falls through to the block import
// Some other error occurred, abort
case err != nil:
stats.ignored += len(it.chain)
bc.reportBlock(block, nil, err)
return it.index, events, coalescedLogs, err
}
// No validation errors for the first block (or chain prefix skipped)
for ; block != nil && err == nil; block, err = it.next() {
// If the chain is terminating, stop processing blocks
if atomic.LoadInt32(&bc.procInterrupt) == 1 {
log.Debug("Premature abort during blocks processing")
@@ -1091,115 +1177,53 @@ func (bc *BlockChain) insertChain(chain types.Blocks) (int, []interface{}, []*ty
// If the header is a banned one, straight out abort
if BadHashes[block.Hash()] {
bc.reportBlock(block, nil, ErrBlacklistedHash)
return i, events, coalescedLogs, ErrBlacklistedHash
return it.index, events, coalescedLogs, ErrBlacklistedHash
}
// Wait for the block's verification to complete
bstart := time.Now()
// Retrieve the parent block and its state to execute on top
start := time.Now()
err := <-results
if err == nil {
err = bc.Validator().ValidateBody(block)
}
switch {
case err == ErrKnownBlock:
// Block and state both already known. However if the current block is below
// this number we did a rollback and we should reimport it nonetheless.
if bc.CurrentBlock().NumberU64() >= block.NumberU64() {
stats.ignored++
continue
}
case err == consensus.ErrFutureBlock:
// Allow up to MaxFuture second in the future blocks. If this limit is exceeded
// the chain is discarded and processed at a later time if given.
max := big.NewInt(time.Now().Unix() + maxTimeFutureBlocks)
if block.Time().Cmp(max) > 0 {
return i, events, coalescedLogs, fmt.Errorf("future block: %v > %v", block.Time(), max)
}
bc.futureBlocks.Add(block.Hash(), block)
stats.queued++
continue
case err == consensus.ErrUnknownAncestor && bc.futureBlocks.Contains(block.ParentHash()):
bc.futureBlocks.Add(block.Hash(), block)
stats.queued++
continue
case err == consensus.ErrPrunedAncestor:
// Block competing with the canonical chain, store in the db, but don't process
// until the competitor TD goes above the canonical TD
currentBlock := bc.CurrentBlock()
localTd := bc.GetTd(currentBlock.Hash(), currentBlock.NumberU64())
externTd := new(big.Int).Add(bc.GetTd(block.ParentHash(), block.NumberU64()-1), block.Difficulty())
if localTd.Cmp(externTd) > 0 {
if err = bc.WriteBlockWithoutState(block, externTd); err != nil {
return i, events, coalescedLogs, err
}
continue
}
// Competitor chain beat canonical, gather all blocks from the common ancestor
var winner []*types.Block
parent := bc.GetBlock(block.ParentHash(), block.NumberU64()-1)
for !bc.HasState(parent.Root()) {
winner = append(winner, parent)
parent = bc.GetBlock(parent.ParentHash(), parent.NumberU64()-1)
}
for j := 0; j < len(winner)/2; j++ {
winner[j], winner[len(winner)-1-j] = winner[len(winner)-1-j], winner[j]
}
// Import all the pruned blocks to make the state available
bc.chainmu.Unlock()
_, evs, logs, err := bc.insertChain(winner)
bc.chainmu.Lock()
events, coalescedLogs = evs, logs
if err != nil {
return i, events, coalescedLogs, err
}
case err != nil:
bc.reportBlock(block, nil, err)
return i, events, coalescedLogs, err
}
// Create a new statedb using the parent block and report an
// error if it fails.
var parent *types.Block
if i == 0 {
parent := it.previous()
if parent == nil {
parent = bc.GetBlock(block.ParentHash(), block.NumberU64()-1)
} else {
parent = chain[i-1]
}
state, err := state.New(parent.Root(), bc.stateCache)
if err != nil {
return i, events, coalescedLogs, err
return it.index, events, coalescedLogs, err
}
// Process block using the parent state as reference point.
t0 := time.Now()
receipts, logs, usedGas, err := bc.processor.Process(block, state, bc.vmConfig)
t1 := time.Now()
if err != nil {
bc.reportBlock(block, receipts, err)
return i, events, coalescedLogs, err
return it.index, events, coalescedLogs, err
}
// Validate the state using the default validator
err = bc.Validator().ValidateState(block, parent, state, receipts, usedGas)
if err != nil {
if err := bc.Validator().ValidateState(block, parent, state, receipts, usedGas); err != nil {
bc.reportBlock(block, receipts, err)
return i, events, coalescedLogs, err
return it.index, events, coalescedLogs, err
}
proctime := time.Since(bstart)
t2 := time.Now()
proctime := time.Since(start)
// Write the block to the chain and get the status.
status, err := bc.WriteBlockWithState(block, receipts, state)
t3 := time.Now()
if err != nil {
return i, events, coalescedLogs, err
return it.index, events, coalescedLogs, err
}
blockInsertTimer.UpdateSince(start)
blockExecutionTimer.Update(t1.Sub(t0))
blockValidationTimer.Update(t2.Sub(t1))
blockWriteTimer.Update(t3.Sub(t2))
switch status {
case CanonStatTy:
log.Debug("Inserted new block", "number", block.Number(), "hash", block.Hash(), "uncles", len(block.Uncles()),
"txs", len(block.Transactions()), "gas", block.GasUsed(), "elapsed", common.PrettyDuration(time.Since(bstart)))
log.Debug("Inserted new block", "number", block.Number(), "hash", block.Hash(),
"uncles", len(block.Uncles()), "txs", len(block.Transactions()), "gas", block.GasUsed(),
"elapsed", common.PrettyDuration(time.Since(start)),
"root", block.Root())
coalescedLogs = append(coalescedLogs, logs...)
blockInsertTimer.UpdateSince(bstart)
events = append(events, ChainEvent{block, block.Hash(), logs})
lastCanon = block
@@ -1207,78 +1231,153 @@ func (bc *BlockChain) insertChain(chain types.Blocks) (int, []interface{}, []*ty
bc.gcproc += proctime
case SideStatTy:
log.Debug("Inserted forked block", "number", block.Number(), "hash", block.Hash(), "diff", block.Difficulty(), "elapsed",
common.PrettyDuration(time.Since(bstart)), "txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()))
blockInsertTimer.UpdateSince(bstart)
log.Debug("Inserted forked block", "number", block.Number(), "hash", block.Hash(),
"diff", block.Difficulty(), "elapsed", common.PrettyDuration(time.Since(start)),
"txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()),
"root", block.Root())
events = append(events, ChainSideEvent{block})
}
blockInsertTimer.UpdateSince(start)
stats.processed++
stats.usedGas += usedGas
cache, _ := bc.stateCache.TrieDB().Size()
stats.report(chain, i, cache)
stats.report(chain, it.index, cache)
}
// Any blocks remaining here? The only ones we care about are the future ones
if block != nil && err == consensus.ErrFutureBlock {
if err := bc.addFutureBlock(block); err != nil {
return it.index, events, coalescedLogs, err
}
block, err = it.next()
for ; block != nil && err == consensus.ErrUnknownAncestor; block, err = it.next() {
if err := bc.addFutureBlock(block); err != nil {
return it.index, events, coalescedLogs, err
}
stats.queued++
}
}
stats.ignored += it.remaining()
// Append a single chain head event if we've progressed the chain
if lastCanon != nil && bc.CurrentBlock().Hash() == lastCanon.Hash() {
events = append(events, ChainHeadEvent{lastCanon})
}
return 0, events, coalescedLogs, nil
return it.index, events, coalescedLogs, err
}
// insertStats tracks and reports on block insertion.
type insertStats struct {
queued, processed, ignored int
usedGas uint64
lastIndex int
startTime mclock.AbsTime
}
// statsReportLimit is the time limit during import and export after which we
// always print out progress. This avoids the user wondering what's going on.
const statsReportLimit = 8 * time.Second
// report prints statistics if some number of blocks have been processed
// or more than a few seconds have passed since the last message.
func (st *insertStats) report(chain []*types.Block, index int, cache common.StorageSize) {
// Fetch the timings for the batch
// insertSidechain is called when an import batch hits upon a pruned ancestor
// error, which happens when a sidechain with a sufficiently old fork-block is
// found.
//
// The method writes all (header-and-body-valid) blocks to disk, then tries to
// switch over to the new chain if the TD exceeded the current chain.
func (bc *BlockChain) insertSidechain(it *insertIterator) (int, []interface{}, []*types.Log, error) {
var (
now = mclock.Now()
elapsed = time.Duration(now) - time.Duration(st.startTime)
externTd *big.Int
current = bc.CurrentBlock().NumberU64()
)
// If we're at the last block of the batch or report period reached, log
if index == len(chain)-1 || elapsed >= statsReportLimit {
var (
end = chain[index]
txs = countTransactions(chain[st.lastIndex : index+1])
)
context := []interface{}{
"blocks", st.processed, "txs", txs, "mgas", float64(st.usedGas) / 1000000,
"elapsed", common.PrettyDuration(elapsed), "mgasps", float64(st.usedGas) * 1000 / float64(elapsed),
"number", end.Number(), "hash", end.Hash(),
}
if timestamp := time.Unix(end.Time().Int64(), 0); time.Since(timestamp) > time.Minute {
context = append(context, []interface{}{"age", common.PrettyAge(timestamp)}...)
}
context = append(context, []interface{}{"cache", cache}...)
// The first sidechain block error is already verified to be ErrPrunedAncestor.
// Since we don't import them here, we expect ErrUnknownAncestor for the remaining
// ones. Any other error means that the block is invalid, and should not be written
// to disk.
block, err := it.current(), consensus.ErrPrunedAncestor
for ; block != nil && (err == consensus.ErrPrunedAncestor); block, err = it.next() {
// Check the canonical state root for that number
if number := block.NumberU64(); current >= number {
if canonical := bc.GetBlockByNumber(number); canonical != nil && canonical.Root() == block.Root() {
// This is most likely a shadow-state attack. When a fork is imported into the
// database, and it eventually reaches a block height which is not pruned, we
// just found that the state already exists! This means that the sidechain block
// refers to a state which already exists in our canon chain.
//
// If left unchecked, we would now proceed importing the blocks, without actually
// having verified the state of the previous blocks.
log.Warn("Sidechain ghost-state attack detected", "number", block.NumberU64(), "sideroot", block.Root(), "canonroot", canonical.Root())
if st.queued > 0 {
context = append(context, []interface{}{"queued", st.queued}...)
// If someone legitimately side-mines blocks, they would still be imported as usual. However,
// we cannot risk writing unverified blocks to disk when they obviously target the pruning
// mechanism.
return it.index, nil, nil, errors.New("sidechain ghost-state attack")
}
}
if st.ignored > 0 {
context = append(context, []interface{}{"ignored", st.ignored}...)
if externTd == nil {
externTd = bc.GetTd(block.ParentHash(), block.NumberU64()-1)
}
log.Info("Imported new chain segment", context...)
externTd = new(big.Int).Add(externTd, block.Difficulty())
*st = insertStats{startTime: now, lastIndex: index + 1}
if !bc.HasBlock(block.Hash(), block.NumberU64()) {
start := time.Now()
if err := bc.WriteBlockWithoutState(block, externTd); err != nil {
return it.index, nil, nil, err
}
log.Debug("Inserted sidechain block", "number", block.Number(), "hash", block.Hash(),
"diff", block.Difficulty(), "elapsed", common.PrettyDuration(time.Since(start)),
"txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()),
"root", block.Root())
}
}
}
func countTransactions(chain []*types.Block) (c int) {
for _, b := range chain {
c += len(b.Transactions())
// At this point, we've written all sidechain blocks to database. Loop ended
// either on some other error or all were processed. If there was some other
// error, we can ignore the rest of those blocks.
//
// If the externTd was larger than our local TD, we now need to reimport the previous
// blocks to regenerate the required state
localTd := bc.GetTd(bc.CurrentBlock().Hash(), current)
if localTd.Cmp(externTd) > 0 {
log.Info("Sidechain written to disk", "start", it.first().NumberU64(), "end", it.previous().NumberU64(), "sidetd", externTd, "localtd", localTd)
return it.index, nil, nil, err
}
return c
// Gather all the sidechain hashes (full blocks may be memory heavy)
var (
hashes []common.Hash
numbers []uint64
)
parent := bc.GetHeader(it.previous().Hash(), it.previous().NumberU64())
for parent != nil && !bc.HasState(parent.Root) {
hashes = append(hashes, parent.Hash())
numbers = append(numbers, parent.Number.Uint64())
parent = bc.GetHeader(parent.ParentHash, parent.Number.Uint64()-1)
}
if parent == nil {
return it.index, nil, nil, errors.New("missing parent")
}
// Import all the pruned blocks to make the state available
var (
blocks []*types.Block
memory common.StorageSize
)
for i := len(hashes) - 1; i >= 0; i-- {
// Append the next block to our batch
block := bc.GetBlock(hashes[i], numbers[i])
blocks = append(blocks, block)
memory += block.Size()
// If memory use grew too large, import and continue. Sadly we need to discard
// all raised events and logs from notifications since we're too heavy on the
// memory here.
if len(blocks) >= 2048 || memory > 64*1024*1024 {
log.Info("Importing heavy sidechain segment", "blocks", len(blocks), "start", blocks[0].NumberU64(), "end", block.NumberU64())
if _, _, _, err := bc.insertChain(blocks, false); err != nil {
return 0, nil, nil, err
}
blocks, memory = blocks[:0], 0
// If the chain is terminating, stop processing blocks
if atomic.LoadInt32(&bc.procInterrupt) == 1 {
log.Debug("Premature abort during blocks processing")
return 0, nil, nil, nil
}
}
}
if len(blocks) > 0 {
log.Info("Importing sidechain segment", "start", blocks[0].NumberU64(), "end", blocks[len(blocks)-1].NumberU64())
return bc.insertChain(blocks, false)
}
return 0, nil, nil, nil
}
// reorg takes two blocks, an old chain and a new chain, and will reconstruct the blocks and insert them
@@ -1453,8 +1552,10 @@ func (bc *BlockChain) reportBlock(block *types.Block, receipts types.Receipts, e
bc.addBadBlock(block)
var receiptString string
for _, receipt := range receipts {
receiptString += fmt.Sprintf("\t%v\n", receipt)
for i, receipt := range receipts {
receiptString += fmt.Sprintf("\t %d: cumulative: %v gas: %v contract: %v status: %v tx: %v logs: %v bloom: %x state: %x\n",
i, receipt.CumulativeGasUsed, receipt.GasUsed, receipt.ContractAddress.Hex(),
receipt.Status, receipt.TxHash.Hex(), receipt.Logs, receipt.Bloom, receipt.PostState)
}
log.Error(fmt.Sprintf(`
########## BAD BLOCK #########

143
core/blockchain_insert.go Normal file
View File

@@ -0,0 +1,143 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package core
import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/mclock"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
)
// insertStats tracks and reports on block insertion.
type insertStats struct {
queued, processed, ignored int
usedGas uint64
lastIndex int
startTime mclock.AbsTime
}
// statsReportLimit is the time limit during import and export after which we
// always print out progress. This avoids the user wondering what's going on.
const statsReportLimit = 8 * time.Second
// report prints statistics if some number of blocks have been processed
// or more than a few seconds have passed since the last message.
func (st *insertStats) report(chain []*types.Block, index int, cache common.StorageSize) {
// Fetch the timings for the batch
var (
now = mclock.Now()
elapsed = time.Duration(now) - time.Duration(st.startTime)
)
// If we're at the last block of the batch or report period reached, log
if index == len(chain)-1 || elapsed >= statsReportLimit {
// Count the number of transactions in this segment
var txs int
for _, block := range chain[st.lastIndex : index+1] {
txs += len(block.Transactions())
}
end := chain[index]
// Assemble the log context and send it to the logger
context := []interface{}{
"blocks", st.processed, "txs", txs, "mgas", float64(st.usedGas) / 1000000,
"elapsed", common.PrettyDuration(elapsed), "mgasps", float64(st.usedGas) * 1000 / float64(elapsed),
"number", end.Number(), "hash", end.Hash(),
}
if timestamp := time.Unix(end.Time().Int64(), 0); time.Since(timestamp) > time.Minute {
context = append(context, []interface{}{"age", common.PrettyAge(timestamp)}...)
}
context = append(context, []interface{}{"cache", cache}...)
if st.queued > 0 {
context = append(context, []interface{}{"queued", st.queued}...)
}
if st.ignored > 0 {
context = append(context, []interface{}{"ignored", st.ignored}...)
}
log.Info("Imported new chain segment", context...)
// Bump the stats reported to the next section
*st = insertStats{startTime: now, lastIndex: index + 1}
}
}
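The report method throttles progress logging: it prints either at the end of the batch or at most once per statsReportLimit. A stripped-down sketch of that throttling, using only the standard library:

package main

import (
	"log"
	"time"
)

// reporter logs progress at most once per interval, plus once at the very end,
// mirroring the index == len(chain)-1 || elapsed >= statsReportLimit check above.
type reporter struct {
	last     time.Time
	interval time.Duration // e.g. 8 * time.Second to match statsReportLimit
}

func (r *reporter) report(done, total int) {
	if done == total || time.Since(r.last) >= r.interval {
		log.Printf("progress: %d/%d", done, total)
		r.last = time.Now()
	}
}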
// insertIterator is a helper to assist during chain import.
type insertIterator struct {
chain types.Blocks
results <-chan error
index int
validator Validator
}
// newInsertIterator creates a new iterator based on the given blocks, which are
// assumed to be a contiguous chain.
func newInsertIterator(chain types.Blocks, results <-chan error, validator Validator) *insertIterator {
return &insertIterator{
chain: chain,
results: results,
index: -1,
validator: validator,
}
}
// next returns the next block in the iterator, along with any potential validation
// error for that block. When the end is reached, it will return (nil, nil).
func (it *insertIterator) next() (*types.Block, error) {
if it.index+1 >= len(it.chain) {
it.index = len(it.chain)
return nil, nil
}
it.index++
if err := <-it.results; err != nil {
return it.chain[it.index], err
}
return it.chain[it.index], it.validator.ValidateBody(it.chain[it.index])
}
// current returns the current block that's being processed.
func (it *insertIterator) current() *types.Block {
if it.index < 0 || it.index+1 >= len(it.chain) {
return nil
}
return it.chain[it.index]
}
// previous returns the previous block that was being processed, or nil
func (it *insertIterator) previous() *types.Block {
if it.index < 1 {
return nil
}
return it.chain[it.index-1]
}
// first returns the first block in the iterator.
func (it *insertIterator) first() *types.Block {
return it.chain[0]
}
// remaining returns the number of remaining blocks.
func (it *insertIterator) remaining() int {
return len(it.chain) - it.index
}
// processed returns the number of processed blocks.
func (it *insertIterator) processed() int {
return it.index + 1
}
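Taken together, the iterator is driven in a simple loop; a hedged sketch of a consumer (the error plumbing is an assumption, not the actual insertChain body):

// drain consumes the iterator, stopping at the first validation error.
func drain(it *insertIterator) (int, error) {
	for block, err := it.next(); block != nil; block, err = it.next() {
		if err != nil {
			return it.index, err
		}
		// ... execute and commit the validated block here ...
	}
	return it.processed(), nil
}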

View File

@@ -579,11 +579,11 @@ func testInsertNonceError(t *testing.T, full bool) {
blockchain.hc.engine = blockchain.engine
failRes, err = blockchain.InsertHeaderChain(headers, 1)
}
// Check that the returned error indicates the failure.
// Check that the returned error indicates the failure
if failRes != failAt {
t.Errorf("test %d: failure index mismatch: have %d, want %d", i, failRes, failAt)
t.Errorf("test %d: failure (%v) index mismatch: have %d, want %d", i, err, failRes, failAt)
}
// Check that all no blocks after the failing block have been inserted.
// Check that all blocks after the failing block have been inserted
for j := 0; j < i-failAt; j++ {
if full {
if block := blockchain.GetBlockByNumber(failNum + uint64(j)); block != nil {
@@ -1345,7 +1345,7 @@ func TestLargeReorgTrieGC(t *testing.T) {
t.Fatalf("failed to insert shared chain: %v", err)
}
if _, err := chain.InsertChain(original); err != nil {
t.Fatalf("failed to insert shared chain: %v", err)
t.Fatalf("failed to insert original chain: %v", err)
}
// Ensure that the state associated with the forking point is pruned away
if node, _ := chain.stateCache.TrieDB().Node(shared[len(shared)-1].Root()); node != nil {

View File

@@ -33,12 +33,11 @@ import (
// BlockGen creates blocks for testing.
// See GenerateChain for a detailed explanation.
type BlockGen struct {
i int
parent *types.Block
chain []*types.Block
chainReader consensus.ChainReader
header *types.Header
statedb *state.StateDB
i int
parent *types.Block
chain []*types.Block
header *types.Header
statedb *state.StateDB
gasPool *GasPool
txs []*types.Transaction
@@ -138,7 +137,7 @@ func (b *BlockGen) AddUncle(h *types.Header) {
// For index -1, PrevBlock returns the parent block given to GenerateChain.
func (b *BlockGen) PrevBlock(index int) *types.Block {
if index >= b.i {
panic("block index out of range")
panic(fmt.Errorf("block index %d out of range (%d,%d)", index, -1, b.i))
}
if index == -1 {
return b.parent
@@ -154,7 +153,8 @@ func (b *BlockGen) OffsetTime(seconds int64) {
if b.header.Time.Cmp(b.parent.Header().Time) <= 0 {
panic("block time out of range")
}
b.header.Difficulty = b.engine.CalcDifficulty(b.chainReader, b.header.Time.Uint64(), b.parent.Header())
chainreader := &fakeChainReader{config: b.config}
b.header.Difficulty = b.engine.CalcDifficulty(chainreader, b.header.Time.Uint64(), b.parent.Header())
}
// GenerateChain creates a chain of n blocks. The first block's
@@ -174,14 +174,10 @@ func GenerateChain(config *params.ChainConfig, parent *types.Block, engine conse
config = params.TestChainConfig
}
blocks, receipts := make(types.Blocks, n), make([]types.Receipts, n)
chainreader := &fakeChainReader{config: config}
genblock := func(i int, parent *types.Block, statedb *state.StateDB) (*types.Block, types.Receipts) {
// TODO(karalabe): This is needed for clique, which depends on multiple blocks.
// It's nonetheless ugly to spin up a blockchain here. Get rid of this somehow.
blockchain, _ := NewBlockChain(db, nil, config, engine, vm.Config{}, nil)
defer blockchain.Stop()
b := &BlockGen{i: i, parent: parent, chain: blocks, chainReader: blockchain, statedb: statedb, config: config, engine: engine}
b.header = makeHeader(b.chainReader, parent, statedb, b.engine)
b := &BlockGen{i: i, chain: blocks, parent: parent, statedb: statedb, config: config, engine: engine}
b.header = makeHeader(chainreader, parent, statedb, b.engine)
// Mutate the state and block according to any hard-fork specs
if daoBlock := config.DAOForkBlock; daoBlock != nil {
@@ -201,7 +197,7 @@ func GenerateChain(config *params.ChainConfig, parent *types.Block, engine conse
}
if b.engine != nil {
// Finalize and seal the block
block, _ := b.engine.Finalize(b.chainReader, b.header, statedb, b.txs, b.uncles, b.receipts)
block, _ := b.engine.Finalize(chainreader, b.header, statedb, b.txs, b.uncles, b.receipts)
// Write state changes to db
root, err := statedb.Commit(config.IsEIP158(b.header.Number))
@@ -269,3 +265,19 @@ func makeBlockChain(parent *types.Block, n int, engine consensus.Engine, db ethd
})
return blocks
}
type fakeChainReader struct {
config *params.ChainConfig
genesis *types.Block
}
// Config returns the chain configuration.
func (cr *fakeChainReader) Config() *params.ChainConfig {
return cr.config
}
func (cr *fakeChainReader) CurrentHeader() *types.Header { return nil }
func (cr *fakeChainReader) GetHeaderByNumber(number uint64) *types.Header { return nil }
func (cr *fakeChainReader) GetHeaderByHash(hash common.Hash) *types.Header { return nil }
func (cr *fakeChainReader) GetHeader(hash common.Hash, number uint64) *types.Header { return nil }
func (cr *fakeChainReader) GetBlock(hash common.Hash, number uint64) *types.Block { return nil }

View File

@@ -271,6 +271,15 @@ func DeleteTd(db DatabaseDeleter, hash common.Hash, number uint64) {
}
}
// HasReceipts verifies the existence of all the transaction receipts belonging
// to a block.
func HasReceipts(db DatabaseReader, hash common.Hash, number uint64) bool {
if has, err := db.Has(blockReceiptsKey(number, hash)); !has || err != nil {
return false
}
return true
}
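HasReceipts permits an existence check without paying for the RLP decode that ReadReceipts (below) performs. A hypothetical call site:

package main

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/core/types"
)

// readReceiptsIfPresent probes cheaply with HasReceipts and only decodes
// the stored RLP when something is actually there.
func readReceiptsIfPresent(db rawdb.DatabaseReader, hash common.Hash, number uint64) types.Receipts {
	if !rawdb.HasReceipts(db, hash, number) {
		return nil
	}
	return rawdb.ReadReceipts(db, hash, number)
}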
// ReadReceipts retrieves all the transaction receipts belonging to a block.
func ReadReceipts(db DatabaseReader, hash common.Hash, number uint64) types.Receipts {
// Retrieve the flattened receipt slice
@@ -278,7 +287,7 @@ func ReadReceipts(db DatabaseReader, hash common.Hash, number uint64) types.Rece
if len(data) == 0 {
return nil
}
// Convert the revceipts from their storage form to their internal representation
// Convert the receipts from their storage form to their internal representation
storageReceipts := []*types.ReceiptForStorage{}
if err := rlp.DecodeBytes(data, &storageReceipts); err != nil {
log.Error("Invalid receipt array RLP", "hash", hash, "err", err)

View File

@@ -77,9 +77,8 @@ func ReadPreimage(db DatabaseReader, hash common.Hash) []byte {
return data
}
// WritePreimages writes the provided set of preimages to the database. `number` is the
// current block number, and is used for debug messages only.
func WritePreimages(db DatabaseWriter, number uint64, preimages map[common.Hash][]byte) {
// WritePreimages writes the provided set of preimages to the database.
func WritePreimages(db DatabaseWriter, preimages map[common.Hash][]byte) {
for hash, preimage := range preimages {
if err := db.Put(preimageKey(hash), preimage); err != nil {
log.Crit("Failed to store trie preimage", "err", err)

View File

@@ -35,7 +35,7 @@ var (
// headBlockKey tracks the latest known full block's hash.
headBlockKey = []byte("LastBlock")
// headFastBlockKey tracks the latest known incomplete block's hash duirng fast sync.
// headFastBlockKey tracks the latest known incomplete block's hash during fast sync.
headFastBlockKey = []byte("LastFast")
// fastTrieProgressKey tracks the number of trie entries imported during fast sync.

View File

@@ -72,13 +72,19 @@ type Trie interface {
}
// NewDatabase creates a backing store for state. The returned database is safe for
// concurrent use and retains cached trie nodes in memory. The pool is an optional
// intermediate trie-node memory pool between the low level storage layer and the
// high level trie abstraction.
// concurrent use and retains a few recent expanded trie nodes in memory. To keep
// more historical state in memory, use the NewDatabaseWithCache constructor.
func NewDatabase(db ethdb.Database) Database {
return NewDatabaseWithCache(db, 0)
}
// NewDatabaseWithCache creates a backing store for state. The returned database is safe for
// concurrent use and retains both a few recent expanded trie nodes in memory, as
// well as a lot of collapsed RLP trie nodes in a large memory cache.
func NewDatabaseWithCache(db ethdb.Database, cache int) Database {
csc, _ := lru.New(codeSizeCacheSize)
return &cachingDB{
db: trie.NewDatabase(db),
db: trie.NewDatabaseWithCache(db, cache),
codeSizeCache: csc,
}
}
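Callers opt into the larger cache by swapping constructors; a minimal sketch (the 256 figure is illustrative, and the unit is megabytes in this codebase):

// Default: only a few recent expanded trie nodes are kept in memory.
sdb := state.NewDatabase(db)

// With cache: additionally retain collapsed RLP trie nodes, up to ~256 MB.
sdb = state.NewDatabaseWithCache(db, 256)
_ = sdb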

View File

@@ -18,10 +18,10 @@
package state
import (
"errors"
"fmt"
"math/big"
"sort"
"sync"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
@@ -44,6 +44,13 @@ var (
emptyCode = crypto.Keccak256Hash(nil)
)
type proofList [][]byte
func (n *proofList) Put(key []byte, value []byte) error {
*n = append(*n, value)
return nil
}
// StateDBs within the ethereum protocol are used to store anything
// within the merkle trie. StateDBs take care of caching and storing
// nested states. It's the general query interface to retrieve:
@@ -79,8 +86,6 @@ type StateDB struct {
journal *journal
validRevisions []revision
nextRevisionId int
lock sync.Mutex
}
// Create a new state from a given trie.
@@ -256,6 +261,24 @@ func (self *StateDB) GetState(addr common.Address, hash common.Hash) common.Hash
return common.Hash{}
}
// GetProof returns the Merkle proof for a given account
func (self *StateDB) GetProof(a common.Address) ([][]byte, error) {
var proof proofList
err := self.trie.Prove(crypto.Keccak256(a.Bytes()), 0, &proof)
return [][]byte(proof), err
}
// GetStorageProof returns the storage proof for the given key
func (self *StateDB) GetStorageProof(a common.Address, key common.Hash) ([][]byte, error) {
var proof proofList
trie := self.StorageTrie(a)
if trie == nil {
return proof, errors.New("storage trie for requested address does not exist")
}
err := trie.Prove(crypto.Keccak256(key.Bytes()), 0, &proof)
return [][]byte(proof), err
}
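Both methods hash the key with keccak256 before proving, matching the secure trie's key scheme. A hedged usage sketch (statedb, addr and slot are assumed to exist):

// Proof nodes for the account against the current state root.
accountProof, err := statedb.GetProof(addr)

// Proof nodes for one storage slot against the account's storage root;
// errors if the account has no storage trie.
storageProof, err2 := statedb.GetStorageProof(addr, slot)
_, _, _, _ = accountProof, err, storageProof, err2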
// GetCommittedState retrieves a value from the given account's committed storage trie.
func (self *StateDB) GetCommittedState(addr common.Address, hash common.Hash) common.Hash {
stateObject := self.getStateObject(addr)
@@ -470,9 +493,6 @@ func (db *StateDB) ForEachStorage(addr common.Address, cb func(key, value common
// Copy creates a deep, independent copy of the state.
// Snapshots of the copied state cannot be applied to the copy.
func (self *StateDB) Copy() *StateDB {
self.lock.Lock()
defer self.lock.Unlock()
// Copy all the basic fields, initialize the memory ones
state := &StateDB{
db: self.db,

View File

@@ -825,7 +825,7 @@ func (pool *TxPool) addTxs(txs []*types.Transaction, local bool) []error {
// addTxsLocked attempts to queue a batch of transactions if they are valid,
// whilst assuming the transaction pool lock is already held.
func (pool *TxPool) addTxsLocked(txs []*types.Transaction, local bool) []error {
// Add the batch of transaction, tracking the accepted ones
// Add the batch of transactions, tracking the accepted ones
dirty := make(map[common.Address]struct{})
errs := make([]error, len(txs))

View File

@@ -81,8 +81,8 @@ type Header struct {
GasUsed uint64 `json:"gasUsed" gencodec:"required"`
Time *big.Int `json:"timestamp" gencodec:"required"`
Extra []byte `json:"extraData" gencodec:"required"`
MixDigest common.Hash `json:"mixHash" gencodec:"required"`
Nonce BlockNonce `json:"nonce" gencodec:"required"`
MixDigest common.Hash `json:"mixHash"`
Nonce BlockNonce `json:"nonce"`
}
// field type overrides for gencodec

View File

@@ -13,6 +13,7 @@ import (
var _ = (*headerMarshaling)(nil)
// MarshalJSON marshals as JSON.
func (h Header) MarshalJSON() ([]byte, error) {
type Header struct {
ParentHash common.Hash `json:"parentHash" gencodec:"required"`
@@ -28,8 +29,8 @@ func (h Header) MarshalJSON() ([]byte, error) {
GasUsed hexutil.Uint64 `json:"gasUsed" gencodec:"required"`
Time *hexutil.Big `json:"timestamp" gencodec:"required"`
Extra hexutil.Bytes `json:"extraData" gencodec:"required"`
MixDigest common.Hash `json:"mixHash" gencodec:"required"`
Nonce BlockNonce `json:"nonce" gencodec:"required"`
MixDigest common.Hash `json:"mixHash"`
Nonce BlockNonce `json:"nonce"`
Hash common.Hash `json:"hash"`
}
var enc Header
@@ -52,6 +53,7 @@ func (h Header) MarshalJSON() ([]byte, error) {
return json.Marshal(&enc)
}
// UnmarshalJSON unmarshals from JSON.
func (h *Header) UnmarshalJSON(input []byte) error {
type Header struct {
ParentHash *common.Hash `json:"parentHash" gencodec:"required"`
@@ -67,8 +69,8 @@ func (h *Header) UnmarshalJSON(input []byte) error {
GasUsed *hexutil.Uint64 `json:"gasUsed" gencodec:"required"`
Time *hexutil.Big `json:"timestamp" gencodec:"required"`
Extra *hexutil.Bytes `json:"extraData" gencodec:"required"`
MixDigest *common.Hash `json:"mixHash" gencodec:"required"`
Nonce *BlockNonce `json:"nonce" gencodec:"required"`
MixDigest *common.Hash `json:"mixHash"`
Nonce *BlockNonce `json:"nonce"`
}
var dec Header
if err := json.Unmarshal(input, &dec); err != nil {
@@ -126,13 +128,11 @@ func (h *Header) UnmarshalJSON(input []byte) error {
return errors.New("missing required field 'extraData' for Header")
}
h.Extra = *dec.Extra
if dec.MixDigest == nil {
return errors.New("missing required field 'mixHash' for Header")
if dec.MixDigest != nil {
h.MixDigest = *dec.MixDigest
}
h.MixDigest = *dec.MixDigest
if dec.Nonce == nil {
return errors.New("missing required field 'nonce' for Header")
if dec.Nonce != nil {
h.Nonce = *dec.Nonce
}
h.Nonce = *dec.Nonce
return nil
}
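One consequence worth spelling out: with the required tags dropped, header JSON that omits mixHash and nonce (as some Parity-compatible networks produce) now decodes, leaving both fields at their zero values. A hedged sketch (payload is assumed to carry all remaining required fields):

var h Header
if err := h.UnmarshalJSON(payload); err != nil {
	// Before this change, a payload without "mixHash" failed here with
	// "missing required field 'mixHash' for Header"; now it succeeds and
	// h.MixDigest / h.Nonce simply stay zero.
}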

View File

@@ -136,7 +136,7 @@ func (s EIP155Signer) Sender(tx *Transaction) (common.Address, error) {
return recoverPlain(s.Hash(tx), tx.data.R, tx.data.S, V, true)
}
// WithSignature returns a new transaction with the given signature. This signature
// SignatureValues returns signature values. This signature
// needs to be in the [R || S || V] format where V is 0 or 1.
func (s EIP155Signer) SignatureValues(tx *Transaction, sig []byte) (R, S, V *big.Int, err error) {
R, S, V, err = HomesteadSigner{}.SignatureValues(tx, sig)

View File

@@ -13,20 +13,22 @@ import (
var _ = (*structLogMarshaling)(nil)
// MarshalJSON marshals as JSON.
func (s StructLog) MarshalJSON() ([]byte, error) {
type StructLog struct {
Pc uint64 `json:"pc"`
Op OpCode `json:"op"`
Gas math.HexOrDecimal64 `json:"gas"`
GasCost math.HexOrDecimal64 `json:"gasCost"`
Memory hexutil.Bytes `json:"memory"`
MemorySize int `json:"memSize"`
Stack []*math.HexOrDecimal256 `json:"stack"`
Storage map[common.Hash]common.Hash `json:"-"`
Depth int `json:"depth"`
Err error `json:"-"`
OpName string `json:"opName"`
ErrorString string `json:"error"`
Pc uint64 `json:"pc"`
Op OpCode `json:"op"`
Gas math.HexOrDecimal64 `json:"gas"`
GasCost math.HexOrDecimal64 `json:"gasCost"`
Memory hexutil.Bytes `json:"memory"`
MemorySize int `json:"memSize"`
Stack []*math.HexOrDecimal256 `json:"stack"`
Storage map[common.Hash]common.Hash `json:"-"`
Depth int `json:"depth"`
RefundCounter uint64 `json:"refund"`
Err error `json:"-"`
OpName string `json:"opName"`
ErrorString string `json:"error"`
}
var enc StructLog
enc.Pc = s.Pc
@@ -43,24 +45,27 @@ func (s StructLog) MarshalJSON() ([]byte, error) {
}
enc.Storage = s.Storage
enc.Depth = s.Depth
enc.RefundCounter = s.RefundCounter
enc.Err = s.Err
enc.OpName = s.OpName()
enc.ErrorString = s.ErrorString()
return json.Marshal(&enc)
}
// UnmarshalJSON unmarshals from JSON.
func (s *StructLog) UnmarshalJSON(input []byte) error {
type StructLog struct {
Pc *uint64 `json:"pc"`
Op *OpCode `json:"op"`
Gas *math.HexOrDecimal64 `json:"gas"`
GasCost *math.HexOrDecimal64 `json:"gasCost"`
Memory *hexutil.Bytes `json:"memory"`
MemorySize *int `json:"memSize"`
Stack []*math.HexOrDecimal256 `json:"stack"`
Storage map[common.Hash]common.Hash `json:"-"`
Depth *int `json:"depth"`
Err error `json:"-"`
Pc *uint64 `json:"pc"`
Op *OpCode `json:"op"`
Gas *math.HexOrDecimal64 `json:"gas"`
GasCost *math.HexOrDecimal64 `json:"gasCost"`
Memory *hexutil.Bytes `json:"memory"`
MemorySize *int `json:"memSize"`
Stack []*math.HexOrDecimal256 `json:"stack"`
Storage map[common.Hash]common.Hash `json:"-"`
Depth *int `json:"depth"`
RefundCounter *uint64 `json:"refund"`
Err error `json:"-"`
}
var dec StructLog
if err := json.Unmarshal(input, &dec); err != nil {
@@ -96,6 +101,9 @@ func (s *StructLog) UnmarshalJSON(input []byte) error {
if dec.Depth != nil {
s.Depth = *dec.Depth
}
if dec.RefundCounter != nil {
s.RefundCounter = *dec.RefundCounter
}
if dec.Err != nil {
s.Err = dec.Err
}

View File

@@ -124,10 +124,22 @@ func opSmod(pc *uint64, interpreter *EVMInterpreter, contract *Contract, memory
func opExp(pc *uint64, interpreter *EVMInterpreter, contract *Contract, memory *Memory, stack *Stack) ([]byte, error) {
base, exponent := stack.pop(), stack.pop()
stack.push(math.Exp(base, exponent))
interpreter.intPool.put(base, exponent)
// some shortcuts
cmpToOne := exponent.Cmp(big1)
if cmpToOne < 0 { // Exponent is zero
// x ^ 0 == 1
stack.push(base.SetUint64(1))
} else if base.Sign() == 0 {
// 0 ^ y, if y != 0, == 0
stack.push(base.SetUint64(0))
} else if cmpToOne == 0 { // Exponent is one
// x ^ 1 == x
stack.push(base)
} else {
stack.push(math.Exp(base, exponent))
interpreter.intPool.put(base)
}
interpreter.intPool.put(exponent)
return nil, nil
}
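A minimal standalone restatement of the three EXP fast paths, in the same branch order the interpreter checks them (plain big.Int, no interpreter state):

package main

import "math/big"

// expShortcut returns (result, true) when one of the trivial cases applies;
// otherwise (nil, false) and the caller falls through to the full 256-bit
// exponentiation, exactly as opExp does above.
func expShortcut(base, exponent *big.Int) (*big.Int, bool) {
	switch {
	case exponent.Sign() == 0: // x ^ 0 == 1
		return big.NewInt(1), true
	case base.Sign() == 0: // 0 ^ y == 0 for y != 0
		return big.NewInt(0), true
	case exponent.Cmp(big.NewInt(1)) == 0: // x ^ 1 == x
		return new(big.Int).Set(base), true
	}
	return nil, false
}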
@@ -532,7 +544,12 @@ func opExtCodeCopy(pc *uint64, interpreter *EVMInterpreter, contract *Contract,
// this account should be regarded as a non-existent account and zero should be returned.
func opExtCodeHash(pc *uint64, interpreter *EVMInterpreter, contract *Contract, memory *Memory, stack *Stack) ([]byte, error) {
slot := stack.peek()
slot.SetBytes(interpreter.evm.StateDB.GetCodeHash(common.BigToAddress(slot)).Bytes())
address := common.BigToAddress(slot)
if interpreter.evm.StateDB.Empty(address) {
slot.SetUint64(0)
} else {
slot.SetBytes(interpreter.evm.StateDB.GetCodeHash(address).Bytes())
}
return nil, nil
}
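The distinction matters because EIP-1052 separates "no account" from "account with no code"; only the former yields zero. Illustrative values (the empty-code hash is keccak256 of the empty string):

// EXTCODEHASH results after this fix:
//   non-existent or empty account (per EIP-161)  -> 0
//   account with balance/nonce but no code       -> keccak256("") =
//       0xc5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470
//   account with code                            -> keccak256(code)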

View File

@@ -56,16 +56,17 @@ type LogConfig struct {
// StructLog is emitted to the EVM each cycle and lists information about the current internal state
// prior to the execution of the statement.
type StructLog struct {
Pc uint64 `json:"pc"`
Op OpCode `json:"op"`
Gas uint64 `json:"gas"`
GasCost uint64 `json:"gasCost"`
Memory []byte `json:"memory"`
MemorySize int `json:"memSize"`
Stack []*big.Int `json:"stack"`
Storage map[common.Hash]common.Hash `json:"-"`
Depth int `json:"depth"`
Err error `json:"-"`
Pc uint64 `json:"pc"`
Op OpCode `json:"op"`
Gas uint64 `json:"gas"`
GasCost uint64 `json:"gasCost"`
Memory []byte `json:"memory"`
MemorySize int `json:"memSize"`
Stack []*big.Int `json:"stack"`
Storage map[common.Hash]common.Hash `json:"-"`
Depth int `json:"depth"`
RefundCounter uint64 `json:"refund"`
Err error `json:"-"`
}
// overrides for gencodec
@@ -177,7 +178,7 @@ func (l *StructLogger) CaptureState(env *EVM, pc uint64, op OpCode, gas, cost ui
storage = l.changedValues[contract.Address()].Copy()
}
// Create a new snapshot of the EVM.
log := StructLog{pc, op, gas, cost, mem, memory.Len(), stck, storage, depth, err}
log := StructLog{pc, op, gas, cost, mem, memory.Len(), stck, storage, depth, env.StateDB.GetRefund(), err}
l.logs = append(l.logs, log)
return nil

View File

@@ -21,6 +21,7 @@ import (
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/params"
)
@@ -41,9 +42,15 @@ func (d *dummyContractRef) SetBalance(*big.Int) {}
func (d *dummyContractRef) SetNonce(uint64) {}
func (d *dummyContractRef) Balance() *big.Int { return new(big.Int) }
type dummyStatedb struct {
state.StateDB
}
func (*dummyStatedb) GetRefund() uint64 { return 1337 }
func TestStoreCapture(t *testing.T) {
var (
env = NewEVM(Context{}, nil, params.TestChainConfig, Config{})
env = NewEVM(Context{}, &dummyStatedb{}, params.TestChainConfig, Config{})
logger = NewStructLogger(nil)
mem = NewMemory()
stack = newstack()
@@ -51,9 +58,7 @@ func TestStoreCapture(t *testing.T) {
)
stack.push(big.NewInt(1))
stack.push(big.NewInt(0))
var index common.Hash
logger.CaptureState(env, 0, SSTORE, 0, 0, mem, stack, contract, 0, nil)
if len(logger.changedValues[contract.Address()]) == 0 {
t.Fatalf("expected exactly 1 changed value on address %x, got %d", contract.Address(), len(logger.changedValues[contract.Address()]))

View File

@@ -444,16 +444,16 @@ func (api *PrivateDebugAPI) getModifiedAccounts(startBlock, endBlock *types.Bloc
if startBlock.Number().Uint64() >= endBlock.Number().Uint64() {
return nil, fmt.Errorf("start block height (%d) must be less than end block height (%d)", startBlock.Number().Uint64(), endBlock.Number().Uint64())
}
triedb := api.eth.BlockChain().StateCache().TrieDB()
oldTrie, err := trie.NewSecure(startBlock.Root(), trie.NewDatabase(api.eth.chainDb), 0)
oldTrie, err := trie.NewSecure(startBlock.Root(), triedb, 0)
if err != nil {
return nil, err
}
newTrie, err := trie.NewSecure(endBlock.Root(), trie.NewDatabase(api.eth.chainDb), 0)
newTrie, err := trie.NewSecure(endBlock.Root(), triedb, 0)
if err != nil {
return nil, err
}
diff, _ := trie.NewDifferenceIterator(oldTrie.NodeIterator([]byte{}), newTrie.NodeIterator([]byte{}))
iter := trie.NewIterator(diff)

View File

@@ -138,7 +138,7 @@ func (api *PrivateDebugAPI) traceChain(ctx context.Context, start, end *types.Bl
// Ensure we have a valid starting state before doing any work
origin := start.NumberU64()
database := state.NewDatabase(api.eth.ChainDb())
database := state.NewDatabaseWithCache(api.eth.ChainDb(), 16) // Chain tracing will probably start at genesis
if number := start.NumberU64(); number > 0 {
start = api.eth.blockchain.GetBlock(start.ParentHash(), start.NumberU64()-1)
@@ -492,7 +492,7 @@ func (api *PrivateDebugAPI) computeStateDB(block *types.Block, reexec uint64) (*
}
// Otherwise try to reexec blocks until we find a state or reach our limit
origin := block.NumberU64()
database := state.NewDatabase(api.eth.ChainDb())
database := state.NewDatabaseWithCache(api.eth.ChainDb(), 16)
for i := uint64(0); i < reexec; i++ {
block = api.eth.blockchain.GetBlock(block.ParentHash(), block.NumberU64()-1)

View File

@@ -154,7 +154,7 @@ func New(ctx *node.ServiceContext, config *Config) (*Ethereum, error) {
EWASMInterpreter: config.EWASMInterpreter,
EVMInterpreter: config.EVMInterpreter,
}
cacheConfig = &core.CacheConfig{Disabled: config.NoPruning, TrieNodeLimit: config.TrieCache, TrieTimeLimit: config.TrieTimeout}
cacheConfig = &core.CacheConfig{Disabled: config.NoPruning, TrieCleanLimit: config.TrieCleanCache, TrieDirtyLimit: config.TrieDirtyCache, TrieTimeLimit: config.TrieTimeout}
)
eth.blockchain, err = core.NewBlockChain(chainDb, cacheConfig, eth.chainConfig, eth.engine, vmConfig, eth.shouldPreserve)
if err != nil {

View File

@@ -43,15 +43,16 @@ var DefaultConfig = Config{
DatasetsInMem: 1,
DatasetsOnDisk: 2,
},
NetworkId: 1,
LightPeers: 100,
DatabaseCache: 768,
TrieCache: 256,
TrieTimeout: 60 * time.Minute,
MinerGasFloor: 8000000,
MinerGasCeil: 8000000,
MinerGasPrice: big.NewInt(params.GWei),
MinerRecommit: 3 * time.Second,
NetworkId: 1,
LightPeers: 100,
DatabaseCache: 512,
TrieCleanCache: 256,
TrieDirtyCache: 256,
TrieTimeout: 60 * time.Minute,
MinerGasFloor: 8000000,
MinerGasCeil: 8000000,
MinerGasPrice: big.NewInt(params.GWei),
MinerRecommit: 3 * time.Second,
TxPool: core.DefaultTxPoolConfig,
GPO: gasprice.Config{
@@ -94,7 +95,8 @@ type Config struct {
SkipBcVersionCheck bool `toml:"-"`
DatabaseHandles int `toml:"-"`
DatabaseCache int
TrieCache int
TrieCleanCache int
TrieDirtyCache int
TrieTimeout time.Duration
// Mining-related options
@@ -122,7 +124,7 @@ type Config struct {
// Miscellaneous options
DocRoot string `toml:"-"`
// Type of the EWASM interpreter ("" for detault)
// Type of the EWASM interpreter ("" for default)
EWASMInterpreter string
// Type of the EVM interpreter ("" for default)
EVMInterpreter string
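For programmatic configuration the single TrieCache knob splits in two; a minimal sketch (values are in megabytes, as with DatabaseCache; the 256 figures are examples):

cfg := eth.DefaultConfig
cfg.TrieCleanCache = 256 // read cache of clean, collapsed trie nodes
cfg.TrieDirtyCache = 256 // write-back cache of dirty, not-yet-flushed trie nodes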

View File

@@ -99,6 +99,7 @@ type Downloader struct {
mode SyncMode // Synchronisation mode defining the strategy used (per sync cycle)
mux *event.TypeMux // Event multiplexer to announce sync operation events
genesis uint64 // Genesis block number to limit sync to (e.g. light client CHT)
queue *queue // Scheduler for selecting the hashes to download
peers *peerSet // Set of active peers from which download can proceed
stateDB ethdb.Database
@@ -181,6 +182,9 @@ type BlockChain interface {
// HasBlock verifies a block's presence in the local chain.
HasBlock(common.Hash, uint64) bool
// HasFastBlock verifies a fast block's presence in the local chain.
HasFastBlock(common.Hash, uint64) bool
// GetBlockByHash retrieves a block from the local chain.
GetBlockByHash(common.Hash) *types.Block
@@ -430,7 +434,7 @@ func (d *Downloader) syncWithPeer(p *peerConnection, hash common.Hash, td *big.I
}
height := latest.Number.Uint64()
origin, err := d.findAncestor(p, height)
origin, err := d.findAncestor(p, latest)
if err != nil {
return err
}
@@ -587,41 +591,107 @@ func (d *Downloader) fetchHeight(p *peerConnection) (*types.Header, error) {
}
}
// calculateRequestSpan calculates what headers to request from a peer when trying to determine the
// common ancestor.
// It returns parameters to be used for peer.RequestHeadersByNumber:
// from - starting block number
// count - number of headers to request
// skip - number of headers to skip
// and also returns 'max', the last block which is expected to be returned by the remote peers,
// given the (from,count,skip)
func calculateRequestSpan(remoteHeight, localHeight uint64) (int64, int, int, uint64) {
var (
from int
count int
MaxCount = MaxHeaderFetch / 16
)
// requestHead is the highest block that we will ask for. If requestHead is not offset,
// the highest block that we will get is 16 blocks back from head, which means we
// will fetch 14 or 15 blocks unnecessarily in the case the height difference
// between us and the peer is 1-2 blocks, which is most common
requestHead := int(remoteHeight) - 1
if requestHead < 0 {
requestHead = 0
}
// requestBottom is the lowest block we want included in the query
// Ideally, we want to include the block just below our own head
requestBottom := int(localHeight - 1)
if requestBottom < 0 {
requestBottom = 0
}
totalSpan := requestHead - requestBottom
span := 1 + totalSpan/MaxCount
if span < 2 {
span = 2
}
if span > 16 {
span = 16
}
count = 1 + totalSpan/span
if count > MaxCount {
count = MaxCount
}
if count < 2 {
count = 2
}
from = requestHead - (count-1)*span
if from < 0 {
from = 0
}
max := from + (count-1)*span
return int64(from), count, span - 1, uint64(max)
}
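A worked example, assuming MaxHeaderFetch is 192 as elsewhere in this package (so MaxCount is 12):

from, count, skip, max := calculateRequestSpan(100000, 99990)
// from=99989, count=6, skip=1, max=99999: the peer is asked for every
// other header (99989, 99991, 99993, 99995, 99997, 99999), spanning from
// just below our own head up to the reported remote head.
_, _, _, _ = from, count, skip, max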
// findAncestor tries to locate the common ancestor link of the local chain and
// a remote peer's blockchain. In the general case when our node was in sync and
// on the correct chain, checking the top N links should already get us a match.
// In the rare scenario when we ended up on a long reorganisation (i.e. none of
// the head links match), we do a binary search to find the common ancestor.
func (d *Downloader) findAncestor(p *peerConnection, height uint64) (uint64, error) {
func (d *Downloader) findAncestor(p *peerConnection, remoteHeader *types.Header) (uint64, error) {
// Figure out the valid ancestor range to prevent rewrite attacks
floor, ceil := int64(-1), d.lightchain.CurrentHeader().Number.Uint64()
var (
floor = int64(-1)
localHeight uint64
remoteHeight = remoteHeader.Number.Uint64()
)
switch d.mode {
case FullSync:
localHeight = d.blockchain.CurrentBlock().NumberU64()
case FastSync:
localHeight = d.blockchain.CurrentFastBlock().NumberU64()
default:
localHeight = d.lightchain.CurrentHeader().Number.Uint64()
}
p.log.Debug("Looking for common ancestor", "local", localHeight, "remote", remoteHeight)
if localHeight >= MaxForkAncestry {
// We're above the max reorg threshold, find the earliest fork point
floor = int64(localHeight - MaxForkAncestry)
if d.mode == FullSync {
ceil = d.blockchain.CurrentBlock().NumberU64()
} else if d.mode == FastSync {
ceil = d.blockchain.CurrentFastBlock().NumberU64()
// If we're doing a light sync, ensure the floor doesn't go below the CHT, as
// all headers before that point will be missing.
if d.mode == LightSync {
// If we don't know the current CHT position, find it
if d.genesis == 0 {
header := d.lightchain.CurrentHeader()
for header != nil {
d.genesis = header.Number.Uint64()
if floor >= int64(d.genesis)-1 {
break
}
header = d.lightchain.GetHeaderByHash(header.ParentHash)
}
}
// We already know the "genesis" block number, cap floor to that
if floor < int64(d.genesis)-1 {
floor = int64(d.genesis) - 1
}
}
}
if ceil >= MaxForkAncestry {
floor = int64(ceil - MaxForkAncestry)
}
p.log.Debug("Looking for common ancestor", "local", ceil, "remote", height)
from, count, skip, max := calculateRequestSpan(remoteHeight, localHeight)
// Request the topmost blocks to short circuit binary ancestor lookup
head := ceil
if head > height {
head = height
}
from := int64(head) - int64(MaxHeaderFetch)
if from < 0 {
from = 0
}
// Span out with 15 block gaps into the future to catch bad head reports
limit := 2 * MaxHeaderFetch / 16
count := 1 + int((int64(ceil)-from)/16)
if count > limit {
count = limit
}
go p.peer.RequestHeadersByNumber(uint64(from), count, 15, false)
p.log.Trace("Span searching for common ancestor", "count", count, "from", from, "skip", skip)
go p.peer.RequestHeadersByNumber(uint64(from), count, skip, false)
// Wait for the remote response to the head fetch
number, hash := uint64(0), common.Hash{}
@@ -647,9 +717,10 @@ func (d *Downloader) findAncestor(p *peerConnection, height uint64) (uint64, err
return 0, errEmptyHeaderSet
}
// Make sure the peer's reply conforms to the request
for i := 0; i < len(headers); i++ {
if number := headers[i].Number.Int64(); number != from+int64(i)*16 {
p.log.Warn("Head headers broke chain ordering", "index", i, "requested", from+int64(i)*16, "received", number)
for i, header := range headers {
expectNumber := from + int64(i)*int64((skip+1))
if number := header.Number.Int64(); number != expectNumber {
p.log.Warn("Head headers broke chain ordering", "index", i, "requested", expectNumber, "received", number)
return 0, errInvalidChain
}
}
@@ -657,20 +728,24 @@ func (d *Downloader) findAncestor(p *peerConnection, height uint64) (uint64, err
finished = true
for i := len(headers) - 1; i >= 0; i-- {
// Skip any headers that underflow/overflow our requested set
if headers[i].Number.Int64() < from || headers[i].Number.Uint64() > ceil {
if headers[i].Number.Int64() < from || headers[i].Number.Uint64() > max {
continue
}
// Otherwise check if we already know the header or not
h := headers[i].Hash()
n := headers[i].Number.Uint64()
if (d.mode == FullSync && d.blockchain.HasBlock(h, n)) || (d.mode != FullSync && d.lightchain.HasHeader(h, n)) {
number, hash = n, h
// If every header is known, even future ones, the peer straight out lied about its head
if number > height && i == limit-1 {
p.log.Warn("Lied about chain head", "reported", height, "found", number)
return 0, errStallingPeer
}
var known bool
switch d.mode {
case FullSync:
known = d.blockchain.HasBlock(h, n)
case FastSync:
known = d.blockchain.HasFastBlock(h, n)
default:
known = d.lightchain.HasHeader(h, n)
}
if known {
number, hash = n, h
break
}
}
@@ -694,10 +769,12 @@ func (d *Downloader) findAncestor(p *peerConnection, height uint64) (uint64, err
return number, nil
}
// Ancestor not found, we need to binary search over our chain
start, end := uint64(0), head
start, end := uint64(0), remoteHeight
if floor > 0 {
start = uint64(floor)
}
p.log.Trace("Binary searching for common ancestor", "start", start, "end", end)
for start+1 < end {
// Split our chain interval in two, and request the hash to cross check
check := (start + end) / 2
@@ -730,7 +807,17 @@ func (d *Downloader) findAncestor(p *peerConnection, height uint64) (uint64, err
// Modify the search interval based on the response
h := headers[0].Hash()
n := headers[0].Number.Uint64()
if (d.mode == FullSync && !d.blockchain.HasBlock(h, n)) || (d.mode != FullSync && !d.lightchain.HasHeader(h, n)) {
var known bool
switch d.mode {
case FullSync:
known = d.blockchain.HasBlock(h, n)
case FastSync:
known = d.blockchain.HasFastBlock(h, n)
default:
known = d.lightchain.HasHeader(h, n)
}
if !known {
end = check
break
}
@@ -740,6 +827,7 @@ func (d *Downloader) findAncestor(p *peerConnection, height uint64) (uint64, err
return 0, errBadPeer
}
start = check
hash = h
case <-timeout:
p.log.Debug("Waiting for search header timed out", "elapsed", ttl)
@@ -1246,7 +1334,7 @@ func (d *Downloader) processHeaders(origin uint64, pivot uint64, td *big.Int) er
}
// If no headers were retrieved at all, the peer violated its TD promise that it had a
// better chain compared to ours. The only exception is if its promised blocks were
// already imported by other means (e.g. fecher):
// already imported by other means (e.g. fetcher):
//
// R <remote peer>, L <local node>: Both at block 10
// R: Mine block 11, and propagate it to L
@@ -1415,7 +1503,7 @@ func (d *Downloader) processFastSyncContent(latest *types.Header) error {
defer stateSync.Cancel()
go func() {
if err := stateSync.Wait(); err != nil && err != errCancelStateFetch {
d.queue.Close() // wake up WaitResults
d.queue.Close() // wake up Results
}
}()
// Figure out the ideal pivot block. Note, that this goalpost may move if the
@@ -1473,7 +1561,7 @@ func (d *Downloader) processFastSyncContent(latest *types.Header) error {
defer stateSync.Cancel()
go func() {
if err := stateSync.Wait(); err != nil && err != errCancelStateFetch {
d.queue.Close() // wake up WaitResults
d.queue.Close() // wake up Results
}
}()
oldPivot = P

File diff suppressed because it is too large

View File

@@ -229,13 +229,6 @@ func (p *peerConnection) SetHeadersIdle(delivered int) {
p.setIdle(p.headerStarted, delivered, &p.headerThroughput, &p.headerIdle)
}
// SetBlocksIdle sets the peer to idle, allowing it to execute new block retrieval
// requests. Its estimated block retrieval throughput is updated with that measured
// just now.
func (p *peerConnection) SetBlocksIdle(delivered int) {
p.setIdle(p.blockStarted, delivered, &p.blockThroughput, &p.blockIdle)
}
// SetBodiesIdle sets the peer to idle, allowing it to execute block body retrieval
// requests. Its estimated body retrieval throughput is updated with that measured
// just now.

View File

@@ -143,7 +143,7 @@ func (q *queue) Reset() {
q.resultOffset = 0
}
// Close marks the end of the sync, unblocking WaitResults.
// Close marks the end of the sync, unblocking Results.
// It may be called even if the queue is already closed.
func (q *queue) Close() {
q.lock.Lock()
@@ -325,7 +325,7 @@ func (q *queue) Schedule(headers []*types.Header, from uint64) []*types.Header {
}
// Make sure no duplicate requests are executed
if _, ok := q.blockTaskPool[hash]; ok {
log.Warn("Header already scheduled for block fetch", "number", header.Number, "hash", hash)
log.Warn("Header already scheduled for block fetch", "number", header.Number, "hash", hash)
continue
}
if _, ok := q.receiptTaskPool[hash]; ok {
@@ -545,7 +545,7 @@ func (q *queue) reserveHeaders(p *peerConnection, count int, taskPool map[common
taskQueue.Push(header, -int64(header.Number.Uint64()))
}
if progress {
// Wake WaitResults, resultCache was modified
// Wake Results, resultCache was modified
q.active.Signal()
}
// Assemble and return the block download request
@@ -664,12 +664,11 @@ func (q *queue) expire(timeout time.Duration, pendPool map[string]*fetchRequest,
}
// Add the peer to the expiry report along the number of failed requests
expiries[id] = len(request.Headers)
// Remove the expired requests from the pending pool directly
delete(pendPool, id)
}
}
// Remove the expired requests from the pending pool
for id := range expiries {
delete(pendPool, id)
}
return expiries
}
@@ -857,7 +856,7 @@ func (q *queue) deliver(id string, taskPool map[common.Hash]*types.Header, taskQ
taskQueue.Push(header, -int64(header.Number.Uint64()))
}
}
// Wake up WaitResults
// Wake up Results
if accepted > 0 {
q.active.Signal()
}

View File

@@ -313,11 +313,12 @@ func (s *stateSync) loop() (err error) {
s.d.dropPeer(req.peer.id)
}
// Process all the received blobs and check for stale delivery
if err = s.process(req); err != nil {
delivered, err := s.process(req)
if err != nil {
log.Warn("Node data write error", "err", err)
return err
}
req.peer.SetNodeDataIdle(len(req.response))
req.peer.SetNodeDataIdle(delivered)
}
}
return nil
@@ -398,9 +399,11 @@ func (s *stateSync) fillTasks(n int, req *stateReq) {
// process iterates over a batch of delivered state data, injecting each item
// into a running state sync, re-queuing any items that were requested but not
// delivered.
func (s *stateSync) process(req *stateReq) error {
// Returns whether the peer actually managed to deliver anything of value,
// and any error that occurred
func (s *stateSync) process(req *stateReq) (int, error) {
// Collect processing stats and update progress if valid data was received
duplicate, unexpected := 0, 0
duplicate, unexpected, successful := 0, 0, 0
defer func(start time.Time) {
if duplicate > 0 || unexpected > 0 {
@@ -410,7 +413,6 @@ func (s *stateSync) process(req *stateReq) error {
// Iterate over all the delivered data and inject one-by-one into the trie
progress := false
for _, blob := range req.response {
prog, hash, err := s.processNodeData(blob)
switch err {
@@ -418,12 +420,13 @@ func (s *stateSync) process(req *stateReq) error {
s.numUncommitted++
s.bytesUncommitted += len(blob)
progress = progress || prog
successful++
case trie.ErrNotRequested:
unexpected++
case trie.ErrAlreadyProcessed:
duplicate++
default:
return fmt.Errorf("invalid state node %s: %v", hash.TerminalString(), err)
return successful, fmt.Errorf("invalid state node %s: %v", hash.TerminalString(), err)
}
if _, ok := req.tasks[hash]; ok {
delete(req.tasks, hash)
@@ -441,12 +444,12 @@ func (s *stateSync) process(req *stateReq) error {
// If we've requested the node too many times already, it may be a malicious
// sync where nobody has the right data. Abort.
if len(task.attempts) >= npeers {
return fmt.Errorf("state node %s failed with all peers (%d tries, %d peers)", hash.TerminalString(), len(task.attempts), npeers)
return successful, fmt.Errorf("state node %s failed with all peers (%d tries, %d peers)", hash.TerminalString(), len(task.attempts), npeers)
}
// Missing item, place into the retry queue.
s.tasks[hash] = task
}
return nil
return successful, nil
}
// processNodeData tries to inject a trie node data blob delivered from a remote

View File

@@ -0,0 +1,221 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package downloader
import (
"fmt"
"math/big"
"sync"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/params"
)
// Test chain parameters.
var (
testKey, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
testAddress = crypto.PubkeyToAddress(testKey.PublicKey)
testDB = ethdb.NewMemDatabase()
testGenesis = core.GenesisBlockForTesting(testDB, testAddress, big.NewInt(1000000000))
)
// The common prefix of all test chains:
var testChainBase = newTestChain(blockCacheItems+200, testGenesis)
// Different forks on top of the base chain:
var testChainForkLightA, testChainForkLightB, testChainForkHeavy *testChain
func init() {
var forkLen = int(MaxForkAncestry + 50)
var wg sync.WaitGroup
wg.Add(3)
go func() { testChainForkLightA = testChainBase.makeFork(forkLen, false, 1); wg.Done() }()
go func() { testChainForkLightB = testChainBase.makeFork(forkLen, false, 2); wg.Done() }()
go func() { testChainForkHeavy = testChainBase.makeFork(forkLen, true, 3); wg.Done() }()
wg.Wait()
}
type testChain struct {
genesis *types.Block
chain []common.Hash
headerm map[common.Hash]*types.Header
blockm map[common.Hash]*types.Block
receiptm map[common.Hash][]*types.Receipt
tdm map[common.Hash]*big.Int
}
// newTestChain creates a blockchain of the given length.
func newTestChain(length int, genesis *types.Block) *testChain {
tc := new(testChain).copy(length)
tc.genesis = genesis
tc.chain = append(tc.chain, genesis.Hash())
tc.headerm[tc.genesis.Hash()] = tc.genesis.Header()
tc.tdm[tc.genesis.Hash()] = tc.genesis.Difficulty()
tc.blockm[tc.genesis.Hash()] = tc.genesis
tc.generate(length-1, 0, genesis, false)
return tc
}
// makeFork creates a fork on top of the test chain.
func (tc *testChain) makeFork(length int, heavy bool, seed byte) *testChain {
fork := tc.copy(tc.len() + length)
fork.generate(length, seed, tc.headBlock(), heavy)
return fork
}
// shorten creates a copy of the chain with the given length. It panics if the
// length is longer than the number of available blocks.
func (tc *testChain) shorten(length int) *testChain {
if length > tc.len() {
panic(fmt.Errorf("can't shorten test chain to %d blocks, it's only %d blocks long", length, tc.len()))
}
return tc.copy(length)
}
func (tc *testChain) copy(newlen int) *testChain {
cpy := &testChain{
genesis: tc.genesis,
headerm: make(map[common.Hash]*types.Header, newlen),
blockm: make(map[common.Hash]*types.Block, newlen),
receiptm: make(map[common.Hash][]*types.Receipt, newlen),
tdm: make(map[common.Hash]*big.Int, newlen),
}
for i := 0; i < len(tc.chain) && i < newlen; i++ {
hash := tc.chain[i]
cpy.chain = append(cpy.chain, tc.chain[i])
cpy.tdm[hash] = tc.tdm[hash]
cpy.blockm[hash] = tc.blockm[hash]
cpy.headerm[hash] = tc.headerm[hash]
cpy.receiptm[hash] = tc.receiptm[hash]
}
return cpy
}
// generate creates a chain of n blocks starting at and including parent.
// The returned hash chain is ordered head->parent. In addition, every 22nd block
// contains a transaction and every 5th an uncle to allow testing correct block
// reassembly.
func (tc *testChain) generate(n int, seed byte, parent *types.Block, heavy bool) {
// start := time.Now()
// defer func() { fmt.Printf("test chain generated in %v\n", time.Since(start)) }()
blocks, receipts := core.GenerateChain(params.TestChainConfig, parent, ethash.NewFaker(), testDB, n, func(i int, block *core.BlockGen) {
block.SetCoinbase(common.Address{seed})
// If a heavy chain is requested, delay blocks to raise difficulty
if heavy {
block.OffsetTime(-1)
}
// Include transactions to the miner to make blocks more interesting.
if parent == tc.genesis && i%22 == 0 {
signer := types.MakeSigner(params.TestChainConfig, block.Number())
tx, err := types.SignTx(types.NewTransaction(block.TxNonce(testAddress), common.Address{seed}, big.NewInt(1000), params.TxGas, nil, nil), signer, testKey)
if err != nil {
panic(err)
}
block.AddTx(tx)
}
// if the block number is a multiple of 5, add a bonus uncle to the block
if i > 0 && i%5 == 0 {
block.AddUncle(&types.Header{
ParentHash: block.PrevBlock(i - 1).Hash(),
Number: big.NewInt(block.Number().Int64() - 1),
})
}
})
// Convert the block-chain into a hash-chain and header/block maps
td := new(big.Int).Set(tc.td(parent.Hash()))
for i, b := range blocks {
td := td.Add(td, b.Difficulty())
hash := b.Hash()
tc.chain = append(tc.chain, hash)
tc.blockm[hash] = b
tc.headerm[hash] = b.Header()
tc.receiptm[hash] = receipts[i]
tc.tdm[hash] = new(big.Int).Set(td)
}
}
// len returns the total number of blocks in the chain.
func (tc *testChain) len() int {
return len(tc.chain)
}
// headBlock returns the head of the chain.
func (tc *testChain) headBlock() *types.Block {
return tc.blockm[tc.chain[len(tc.chain)-1]]
}
// td returns the total difficulty of the given block.
func (tc *testChain) td(hash common.Hash) *big.Int {
return tc.tdm[hash]
}
// headersByHash returns headers in ascending order from the given hash.
func (tc *testChain) headersByHash(origin common.Hash, amount int, skip int) []*types.Header {
num, _ := tc.hashToNumber(origin)
return tc.headersByNumber(num, amount, skip)
}
// headersByNumber returns headers in ascending order from the given number.
func (tc *testChain) headersByNumber(origin uint64, amount int, skip int) []*types.Header {
result := make([]*types.Header, 0, amount)
for num := origin; num < uint64(len(tc.chain)) && len(result) < amount; num += uint64(skip) + 1 {
if header, ok := tc.headerm[tc.chain[int(num)]]; ok {
result = append(result, header)
}
}
return result
}
// receipts returns the receipts of the given block hashes.
func (tc *testChain) receipts(hashes []common.Hash) [][]*types.Receipt {
results := make([][]*types.Receipt, 0, len(hashes))
for _, hash := range hashes {
if receipt, ok := tc.receiptm[hash]; ok {
results = append(results, receipt)
}
}
return results
}
// bodies returns the block bodies of the given block hashes.
func (tc *testChain) bodies(hashes []common.Hash) ([][]*types.Transaction, [][]*types.Header) {
transactions := make([][]*types.Transaction, 0, len(hashes))
uncles := make([][]*types.Header, 0, len(hashes))
for _, hash := range hashes {
if block, ok := tc.blockm[hash]; ok {
transactions = append(transactions, block.Transactions())
uncles = append(uncles, block.Uncles())
}
}
return transactions, uncles
}
func (tc *testChain) hashToNumber(target common.Hash) (uint64, bool) {
for num, hash := range tc.chain {
if hash == target {
return uint64(num), true
}
}
return 0, false
}

View File

@@ -28,7 +28,8 @@ func (c Config) MarshalTOML() (interface{}, error) {
SkipBcVersionCheck bool `toml:"-"`
DatabaseHandles int `toml:"-"`
DatabaseCache int
TrieCache int
TrieCleanCache int
TrieDirtyCache int
TrieTimeout time.Duration
Etherbase common.Address `toml:",omitempty"`
MinerNotify []string `toml:",omitempty"`
@@ -43,6 +44,8 @@ func (c Config) MarshalTOML() (interface{}, error) {
GPO gasprice.Config
EnablePreimageRecording bool
DocRoot string `toml:"-"`
EWASMInterpreter string
EVMInterpreter string
}
var enc Config
enc.Genesis = c.Genesis
@@ -54,7 +57,8 @@ func (c Config) MarshalTOML() (interface{}, error) {
enc.SkipBcVersionCheck = c.SkipBcVersionCheck
enc.DatabaseHandles = c.DatabaseHandles
enc.DatabaseCache = c.DatabaseCache
enc.TrieCache = c.TrieCache
enc.TrieCleanCache = c.TrieCleanCache
enc.TrieDirtyCache = c.TrieDirtyCache
enc.TrieTimeout = c.TrieTimeout
enc.Etherbase = c.Etherbase
enc.MinerNotify = c.MinerNotify
@@ -69,6 +73,8 @@ func (c Config) MarshalTOML() (interface{}, error) {
enc.GPO = c.GPO
enc.EnablePreimageRecording = c.EnablePreimageRecording
enc.DocRoot = c.DocRoot
enc.EWASMInterpreter = c.EWASMInterpreter
enc.EVMInterpreter = c.EVMInterpreter
return &enc, nil
}
@@ -84,7 +90,8 @@ func (c *Config) UnmarshalTOML(unmarshal func(interface{}) error) error {
SkipBcVersionCheck *bool `toml:"-"`
DatabaseHandles *int `toml:"-"`
DatabaseCache *int
TrieCache *int
TrieCleanCache *int
TrieDirtyCache *int
TrieTimeout *time.Duration
Etherbase *common.Address `toml:",omitempty"`
MinerNotify []string `toml:",omitempty"`
@@ -99,6 +106,8 @@ func (c *Config) UnmarshalTOML(unmarshal func(interface{}) error) error {
GPO *gasprice.Config
EnablePreimageRecording *bool
DocRoot *string `toml:"-"`
EWASMInterpreter *string
EVMInterpreter *string
}
var dec Config
if err := unmarshal(&dec); err != nil {
@@ -131,8 +140,11 @@ func (c *Config) UnmarshalTOML(unmarshal func(interface{}) error) error {
if dec.DatabaseCache != nil {
c.DatabaseCache = *dec.DatabaseCache
}
if dec.TrieCache != nil {
c.TrieCache = *dec.TrieCache
if dec.TrieCleanCache != nil {
c.TrieCleanCache = *dec.TrieCleanCache
}
if dec.TrieDirtyCache != nil {
c.TrieDirtyCache = *dec.TrieDirtyCache
}
if dec.TrieTimeout != nil {
c.TrieTimeout = *dec.TrieTimeout
@@ -176,5 +188,11 @@ func (c *Config) UnmarshalTOML(unmarshal func(interface{}) error) error {
if dec.DocRoot != nil {
c.DocRoot = *dec.DocRoot
}
if dec.EWASMInterpreter != nil {
c.EWASMInterpreter = *dec.EWASMInterpreter
}
if dec.EVMInterpreter != nil {
c.EVMInterpreter = *dec.EVMInterpreter
}
return nil
}

View File

@@ -653,12 +653,12 @@ func (pm *ProtocolManager) handleMsg(p *peer) error {
trueHead = request.Block.ParentHash()
trueTD = new(big.Int).Sub(request.TD, request.Block.Difficulty())
)
// Update the peers total difficulty if better than the previous
// Update the peer's total difficulty if better than the previous
if _, td := p.Head(); trueTD.Cmp(td) > 0 {
p.SetHead(trueHead, trueTD)
// Schedule a sync if above ours. Note, this will not fire a sync for a gap of
// a singe block (as the true TD is below the propagated block), however this
// a single block (as the true TD is below the propagated block), however this
// scenario should easily be covered by the fetcher.
currentBlock := pm.blockchain.CurrentBlock()
if trueTD.Cmp(pm.blockchain.GetTd(currentBlock.Hash(), currentBlock.NumberU64())) > 0 {

View File

@@ -585,7 +585,7 @@ func testBroadcastBlock(t *testing.T, totalPeers, broadcastExpected int) {
}
}(peer)
}
timeoutCh := time.NewTimer(time.Millisecond * 100).C
timeout := time.After(300 * time.Millisecond)
var receivedCount int
outer:
for {
@@ -597,7 +597,7 @@ outer:
if receivedCount == totalPeers {
break outer
}
case <-timeoutCh:
case <-timeout:
break outer
}
}

View File

@@ -290,11 +290,12 @@ type Tracer struct {
contractWrapper *contractWrapper // Wrapper around the contract object
dbWrapper *dbWrapper // Wrapper around the VM environment
pcValue *uint // Swappable pc value wrapped by a log accessor
gasValue *uint // Swappable gas value wrapped by a log accessor
costValue *uint // Swappable cost value wrapped by a log accessor
depthValue *uint // Swappable depth value wrapped by a log accessor
errorValue *string // Swappable error value wrapped by a log accessor
pcValue *uint // Swappable pc value wrapped by a log accessor
gasValue *uint // Swappable gas value wrapped by a log accessor
costValue *uint // Swappable cost value wrapped by a log accessor
depthValue *uint // Swappable depth value wrapped by a log accessor
errorValue *string // Swappable error value wrapped by a log accessor
refundValue *uint // Swappable refund value wrapped by a log accessor
ctx map[string]interface{} // Transaction context gathered throughout execution
err error // Error, if one has occurred
@@ -323,6 +324,7 @@ func New(code string) (*Tracer, error) {
gasValue: new(uint),
costValue: new(uint),
depthValue: new(uint),
refundValue: new(uint),
}
// Set up builtins for this environment
tracer.vm.PushGlobalGoFunction("toHex", func(ctx *duktape.Context) int {
@@ -442,6 +444,9 @@ func New(code string) (*Tracer, error) {
tracer.vm.PushGoFunction(func(ctx *duktape.Context) int { ctx.PushUint(*tracer.depthValue); return 1 })
tracer.vm.PutPropString(logObject, "getDepth")
tracer.vm.PushGoFunction(func(ctx *duktape.Context) int { ctx.PushUint(*tracer.refundValue); return 1 })
tracer.vm.PutPropString(logObject, "getRefund")
tracer.vm.PushGoFunction(func(ctx *duktape.Context) int {
if tracer.errorValue != nil {
ctx.PushString(*tracer.errorValue)
@@ -527,6 +532,7 @@ func (jst *Tracer) CaptureState(env *vm.EVM, pc uint64, op vm.OpCode, gas, cost
*jst.gasValue = uint(gas)
*jst.costValue = uint(cost)
*jst.depthValue = uint(depth)
*jst.refundValue = uint(env.StateDB.GetRefund())
jst.errorValue = nil
if err != nil {

View File

@@ -25,6 +25,7 @@ import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/params"
)
@@ -43,8 +44,14 @@ func (account) ReturnGas(*big.Int) {}
func (account) SetCode(common.Hash, []byte) {}
func (account) ForEachStorage(cb func(key, value common.Hash) bool) {}
type dummyStatedb struct {
state.StateDB
}
func (*dummyStatedb) GetRefund() uint64 { return 1337 }
func runTrace(tracer *Tracer) (json.RawMessage, error) {
env := vm.NewEVM(vm.Context{BlockNumber: big.NewInt(1)}, nil, params.TestChainConfig, vm.Config{Debug: true, Tracer: tracer})
env := vm.NewEVM(vm.Context{BlockNumber: big.NewInt(1)}, &dummyStatedb{}, params.TestChainConfig, vm.Config{Debug: true, Tracer: tracer})
contract := vm.NewContract(account{}, account{}, big.NewInt(0), 10000)
contract.Code = []byte{byte(vm.PUSH1), 0x1, byte(vm.PUSH1), 0x1, 0x0}
@@ -126,7 +133,7 @@ func TestHaltBetweenSteps(t *testing.T) {
t.Fatal(err)
}
env := vm.NewEVM(vm.Context{BlockNumber: big.NewInt(1)}, nil, params.TestChainConfig, vm.Config{Debug: true, Tracer: tracer})
env := vm.NewEVM(vm.Context{BlockNumber: big.NewInt(1)}, &dummyStatedb{}, params.TestChainConfig, vm.Config{Debug: true, Tracer: tracer})
contract := vm.NewContract(&account{}, &account{}, big.NewInt(0), 0)
tracer.CaptureState(env, 0, 0, 0, 0, nil, nil, contract, 0, nil)

View File

@@ -365,26 +365,42 @@ func (ec *Client) NonceAt(ctx context.Context, account common.Address, blockNumb
// FilterLogs executes a filter query.
func (ec *Client) FilterLogs(ctx context.Context, q ethereum.FilterQuery) ([]types.Log, error) {
var result []types.Log
err := ec.c.CallContext(ctx, &result, "eth_getLogs", toFilterArg(q))
arg, err := toFilterArg(q)
if err != nil {
return nil, err
}
err = ec.c.CallContext(ctx, &result, "eth_getLogs", arg)
return result, err
}
// SubscribeFilterLogs subscribes to the results of a streaming filter query.
func (ec *Client) SubscribeFilterLogs(ctx context.Context, q ethereum.FilterQuery, ch chan<- types.Log) (ethereum.Subscription, error) {
return ec.c.EthSubscribe(ctx, ch, "logs", toFilterArg(q))
arg, err := toFilterArg(q)
if err != nil {
return nil, err
}
return ec.c.EthSubscribe(ctx, ch, "logs", arg)
}
func toFilterArg(q ethereum.FilterQuery) interface{} {
func toFilterArg(q ethereum.FilterQuery) (interface{}, error) {
arg := map[string]interface{}{
"fromBlock": toBlockNumArg(q.FromBlock),
"toBlock": toBlockNumArg(q.ToBlock),
"address": q.Addresses,
"topics": q.Topics,
"address": q.Addresses,
"topics": q.Topics,
}
if q.FromBlock == nil {
arg["fromBlock"] = "0x0"
if q.BlockHash != nil {
arg["blockHash"] = *q.BlockHash
if q.FromBlock != nil || q.ToBlock != nil {
return nil, fmt.Errorf("cannot specify both BlockHash and FromBlock/ToBlock")
}
} else {
if q.FromBlock == nil {
arg["fromBlock"] = "0x0"
} else {
arg["fromBlock"] = toBlockNumArg(q.FromBlock)
}
arg["toBlock"] = toBlockNumArg(q.ToBlock)
}
return arg
return arg, nil
}
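After this change a FilterQuery carries either a BlockHash or a FromBlock/ToBlock range, never both, and the conflict is rejected before any network call. A usage sketch with a placeholder endpoint, reusing the block hash from the tests below:

package main

import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://localhost:8545") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	hash := common.HexToHash("0xeb94bb7d78b73657a9d7a99792413f50c0a45c51fc62bdcb08a53f18e9a2b4eb")

	// Valid: logs of a single block, addressed by hash.
	if _, err := client.FilterLogs(context.Background(), ethereum.FilterQuery{BlockHash: &hash}); err != nil {
		log.Println(err)
	}

	// Invalid: mixing BlockHash with a range now fails client-side,
	// before any RPC round-trip.
	_, err = client.FilterLogs(context.Background(), ethereum.FilterQuery{
		BlockHash: &hash,
		FromBlock: big.NewInt(1),
	})
	fmt.Println(err) // cannot specify both BlockHash and FromBlock/ToBlock
}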
// Pending State

View File

@@ -16,7 +16,15 @@
package ethclient
import "github.com/ethereum/go-ethereum"
import (
"fmt"
"math/big"
"reflect"
"testing"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/common"
)
// Verify that Client implements the ethereum interfaces.
var (
@@ -32,3 +40,113 @@ var (
// _ = ethereum.PendingStateEventer(&Client{})
_ = ethereum.PendingContractCaller(&Client{})
)
func TestToFilterArg(t *testing.T) {
blockHashErr := fmt.Errorf("cannot specify both BlockHash and FromBlock/ToBlock")
addresses := []common.Address{
common.HexToAddress("0xD36722ADeC3EdCB29c8e7b5a47f352D701393462"),
}
blockHash := common.HexToHash(
"0xeb94bb7d78b73657a9d7a99792413f50c0a45c51fc62bdcb08a53f18e9a2b4eb",
)
for _, testCase := range []struct {
name string
input ethereum.FilterQuery
output interface{}
err error
}{
{
"without BlockHash",
ethereum.FilterQuery{
Addresses: addresses,
FromBlock: big.NewInt(1),
ToBlock: big.NewInt(2),
Topics: [][]common.Hash{},
},
map[string]interface{}{
"address": addresses,
"fromBlock": "0x1",
"toBlock": "0x2",
"topics": [][]common.Hash{},
},
nil,
},
{
"with nil fromBlock and nil toBlock",
ethereum.FilterQuery{
Addresses: addresses,
Topics: [][]common.Hash{},
},
map[string]interface{}{
"address": addresses,
"fromBlock": "0x0",
"toBlock": "latest",
"topics": [][]common.Hash{},
},
nil,
},
{
"with blockhash",
ethereum.FilterQuery{
Addresses: addresses,
BlockHash: &blockHash,
Topics: [][]common.Hash{},
},
map[string]interface{}{
"address": addresses,
"blockHash": blockHash,
"topics": [][]common.Hash{},
},
nil,
},
{
"with blockhash and from block",
ethereum.FilterQuery{
Addresses: addresses,
BlockHash: &blockHash,
FromBlock: big.NewInt(1),
Topics: [][]common.Hash{},
},
nil,
blockHashErr,
},
{
"with blockhash and to block",
ethereum.FilterQuery{
Addresses: addresses,
BlockHash: &blockHash,
ToBlock: big.NewInt(1),
Topics: [][]common.Hash{},
},
nil,
blockHashErr,
},
{
"with blockhash and both from / to block",
ethereum.FilterQuery{
Addresses: addresses,
BlockHash: &blockHash,
FromBlock: big.NewInt(1),
ToBlock: big.NewInt(2),
Topics: [][]common.Hash{},
},
nil,
blockHashErr,
},
} {
t.Run(testCase.name, func(t *testing.T) {
output, err := toFilterArg(testCase.input)
if (testCase.err == nil) != (err == nil) {
t.Fatalf("expected error %v but got %v", testCase.err, err)
}
if testCase.err != nil {
if testCase.err.Error() != err.Error() {
t.Fatalf("expected error %v but got %v", testCase.err, err)
}
} else if !reflect.DeepEqual(testCase.output, output) {
t.Fatalf("expected filter arg %v but got %v", testCase.output, output)
}
})
}
}

View File

@@ -14,6 +14,8 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// +build !js
package ethdb
import (
@@ -380,71 +382,3 @@ func (b *ldbBatch) Reset() {
b.b.Reset()
b.size = 0
}
type table struct {
db Database
prefix string
}
// NewTable returns a Database object that prefixes all keys with a given
// string.
func NewTable(db Database, prefix string) Database {
return &table{
db: db,
prefix: prefix,
}
}
func (dt *table) Put(key []byte, value []byte) error {
return dt.db.Put(append([]byte(dt.prefix), key...), value)
}
func (dt *table) Has(key []byte) (bool, error) {
return dt.db.Has(append([]byte(dt.prefix), key...))
}
func (dt *table) Get(key []byte) ([]byte, error) {
return dt.db.Get(append([]byte(dt.prefix), key...))
}
func (dt *table) Delete(key []byte) error {
return dt.db.Delete(append([]byte(dt.prefix), key...))
}
func (dt *table) Close() {
// Do nothing; don't close the underlying DB.
}
type tableBatch struct {
batch Batch
prefix string
}
// NewTableBatch returns a Batch object which prefixes all keys with a given string.
func NewTableBatch(db Database, prefix string) Batch {
return &tableBatch{db.NewBatch(), prefix}
}
func (dt *table) NewBatch() Batch {
return &tableBatch{dt.db.NewBatch(), dt.prefix}
}
func (tb *tableBatch) Put(key, value []byte) error {
return tb.batch.Put(append([]byte(tb.prefix), key...), value)
}
func (tb *tableBatch) Delete(key []byte) error {
return tb.batch.Delete(append([]byte(tb.prefix), key...))
}
func (tb *tableBatch) Write() error {
return tb.batch.Write()
}
func (tb *tableBatch) ValueSize() int {
return tb.batch.ValueSize()
}
func (tb *tableBatch) Reset() {
tb.batch.Reset()
}

ethdb/database_js.go Normal file
View File

@@ -0,0 +1,68 @@
// Copyright 2014 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// +build js
package ethdb
import (
"errors"
)
var errNotSupported = errors.New("ethdb: not supported")
type LDBDatabase struct {
}
// NewLDBDatabase returns a LevelDB wrapped object.
func NewLDBDatabase(file string, cache int, handles int) (*LDBDatabase, error) {
return nil, errNotSupported
}
// Path returns the path to the database directory.
func (db *LDBDatabase) Path() string {
return ""
}
// Put puts the given key / value to the queue
func (db *LDBDatabase) Put(key []byte, value []byte) error {
return errNotSupported
}
func (db *LDBDatabase) Has(key []byte) (bool, error) {
return false, errNotSupported
}
// Get returns the given key if it's present.
func (db *LDBDatabase) Get(key []byte) ([]byte, error) {
return nil, errNotSupported
}
// Delete deletes the key from the queue and database
func (db *LDBDatabase) Delete(key []byte) error {
return errNotSupported
}
func (db *LDBDatabase) Close() {
}
// Meter configures the database metrics collectors and
func (db *LDBDatabase) Meter(prefix string) {
}
func (db *LDBDatabase) NewBatch() Batch {
return nil
}

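The stub file is guarded by a js build tag, pairing with the !js tag added to the LevelDB implementation above, so exactly one LDBDatabase is compiled for any given target. A minimal sketch of the mechanism, with hypothetical file and package names (two files shown together):

// backend_native.go: compiled for every target except js
// +build !js

package backend

const Name = "leveldb"

// backend_js.go: compiled only when the js tag is set (GopherJS at the
// time, later GOOS=js for WebAssembly)
// +build js

package backend

const Name = "stub"

Note the required blank line between the +build comment and the package clause; without it the constraint is treated as an ordinary comment.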
View File

@@ -1,4 +1,4 @@
// Copyright 2018 The go-ethereum Authors
// Copyright 2014 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
@@ -14,13 +14,12 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package state
// +build js
// Store defines methods required to get, set, delete values for different keys
// and close the underlying resources.
type Store interface {
Get(key string, i interface{}) (err error)
Put(key string, i interface{}) (err error)
Delete(key string) (err error)
Close() error
}
package ethdb_test
import (
"github.com/ethereum/go-ethereum/ethdb"
)
var _ ethdb.Database = &ethdb.LDBDatabase{}

View File

@@ -14,6 +14,8 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// +build !js
package ethdb_test
import (

ethdb/table.go Normal file
View File

@@ -0,0 +1,51 @@
// Copyright 2014 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethdb
type table struct {
db Database
prefix string
}
// NewTable returns a Database object that prefixes all keys with a given
// string.
func NewTable(db Database, prefix string) Database {
return &table{
db: db,
prefix: prefix,
}
}
func (dt *table) Put(key []byte, value []byte) error {
return dt.db.Put(append([]byte(dt.prefix), key...), value)
}
func (dt *table) Has(key []byte) (bool, error) {
return dt.db.Has(append([]byte(dt.prefix), key...))
}
func (dt *table) Get(key []byte) ([]byte, error) {
return dt.db.Get(append([]byte(dt.prefix), key...))
}
func (dt *table) Delete(key []byte) error {
return dt.db.Delete(append([]byte(dt.prefix), key...))
}
func (dt *table) Close() {
// Do nothing; don't close the underlying DB.
}

ethdb/table_batch.go Normal file
View File

@@ -0,0 +1,51 @@
// Copyright 2014 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethdb
type tableBatch struct {
batch Batch
prefix string
}
// NewTableBatch returns a Batch object which prefixes all keys with a given string.
func NewTableBatch(db Database, prefix string) Batch {
return &tableBatch{db.NewBatch(), prefix}
}
func (dt *table) NewBatch() Batch {
return &tableBatch{dt.db.NewBatch(), dt.prefix}
}
func (tb *tableBatch) Put(key, value []byte) error {
return tb.batch.Put(append([]byte(tb.prefix), key...), value)
}
func (tb *tableBatch) Delete(key []byte) error {
return tb.batch.Delete(append([]byte(tb.prefix), key...))
}
func (tb *tableBatch) Write() error {
return tb.batch.Write()
}
func (tb *tableBatch) ValueSize() int {
return tb.batch.ValueSize()
}
func (tb *tableBatch) Reset() {
tb.batch.Reset()
}

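NewTable and NewTableBatch do nothing more than prepend a fixed string to every key, which is how one physical database hosts several namespaces (the chain indexer's "chtIndex-" prefix used further down, for instance). A sketch, assuming the package's in-memory MemDatabase constructor:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/ethdb"
)

func main() {
	db := ethdb.NewMemDatabase()           // backing store
	cht := ethdb.NewTable(db, "chtIndex-") // namespaced view over db

	cht.Put([]byte("k"), []byte("v"))
	if v, err := db.Get([]byte("chtIndex-k")); err == nil {
		fmt.Printf("raw key carries the prefix: %s\n", v) // v
	}

	// Batched writes go through the same prefixing.
	batch := ethdb.NewTableBatch(db, "chtIndex-")
	batch.Put([]byte("k2"), []byte("v2"))
	batch.Write() // flush the prefixed writes
}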
View File

@@ -141,7 +141,7 @@ func TestMuxConcurrent(t *testing.T) {
}
}
func emptySubscriber(mux *TypeMux, types ...interface{}) {
func emptySubscriber(mux *TypeMux) {
s := mux.Subscribe(testEvent(0))
go func() {
for range s.Chan() {
@@ -182,9 +182,9 @@ func BenchmarkPost1000(b *testing.B) {
func BenchmarkPostConcurrent(b *testing.B) {
var mux = new(TypeMux)
defer mux.Stop()
emptySubscriber(mux, testEvent(0))
emptySubscriber(mux, testEvent(0))
emptySubscriber(mux, testEvent(0))
emptySubscriber(mux)
emptySubscriber(mux)
emptySubscriber(mux)
var wg sync.WaitGroup
poster := func() {

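For context on the cleanup above: TypeMux routes posted values to subscribers by concrete type, so the variadic types parameter removed from emptySubscriber was never used. A minimal subscribe/post round trip:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/event"
)

type testEvent int

func main() {
	mux := new(event.TypeMux)
	sub := mux.Subscribe(testEvent(0)) // subscribe by concrete type
	go func() { mux.Post(testEvent(42)) }()
	ev := <-sub.Chan()
	fmt.Println(ev.Data) // 42
	mux.Stop()
}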
View File

@@ -1,95 +0,0 @@
// Copyright 2014 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package filter implements event filters.
package filter
import "reflect"
type Filter interface {
Compare(Filter) bool
Trigger(data interface{})
}
type FilterEvent struct {
filter Filter
data interface{}
}
type Filters struct {
id int
watchers map[int]Filter
ch chan FilterEvent
quit chan struct{}
}
func New() *Filters {
return &Filters{
ch: make(chan FilterEvent),
watchers: make(map[int]Filter),
quit: make(chan struct{}),
}
}
func (f *Filters) Start() {
go f.loop()
}
func (f *Filters) Stop() {
close(f.quit)
}
func (f *Filters) Notify(filter Filter, data interface{}) {
f.ch <- FilterEvent{filter, data}
}
func (f *Filters) Install(watcher Filter) int {
f.watchers[f.id] = watcher
f.id++
return f.id - 1
}
func (f *Filters) Uninstall(id int) {
delete(f.watchers, id)
}
func (f *Filters) loop() {
out:
for {
select {
case <-f.quit:
break out
case event := <-f.ch:
for _, watcher := range f.watchers {
if reflect.TypeOf(watcher) == reflect.TypeOf(event.filter) {
if watcher.Compare(event.filter) {
watcher.Trigger(event.data)
}
}
}
}
}
}
func (f *Filters) Match(a, b Filter) bool {
return reflect.TypeOf(a) == reflect.TypeOf(b) && a.Compare(b)
}
func (f *Filters) Get(i int) Filter {
return f.watchers[i]
}

View File

@@ -1,60 +0,0 @@
// Copyright 2014 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package filter
import (
"testing"
"time"
)
// Simple test to check if baseline matching/mismatching filtering works.
func TestFilters(t *testing.T) {
fm := New()
fm.Start()
// Register two filters to catch posted data
first := make(chan struct{})
fm.Install(Generic{
Str1: "hello",
Fn: func(data interface{}) {
first <- struct{}{}
},
})
second := make(chan struct{})
fm.Install(Generic{
Str1: "hello1",
Str2: "hello",
Fn: func(data interface{}) {
second <- struct{}{}
},
})
// Post an event that should only match the first filter
fm.Notify(Generic{Str1: "hello"}, true)
fm.Stop()
// Ensure only the matching filters fire
select {
case <-first:
case <-time.After(100 * time.Millisecond):
t.Error("matching filter timed out")
}
select {
case <-second:
t.Error("mismatching filter fired")
case <-time.After(100 * time.Millisecond):
}
}

View File

@@ -27,6 +27,7 @@ import (
"regexp"
"strings"
"sync"
"syscall"
"testing"
"text/template"
"time"
@@ -50,6 +51,8 @@ type TestCmd struct {
stdout *bufio.Reader
stdin io.WriteCloser
stderr *testlogger
// Err will contain the process exit error or interrupt signal error
Err error
}
// Run exec's the current binary using name as argv[0] which will trigger the
@@ -182,11 +185,25 @@ func (tt *TestCmd) ExpectExit() {
}
func (tt *TestCmd) WaitExit() {
tt.cmd.Wait()
tt.Err = tt.cmd.Wait()
}
func (tt *TestCmd) Interrupt() {
tt.cmd.Process.Signal(os.Interrupt)
tt.Err = tt.cmd.Process.Signal(os.Interrupt)
}
// ExitStatus exposes the process' OS exit code
// It will only return a valid value after the process has finished.
func (tt *TestCmd) ExitStatus() int {
if tt.Err != nil {
exitErr := tt.Err.(*exec.ExitError)
if exitErr != nil {
if status, ok := exitErr.Sys().(syscall.WaitStatus); ok {
return status.ExitStatus()
}
}
}
return 0
}
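ExitStatus relies on the standard library's exec.ExitError carrying a syscall.WaitStatus; note that the bare type assertion on tt.Err will panic if Err holds any other error type, so it is only safe after WaitExit has stored a real exit error. The underlying pattern in isolation:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	err := exec.Command("sh", "-c", "exit 3").Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		if status, ok := exitErr.Sys().(syscall.WaitStatus); ok {
			fmt.Println(status.ExitStatus()) // 3
		}
	}
}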
// StderrText returns any stderr output written so far.

View File

@@ -328,6 +328,9 @@ func (s *PrivateAccountAPI) UnlockAccount(addr common.Address, password string,
d = time.Duration(*duration) * time.Second
}
err := fetchKeystore(s.am).TimedUnlock(accounts.Account{Address: addr}, password, d)
if err != nil {
log.Warn("Failed account unlock attempt", "address", addr, "err", err)
}
return err == nil, err
}
@@ -336,10 +339,10 @@ func (s *PrivateAccountAPI) LockAccount(addr common.Address) bool {
return fetchKeystore(s.am).Lock(addr) == nil
}
// signTransactions sets defaults and signs the given transaction
// signTransaction sets defaults and signs the given transaction
// NOTE: the caller needs to ensure that the nonceLock is held, if applicable,
// and release it after the transaction has been submitted to the tx pool
func (s *PrivateAccountAPI) signTransaction(ctx context.Context, args SendTxArgs, passwd string) (*types.Transaction, error) {
func (s *PrivateAccountAPI) signTransaction(ctx context.Context, args *SendTxArgs, passwd string) (*types.Transaction, error) {
// Look up the wallet containing the requested signer
account := accounts.Account{Address: args.From}
wallet, err := s.am.Find(account)
@@ -370,8 +373,9 @@ func (s *PrivateAccountAPI) SendTransaction(ctx context.Context, args SendTxArgs
s.nonceLock.LockAddr(args.From)
defer s.nonceLock.UnlockAddr(args.From)
}
signed, err := s.signTransaction(ctx, args, passwd)
signed, err := s.signTransaction(ctx, &args, passwd)
if err != nil {
log.Warn("Failed transaction send attempt", "from", args.From, "to", args.To, "value", args.Value.ToInt(), "err", err)
return common.Hash{}, err
}
return submitTransaction(ctx, s.b, signed)
@@ -393,8 +397,9 @@ func (s *PrivateAccountAPI) SignTransaction(ctx context.Context, args SendTxArgs
if args.Nonce == nil {
return nil, fmt.Errorf("nonce not specified")
}
signed, err := s.signTransaction(ctx, args, passwd)
signed, err := s.signTransaction(ctx, &args, passwd)
if err != nil {
log.Warn("Failed transaction sign attempt", "from", args.From, "to", args.To, "value", args.Value.ToInt(), "err", err)
return nil, err
}
data, err := rlp.EncodeToBytes(signed)
@@ -436,6 +441,7 @@ func (s *PrivateAccountAPI) Sign(ctx context.Context, data hexutil.Bytes, addr c
// Assemble sign the data with the wallet
signature, err := wallet.SignHashWithPassphrase(account, passwd, signHash(data))
if err != nil {
log.Warn("Failed data sign attempt", "address", addr, "err", err)
return nil, err
}
signature[64] += 27 // Transform V from 0/1 to 27/28 according to the yellow paper
@@ -502,6 +508,72 @@ func (s *PublicBlockChainAPI) GetBalance(ctx context.Context, address common.Add
return (*hexutil.Big)(state.GetBalance(address)), state.Error()
}
// Result structs for GetProof
type AccountResult struct {
Address common.Address `json:"address"`
AccountProof []string `json:"accountProof"`
Balance *hexutil.Big `json:"balance"`
CodeHash common.Hash `json:"codeHash"`
Nonce hexutil.Uint64 `json:"nonce"`
StorageHash common.Hash `json:"storageHash"`
StorageProof []StorageResult `json:"storageProof"`
}
type StorageResult struct {
Key string `json:"key"`
Value *hexutil.Big `json:"value"`
Proof []string `json:"proof"`
}
// GetProof returns the Merkle-proof for a given account and optionally some storage keys.
func (s *PublicBlockChainAPI) GetProof(ctx context.Context, address common.Address, storageKeys []string, blockNr rpc.BlockNumber) (*AccountResult, error) {
state, _, err := s.b.StateAndHeaderByNumber(ctx, blockNr)
if state == nil || err != nil {
return nil, err
}
storageTrie := state.StorageTrie(address)
storageHash := types.EmptyRootHash
codeHash := state.GetCodeHash(address)
storageProof := make([]StorageResult, len(storageKeys))
// if we have a storageTrie (which means the account exists), we can update the storageHash
if storageTrie != nil {
storageHash = storageTrie.Hash()
} else {
// no storageTrie means the account does not exist, so the codeHash is the hash of an empty byte array.
codeHash = crypto.Keccak256Hash(nil)
}
// create the proof for the storageKeys
for i, key := range storageKeys {
if storageTrie != nil {
proof, storageError := state.GetStorageProof(address, common.HexToHash(key))
if storageError != nil {
return nil, storageError
}
storageProof[i] = StorageResult{key, (*hexutil.Big)(state.GetState(address, common.HexToHash(key)).Big()), common.ToHexArray(proof)}
} else {
storageProof[i] = StorageResult{key, &hexutil.Big{}, []string{}}
}
}
// create the accountProof
accountProof, proofErr := state.GetProof(address)
if proofErr != nil {
return nil, proofErr
}
return &AccountResult{
Address: address,
AccountProof: common.ToHexArray(accountProof),
Balance: (*hexutil.Big)(state.GetBalance(address)),
CodeHash: codeHash,
Nonce: hexutil.Uint64(state.GetNonce(address)),
StorageHash: storageHash,
StorageProof: storageProof,
}, state.Error()
}
// GetBlockByNumber returns the requested block. When blockNr is -1 the chain head is returned. When fullTx is true all
// transactions in the block are returned in full detail, otherwise only the transaction hash is returned.
func (s *PublicBlockChainAPI) GetBlockByNumber(ctx context.Context, blockNr rpc.BlockNumber, fullTx bool) (map[string]interface{}, error) {

View File

@@ -481,6 +481,12 @@ web3._extend({
params: 2,
inputFormatter: [web3._extend.formatters.inputBlockNumberFormatter, web3._extend.utils.toHex]
}),
new web3._extend.Method({
name: 'getProof',
call: 'eth_getProof',
params: 3,
inputFormatter: [web3._extend.formatters.inputAddressFormatter, null, web3._extend.formatters.inputBlockNumberFormatter]
}),
],
properties: [
new web3._extend.Property({

View File

@@ -141,36 +141,39 @@ func (f *lightFetcher) syncLoop() {
s := requesting
requesting = false
var (
rq *distReq
reqID uint64
rq *distReq
reqID uint64
syncing bool
)
if !f.syncing && !(newAnnounce && s) {
rq, reqID = f.nextRequest()
rq, reqID, syncing = f.nextRequest()
}
syncing := f.syncing
f.lock.Unlock()
if rq != nil {
requesting = true
_, ok := <-f.pm.reqDist.queue(rq)
if !ok {
if _, ok := <-f.pm.reqDist.queue(rq); ok {
if syncing {
f.lock.Lock()
f.syncing = true
f.lock.Unlock()
} else {
go func() {
time.Sleep(softRequestTimeout)
f.reqMu.Lock()
req, ok := f.requested[reqID]
if ok {
req.timeout = true
f.requested[reqID] = req
}
f.reqMu.Unlock()
// keep starting new requests while possible
f.requestChn <- false
}()
}
} else {
f.requestChn <- false
}
if !syncing {
go func() {
time.Sleep(softRequestTimeout)
f.reqMu.Lock()
req, ok := f.requested[reqID]
if ok {
req.timeout = true
f.requested[reqID] = req
}
f.reqMu.Unlock()
// keep starting new requests while possible
f.requestChn <- false
}()
}
}
case reqID := <-f.timeoutChn:
f.reqMu.Lock()
@@ -209,6 +212,7 @@ func (f *lightFetcher) syncLoop() {
f.checkSyncedHeaders(p)
f.syncing = false
f.lock.Unlock()
f.requestChn <- false
}
}
}
@@ -405,7 +409,7 @@ func (f *lightFetcher) requestedID(reqID uint64) bool {
// nextRequest selects the peer and announced head to be requested next, amount
// to be downloaded starting from the head backwards is also returned
func (f *lightFetcher) nextRequest() (*distReq, uint64) {
func (f *lightFetcher) nextRequest() (*distReq, uint64, bool) {
var (
bestHash common.Hash
bestAmount uint64
@@ -427,14 +431,12 @@ func (f *lightFetcher) nextRequest() (*distReq, uint64) {
}
}
if bestTd == f.maxConfirmedTd {
return nil, 0
return nil, 0, false
}
f.syncing = bestSyncing
var rq *distReq
reqID := genReqID()
if f.syncing {
if bestSyncing {
rq = &distReq{
getCost: func(dp distPeer) uint64 {
return 0
@@ -500,7 +502,7 @@ func (f *lightFetcher) nextRequest() (*distReq, uint64) {
},
}
}
return rq, reqID
return rq, reqID, bestSyncing
}
// deliverHeaders delivers header download request responses for processing

View File

@@ -683,7 +683,7 @@ func (e *poolEntry) DecodeRLP(s *rlp.Stream) error {
}
func encodePubkey64(pub *ecdsa.PublicKey) []byte {
return crypto.FromECDSAPub(pub)[:1]
return crypto.FromECDSAPub(pub)[1:]
}
func decodePubkey64(b []byte) (*ecdsa.PublicKey, error) {

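The one-character fix above matters because crypto.FromECDSAPub returns the 65-byte uncompressed encoding: a 0x04 format prefix followed by the 64-byte X||Y payload. The old [:1] kept only the prefix, while the wire format needs the [1:] payload. Verifying the lengths:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, err := crypto.GenerateKey()
	if err != nil {
		panic(err)
	}
	full := crypto.FromECDSAPub(&key.PublicKey)
	fmt.Println(len(full), len(full[1:])) // 65 64
	fmt.Printf("%#x\n", full[0])          // 0x4: the prefix byte that [:1] wrongly kept
}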
View File

@@ -159,7 +159,7 @@ func NewChtIndexer(db ethdb.Database, odr OdrBackend, size, confirms uint64) *co
diskdb: db,
odr: odr,
trieTable: trieTable,
triedb: trie.NewDatabase(trieTable),
triedb: trie.NewDatabaseWithCache(trieTable, 1), // Use a tiny cache only to keep memory down
sectionSize: size,
}
return core.NewChainIndexer(db, ethdb.NewTable(db, "chtIndex-"), backend, size, confirms, time.Millisecond*100, "cht")
@@ -281,7 +281,7 @@ func NewBloomTrieIndexer(db ethdb.Database, odr OdrBackend, parentSize, size uin
diskdb: db,
odr: odr,
trieTable: trieTable,
triedb: trie.NewDatabase(trieTable),
triedb: trie.NewDatabaseWithCache(trieTable, 1), // Use a tiny cache only to keep memory down
parentSize: parentSize,
size: size,
}

View File

@@ -108,7 +108,7 @@ func (t *odrTrie) TryGet(key []byte) ([]byte, error) {
func (t *odrTrie) TryUpdate(key, value []byte) error {
key = crypto.Keccak256(key)
return t.do(key, func() error {
return t.trie.TryDelete(key)
return t.trie.TryUpdate(key, value)
})
}

View File

@@ -1,6 +1,8 @@
package metrics
import "sync/atomic"
import (
"sync/atomic"
)
// Counters hold an int64 value that can be incremented and decremented.
type Counter interface {
@@ -20,6 +22,17 @@ func GetOrRegisterCounter(name string, r Registry) Counter {
return r.GetOrRegister(name, NewCounter).(Counter)
}
// GetOrRegisterCounterForced returns an existing Counter or constructs and registers a
// new Counter whether or not the global switch is enabled.
// Be sure to unregister the counter from the registry once it is of no use to
// allow for garbage collection.
func GetOrRegisterCounterForced(name string, r Registry) Counter {
if nil == r {
r = DefaultRegistry
}
return r.GetOrRegister(name, NewCounterForced).(Counter)
}
// NewCounter constructs a new StandardCounter.
func NewCounter() Counter {
if !Enabled {
@@ -28,6 +41,12 @@ func NewCounter() Counter {
return &StandardCounter{0}
}
// NewCounterForced constructs a new StandardCounter and returns it whether or
// not the global switch is enabled.
func NewCounterForced() Counter {
return &StandardCounter{0}
}
// NewRegisteredCounter constructs and registers a new StandardCounter.
func NewRegisteredCounter(name string, r Registry) Counter {
c := NewCounter()
@@ -38,6 +57,19 @@ func NewRegisteredCounter(name string, r Registry) Counter {
return c
}
// NewRegisteredCounterForced constructs and registers a new StandardCounter
// whether or not the global switch is enabled.
// Be sure to unregister the counter from the registry once it is of no use to
// allow for garbage collection.
func NewRegisteredCounterForced(name string, r Registry) Counter {
c := NewCounterForced()
if nil == r {
r = DefaultRegistry
}
r.Register(name, c)
return c
}
// CounterSnapshot is a read-only copy of another Counter.
type CounterSnapshot int64

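Forced counters bypass the global metrics.Enabled switch, so they keep counting on nodes started without metrics collection; the accounting metrics reporter introduced in this release relies on that. A usage sketch with an illustrative metric name:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/metrics"
)

func main() {
	// A nil registry selects metrics.DefaultRegistry.
	c := metrics.GetOrRegisterCounterForced("example/forced", nil)
	c.Inc(10)
	fmt.Println(c.Count()) // 10, even with the global metrics switch off

	// Per the doc comment: unregister once done so the counter can be
	// garbage collected.
	metrics.DefaultRegistry.Unregister("example/forced")
}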
Some files were not shown because too many files have changed in this diff.