Compare commits


113 Commits

Author SHA1 Message Date
Péter Szilágyi
a07539fb88 Merge pull request #3671 from karalabe/1.5.9-stable
params: 1.5.9 stable
2017-02-13 18:54:59 +02:00
Péter Szilágyi
6988e5f074 Merge pull request #3670 from karalabe/docker-usb-fix
Dockerfile: support building USB on Alpine, ignore temp files
2017-02-13 18:47:27 +02:00
Péter Szilágyi
aba016da72 params: 1.5.9 stable 2017-02-13 18:38:21 +02:00
Péter Szilágyi
09aef5c0ae Dockerfile: support building USB on Alpine, ignore temp files 2017-02-13 18:31:09 +02:00
Péter Szilágyi
9b161187ec Merge pull request #3649 from ethersphere/swarm-sigterm-fix
cmd/swarm: handle SIGTERM unix signal for clean exit
2017-02-13 18:22:15 +02:00
zelig
8883f36fe3 cmd/swarm: handle SIGTERM unix signal for clean exit 2017-02-13 22:15:14 +06:30
Péter Szilágyi
0850f68fd1 Merge pull request #3668 from obscuren/revert-gas64
Revert "params: core, core/vm, miner: 64bit gas instructions (#3514)"
2017-02-13 17:13:40 +02:00
Jeffrey Wilcke
57f4e90257 Revert "params: core, core/vm, miner: 64bit gas instructions (#3514)"
This reverts commit 8b57c49490.
2017-02-13 15:15:12 +01:00
Péter Szilágyi
f8f428cc18 Merge pull request #3592 from karalabe/hw-wallets
accounts: initial support for Ledger hardware wallets
2017-02-13 15:03:16 +02:00
Viktor Trón
e23e86921b swarm/network: fix chunk integrity checks (#3665)
* swarm/network: integrity on incoming known chunks
* swarm/network: fix integrity check for incoming chunks
* swarm/storage: improved integrity checking on chunks
* dbstore panics on corrupt chunk entry and prompts user to run cleandb
* memstore adds logging for garbage collection
* dbstore refactor item delete. correct partial deletes in Get
* cmd/swarm: added cleandb subcommand
2017-02-13 13:20:50 +01:00
gluk256
65ed6a9def whisper: add tests for mailserver (#3631) 2017-02-13 13:15:20 +01:00
Péter Szilágyi
e99c788155 accounts: ledger and HD review fixes
- Handle a data race where a Ledger drops between list and open
- Prolong Ledger tx confirmation window to 30 days from 1 minute
- Simplify Ledger chainid-signature calculation and validation
- Simplify Ledger USB APDU request chunking algorithm
- Silence keystore account cache notifications for manual actions
- Only enable self derivations if wallet open succeeds
2017-02-13 14:00:12 +02:00
Péter Szilágyi
c7022c1a0c accounts/usbwallet: detect and report if Ledger is in browser mode 2017-02-13 14:00:11 +02:00
Péter Szilágyi
26cd41f0c7 accounts/usbwallet: make wallet responsive while Ledger is busy 2017-02-13 14:00:10 +02:00
Péter Szilágyi
fb19846855 accounts/usbwallet: Ledger teardown on health-check failure 2017-02-13 14:00:10 +02:00
Péter Szilágyi
205ea95802 accounts, cmd, internal, node: implement HD wallet self-derivation 2017-02-13 14:00:09 +02:00
Péter Szilágyi
c5215fdd48 accounts, cmd, internal, mobile, node: canonical account URLs 2017-02-13 14:00:08 +02:00
Péter Szilágyi
fad5eb0a87 accounts, cmd, eth, internal, miner, node: wallets and HD APIs 2017-02-13 14:00:07 +02:00
Péter Szilágyi
b3c0e9d3cc accounts/usbwallet: two phase Ledger refreshes to avoid Windows bug 2017-02-13 14:00:07 +02:00
Péter Szilágyi
470b79385b accounts/usbwallet: support Ledger app version <1.0.2 2017-02-13 14:00:06 +02:00
Péter Szilágyi
1ecf99bd0f accounts/usbwallet: skip support on iOS altogether 2017-02-13 14:00:05 +02:00
Péter Szilágyi
e0fb4d1da9 build: work around CGO linker bug on pre-1.8 Go 2017-02-13 14:00:04 +02:00
Péter Szilágyi
ac2a0e615b accounts/usbwallet: initial support for Ledger wallets 2017-02-13 14:00:04 +02:00
Péter Szilágyi
52bd4e29ff vendor: pull in support for USB devices via libusb/gousb 2017-02-13 14:00:03 +02:00
Péter Szilágyi
833e4d1319 accounts, cmd, eth, internal, mobile, node: split account backends 2017-02-13 14:00:02 +02:00
Martin Holst Swende
564b60520c core: ignore 0x prefix for code in JSON genesis blocks (#3656) 2017-02-13 03:36:50 +01:00
Zahoor Mohamed
085987ff2c cmd/swarm: manifest manipulation commands (#3645) 2017-02-13 03:33:05 +01:00
Péter Szilágyi
aaf9cfd18c Merge pull request #3662 from karalabe/travis-linux-android
travis: split Android off OSX, use native image
2017-02-12 19:25:32 +02:00
Péter Szilágyi
7ff686d6ec travis: split Android off OSX, use native image 2017-02-10 19:24:37 +02:00
Péter Szilágyi
0cc9409fda Merge pull request #3648 from bas-vk/abigen
cmd/abigen: parse contract name as abi identifier
2017-02-10 12:11:43 +02:00
Maksim
6dd27e7cff swarm/storage: release chunk storage after stop swarm (#3651)
closes #3650
2017-02-08 18:01:12 +01:00
Bas van Kervel
d0eeb3ebdc cmd/abigen: parse contract name as abi identifier 2017-02-06 18:16:56 +01:00
Péter Szilágyi
fa99986143 Merge pull request #3641 from karalabe/events-init-once
event: use sync.Once for init for faster/cleaner locking
2017-02-03 14:34:12 +02:00
Diego Siqueira
6ea8eba8ce accounts/abi, internal/jsre/deps: gofmt -w -s (#3636)
Signed-off-by: DiSiqueira <dieg0@live.com>
2017-02-03 13:32:04 +01:00
Péter Szilágyi
9b5c7153c9 event: use sync.Once for init for faster/cleaner locking 2017-02-03 14:04:57 +02:00
Péter Szilágyi
d52b0c32a0 Merge pull request #3635 from holiman/hive_fixes
core/genesis: add support for setting nonce in 'alloc'
2017-02-03 14:00:18 +02:00
Péter Szilágyi
7734ead520 Merge pull request #3605 from fjl/event-feed
event: add new Subscription type and related utilities
2017-02-03 13:56:00 +02:00
Felix Lange
1bed9b3fea event: address review issues (multiple commits)
event: address Feed review issues

event: clarify role of NewSubscription function

event: more Feed review fixes

* take sendLock after dropping f.mu
* add constant for number of special cases

event: fix subscribing/unsubscribing while Send is blocked
2017-02-03 13:37:49 +02:00
Jeffrey Wilcke
8b57c49490 params: core, core/vm, miner: 64bit gas instructions (#3514)
Reworked the EVM gas instructions to use 64bit integers rather than
arbitrary size big ints. All gas operations, be it additions,
multiplications or divisions, are checked and guarded against 64 bit
integer overflows.

In addition, most of the protocol parameters in the params package have
been converted to uint64 and are now constants rather than variables.

* common/math: added overflow check ops
* core: vmenv, env renamed to evm
* eth, internal/ethapi, les: unmetered eth_call and cancel methods
* core/vm: implemented big.Int pool for evm instructions
* core/vm: unexported intPool methods & verification methods
* core/vm: added memoryGasCost overflow check and test
2017-02-02 15:25:42 +01:00
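
As a rough illustration of the overflow guards described in the commit above, a self-contained Go sketch of checked 64-bit addition and multiplication (local helper names, not necessarily the exact common/math API):

    package main

    import (
        "fmt"
        "math"
    )

    // safeAdd and safeMul mirror the kind of 64-bit overflow check ops the commit
    // message describes; the second return value reports overflow.
    func safeAdd(x, y uint64) (uint64, bool) {
        return x + y, y > math.MaxUint64-x
    }

    func safeMul(x, y uint64) (uint64, bool) {
        if x == 0 {
            return 0, false
        }
        return x * y, y > math.MaxUint64/x
    }

    func main() {
        fmt.Println(safeAdd(math.MaxUint64, 1)) // 0 true: wrapped, overflow flagged
        fmt.Println(safeMul(1<<32, 1<<32))      // 0 true: wrapped, overflow flagged
    }
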
Brian Schroeder
296450451b state: take write lock in GetNonce (#3625)
We must take a write lock here because `GetNonce` calls
`StateDB.GetStateObject`, which mutates the DB's live set.
2017-02-01 10:55:46 +01:00
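
A minimal, self-contained illustration of the hazard described above (illustrative types only, not the real StateDB): a read lock is not enough when the guarded lookup mutates shared state, so the getter takes the write lock.

    package main

    import "sync"

    type dbStub struct {
        mu   sync.RWMutex
        live map[string]uint64 // stand-in for the DB's live object set
    }

    // GetNonce takes the full write lock because the lookup below may insert
    // into the live map; doing that under RLock would be a data race.
    func (db *dbStub) GetNonce(addr string) uint64 {
        db.mu.Lock()
        defer db.mu.Unlock()
        if _, ok := db.live[addr]; !ok {
            db.live[addr] = 0 // caching the object mutates shared state
        }
        return db.live[addr]
    }

    func main() {
        db := &dbStub{live: make(map[string]uint64)}
        _ = db.GetNonce("0xabc")
    }
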
Felix Lange
4f5f90222f params, VERSION: v1.5.9-unstable 2017-02-01 02:14:54 +01:00
Felix Lange
f58fb32283 params: v1.5.8-stable 2017-02-01 02:14:04 +01:00
Felix Lange
9c45b4462c Merge pull request #3607 from zsfelfoldi/light-fix2
les: fix private net issues, enable adding peers manually again
2017-02-01 02:03:43 +01:00
gluk256
690f6ea1d7 cmd/wnode, whisper: add whisper CLI tool and mail server (#3580) 2017-01-31 11:16:20 +01:00
Péter Szilágyi
1c140f7382 Merge pull request #3615 from nolash/bzzpathfix_real5
cmd/swarm, swarm/api: bzzr improve + networkid prio
2017-01-30 16:36:30 +02:00
Péter Szilágyi
e5a93bf99a swarm/api/http: add missing copyright header 2017-01-30 15:21:46 +02:00
Péter Szilágyi
f3c368ca73 Merge pull request #3624 from kaneshin/patch-01
cmd/geth, cmd/swarm: Fix to close file handler appropriately
2017-01-30 12:24:41 +02:00
Péter Szilágyi
b8823a8b34 Merge pull request #3623 from kaneshin/patch-1
build: Fix tiny typo
2017-01-29 20:46:55 +02:00
Shintaro Kaneko
355a42f36d cmd/geth, cmd/swarm: Fix to close file handler appropriately 2017-01-30 01:10:19 +09:00
Shintaro Kaneko
658bcbcbdc build: Fix tiny typo 2017-01-30 01:09:00 +09:00
nolash
7669c5b5ec cmd/swarm, swarm/api: bzzr improve + networkid prio
fixes #3444
fixes #3494
networkid override

Added comments to explain why test against 0 appears twice

* Command line overrides saved config, saved config overrides system default

---

fixes #3476
bzzr get with path

Finally a hopefully clean commit for this PR
Added check for empty path to avoid SIGSEGV in path parser and resolver
Added requested tests for empty path and non-existing manifest.
However signature for StartHTTPServer had changed.
Now it's hacked as so:

	StartHttpServer(api.API, &Server{Addr: "127.0.0.1:8504", CorsString: ""})

* Parse url before resolve when path and ENS is supplied, example
* swarm/api/http proxy server test for retrieval of subpath through get
* Removed nil entry assignment on subtrie leaf in recursive key retrieval
* Cleaned up path-or-no-path condition in proxy server get handler
* swarm: processed with gofmt refers to lash/go-ethereum@90daa7a
* swarm: Added public access method Parse alias to parse
* swarm: processed with gofmt References nolash/go-ethereum@2ec3fd7
* Rename parse to Parse, removed alias
2017-01-27 08:18:13 +01:00
Zsolt Felfoldi
a390ca5f30 les, cmd/util: disable topic discovery with --nodiscover 2017-01-27 02:52:45 +01:00
bas-vk
c46c41eae3 core/types: add unittest for tx json serialization (#3609) 2017-01-26 21:16:24 +01:00
Vivek Anand
82aa5b1de6 core: fix a small typo in blockchain.go (#3611) 2017-01-26 16:54:49 +02:00
Zsolt Felfoldi
12379c697a les: remove delayed les server starting 2017-01-26 04:23:53 +01:00
Zsolt Felfoldi
f5348e17f8 les: add unknown peers to server pool instead of rejecting them 2017-01-26 04:23:49 +01:00
Felix Lange
a2b4abd89a rpc: send nil on subscription Err channel when Client is closed
This change makes client subscriptions compatible with the new
Subscription semantics introduced in the previous commit.
2017-01-25 18:44:21 +01:00
Felix Lange
6d5e100d0d event: add new Subscription type and related utilities
This commit introduces a new Subscription type, which is synonymous with
ethereum.Subscription. It also adds a couple of utilities that make
working with Subscriptions easier. The most complex utility is Feed, a
synchronisation device that implements broadcast subscriptions. Feed is
slightly faster than TypeMux and will replace uses of TypeMux across the
go-ethereum codebase in the future.
2017-01-25 18:44:20 +01:00
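
A rough usage sketch of the Feed/Subscription pairing described above, assuming the Subscribe/Send API of the event.Feed type:

    package main

    import (
        "fmt"

        "github.com/ethereum/go-ethereum/event"
    )

    func main() {
        var feed event.Feed
        ch := make(chan int, 1)

        sub := feed.Subscribe(ch) // returns an event.Subscription
        defer sub.Unsubscribe()

        delivered := feed.Send(42) // broadcasts to every subscribed channel
        fmt.Println(delivered, <-ch)
    }
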
Felix Lange
1886d03faa console, internal/web3ext: remove bzz and ens extensions (#3602)
web3.js includes bzz methods and throws an error when the extension
module is reregistered. The ENS RPC API is deprecated and not exposed by
anything.
2017-01-25 16:29:40 +01:00
Felix Lange
9b62facdd4 event: deprecate TypeMux and related types
The Subscription type is gone, all uses are replaced by
*TypeMuxSubscription. This change is prep-work for the
introduction of the new Subscription type in a later commit.

   gorename -from '"github.com/ethereum/go-ethereum/event"::Event' -to TypeMuxEvent
   gorename -from '"github.com/ethereum/go-ethereum/event"::muxsub' -to TypeMuxSubscription
   gofmt -w -r 'Subscription -> *TypeMuxSubscription' ./event/*.go
   find . -name '*.go' -and -not -regex '\./vendor/.*' \| xargs gofmt -w -r 'event.Subscription -> *event.TypeMuxSubscription'
2017-01-25 16:25:57 +01:00
Martin Holst Swende
da92f5b2d6 core/genesis: add support for setting nonce in 'alloc'
To set the `pre`-state when performing blockchain tests through Hive, we need to be able to set the nonce.
2017-01-24 20:42:47 +01:00
Felix Lange
f1069a30b9 eth/downloader: improve deliverNodeData (#3588)
Commit d3b751e accidentally deleted a crucial 'return' statement,
leading to a crash in case of an issue with node data. This change
improves the fix in PR #3591 by removing the lock entirely.
2017-01-24 13:20:37 +01:00
Péter Szilágyi
2718b42828 Merge pull request #3599 from karalabe/docker-alpine-cacerts
containers/docker: update base images, add CA certs, build internally on Ubuntu
2017-01-24 13:46:56 +02:00
Felix Lange
fc52f2c007 core/types: make Transaction zero value printable (#3595) 2017-01-23 18:51:02 +01:00
Péter Szilágyi
0b9070fe01 containers/docker: update ubuntu images to build, not pull 2017-01-23 12:12:38 +02:00
Péter Szilágyi
c04598f2b0 containers/docker: update to alpine 3.5, add CA certificates 2017-01-23 11:46:15 +02:00
Felix Lange
96778a1c21 crypto/secp256k1: sign with deterministic K (rfc6979) (#3561) 2017-01-22 23:28:47 +01:00
Martin Holst Swende
935d891e9d cmd/evm: added debug flag (back) (#3554)
* evm: added debug flag (back)

* cmd/evm: gofmt
2017-01-22 22:14:09 +01:00
Péter Szilágyi
682875adff accounts/abi/bind, internal/ethapi: binary search gas estimation (#3587)
Gas estimation currently mostly works, but can underestimate for more funky
refunds. This is because various ops (e.g. CALL) need more gas to run than they
actually consume (e.g. 2300 stipend that is refunded if not used). With more
intricate contract interplays, it becomes almost impossible to return a proper
value to the user.

This commit swaps out the simplistic gas estimation to a binary search approach,
honing in on the correct gas use. This does mean that gas estimation needs to
rerun the transaction log(max-price) times to measure whether it fails or not,
but it's a price paid by the transaction issuer, and it should be worth it to
support proper estimates.
2017-01-20 23:39:16 +01:00
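
The approach described above amounts to a bisection over the gas limit; a minimal sketch, where execCall is a stand-in for re-running the transaction at a given limit (not a real go-ethereum API) and hi is assumed to be a sufficient upper bound:

    package main

    import "fmt"

    // estimateGas narrows in on the lowest gas limit at which execCall succeeds,
    // starting from a known-failing lo and a known-sufficient hi.
    func estimateGas(lo, hi uint64, execCall func(gas uint64) bool) uint64 {
        for lo+1 < hi {
            mid := (lo + hi) / 2
            if execCall(mid) {
                hi = mid // succeeded: a smaller limit may still be enough
            } else {
                lo = mid // failed or ran out of gas: need more than mid
            }
        }
        return hi
    }

    func main() {
        // Pretend the call needs exactly 21000 gas.
        fmt.Println(estimateGas(0, 4700000, func(gas uint64) bool { return gas >= 21000 }))
    }
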
bas-vk
0126d01435 types: bugfix invalid V derivation on tx json unmarshal (#3594) 2017-01-20 23:32:16 +01:00
Péter Szilágyi
946db8ba65 internal/guide: initial test suite to ensure guide snippets run ok (#3582) 2017-01-20 11:50:21 +01:00
Péter Szilágyi
7814a8e131 travis: Install Android NDK explicitly, removed from gomobile (#3593)
The Android NDK was recently removed from gomobile, leading to our Android
builds failing. Starting from https://go-review.googlesource.com/#/c/35173/ ,
gomobile requires a locally installed NDK. This PR ensures that travis installs
that too before running the build steps.
2017-01-20 10:33:58 +01:00
Péter Szilágyi
ebc3d232f4 eth/downloader: fix mutex regression causing panics on fail (#3591) 2017-01-20 01:12:14 +01:00
Péter Szilágyi
f087c66f95 Merge pull request #3584 from obscuren/dead-code
core: removal of dead-code
2017-01-18 13:36:21 +02:00
Jeffrey Wilcke
508fdc3496 core: removal of dead-code
Removal of dead code that appeared as if we had a consensus issue. This
however is not the case as the proper error catching happens in the vm
package instead.
2017-01-17 21:50:08 +01:00
Péter Szilágyi
d63752ef4d Merge pull request #3579 from bas-vk/natspec
cmd,eth,les,internal: remove natspec support
2017-01-17 14:38:57 +02:00
Martin Holst Swende
6fb76443b3 core/blockchain: Made logging of reorgs more structured (#3573)
* core: Made logging of reorgs more structured, also always log if reorg is > 63 blocks long

* core/blockchain: go fmt

* core/blockchain: Minor fixes to the reorg reporting
2017-01-17 14:10:26 +02:00
Péter Szilágyi
2eefed84c9 Merge pull request #3581 from karalabe/accounts-polish
accounts, mobile: make account manager API a bit more uniform
2017-01-17 14:09:29 +02:00
Péter Szilágyi
230530f5ea accounts, mobile: make account manager API a bit more uniform 2017-01-17 13:25:36 +02:00
Nick Johnson
17d92233d9 cmd/geth, core: add support for recording SHA3 preimages (#3543) 2017-01-17 12:19:50 +01:00
Bas van Kervel
54a65e6d87 cmd,eth,les,internal: remove natspec support 2017-01-17 12:13:50 +01:00
Felix Lange
26d385c18b params, VERSION: 1.5.8 unstable 2017-01-16 11:12:50 +01:00
Felix Lange
da2a22c384 params: stable 1.5.7 2017-01-16 10:57:02 +01:00
Felföldi Zsolt
0fa9a8929c les: fixed transaction sending deadlock (#3568) 2017-01-16 10:51:29 +01:00
Péter Szilágyi
2a1a531ba3 Merge pull request #3570 from fjl/hexutil-zero-fix
common/hexutil: fix EncodeBig, Big.MarshalJSON
2017-01-16 11:49:17 +02:00
Felix Lange
51f6b6d33f common/hexutil: fix EncodeBig, Big.MarshalJSON
The code was too clever and failed to include zeros on a big.Word
boundary.
2017-01-16 10:32:40 +01:00
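
To make the boundary case concrete, a small example of the expected behaviour (assumes the hexutil.EncodeBig helper; value chosen purely for illustration):

    package main

    import (
        "fmt"
        "math/big"

        "github.com/ethereum/go-ethereum/common/hexutil"
    )

    func main() {
        v := new(big.Int).Lsh(big.NewInt(1), 64) // 2^64 spans two big.Words on 64-bit platforms
        fmt.Println(hexutil.EncodeBig(v))        // want "0x10000000000000000": zeros of the low word must be kept
    }
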
Péter Szilágyi
b5a100b859 Merge pull request #3560 from karalabe/ci-misspell
travis, appveyor, build: add source spell checking
2017-01-13 12:16:22 +02:00
Péter Szilágyi
54fcab20e3 appveyor, build: fix review requests 2017-01-13 12:04:55 +02:00
Péter Szilágyi
a2bc90d1d7 build: spellcheck individual packages (Windows path limits) 2017-01-13 11:22:24 +02:00
Péter Szilágyi
c01f8c3d3c accounts/abi: fix comment spelling error 2017-01-13 11:14:47 +02:00
Péter Szilágyi
e4181a7f1b travis, appveyor, build: add source spell checking 2017-01-13 11:14:13 +02:00
Felix Lange
01f6f2d741 common/hexutil: allow empty strings when decoding JSON (#3559) 2017-01-13 09:45:40 +01:00
Felix Lange
c5df37c111 eth: accept leading zeros for nonce parameter of submitWork (#3558) 2017-01-13 00:37:23 +01:00
Felix Lange
e0ceeab0d1 crypto/secp256k1: update to github.com/bitcoin-core/secp256k1 @ 9d560f9 (#3544)
- Use defined constants instead of hard-coding their integer value.
- Allocate secp256k1 structs on the C stack instead of converting []byte
- Remove dead code
2017-01-12 21:29:11 +01:00
Péter Szilágyi
93077c98e4 internal: update web3.js to 0.18.1, embed deps with go-bindata (#3545) 2017-01-12 21:28:35 +01:00
Péter Szilágyi
3dab303826 Merge pull request #3555 from obscuren/unskip-test
tests: unskip test
2017-01-12 12:08:56 +02:00
Jeffrey Wilcke
3160fd24ba tests: unskip test 2017-01-12 10:32:21 +01:00
Péter Szilágyi
ce7822c130 Merge pull request #3553 from bas-vk/rm-olympic-support
core: remove support for Olympic network
2017-01-12 11:26:00 +02:00
Bas van Kervel
745a3adebd core: remove support for Olympic network 2017-01-12 09:50:54 +01:00
Péter Szilágyi
218ec6c085 Merge pull request #3551 from fjl/core-import-log-align
core: improve import log alignment
2017-01-11 14:53:54 +02:00
Nick Johnson
d30d7800e0 ethdb: Implement interface for prefixed operations to the DB (#3536) 2017-01-11 13:26:09 +01:00
Felix Lange
8820d97039 internal/ethapi: fix duration parameter of personal_unlockAccount (#3542) 2017-01-11 13:20:24 +01:00
Péter Szilágyi
b52fde7cf7 Merge pull request #3546 from fjl/deps-update
vendor: update dependencies
2017-01-11 12:14:08 +02:00
Péter Szilágyi
2b4d0b6ff9 Merge pull request #3548 from fjl/geth-fix-bootnodes
cmd/utils: fix comma-separated --bootnodes
2017-01-11 10:59:14 +02:00
Felix Lange
21f1370d2a core: improve import log alignment 2017-01-10 23:14:08 +01:00
Felix Lange
d78f9b834a vendor: update all dependencies except Azure SDK
The Azure SDK doesn't support Go 1.5 anymore. We can't upgrade it until
Go 1.8 comes out.
2017-01-10 22:33:24 +01:00
Felix Lange
445deb7470 cmd/utils: fix comma-separated --bootnodes 2017-01-10 21:13:43 +01:00
Péter Szilágyi
02b67558e8 Merge pull request #3535 from fjl/all-ineffassign
all: fix ineffectual assignments
2017-01-09 23:53:17 +02:00
Péter Szilágyi
91c8f87fb1 Merge pull request #3538 from karalabe/cycle-1.5.7
params, VERSION: start 1.5.7 release cycle
2017-01-09 17:47:35 +02:00
Péter Szilágyi
d056b7fa52 params, VERSION: start 1.5.7 release cycle 2017-01-09 17:45:49 +02:00
Felix Lange
b9b3efb09f all: fix ineffectual assignments and remove uses of crypto.Sha3
go get github.com/gordonklaus/ineffassign
ineffassign .
2017-01-09 16:24:42 +01:00
Felix Lange
0f34d506b5 generators: delete dead code
We don't use this anymore.
2017-01-09 16:12:54 +01:00
Felix Lange
5eccc122e8 build, node: fix go vet nits 2017-01-09 16:12:54 +01:00
473 changed files with 63426 additions and 5433 deletions

.dockerignore (new file, 3 lines added)

@@ -0,0 +1,3 @@
.git
build/_workspace
build/_bin


@@ -53,26 +53,52 @@ matrix:
- CC=aarch64-linux-gnu-gcc go run build/ci.go install -arch arm64
- go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
# This builder does the OSX Azure, Android Maven and Azure and iOS CocoaPods and Azure uploads
# This builder does the Android Maven and Azure uploads
- os: linux
dist: precise # Needed for the android tools
addons:
apt:
packages:
- oracle-java8-installer
- oracle-java8-set-default
language: android
android:
components:
- platform-tools
- tools
- android-15
- android-19
- android-24
env:
- azure-android
- maven-android
before_install:
- curl https://storage.googleapis.com/golang/go1.8rc3.linux-amd64.tar.gz | tar -xz
- export PATH=`pwd`/go/bin:$PATH
- export GOROOT=`pwd`/go
- export GOPATH=$HOME/go # Drop post Go 1.8
script:
# Build the Android archive and upload it to Maven Central and Azure
- curl https://dl.google.com/android/repository/android-ndk-r13b-linux-x86_64.zip -o android-ndk-r13b.zip
- unzip -q android-ndk-r13b.zip && rm android-ndk-r13b.zip
- mv android-ndk-r13b $HOME
- export ANDROID_NDK=$HOME/android-ndk-r13b
- mkdir -p $GOPATH/src/github.com/ethereum
- ln -s `pwd` $GOPATH/src/github.com/ethereum
- go run build/ci.go aar -signer ANDROID_SIGNING_KEY -deploy https://oss.sonatype.org -upload gethstore/builds
# This builder does the OSX Azure, iOS CocoaPods and iOS Azure uploads
- os: osx
go: 1.7.4
env:
- azure-osx
- mobile
- azure-ios
- cocoapods-ios
script:
- go run build/ci.go install
- go run build/ci.go archive -type tar -signer OSX_SIGNING_KEY -upload gethstore/builds
# Build the Android archive and upload it to Maven Central and Azure
- brew update
- brew install android-sdk maven gpg
- alias gpg="gpg2"
- export ANDROID_HOME=/usr/local/opt/android-sdk
- echo "y" | android update sdk --no-ui --filter `android list sdk | grep "SDK Platform Android" | grep -E 'API 15|API 19|API 24' | awk '{print $1}' | cut -d '-' -f 1 | tr '\n' ','`
- go run build/ci.go aar -signer ANDROID_SIGNING_KEY -deploy https://oss.sonatype.org -upload gethstore/builds
# Build the iOS framework and upload it to CocoaPods and Azure
- gem uninstall cocoapods -a
- gem install cocoapods --pre
@@ -90,7 +116,7 @@ install:
- go get golang.org/x/tools/cmd/cover
script:
- go run build/ci.go install
- go run build/ci.go test -coverage -vet
- go run build/ci.go test -coverage -vet -misspell
notifications:
webhooks:


@@ -1,11 +1,11 @@
FROM alpine:3.3
FROM alpine:3.5
ADD . /go-ethereum
RUN \
apk add --update git go make gcc musl-dev && \
apk add --update git go make gcc musl-dev linux-headers && \
(cd go-ethereum && make geth) && \
cp go-ethereum/build/bin/geth /geth && \
apk del git go make gcc musl-dev && \
apk del git go make gcc musl-dev linux-headers && \
rm -rf /go-ethereum && rm -rf /var/cache/apk/*
EXPOSE 8545


@@ -1 +1 @@
1.5.6
1.5.9


@@ -22,7 +22,7 @@ import (
"io"
"io/ioutil"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
@@ -35,7 +35,7 @@ func NewTransactor(keyin io.Reader, passphrase string) (*TransactOpts, error) {
if err != nil {
return nil, err
}
key, err := accounts.DecryptKey(json, passphrase)
key, err := keystore.DecryptKey(json, passphrase)
if err != nil {
return nil, err
}


@@ -61,7 +61,7 @@ type SimulatedBackend struct {
func NewSimulatedBackend(accounts ...core.GenesisAccount) *SimulatedBackend {
database, _ := ethdb.NewMemDatabase()
core.WriteGenesisBlockForTesting(database, accounts...)
blockchain, _ := core.NewBlockChain(database, chainConfig, new(core.FakePow), new(event.TypeMux))
blockchain, _ := core.NewBlockChain(database, chainConfig, new(core.FakePow), new(event.TypeMux), vm.Config{})
backend := &SimulatedBackend{database: database, blockchain: blockchain}
backend.rollback()
return backend
@@ -201,10 +201,32 @@ func (b *SimulatedBackend) SuggestGasPrice(ctx context.Context) (*big.Int, error
func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMsg) (*big.Int, error) {
b.mu.Lock()
defer b.mu.Unlock()
defer b.pendingState.RevertToSnapshot(b.pendingState.Snapshot())
// Binary search the gas requirement, as it may be higher than the amount used
var lo, hi uint64
if call.Gas != nil {
hi = call.Gas.Uint64()
} else {
hi = b.pendingBlock.GasLimit().Uint64()
}
for lo+1 < hi {
// Take a guess at the gas, and check transaction validity
mid := (hi + lo) / 2
call.Gas = new(big.Int).SetUint64(mid)
snapshot := b.pendingState.Snapshot()
_, gas, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
return gas, err
b.pendingState.RevertToSnapshot(snapshot)
// If the transaction became invalid or used all the gas (failed), raise the gas limit
if err != nil || gas.Cmp(call.Gas) == 0 {
lo = mid
continue
}
// Otherwise assume the transaction succeeded, lower the gas limit
hi = mid
}
return new(big.Int).SetUint64(hi), nil
}
// callContract implemens common code between normal and pending contract calls.


@@ -170,7 +170,7 @@ func (c *BoundContract) transact(opts *TransactOpts, contract *common.Address, i
if value == nil {
value = new(big.Int)
}
nonce := uint64(0)
var nonce uint64
if opts.Nonce == nil {
nonce, err = c.transactor.PendingNonceAt(ensureContext(opts.Context), opts.From)
if err != nil {


@@ -365,6 +365,49 @@ var bindTests = []struct {
}
`,
},
// Tests that gas estimation works for contracts with weird gas mechanics too.
{
`FunkyGasPattern`,
`
contract FunkyGasPattern {
string public field;
function SetField(string value) {
// This check will screw gas estimation! Good, good!
if (msg.gas < 100000) {
throw;
}
field = value;
}
}
`,
`606060405261021c806100126000396000f3606060405260e060020a600035046323fcf32a81146100265780634f28bf0e1461007b575b005b6040805160206004803580820135601f8101849004840285018401909552848452610024949193602493909291840191908190840183828082843750949650505050505050620186a05a101561014e57610002565b6100db60008054604080516020601f600260001961010060018816150201909516949094049384018190048102820181019092528281529291908301828280156102145780601f106101e957610100808354040283529160200191610214565b60405180806020018281038252838181518152602001915080519060200190808383829060006004602084601f0104600302600f01f150905090810190601f16801561013b5780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b505050565b8060006000509080519060200190828054600181600116156101000203166002900490600052602060002090601f016020900481019282601f106101b557805160ff19168380011785555b506101499291505b808211156101e557600081556001016101a1565b82800160010185558215610199579182015b828111156101995782518260005055916020019190600101906101c7565b5090565b820191906000526020600020905b8154815290600101906020018083116101f757829003601f168201915b50505050508156`,
`[{"constant":false,"inputs":[{"name":"value","type":"string"}],"name":"SetField","outputs":[],"type":"function"},{"constant":true,"inputs":[],"name":"field","outputs":[{"name":"","type":"string"}],"type":"function"}]`,
`
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAccount{Address: auth.From, Balance: big.NewInt(10000000000)})
// Deploy a funky gas pattern contract
_, _, limiter, err := DeployFunkyGasPattern(auth, sim)
if err != nil {
t.Fatalf("Failed to deploy funky contract: %v", err)
}
sim.Commit()
// Set the field with automatic estimation and check that it succeeds
auth.GasLimit = nil
if _, err := limiter.SetField(auth, "automatic"); err != nil {
t.Fatalf("Failed to call automatically gased transaction: %v", err)
}
sim.Commit()
if field, _ := limiter.Field(nil); field != "automatic" {
t.Fatalf("Field mismatch: have %v, want %v", field, "automatic")
}
`,
},
}
// Tests that packages generated by the binder can be successfully compiled and


@@ -91,7 +91,7 @@ func NewType(t string) (typ Type, err error) {
}
typ.Elem = &sliceType
typ.stringKind = sliceType.stringKind + t[len(res[1]):]
// Altough we know that this is an array, we cannot return
// Although we know that this is an array, we cannot return
// as we don't know the type of the element, however, if it
// is still an array, then don't determine the type.
if typ.Elem.IsArray || typ.Elem.IsSlice {


@@ -1,350 +0,0 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package accounts implements encrypted storage of secp256k1 private keys.
//
// Keys are stored as encrypted JSON files according to the Web3 Secret Storage specification.
// See https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition for more information.
package accounts
import (
"crypto/ecdsa"
crand "crypto/rand"
"encoding/json"
"errors"
"fmt"
"os"
"path/filepath"
"runtime"
"sync"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
var (
ErrLocked = errors.New("account is locked")
ErrNoMatch = errors.New("no key for given address or file")
ErrDecrypt = errors.New("could not decrypt key with given passphrase")
)
// Account represents a stored key.
// When used as an argument, it selects a unique key file to act on.
type Account struct {
Address common.Address // Ethereum account address derived from the key
// File contains the key file name.
// When Acccount is used as an argument to select a key, File can be left blank to
// select just by address or set to the basename or absolute path of a file in the key
// directory. Accounts returned by Manager will always contain an absolute path.
File string
}
func (acc *Account) MarshalJSON() ([]byte, error) {
return []byte(`"` + acc.Address.Hex() + `"`), nil
}
func (acc *Account) UnmarshalJSON(raw []byte) error {
return json.Unmarshal(raw, &acc.Address)
}
// Manager manages a key storage directory on disk.
type Manager struct {
cache *addrCache
keyStore keyStore
mu sync.RWMutex
unlocked map[common.Address]*unlocked
}
type unlocked struct {
*Key
abort chan struct{}
}
// NewManager creates a manager for the given directory.
func NewManager(keydir string, scryptN, scryptP int) *Manager {
keydir, _ = filepath.Abs(keydir)
am := &Manager{keyStore: &keyStorePassphrase{keydir, scryptN, scryptP}}
am.init(keydir)
return am
}
// NewPlaintextManager creates a manager for the given directory.
// Deprecated: Use NewManager.
func NewPlaintextManager(keydir string) *Manager {
keydir, _ = filepath.Abs(keydir)
am := &Manager{keyStore: &keyStorePlain{keydir}}
am.init(keydir)
return am
}
func (am *Manager) init(keydir string) {
am.unlocked = make(map[common.Address]*unlocked)
am.cache = newAddrCache(keydir)
// TODO: In order for this finalizer to work, there must be no references
// to am. addrCache doesn't keep a reference but unlocked keys do,
// so the finalizer will not trigger until all timed unlocks have expired.
runtime.SetFinalizer(am, func(m *Manager) {
m.cache.close()
})
}
// HasAddress reports whether a key with the given address is present.
func (am *Manager) HasAddress(addr common.Address) bool {
return am.cache.hasAddress(addr)
}
// Accounts returns all key files present in the directory.
func (am *Manager) Accounts() []Account {
return am.cache.accounts()
}
// DeleteAccount deletes the key matched by account if the passphrase is correct.
// If a contains no filename, the address must match a unique key.
func (am *Manager) DeleteAccount(a Account, passphrase string) error {
// Decrypting the key isn't really necessary, but we do
// it anyway to check the password and zero out the key
// immediately afterwards.
a, key, err := am.getDecryptedKey(a, passphrase)
if key != nil {
zeroKey(key.PrivateKey)
}
if err != nil {
return err
}
// The order is crucial here. The key is dropped from the
// cache after the file is gone so that a reload happening in
// between won't insert it into the cache again.
err = os.Remove(a.File)
if err == nil {
am.cache.delete(a)
}
return err
}
// Sign calculates a ECDSA signature for the given hash. The produced signature
// is in the [R || S || V] format where V is 0 or 1.
func (am *Manager) Sign(addr common.Address, hash []byte) ([]byte, error) {
am.mu.RLock()
defer am.mu.RUnlock()
unlockedKey, found := am.unlocked[addr]
if !found {
return nil, ErrLocked
}
return crypto.Sign(hash, unlockedKey.PrivateKey)
}
// SignWithPassphrase signs hash if the private key matching the given address
// can be decrypted with the given passphrase. The produced signature is in the
// [R || S || V] format where V is 0 or 1.
func (am *Manager) SignWithPassphrase(a Account, passphrase string, hash []byte) (signature []byte, err error) {
_, key, err := am.getDecryptedKey(a, passphrase)
if err != nil {
return nil, err
}
defer zeroKey(key.PrivateKey)
return crypto.Sign(hash, key.PrivateKey)
}
// Unlock unlocks the given account indefinitely.
func (am *Manager) Unlock(a Account, passphrase string) error {
return am.TimedUnlock(a, passphrase, 0)
}
// Lock removes the private key with the given address from memory.
func (am *Manager) Lock(addr common.Address) error {
am.mu.Lock()
if unl, found := am.unlocked[addr]; found {
am.mu.Unlock()
am.expire(addr, unl, time.Duration(0)*time.Nanosecond)
} else {
am.mu.Unlock()
}
return nil
}
// TimedUnlock unlocks the given account with the passphrase. The account
// stays unlocked for the duration of timeout. A timeout of 0 unlocks the account
// until the program exits. The account must match a unique key file.
//
// If the account address is already unlocked for a duration, TimedUnlock extends or
// shortens the active unlock timeout. If the address was previously unlocked
// indefinitely the timeout is not altered.
func (am *Manager) TimedUnlock(a Account, passphrase string, timeout time.Duration) error {
a, key, err := am.getDecryptedKey(a, passphrase)
if err != nil {
return err
}
am.mu.Lock()
defer am.mu.Unlock()
u, found := am.unlocked[a.Address]
if found {
if u.abort == nil {
// The address was unlocked indefinitely, so unlocking
// it with a timeout would be confusing.
zeroKey(key.PrivateKey)
return nil
} else {
// Terminate the expire goroutine and replace it below.
close(u.abort)
}
}
if timeout > 0 {
u = &unlocked{Key: key, abort: make(chan struct{})}
go am.expire(a.Address, u, timeout)
} else {
u = &unlocked{Key: key}
}
am.unlocked[a.Address] = u
return nil
}
// Find resolves the given account into a unique entry in the keystore.
func (am *Manager) Find(a Account) (Account, error) {
am.cache.maybeReload()
am.cache.mu.Lock()
a, err := am.cache.find(a)
am.cache.mu.Unlock()
return a, err
}
func (am *Manager) getDecryptedKey(a Account, auth string) (Account, *Key, error) {
a, err := am.Find(a)
if err != nil {
return a, nil, err
}
key, err := am.keyStore.GetKey(a.Address, a.File, auth)
return a, key, err
}
func (am *Manager) expire(addr common.Address, u *unlocked, timeout time.Duration) {
t := time.NewTimer(timeout)
defer t.Stop()
select {
case <-u.abort:
// just quit
case <-t.C:
am.mu.Lock()
// only drop if it's still the same key instance that dropLater
// was launched with. we can check that using pointer equality
// because the map stores a new pointer every time the key is
// unlocked.
if am.unlocked[addr] == u {
zeroKey(u.PrivateKey)
delete(am.unlocked, addr)
}
am.mu.Unlock()
}
}
// NewAccount generates a new key and stores it into the key directory,
// encrypting it with the passphrase.
func (am *Manager) NewAccount(passphrase string) (Account, error) {
_, account, err := storeNewKey(am.keyStore, crand.Reader, passphrase)
if err != nil {
return Account{}, err
}
// Add the account to the cache immediately rather
// than waiting for file system notifications to pick it up.
am.cache.add(account)
return account, nil
}
// AccountByIndex returns the ith account.
func (am *Manager) AccountByIndex(i int) (Account, error) {
accounts := am.Accounts()
if i < 0 || i >= len(accounts) {
return Account{}, fmt.Errorf("account index %d out of range [0, %d]", i, len(accounts)-1)
}
return accounts[i], nil
}
// Export exports as a JSON key, encrypted with newPassphrase.
func (am *Manager) Export(a Account, passphrase, newPassphrase string) (keyJSON []byte, err error) {
_, key, err := am.getDecryptedKey(a, passphrase)
if err != nil {
return nil, err
}
var N, P int
if store, ok := am.keyStore.(*keyStorePassphrase); ok {
N, P = store.scryptN, store.scryptP
} else {
N, P = StandardScryptN, StandardScryptP
}
return EncryptKey(key, newPassphrase, N, P)
}
// Import stores the given encrypted JSON key into the key directory.
func (am *Manager) Import(keyJSON []byte, passphrase, newPassphrase string) (Account, error) {
key, err := DecryptKey(keyJSON, passphrase)
if key != nil && key.PrivateKey != nil {
defer zeroKey(key.PrivateKey)
}
if err != nil {
return Account{}, err
}
return am.importKey(key, newPassphrase)
}
// ImportECDSA stores the given key into the key directory, encrypting it with the passphrase.
func (am *Manager) ImportECDSA(priv *ecdsa.PrivateKey, passphrase string) (Account, error) {
key := newKeyFromECDSA(priv)
if am.cache.hasAddress(key.Address) {
return Account{}, fmt.Errorf("account already exists")
}
return am.importKey(key, passphrase)
}
func (am *Manager) importKey(key *Key, passphrase string) (Account, error) {
a := Account{Address: key.Address, File: am.keyStore.JoinPath(keyFileName(key.Address))}
if err := am.keyStore.StoreKey(a.File, key, passphrase); err != nil {
return Account{}, err
}
am.cache.add(a)
return a, nil
}
// Update changes the passphrase of an existing account.
func (am *Manager) Update(a Account, passphrase, newPassphrase string) error {
a, key, err := am.getDecryptedKey(a, passphrase)
if err != nil {
return err
}
return am.keyStore.StoreKey(a.File, key, newPassphrase)
}
// ImportPreSaleKey decrypts the given Ethereum presale wallet and stores
// a key file in the key directory. The key file is encrypted with the same passphrase.
func (am *Manager) ImportPreSaleKey(keyJSON []byte, passphrase string) (Account, error) {
a, _, err := importPreSaleKey(am.keyStore, keyJSON, passphrase)
if err != nil {
return a, err
}
am.cache.add(a)
return a, nil
}
// zeroKey zeroes a private key in memory.
func zeroKey(k *ecdsa.PrivateKey) {
b := k.D.Bits()
for i := range b {
b[i] = 0
}
}

accounts/accounts.go (new file, 155 lines added)

@@ -0,0 +1,155 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package accounts implements high level Ethereum account management.
package accounts
import (
"math/big"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/event"
)
// Account represents an Ethereum account located at a specific location defined
// by the optional URL field.
type Account struct {
Address common.Address `json:"address"` // Ethereum account address derived from the key
URL URL `json:"url"` // Optional resource locator within a backend
}
// Wallet represents a software or hardware wallet that might contain one or more
// accounts (derived from the same seed).
type Wallet interface {
// URL retrieves the canonical path under which this wallet is reachable. It is
// user by upper layers to define a sorting order over all wallets from multiple
// backends.
URL() URL
// Status returns a textual status to aid the user in the current state of the
// wallet.
Status() string
// Open initializes access to a wallet instance. It is not meant to unlock or
// decrypt account keys, rather simply to establish a connection to hardware
// wallets and/or to access derivation seeds.
//
// The passphrase parameter may or may not be used by the implementation of a
// particular wallet instance. The reason there is no passwordless open method
// is to strive towards a uniform wallet handling, oblivious to the different
// backend providers.
//
// Please note, if you open a wallet, you must close it to release any allocated
// resources (especially important when working with hardware wallets).
Open(passphrase string) error
// Close releases any resources held by an open wallet instance.
Close() error
// Accounts retrieves the list of signing accounts the wallet is currently aware
// of. For hierarchical deterministic wallets, the list will not be exhaustive,
// rather only contain the accounts explicitly pinned during account derivation.
Accounts() []Account
// Contains returns whether an account is part of this particular wallet or not.
Contains(account Account) bool
// Derive attempts to explicitly derive a hierarchical deterministic account at
// the specified derivation path. If requested, the derived account will be added
// to the wallet's tracked account list.
Derive(path DerivationPath, pin bool) (Account, error)
// SelfDerive sets a base account derivation path from which the wallet attempts
// to discover non zero accounts and automatically add them to list of tracked
// accounts.
//
// Note, self derivaton will increment the last component of the specified path
// opposed to decending into a child path to allow discovering accounts starting
// from non zero components.
//
// You can disable automatic account discovery by calling SelfDerive with a nil
// chain state reader.
SelfDerive(base DerivationPath, chain ethereum.ChainStateReader)
// SignHash requests the wallet to sign the given hash.
//
// It looks up the account specified either solely via its address contained within,
// or optionally with the aid of any location metadata from the embedded URL field.
//
// If the wallet requires additional authentication to sign the request (e.g.
// a password to decrypt the account, or a PIN code o verify the transaction),
// an AuthNeededError instance will be returned, containing infos for the user
// about which fields or actions are needed. The user may retry by providing
// the needed details via SignHashWithPassphrase, or by other means (e.g. unlock
// the account in a keystore).
SignHash(account Account, hash []byte) ([]byte, error)
// SignTx requests the wallet to sign the given transaction.
//
// It looks up the account specified either solely via its address contained within,
// or optionally with the aid of any location metadata from the embedded URL field.
//
// If the wallet requires additional authentication to sign the request (e.g.
// a password to decrypt the account, or a PIN code o verify the transaction),
// an AuthNeededError instance will be returned, containing infos for the user
// about which fields or actions are needed. The user may retry by providing
// the needed details via SignTxWithPassphrase, or by other means (e.g. unlock
// the account in a keystore).
SignTx(account Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error)
// SignHashWithPassphrase requests the wallet to sign the given hash with the
// given passphrase as extra authentication information.
//
// It looks up the account specified either solely via its address contained within,
// or optionally with the aid of any location metadata from the embedded URL field.
SignHashWithPassphrase(account Account, passphrase string, hash []byte) ([]byte, error)
// SignTxWithPassphrase requests the wallet to sign the given transaction, with the
// given passphrase as extra authentication information.
//
// It looks up the account specified either solely via its address contained within,
// or optionally with the aid of any location metadata from the embedded URL field.
SignTxWithPassphrase(account Account, passphrase string, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error)
}
// Backend is a "wallet provider" that may contain a batch of accounts they can
// sign transactions with and upon request, do so.
type Backend interface {
// Wallets retrieves the list of wallets the backend is currently aware of.
//
// The returned wallets are not opened by default. For software HD wallets this
// means that no base seeds are decrypted, and for hardware wallets that no actual
// connection is established.
//
// The resulting wallet list will be sorted alphabetically based on its internal
// URL assigned by the backend. Since wallets (especially hardware) may come and
// go, the same wallet might appear at a different positions in the list during
// subsequent retrievals.
Wallets() []Wallet
// Subscribe creates an async subscription to receive notifications when the
// backend detects the arrival or departure of a wallet.
Subscribe(sink chan<- WalletEvent) event.Subscription
}
// WalletEvent is an event fired by an account backend when a wallet arrival or
// departure is detected.
type WalletEvent struct {
Wallet Wallet // Wallet instance arrived or departed
Arrive bool // Whether the wallet was added or removed
}
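
For orientation, a rough consumer-side sketch of the Backend and Wallet interfaces defined above (the concrete backend is supplied by the caller; no particular implementation is assumed):

    package example

    import (
        "fmt"

        "github.com/ethereum/go-ethereum/accounts"
    )

    // ListWallets prints the wallets a backend currently knows about and waits
    // for one arrival/departure notification, using only the interfaces above.
    func ListWallets(backend accounts.Backend) {
        for _, wallet := range backend.Wallets() {
            fmt.Println(wallet.URL(), wallet.Status())
        }
        events := make(chan accounts.WalletEvent, 16)
        sub := backend.Subscribe(events)
        defer sub.Unsubscribe()

        ev := <-events
        fmt.Println("wallet event:", ev.Wallet.URL(), "arrived:", ev.Arrive)
    }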


@@ -1,218 +0,0 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
import (
"io/ioutil"
"os"
"runtime"
"strings"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
)
var testSigData = make([]byte, 32)
func TestManager(t *testing.T) {
dir, am := tmpManager(t, true)
defer os.RemoveAll(dir)
a, err := am.NewAccount("foo")
if err != nil {
t.Fatal(err)
}
if !strings.HasPrefix(a.File, dir) {
t.Errorf("account file %s doesn't have dir prefix", a.File)
}
stat, err := os.Stat(a.File)
if err != nil {
t.Fatalf("account file %s doesn't exist (%v)", a.File, err)
}
if runtime.GOOS != "windows" && stat.Mode() != 0600 {
t.Fatalf("account file has wrong mode: got %o, want %o", stat.Mode(), 0600)
}
if !am.HasAddress(a.Address) {
t.Errorf("HasAccount(%x) should've returned true", a.Address)
}
if err := am.Update(a, "foo", "bar"); err != nil {
t.Errorf("Update error: %v", err)
}
if err := am.DeleteAccount(a, "bar"); err != nil {
t.Errorf("DeleteAccount error: %v", err)
}
if common.FileExist(a.File) {
t.Errorf("account file %s should be gone after DeleteAccount", a.File)
}
if am.HasAddress(a.Address) {
t.Errorf("HasAccount(%x) should've returned true after DeleteAccount", a.Address)
}
}
func TestSign(t *testing.T) {
dir, am := tmpManager(t, true)
defer os.RemoveAll(dir)
pass := "" // not used but required by API
a1, err := am.NewAccount(pass)
if err != nil {
t.Fatal(err)
}
if err := am.Unlock(a1, ""); err != nil {
t.Fatal(err)
}
if _, err := am.Sign(a1.Address, testSigData); err != nil {
t.Fatal(err)
}
}
func TestSignWithPassphrase(t *testing.T) {
dir, am := tmpManager(t, true)
defer os.RemoveAll(dir)
pass := "passwd"
acc, err := am.NewAccount(pass)
if err != nil {
t.Fatal(err)
}
if _, unlocked := am.unlocked[acc.Address]; unlocked {
t.Fatal("expected account to be locked")
}
_, err = am.SignWithPassphrase(acc, pass, testSigData)
if err != nil {
t.Fatal(err)
}
if _, unlocked := am.unlocked[acc.Address]; unlocked {
t.Fatal("expected account to be locked")
}
if _, err = am.SignWithPassphrase(acc, "invalid passwd", testSigData); err == nil {
t.Fatal("expected SignHash to fail with invalid password")
}
}
func TestTimedUnlock(t *testing.T) {
dir, am := tmpManager(t, true)
defer os.RemoveAll(dir)
pass := "foo"
a1, err := am.NewAccount(pass)
// Signing without passphrase fails because account is locked
_, err = am.Sign(a1.Address, testSigData)
if err != ErrLocked {
t.Fatal("Signing should've failed with ErrLocked before unlocking, got ", err)
}
// Signing with passphrase works
if err = am.TimedUnlock(a1, pass, 100*time.Millisecond); err != nil {
t.Fatal(err)
}
// Signing without passphrase works because account is temp unlocked
_, err = am.Sign(a1.Address, testSigData)
if err != nil {
t.Fatal("Signing shouldn't return an error after unlocking, got ", err)
}
// Signing fails again after automatic locking
time.Sleep(250 * time.Millisecond)
_, err = am.Sign(a1.Address, testSigData)
if err != ErrLocked {
t.Fatal("Signing should've failed with ErrLocked timeout expired, got ", err)
}
}
func TestOverrideUnlock(t *testing.T) {
dir, am := tmpManager(t, false)
defer os.RemoveAll(dir)
pass := "foo"
a1, err := am.NewAccount(pass)
// Unlock indefinitely.
if err = am.TimedUnlock(a1, pass, 5*time.Minute); err != nil {
t.Fatal(err)
}
// Signing without passphrase works because account is temp unlocked
_, err = am.Sign(a1.Address, testSigData)
if err != nil {
t.Fatal("Signing shouldn't return an error after unlocking, got ", err)
}
// reset unlock to a shorter period, invalidates the previous unlock
if err = am.TimedUnlock(a1, pass, 100*time.Millisecond); err != nil {
t.Fatal(err)
}
// Signing without passphrase still works because account is temp unlocked
_, err = am.Sign(a1.Address, testSigData)
if err != nil {
t.Fatal("Signing shouldn't return an error after unlocking, got ", err)
}
// Signing fails again after automatic locking
time.Sleep(250 * time.Millisecond)
_, err = am.Sign(a1.Address, testSigData)
if err != ErrLocked {
t.Fatal("Signing should've failed with ErrLocked timeout expired, got ", err)
}
}
// This test should fail under -race if signing races the expiration goroutine.
func TestSignRace(t *testing.T) {
dir, am := tmpManager(t, false)
defer os.RemoveAll(dir)
// Create a test account.
a1, err := am.NewAccount("")
if err != nil {
t.Fatal("could not create the test account", err)
}
if err := am.TimedUnlock(a1, "", 15*time.Millisecond); err != nil {
t.Fatal("could not unlock the test account", err)
}
end := time.Now().Add(500 * time.Millisecond)
for time.Now().Before(end) {
if _, err := am.Sign(a1.Address, testSigData); err == ErrLocked {
return
} else if err != nil {
t.Errorf("Sign error: %v", err)
return
}
time.Sleep(1 * time.Millisecond)
}
t.Errorf("Account did not lock within the timeout")
}
func tmpManager(t *testing.T, encrypted bool) (string, *Manager) {
d, err := ioutil.TempDir("", "eth-keystore-test")
if err != nil {
t.Fatal(err)
}
new := NewPlaintextManager
if encrypted {
new = func(kd string) *Manager { return NewManager(kd, veryLightScryptN, veryLightScryptP) }
}
return d, new(d)
}

accounts/errors.go (new file, 68 lines added)

@@ -0,0 +1,68 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
import (
"errors"
"fmt"
)
// ErrUnknownAccount is returned for any requested operation for which no backend
// provides the specified account.
var ErrUnknownAccount = errors.New("unknown account")
// ErrUnknownWallet is returned for any requested operation for which no backend
// provides the specified wallet.
var ErrUnknownWallet = errors.New("unknown wallet")
// ErrNotSupported is returned when an operation is requested from an account
// backend that it does not support.
var ErrNotSupported = errors.New("not supported")
// ErrInvalidPassphrase is returned when a decryption operation receives a bad
// passphrase.
var ErrInvalidPassphrase = errors.New("invalid passphrase")
// ErrWalletAlreadyOpen is returned if a wallet is attempted to be opened the
// secodn time.
var ErrWalletAlreadyOpen = errors.New("wallet already open")
// ErrWalletClosed is returned if a wallet is attempted to be opened the
// secodn time.
var ErrWalletClosed = errors.New("wallet closed")
// AuthNeededError is returned by backends for signing requests where the user
// is required to provide further authentication before signing can succeed.
//
// This usually means either that a password needs to be supplied, or perhaps a
// one time PIN code displayed by some hardware device.
type AuthNeededError struct {
Needed string // Extra authentication the user needs to provide
}
// NewAuthNeededError creates a new authentication error with the extra details
// about the needed fields set.
func NewAuthNeededError(needed string) error {
return &AuthNeededError{
Needed: needed,
}
}
// Error implements the standard error interfacel.
func (err *AuthNeededError) Error() string {
return fmt.Sprintf("authentication needed: %s", err.Needed)
}

accounts/hd.go (new file, 130 lines added)

@@ -0,0 +1,130 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
import (
"errors"
"fmt"
"math"
"math/big"
"strings"
)
// DefaultRootDerivationPath is the root path to which custom derivation endpoints
// are appended. As such, the first account will be at m/44'/60'/0'/0, the second
// at m/44'/60'/0'/1, etc.
var DefaultRootDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0}
// DefaultBaseDerivationPath is the base path from which custom derivation endpoints
// are incremented. As such, the first account will be at m/44'/60'/0'/0, the second
// at m/44'/60'/0'/1, etc.
var DefaultBaseDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}
// DerivationPath represents the computer friendly version of a hierarchical
// deterministic wallet account derivaion path.
//
// The BIP-32 spec https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki
// defines derivation paths to be of the form:
//
// m / purpose' / coin_type' / account' / change / address_index
//
// The BIP-44 spec https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki
// defines that the `purpose` be 44' (or 0x8000002C) for crypto currencies, and
// SLIP-44 https://github.com/satoshilabs/slips/blob/master/slip-0044.md assigns
// the `coin_type` 60' (or 0x8000003C) to Ethereum.
//
// The root path for Ethereum is m/44'/60'/0'/0 according to the specification
// from https://github.com/ethereum/EIPs/issues/84, albeit it's not set in stone
// yet whether accounts should increment the last component or the children of
// that. We will go with the simpler approach of incrementing the last component.
type DerivationPath []uint32
// ParseDerivationPath converts a user specified derivation path string to the
// internal binary representation.
//
// Full derivation paths need to start with the `m/` prefix, relative derivation
// paths (which will get appended to the default root path) must not have prefixes
// in front of the first element. Whitespace is ignored.
func ParseDerivationPath(path string) (DerivationPath, error) {
var result DerivationPath
// Handle absolute or relative paths
components := strings.Split(path, "/")
switch {
case len(components) == 0:
return nil, errors.New("empty derivation path")
case strings.TrimSpace(components[0]) == "":
return nil, errors.New("ambiguous path: use 'm/' prefix for absolute paths, or no leading '/' for relative ones")
case strings.TrimSpace(components[0]) == "m":
components = components[1:]
default:
result = append(result, DefaultRootDerivationPath...)
}
// All remaining components are relative, append one by one
if len(components) == 0 {
return nil, errors.New("empty derivation path") // Empty relative paths
}
for _, component := range components {
// Ignore any user added whitespace
component = strings.TrimSpace(component)
var value uint32
// Handle hardened paths
if strings.HasSuffix(component, "'") {
value = 0x80000000
component = strings.TrimSpace(strings.TrimSuffix(component, "'"))
}
// Handle the non hardened component
bigval, ok := new(big.Int).SetString(component, 0)
if !ok {
return nil, fmt.Errorf("invalid component: %s", component)
}
max := math.MaxUint32 - value
if bigval.Sign() < 0 || bigval.Cmp(big.NewInt(int64(max))) > 0 {
if value == 0 {
return nil, fmt.Errorf("component %v out of allowed range [0, %d]", bigval, max)
}
return nil, fmt.Errorf("component %v out of allowed hardened range [0, %d]", bigval, max)
}
value += uint32(bigval.Uint64())
// Append and repeat
result = append(result, value)
}
return result, nil
}
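// Editor's illustrative usage sketch (not part of this diff): absolute paths keep
// their components verbatim, relative ones are appended to DefaultRootDerivationPath,
// and hexadecimal components are accepted as well.
abs, _ := ParseDerivationPath("m/44'/60'/0'/0")             // m/44'/60'/0'/0
rel, _ := ParseDerivationPath("1")                          // m/44'/60'/0'/1
hexed, _ := ParseDerivationPath("m/0x2C'/0x3c'/0x00'/0x00") // m/44'/60'/0'/0
fmt.Println(abs, rel, hexed)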
// String implements the stringer interface, converting a binary derivation path
// to its canonical representation.
func (path DerivationPath) String() string {
result := "m"
for _, component := range path {
var hardened bool
if component >= 0x80000000 {
component -= 0x80000000
hardened = true
}
result = fmt.Sprintf("%s/%d", result, component)
if hardened {
result += "'"
}
}
return result
}

accounts/hd_test.go Normal file

@@ -0,0 +1,79 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
import (
"reflect"
"testing"
)
// Tests that HD derivation paths can be correctly parsed into our internal binary
// representation.
func TestHDPathParsing(t *testing.T) {
tests := []struct {
input string
output DerivationPath
}{
// Plain absolute derivation paths
{"m/44'/60'/0'/0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"m/44'/60'/0'/128", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 128}},
{"m/44'/60'/0'/0'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
{"m/44'/60'/0'/128'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 128}},
{"m/2147483692/2147483708/2147483648/0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"m/2147483692/2147483708/2147483648/2147483648", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
// Plain relative derivation paths
{"0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"128", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 128}},
{"0'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
{"128'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 128}},
{"2147483648", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
// Hexadecimal absolute derivation paths
{"m/0x2C'/0x3c'/0x00'/0x00", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"m/0x2C'/0x3c'/0x00'/0x80", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 128}},
{"m/0x2C'/0x3c'/0x00'/0x00'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
{"m/0x2C'/0x3c'/0x00'/0x80'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 128}},
{"m/0x8000002C/0x8000003c/0x80000000/0x00", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"m/0x8000002C/0x8000003c/0x80000000/0x80000000", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
// Hexadecimal relative derivation paths
{"0x00", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"0x80", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 128}},
{"0x00'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
{"0x80'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 128}},
{"0x80000000", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
// Weird inputs just to ensure they work
{" m / 44 '\n/\n 60 \n\n\t' /\n0 ' /\t\t 0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
// Invalid derivation paths
{"", nil}, // Empty relative derivation path
{"m", nil}, // Empty absolute derivation path
{"m/", nil}, // Missing last derivation component
{"/44'/60'/0'/0", nil}, // Absolute path without m prefix, might be user error
{"m/2147483648'", nil}, // Overflows 32 bit integer
{"m/-1'", nil}, // Cannot contain negative number
}
for i, tt := range tests {
if path, err := ParseDerivationPath(tt.input); !reflect.DeepEqual(path, tt.output) {
t.Errorf("test %d: parse mismatch: have %v (%v), want %v", i, path, err, tt.output)
} else if path == nil && err == nil {
t.Errorf("test %d: nil path and error: %v", i, err)
}
}
}

View File

@@ -14,7 +14,7 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
package keystore
import (
"bufio"
@@ -28,6 +28,7 @@ import (
"sync"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/logger"
"github.com/ethereum/go-ethereum/logger/glog"
@@ -38,23 +39,23 @@ import (
// exist yet, the code will attempt to create a watcher at most this often.
const minReloadInterval = 2 * time.Second
type accountsByFile []Account
type accountsByURL []accounts.Account
func (s accountsByFile) Len() int { return len(s) }
func (s accountsByFile) Less(i, j int) bool { return s[i].File < s[j].File }
func (s accountsByFile) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s accountsByURL) Len() int { return len(s) }
func (s accountsByURL) Less(i, j int) bool { return s[i].URL.Cmp(s[j].URL) < 0 }
func (s accountsByURL) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
// AmbiguousAddrError is returned when attempting to unlock
// an address for which more than one file exists.
type AmbiguousAddrError struct {
Addr common.Address
Matches []Account
Matches []accounts.Account
}
func (err *AmbiguousAddrError) Error() string {
files := ""
for i, a := range err.Matches {
files += a.File
files += a.URL.Path
if i < len(err.Matches)-1 {
files += ", "
}
@@ -62,60 +63,63 @@ func (err *AmbiguousAddrError) Error() string {
return fmt.Sprintf("multiple keys match address (%s)", files)
}
// addrCache is a live index of all accounts in the keystore.
type addrCache struct {
// accountCache is a live index of all accounts in the keystore.
type accountCache struct {
keydir string
watcher *watcher
mu sync.Mutex
all accountsByFile
byAddr map[common.Address][]Account
all accountsByURL
byAddr map[common.Address][]accounts.Account
throttle *time.Timer
notify chan struct{}
}
func newAddrCache(keydir string) *addrCache {
ac := &addrCache{
func newAccountCache(keydir string) (*accountCache, chan struct{}) {
ac := &accountCache{
keydir: keydir,
byAddr: make(map[common.Address][]Account),
byAddr: make(map[common.Address][]accounts.Account),
notify: make(chan struct{}, 1),
}
ac.watcher = newWatcher(ac)
return ac
return ac, ac.notify
}
func (ac *addrCache) accounts() []Account {
func (ac *accountCache) accounts() []accounts.Account {
ac.maybeReload()
ac.mu.Lock()
defer ac.mu.Unlock()
cpy := make([]Account, len(ac.all))
cpy := make([]accounts.Account, len(ac.all))
copy(cpy, ac.all)
return cpy
}
func (ac *addrCache) hasAddress(addr common.Address) bool {
func (ac *accountCache) hasAddress(addr common.Address) bool {
ac.maybeReload()
ac.mu.Lock()
defer ac.mu.Unlock()
return len(ac.byAddr[addr]) > 0
}
func (ac *addrCache) add(newAccount Account) {
func (ac *accountCache) add(newAccount accounts.Account) {
ac.mu.Lock()
defer ac.mu.Unlock()
i := sort.Search(len(ac.all), func(i int) bool { return ac.all[i].File >= newAccount.File })
i := sort.Search(len(ac.all), func(i int) bool { return ac.all[i].URL.Cmp(newAccount.URL) >= 0 })
if i < len(ac.all) && ac.all[i] == newAccount {
return
}
// newAccount is not in the cache.
ac.all = append(ac.all, Account{})
ac.all = append(ac.all, accounts.Account{})
copy(ac.all[i+1:], ac.all[i:])
ac.all[i] = newAccount
ac.byAddr[newAccount.Address] = append(ac.byAddr[newAccount.Address], newAccount)
}
// note: removed needs to be unique here (i.e. both URL and Address must be set).
func (ac *addrCache) delete(removed Account) {
func (ac *accountCache) delete(removed accounts.Account) {
ac.mu.Lock()
defer ac.mu.Unlock()
ac.all = removeAccount(ac.all, removed)
if ba := removeAccount(ac.byAddr[removed.Address], removed); len(ba) == 0 {
delete(ac.byAddr, removed.Address)
@@ -124,7 +128,7 @@ func (ac *addrCache) delete(removed Account) {
}
}
func removeAccount(slice []Account, elem Account) []Account {
func removeAccount(slice []accounts.Account, elem accounts.Account) []accounts.Account {
for i := range slice {
if slice[i] == elem {
return append(slice[:i], slice[i+1:]...)
@@ -134,43 +138,44 @@ func removeAccount(slice []Account, elem Account) []Account {
}
// find returns the cached account for address if there is a unique match.
// The exact matching rules are explained by the documentation of Account.
// The exact matching rules are explained by the documentation of accounts.Account.
// Callers must hold ac.mu.
func (ac *addrCache) find(a Account) (Account, error) {
func (ac *accountCache) find(a accounts.Account) (accounts.Account, error) {
// Limit search to address candidates if possible.
matches := ac.all
if (a.Address != common.Address{}) {
matches = ac.byAddr[a.Address]
}
if a.File != "" {
if a.URL.Path != "" {
// If only the basename is specified, complete the path.
if !strings.ContainsRune(a.File, filepath.Separator) {
a.File = filepath.Join(ac.keydir, a.File)
if !strings.ContainsRune(a.URL.Path, filepath.Separator) {
a.URL.Path = filepath.Join(ac.keydir, a.URL.Path)
}
for i := range matches {
if matches[i].File == a.File {
if matches[i].URL == a.URL {
return matches[i], nil
}
}
if (a.Address == common.Address{}) {
return Account{}, ErrNoMatch
return accounts.Account{}, ErrNoMatch
}
}
switch len(matches) {
case 1:
return matches[0], nil
case 0:
return Account{}, ErrNoMatch
return accounts.Account{}, ErrNoMatch
default:
err := &AmbiguousAddrError{Addr: a.Address, Matches: make([]Account, len(matches))}
err := &AmbiguousAddrError{Addr: a.Address, Matches: make([]accounts.Account, len(matches))}
copy(err.Matches, matches)
return Account{}, err
return accounts.Account{}, err
}
}
func (ac *addrCache) maybeReload() {
func (ac *accountCache) maybeReload() {
ac.mu.Lock()
defer ac.mu.Unlock()
if ac.watcher.running {
return // A watcher is running and will keep the cache up-to-date.
}
@@ -188,18 +193,22 @@ func (ac *addrCache) maybeReload() {
ac.throttle.Reset(minReloadInterval)
}
func (ac *addrCache) close() {
func (ac *accountCache) close() {
ac.mu.Lock()
ac.watcher.close()
if ac.throttle != nil {
ac.throttle.Stop()
}
if ac.notify != nil {
close(ac.notify)
ac.notify = nil
}
ac.mu.Unlock()
}
// reload caches addresses of existing accounts.
// Callers must hold ac.mu.
func (ac *addrCache) reload() {
func (ac *accountCache) reload() {
accounts, err := ac.scan()
if err != nil && glog.V(logger.Debug) {
glog.Errorf("can't load keys: %v", err)
@@ -212,10 +221,14 @@ func (ac *addrCache) reload() {
for _, a := range accounts {
ac.byAddr[a.Address] = append(ac.byAddr[a.Address], a)
}
select {
case ac.notify <- struct{}{}:
default:
}
glog.V(logger.Debug).Infof("reloaded keys, cache has %d accounts", len(ac.all))
}
func (ac *addrCache) scan() ([]Account, error) {
func (ac *accountCache) scan() ([]accounts.Account, error) {
files, err := ioutil.ReadDir(ac.keydir)
if err != nil {
return nil, err
@@ -223,7 +236,7 @@ func (ac *addrCache) scan() ([]Account, error) {
var (
buf = new(bufio.Reader)
addrs []Account
addrs []accounts.Account
keyJSON struct {
Address string `json:"address"`
}
@@ -250,7 +263,7 @@ func (ac *addrCache) scan() ([]Account, error) {
case (addr == common.Address{}):
glog.V(logger.Debug).Infof("can't decode key %s: missing or zero address", path)
default:
addrs = append(addrs, Account{Address: addr, File: path})
addrs = append(addrs, accounts.Account{Address: addr, URL: accounts.URL{Scheme: KeyStoreScheme, Path: path}})
}
fd.Close()
}

View File

@@ -14,7 +14,7 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
package keystore
import (
"fmt"
@@ -28,23 +28,24 @@ import (
"github.com/cespare/cp"
"github.com/davecgh/go-spew/spew"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
)
var (
cachetestDir, _ = filepath.Abs(filepath.Join("testdata", "keystore"))
cachetestAccounts = []Account{
cachetestAccounts = []accounts.Account{
{
Address: common.HexToAddress("7ef5a6135f1fd6a02593eedc869c6d41d934aef8"),
File: filepath.Join(cachetestDir, "UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(cachetestDir, "UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8")},
},
{
Address: common.HexToAddress("f466859ead1932d743d622cb74fc058882e8648a"),
File: filepath.Join(cachetestDir, "aaa"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(cachetestDir, "aaa")},
},
{
Address: common.HexToAddress("289d485d9771714cce91d3393d764e1311907acc"),
File: filepath.Join(cachetestDir, "zzz"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(cachetestDir, "zzz")},
},
}
)
@@ -52,29 +53,36 @@ var (
func TestWatchNewFile(t *testing.T) {
t.Parallel()
dir, am := tmpManager(t, false)
dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
// Ensure the watcher is started before adding any files.
am.Accounts()
ks.Accounts()
time.Sleep(200 * time.Millisecond)
// Move in the files.
wantAccounts := make([]Account, len(cachetestAccounts))
wantAccounts := make([]accounts.Account, len(cachetestAccounts))
for i := range cachetestAccounts {
a := cachetestAccounts[i]
a.File = filepath.Join(dir, filepath.Base(a.File))
wantAccounts[i] = a
if err := cp.CopyFile(a.File, cachetestAccounts[i].File); err != nil {
wantAccounts[i] = accounts.Account{
Address: cachetestAccounts[i].Address,
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, filepath.Base(cachetestAccounts[i].URL.Path))},
}
if err := cp.CopyFile(wantAccounts[i].URL.Path, cachetestAccounts[i].URL.Path); err != nil {
t.Fatal(err)
}
}
// am should see the accounts.
var list []Account
// ks should see the accounts.
var list []accounts.Account
for d := 200 * time.Millisecond; d < 5*time.Second; d *= 2 {
list = am.Accounts()
list = ks.Accounts()
if reflect.DeepEqual(list, wantAccounts) {
// ks should have also received change notifications
select {
case <-ks.changes:
default:
t.Fatalf("wasn't notified of new accounts")
}
return
}
time.Sleep(d)
@@ -85,12 +93,12 @@ func TestWatchNewFile(t *testing.T) {
func TestWatchNoDir(t *testing.T) {
t.Parallel()
// Create am but not the directory that it watches.
// Create ks but not the directory that it watches.
rand.Seed(time.Now().UnixNano())
dir := filepath.Join(os.TempDir(), fmt.Sprintf("eth-keystore-watch-test-%d-%d", os.Getpid(), rand.Int()))
am := NewManager(dir, LightScryptN, LightScryptP)
ks := NewKeyStore(dir, LightScryptN, LightScryptP)
list := am.Accounts()
list := ks.Accounts()
if len(list) > 0 {
t.Error("initial account list not empty:", list)
}
@@ -100,16 +108,22 @@ func TestWatchNoDir(t *testing.T) {
os.MkdirAll(dir, 0700)
defer os.RemoveAll(dir)
file := filepath.Join(dir, "aaa")
if err := cp.CopyFile(file, cachetestAccounts[0].File); err != nil {
if err := cp.CopyFile(file, cachetestAccounts[0].URL.Path); err != nil {
t.Fatal(err)
}
// am should see the account.
wantAccounts := []Account{cachetestAccounts[0]}
wantAccounts[0].File = file
// ks should see the account.
wantAccounts := []accounts.Account{cachetestAccounts[0]}
wantAccounts[0].URL = accounts.URL{Scheme: KeyStoreScheme, Path: file}
for d := 200 * time.Millisecond; d < 8*time.Second; d *= 2 {
list = am.Accounts()
list = ks.Accounts()
if reflect.DeepEqual(list, wantAccounts) {
// ks should have also received change notifications
select {
case <-ks.changes:
default:
t.Fatalf("wasn't notified of new accounts")
}
return
}
time.Sleep(d)
@@ -118,7 +132,7 @@ func TestWatchNoDir(t *testing.T) {
}
func TestCacheInitialReload(t *testing.T) {
cache := newAddrCache(cachetestDir)
cache, _ := newAccountCache(cachetestDir)
accounts := cache.accounts()
if !reflect.DeepEqual(accounts, cachetestAccounts) {
t.Fatalf("got initial accounts: %swant %s", spew.Sdump(accounts), spew.Sdump(cachetestAccounts))
@@ -126,55 +140,55 @@ func TestCacheInitialReload(t *testing.T) {
}
func TestCacheAddDeleteOrder(t *testing.T) {
cache := newAddrCache("testdata/no-such-dir")
cache, _ := newAccountCache("testdata/no-such-dir")
cache.watcher.running = true // prevent unexpected reloads
accounts := []Account{
accs := []accounts.Account{
{
Address: common.HexToAddress("095e7baea6a6c7c4c2dfeb977efac326af552d87"),
File: "-309830980",
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "-309830980"},
},
{
Address: common.HexToAddress("2cac1adea150210703ba75ed097ddfe24e14f213"),
File: "ggg",
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "ggg"},
},
{
Address: common.HexToAddress("8bda78331c916a08481428e4b07c96d3e916d165"),
File: "zzzzzz-the-very-last-one.keyXXX",
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "zzzzzz-the-very-last-one.keyXXX"},
},
{
Address: common.HexToAddress("d49ff4eeb0b2686ed89c0fc0f2b6ea533ddbbd5e"),
File: "SOMETHING.key",
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "SOMETHING.key"},
},
{
Address: common.HexToAddress("7ef5a6135f1fd6a02593eedc869c6d41d934aef8"),
File: "UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8",
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8"},
},
{
Address: common.HexToAddress("f466859ead1932d743d622cb74fc058882e8648a"),
File: "aaa",
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "aaa"},
},
{
Address: common.HexToAddress("289d485d9771714cce91d3393d764e1311907acc"),
File: "zzz",
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "zzz"},
},
}
for _, a := range accounts {
for _, a := range accs {
cache.add(a)
}
// Add some of them twice to check that they don't get reinserted.
cache.add(accounts[0])
cache.add(accounts[2])
cache.add(accs[0])
cache.add(accs[2])
// Check that the account list is sorted by filename.
wantAccounts := make([]Account, len(accounts))
copy(wantAccounts, accounts)
sort.Sort(accountsByFile(wantAccounts))
wantAccounts := make([]accounts.Account, len(accs))
copy(wantAccounts, accs)
sort.Sort(accountsByURL(wantAccounts))
list := cache.accounts()
if !reflect.DeepEqual(list, wantAccounts) {
t.Fatalf("got accounts: %s\nwant %s", spew.Sdump(accounts), spew.Sdump(wantAccounts))
t.Fatalf("got accounts: %s\nwant %s", spew.Sdump(accs), spew.Sdump(wantAccounts))
}
for _, a := range accounts {
for _, a := range accs {
if !cache.hasAddress(a.Address) {
t.Errorf("expected hasAccount(%x) to return true", a.Address)
}
@@ -184,13 +198,13 @@ func TestCacheAddDeleteOrder(t *testing.T) {
}
// Delete a few keys from the cache.
for i := 0; i < len(accounts); i += 2 {
for i := 0; i < len(accs); i += 2 {
cache.delete(wantAccounts[i])
}
cache.delete(Account{Address: common.HexToAddress("fd9bd350f08ee3c0c19b85a8e16114a11a60aa4e"), File: "something"})
cache.delete(accounts.Account{Address: common.HexToAddress("fd9bd350f08ee3c0c19b85a8e16114a11a60aa4e"), URL: accounts.URL{Scheme: KeyStoreScheme, Path: "something"}})
// Check content again after deletion.
wantAccountsAfterDelete := []Account{
wantAccountsAfterDelete := []accounts.Account{
wantAccounts[1],
wantAccounts[3],
wantAccounts[5],
@@ -211,63 +225,63 @@ func TestCacheAddDeleteOrder(t *testing.T) {
func TestCacheFind(t *testing.T) {
dir := filepath.Join("testdata", "dir")
cache := newAddrCache(dir)
cache, _ := newAccountCache(dir)
cache.watcher.running = true // prevent unexpected reloads
accounts := []Account{
accs := []accounts.Account{
{
Address: common.HexToAddress("095e7baea6a6c7c4c2dfeb977efac326af552d87"),
File: filepath.Join(dir, "a.key"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, "a.key")},
},
{
Address: common.HexToAddress("2cac1adea150210703ba75ed097ddfe24e14f213"),
File: filepath.Join(dir, "b.key"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, "b.key")},
},
{
Address: common.HexToAddress("d49ff4eeb0b2686ed89c0fc0f2b6ea533ddbbd5e"),
File: filepath.Join(dir, "c.key"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, "c.key")},
},
{
Address: common.HexToAddress("d49ff4eeb0b2686ed89c0fc0f2b6ea533ddbbd5e"),
File: filepath.Join(dir, "c2.key"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, "c2.key")},
},
}
for _, a := range accounts {
for _, a := range accs {
cache.add(a)
}
nomatchAccount := Account{
nomatchAccount := accounts.Account{
Address: common.HexToAddress("f466859ead1932d743d622cb74fc058882e8648a"),
File: filepath.Join(dir, "something"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, "something")},
}
tests := []struct {
Query Account
WantResult Account
Query accounts.Account
WantResult accounts.Account
WantError error
}{
// by address
{Query: Account{Address: accounts[0].Address}, WantResult: accounts[0]},
{Query: accounts.Account{Address: accs[0].Address}, WantResult: accs[0]},
// by file
{Query: Account{File: accounts[0].File}, WantResult: accounts[0]},
{Query: accounts.Account{URL: accs[0].URL}, WantResult: accs[0]},
// by basename
{Query: Account{File: filepath.Base(accounts[0].File)}, WantResult: accounts[0]},
{Query: accounts.Account{URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Base(accs[0].URL.Path)}}, WantResult: accs[0]},
// by file and address
{Query: accounts[0], WantResult: accounts[0]},
{Query: accs[0], WantResult: accs[0]},
// ambiguous address, tie resolved by file
{Query: accounts[2], WantResult: accounts[2]},
{Query: accs[2], WantResult: accs[2]},
// ambiguous address error
{
Query: Account{Address: accounts[2].Address},
Query: accounts.Account{Address: accs[2].Address},
WantError: &AmbiguousAddrError{
Addr: accounts[2].Address,
Matches: []Account{accounts[2], accounts[3]},
Addr: accs[2].Address,
Matches: []accounts.Account{accs[2], accs[3]},
},
},
// no match error
{Query: nomatchAccount, WantError: ErrNoMatch},
{Query: Account{File: nomatchAccount.File}, WantError: ErrNoMatch},
{Query: Account{File: filepath.Base(nomatchAccount.File)}, WantError: ErrNoMatch},
{Query: Account{Address: nomatchAccount.Address}, WantError: ErrNoMatch},
{Query: accounts.Account{URL: nomatchAccount.URL}, WantError: ErrNoMatch},
{Query: accounts.Account{URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Base(nomatchAccount.URL.Path)}}, WantError: ErrNoMatch},
{Query: accounts.Account{Address: nomatchAccount.Address}, WantError: ErrNoMatch},
}
for i, test := range tests {
a, err := cache.find(test.Query)

View File

@@ -14,7 +14,7 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
package keystore
import (
"bytes"
@@ -29,6 +29,7 @@ import (
"strings"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/crypto/secp256k1"
@@ -175,13 +176,13 @@ func newKey(rand io.Reader) (*Key, error) {
return newKeyFromECDSA(privateKeyECDSA), nil
}
func storeNewKey(ks keyStore, rand io.Reader, auth string) (*Key, Account, error) {
func storeNewKey(ks keyStore, rand io.Reader, auth string) (*Key, accounts.Account, error) {
key, err := newKey(rand)
if err != nil {
return nil, Account{}, err
return nil, accounts.Account{}, err
}
a := Account{Address: key.Address, File: ks.JoinPath(keyFileName(key.Address))}
if err := ks.StoreKey(a.File, key, auth); err != nil {
a := accounts.Account{Address: key.Address, URL: accounts.URL{Scheme: KeyStoreScheme, Path: ks.JoinPath(keyFileName(key.Address))}}
if err := ks.StoreKey(a.URL.Path, key, auth); err != nil {
zeroKey(key.PrivateKey)
return nil, a, err
}

View File

@@ -0,0 +1,494 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package keystore implements encrypted storage of secp256k1 private keys.
//
// Keys are stored as encrypted JSON files according to the Web3 Secret Storage specification.
// See https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition for more information.
package keystore
import (
"crypto/ecdsa"
crand "crypto/rand"
"errors"
"fmt"
"math/big"
"os"
"path/filepath"
"reflect"
"runtime"
"sync"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/event"
)
var (
ErrLocked = accounts.NewAuthNeededError("password or unlock")
ErrNoMatch = errors.New("no key for given address or file")
ErrDecrypt = errors.New("could not decrypt key with given passphrase")
)
// KeyStoreType is the reflect type of a keystore backend.
var KeyStoreType = reflect.TypeOf(&KeyStore{})
// KeyStoreScheme is the protocol scheme prefixing account and wallet URLs.
var KeyStoreScheme = "keystore"
// Maximum time between wallet refreshes (if filesystem notifications don't work).
const walletRefreshCycle = 3 * time.Second
// KeyStore manages a key storage directory on disk.
type KeyStore struct {
storage keyStore // Storage backend, might be cleartext or encrypted
cache *accountCache // In-memory account cache over the filesystem storage
changes chan struct{} // Channel receiving change notifications from the cache
unlocked map[common.Address]*unlocked // Currently unlocked accounts (decrypted private keys)
wallets []accounts.Wallet // Wallet wrappers around the individual key files
updateFeed event.Feed // Event feed to notify wallet additions/removals
updateScope event.SubscriptionScope // Subscription scope tracking current live listeners
updating bool // Whether the event notification loop is running
mu sync.RWMutex
}
type unlocked struct {
*Key
abort chan struct{}
}
// NewKeyStore creates a keystore for the given directory.
func NewKeyStore(keydir string, scryptN, scryptP int) *KeyStore {
keydir, _ = filepath.Abs(keydir)
ks := &KeyStore{storage: &keyStorePassphrase{keydir, scryptN, scryptP}}
ks.init(keydir)
return ks
}
// NewPlaintextKeyStore creates a keystore for the given directory.
// Deprecated: Use NewKeyStore.
func NewPlaintextKeyStore(keydir string) *KeyStore {
keydir, _ = filepath.Abs(keydir)
ks := &KeyStore{storage: &keyStorePlain{keydir}}
ks.init(keydir)
return ks
}
func (ks *KeyStore) init(keydir string) {
// Lock the mutex since the account cache might call back with events
ks.mu.Lock()
defer ks.mu.Unlock()
// Initialize the set of unlocked keys and the account cache
ks.unlocked = make(map[common.Address]*unlocked)
ks.cache, ks.changes = newAccountCache(keydir)
// TODO: In order for this finalizer to work, there must be no references
// to ks. accountCache doesn't keep a reference but unlocked keys do,
// so the finalizer will not trigger until all timed unlocks have expired.
runtime.SetFinalizer(ks, func(m *KeyStore) {
m.cache.close()
})
// Create the initial list of wallets from the cache
accs := ks.cache.accounts()
ks.wallets = make([]accounts.Wallet, len(accs))
for i := 0; i < len(accs); i++ {
ks.wallets[i] = &keystoreWallet{account: accs[i], keystore: ks}
}
}
// Wallets implements accounts.Backend, returning all single-key wallets from the
// keystore directory.
func (ks *KeyStore) Wallets() []accounts.Wallet {
// Make sure the list of wallets is in sync with the account cache
ks.refreshWallets()
ks.mu.RLock()
defer ks.mu.RUnlock()
cpy := make([]accounts.Wallet, len(ks.wallets))
copy(cpy, ks.wallets)
return cpy
}
// refreshWallets retrieves the current account list and based on that does any
// necessary wallet refreshes.
func (ks *KeyStore) refreshWallets() {
// Retrieve the current list of accounts
ks.mu.Lock()
accs := ks.cache.accounts()
// Transform the current list of wallets into the new one
wallets := make([]accounts.Wallet, 0, len(accs))
events := []accounts.WalletEvent{}
for _, account := range accs {
// Drop any wallets that sort ahead of the next account
for len(ks.wallets) > 0 && ks.wallets[0].URL().Cmp(account.URL) < 0 {
events = append(events, accounts.WalletEvent{Wallet: ks.wallets[0], Arrive: false})
ks.wallets = ks.wallets[1:]
}
// If there are no more wallets or the account is before the next, wrap new wallet
if len(ks.wallets) == 0 || ks.wallets[0].URL().Cmp(account.URL) > 0 {
wallet := &keystoreWallet{account: account, keystore: ks}
events = append(events, accounts.WalletEvent{Wallet: wallet, Arrive: true})
wallets = append(wallets, wallet)
continue
}
// If the account is the same as the first wallet, keep it
if ks.wallets[0].Accounts()[0] == account {
wallets = append(wallets, ks.wallets[0])
ks.wallets = ks.wallets[1:]
continue
}
}
// Drop any leftover wallets and set the new batch
for _, wallet := range ks.wallets {
events = append(events, accounts.WalletEvent{Wallet: wallet, Arrive: false})
}
ks.wallets = wallets
ks.mu.Unlock()
// Fire all wallet events and return
for _, event := range events {
ks.updateFeed.Send(event)
}
}
// Subscribe implements accounts.Backend, creating an async subscription to
// receive notifications on the addition or removal of keystore wallets.
func (ks *KeyStore) Subscribe(sink chan<- accounts.WalletEvent) event.Subscription {
// We need the mutex to reliably start/stop the update loop
ks.mu.Lock()
defer ks.mu.Unlock()
// Subscribe the caller and track the subscriber count
sub := ks.updateScope.Track(ks.updateFeed.Subscribe(sink))
// Subscribers require an active notification loop, start it
if !ks.updating {
ks.updating = true
go ks.updater()
}
return sub
}
// updater is responsible for maintaining an up-to-date list of wallets stored in
// the keystore, and for firing wallet addition/removal events. It listens for
// account change events from the underlying account cache, and also periodically
// forces a manual refresh (only triggers for systems where the filesystem notifier
// is not running).
func (ks *KeyStore) updater() {
for {
// Wait for an account update or a refresh timeout
select {
case <-ks.changes:
case <-time.After(walletRefreshCycle):
}
// Run the wallet refresher
ks.refreshWallets()
// If all our subscribers left, stop the updater
ks.mu.Lock()
if ks.updateScope.Count() == 0 {
ks.updating = false
ks.mu.Unlock()
return
}
ks.mu.Unlock()
}
}
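// Editor's illustrative sketch (not part of this diff): a subscriber drains wallet
// events fed by the updater loop above; the directory and channel size are placeholders.
ks := NewKeyStore("/tmp/keys", StandardScryptN, StandardScryptP)
updates := make(chan accounts.WalletEvent, 16)
sub := ks.Subscribe(updates)
defer sub.Unsubscribe()
go func() {
	for ev := range updates {
		if ev.Arrive {
			fmt.Println("wallet arrived:", ev.Wallet.URL())
		} else {
			fmt.Println("wallet departed:", ev.Wallet.URL())
		}
	}
}()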
// HasAddress reports whether a key with the given address is present.
func (ks *KeyStore) HasAddress(addr common.Address) bool {
return ks.cache.hasAddress(addr)
}
// Accounts returns all key files present in the directory.
func (ks *KeyStore) Accounts() []accounts.Account {
return ks.cache.accounts()
}
// Delete deletes the key matched by account if the passphrase is correct.
// If the account contains no filename, the address must match a unique key.
func (ks *KeyStore) Delete(a accounts.Account, passphrase string) error {
// Decrypting the key isn't really necessary, but we do
// it anyway to check the password and zero out the key
// immediately afterwards.
a, key, err := ks.getDecryptedKey(a, passphrase)
if key != nil {
zeroKey(key.PrivateKey)
}
if err != nil {
return err
}
// The order is crucial here. The key is dropped from the
// cache after the file is gone so that a reload happening in
// between won't insert it into the cache again.
err = os.Remove(a.URL.Path)
if err == nil {
ks.cache.delete(a)
ks.refreshWallets()
}
return err
}
// SignHash calculates a ECDSA signature for the given hash. The produced
// signature is in the [R || S || V] format where V is 0 or 1.
func (ks *KeyStore) SignHash(a accounts.Account, hash []byte) ([]byte, error) {
// Look up the key to sign with and abort if it cannot be found
ks.mu.RLock()
defer ks.mu.RUnlock()
unlockedKey, found := ks.unlocked[a.Address]
if !found {
return nil, ErrLocked
}
// Sign the hash using plain ECDSA operations
return crypto.Sign(hash, unlockedKey.PrivateKey)
}
// SignTx signs the given transaction with the requested account.
func (ks *KeyStore) SignTx(a accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
// Look up the key to sign with and abort if it cannot be found
ks.mu.RLock()
defer ks.mu.RUnlock()
unlockedKey, found := ks.unlocked[a.Address]
if !found {
return nil, ErrLocked
}
// Depending on the presence of the chain ID, sign with EIP155 or homestead
if chainID != nil {
return types.SignTx(tx, types.NewEIP155Signer(chainID), unlockedKey.PrivateKey)
}
return types.SignTx(tx, types.HomesteadSigner{}, unlockedKey.PrivateKey)
}
// SignHashWithPassphrase signs hash if the private key matching the given address
// can be decrypted with the given passphrase. The produced signature is in the
// [R || S || V] format where V is 0 or 1.
func (ks *KeyStore) SignHashWithPassphrase(a accounts.Account, passphrase string, hash []byte) (signature []byte, err error) {
_, key, err := ks.getDecryptedKey(a, passphrase)
if err != nil {
return nil, err
}
defer zeroKey(key.PrivateKey)
return crypto.Sign(hash, key.PrivateKey)
}
// SignTxWithPassphrase signs the transaction if the private key matching the
// given address can be decrypted with the given passphrase.
func (ks *KeyStore) SignTxWithPassphrase(a accounts.Account, passphrase string, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
_, key, err := ks.getDecryptedKey(a, passphrase)
if err != nil {
return nil, err
}
defer zeroKey(key.PrivateKey)
// Depending on the presence of the chain ID, sign with EIP155 or homestead
if chainID != nil {
return types.SignTx(tx, types.NewEIP155Signer(chainID), key.PrivateKey)
}
return types.SignTx(tx, types.HomesteadSigner{}, key.PrivateKey)
}
// Unlock unlocks the given account indefinitely.
func (ks *KeyStore) Unlock(a accounts.Account, passphrase string) error {
return ks.TimedUnlock(a, passphrase, 0)
}
// Lock removes the private key with the given address from memory.
func (ks *KeyStore) Lock(addr common.Address) error {
ks.mu.Lock()
if unl, found := ks.unlocked[addr]; found {
ks.mu.Unlock()
ks.expire(addr, unl, time.Duration(0)*time.Nanosecond)
} else {
ks.mu.Unlock()
}
return nil
}
// TimedUnlock unlocks the given account with the passphrase. The account
// stays unlocked for the duration of timeout. A timeout of 0 unlocks the account
// until the program exits. The account must match a unique key file.
//
// If the account address is already unlocked for a duration, TimedUnlock extends or
// shortens the active unlock timeout. If the address was previously unlocked
// indefinitely the timeout is not altered.
func (ks *KeyStore) TimedUnlock(a accounts.Account, passphrase string, timeout time.Duration) error {
a, key, err := ks.getDecryptedKey(a, passphrase)
if err != nil {
return err
}
ks.mu.Lock()
defer ks.mu.Unlock()
u, found := ks.unlocked[a.Address]
if found {
if u.abort == nil {
// The address was unlocked indefinitely, so unlocking
// it with a timeout would be confusing.
zeroKey(key.PrivateKey)
return nil
}
// Terminate the expire goroutine and replace it below.
close(u.abort)
}
if timeout > 0 {
u = &unlocked{Key: key, abort: make(chan struct{})}
go ks.expire(a.Address, u, timeout)
} else {
u = &unlocked{Key: key}
}
ks.unlocked[a.Address] = u
return nil
}
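// Editor's illustrative sketch (not part of this diff): assuming acc is an account
// managed by ks and tx is an unsigned *types.Transaction, the account is unlocked
// for a bounded window and relocks (key zeroed) automatically once the timer fires.
if err := ks.TimedUnlock(acc, "passphrase", 5*time.Minute); err != nil {
	return err
}
signed, err := ks.SignTx(acc, tx, big.NewInt(1)) // non-nil chain ID selects EIP155 signing
if err != nil {
	return err
}
_ = signed // ready to broadcast; after 5 minutes SignTx would return ErrLocked again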
// Find resolves the given account into a unique entry in the keystore.
func (ks *KeyStore) Find(a accounts.Account) (accounts.Account, error) {
ks.cache.maybeReload()
ks.cache.mu.Lock()
a, err := ks.cache.find(a)
ks.cache.mu.Unlock()
return a, err
}
func (ks *KeyStore) getDecryptedKey(a accounts.Account, auth string) (accounts.Account, *Key, error) {
a, err := ks.Find(a)
if err != nil {
return a, nil, err
}
key, err := ks.storage.GetKey(a.Address, a.URL.Path, auth)
return a, key, err
}
func (ks *KeyStore) expire(addr common.Address, u *unlocked, timeout time.Duration) {
t := time.NewTimer(timeout)
defer t.Stop()
select {
case <-u.abort:
// just quit
case <-t.C:
ks.mu.Lock()
// only drop if it's still the same key instance that expire
// was launched with. We can check that using pointer equality
// because the map stores a new pointer every time the key is
// unlocked.
if ks.unlocked[addr] == u {
zeroKey(u.PrivateKey)
delete(ks.unlocked, addr)
}
ks.mu.Unlock()
}
}
// NewAccount generates a new key and stores it into the key directory,
// encrypting it with the passphrase.
func (ks *KeyStore) NewAccount(passphrase string) (accounts.Account, error) {
_, account, err := storeNewKey(ks.storage, crand.Reader, passphrase)
if err != nil {
return accounts.Account{}, err
}
// Add the account to the cache immediately rather
// than waiting for file system notifications to pick it up.
ks.cache.add(account)
ks.refreshWallets()
return account, nil
}
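// Editor's illustrative sketch (not part of this diff): creating a fresh
// passphrase-protected account; the directory and passphrase are placeholders.
ks := NewKeyStore("/tmp/keys", StandardScryptN, StandardScryptP)
acc, err := ks.NewAccount("correct horse battery staple")
if err != nil {
	panic(err)
}
fmt.Println("new account", acc.Address.Hex(), "stored at", acc.URL.Path)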
// Export exports the key backing the given account as an encrypted JSON blob, protected by newPassphrase.
func (ks *KeyStore) Export(a accounts.Account, passphrase, newPassphrase string) (keyJSON []byte, err error) {
_, key, err := ks.getDecryptedKey(a, passphrase)
if err != nil {
return nil, err
}
var N, P int
if store, ok := ks.storage.(*keyStorePassphrase); ok {
N, P = store.scryptN, store.scryptP
} else {
N, P = StandardScryptN, StandardScryptP
}
return EncryptKey(key, newPassphrase, N, P)
}
// Import stores the given encrypted JSON key into the key directory.
func (ks *KeyStore) Import(keyJSON []byte, passphrase, newPassphrase string) (accounts.Account, error) {
key, err := DecryptKey(keyJSON, passphrase)
if key != nil && key.PrivateKey != nil {
defer zeroKey(key.PrivateKey)
}
if err != nil {
return accounts.Account{}, err
}
return ks.importKey(key, newPassphrase)
}
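// Editor's illustrative sketch (not part of this diff): round-tripping a key between
// two hypothetical keystores ks and otherKs; all passphrases here are placeholders.
blob, err := ks.Export(acc, "old-passphrase", "transport-passphrase")
if err != nil {
	return err
}
imported, err := otherKs.Import(blob, "transport-passphrase", "final-passphrase")
if err != nil {
	return err
}
fmt.Println("re-imported as", imported.Address.Hex())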
// ImportECDSA stores the given key into the key directory, encrypting it with the passphrase.
func (ks *KeyStore) ImportECDSA(priv *ecdsa.PrivateKey, passphrase string) (accounts.Account, error) {
key := newKeyFromECDSA(priv)
if ks.cache.hasAddress(key.Address) {
return accounts.Account{}, fmt.Errorf("account already exists")
}
return ks.importKey(key, passphrase)
}
func (ks *KeyStore) importKey(key *Key, passphrase string) (accounts.Account, error) {
a := accounts.Account{Address: key.Address, URL: accounts.URL{Scheme: KeyStoreScheme, Path: ks.storage.JoinPath(keyFileName(key.Address))}}
if err := ks.storage.StoreKey(a.URL.Path, key, passphrase); err != nil {
return accounts.Account{}, err
}
ks.cache.add(a)
ks.refreshWallets()
return a, nil
}
// Update changes the passphrase of an existing account.
func (ks *KeyStore) Update(a accounts.Account, passphrase, newPassphrase string) error {
a, key, err := ks.getDecryptedKey(a, passphrase)
if err != nil {
return err
}
return ks.storage.StoreKey(a.URL.Path, key, newPassphrase)
}
// ImportPreSaleKey decrypts the given Ethereum presale wallet and stores
// a key file in the key directory. The key file is encrypted with the same passphrase.
func (ks *KeyStore) ImportPreSaleKey(keyJSON []byte, passphrase string) (accounts.Account, error) {
a, _, err := importPreSaleKey(ks.storage, keyJSON, passphrase)
if err != nil {
return a, err
}
ks.cache.add(a)
ks.refreshWallets()
return a, nil
}
// zeroKey zeroes a private key in memory.
func zeroKey(k *ecdsa.PrivateKey) {
b := k.D.Bits()
for i := range b {
b[i] = 0
}
}

View File

@@ -23,7 +23,7 @@ The crypto is documented at https://github.com/ethereum/wiki/wiki/Web3-Secret-St
*/
package accounts
package keystore
import (
"bytes"

View File

@@ -14,7 +14,7 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
package keystore
import (
"io/ioutil"

View File

@@ -14,7 +14,7 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
package keystore
import (
"encoding/json"

View File

@@ -14,7 +14,7 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
package keystore
import (
"crypto/rand"
@@ -30,7 +30,7 @@ import (
"github.com/ethereum/go-ethereum/crypto"
)
func tmpKeyStore(t *testing.T, encrypted bool) (dir string, ks keyStore) {
func tmpKeyStoreIface(t *testing.T, encrypted bool) (dir string, ks keyStore) {
d, err := ioutil.TempDir("", "geth-keystore-test")
if err != nil {
t.Fatal(err)
@@ -44,7 +44,7 @@ func tmpKeyStore(t *testing.T, encrypted bool) (dir string, ks keyStore) {
}
func TestKeyStorePlain(t *testing.T) {
dir, ks := tmpKeyStore(t, false)
dir, ks := tmpKeyStoreIface(t, false)
defer os.RemoveAll(dir)
pass := "" // not used but required by API
@@ -52,7 +52,7 @@ func TestKeyStorePlain(t *testing.T) {
if err != nil {
t.Fatal(err)
}
k2, err := ks.GetKey(k1.Address, account.File, pass)
k2, err := ks.GetKey(k1.Address, account.URL.Path, pass)
if err != nil {
t.Fatal(err)
}
@@ -65,7 +65,7 @@ func TestKeyStorePlain(t *testing.T) {
}
func TestKeyStorePassphrase(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
dir, ks := tmpKeyStoreIface(t, true)
defer os.RemoveAll(dir)
pass := "foo"
@@ -73,7 +73,7 @@ func TestKeyStorePassphrase(t *testing.T) {
if err != nil {
t.Fatal(err)
}
k2, err := ks.GetKey(k1.Address, account.File, pass)
k2, err := ks.GetKey(k1.Address, account.URL.Path, pass)
if err != nil {
t.Fatal(err)
}
@@ -86,7 +86,7 @@ func TestKeyStorePassphrase(t *testing.T) {
}
func TestKeyStorePassphraseDecryptionFail(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
dir, ks := tmpKeyStoreIface(t, true)
defer os.RemoveAll(dir)
pass := "foo"
@@ -94,13 +94,13 @@ func TestKeyStorePassphraseDecryptionFail(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if _, err = ks.GetKey(k1.Address, account.File, "bar"); err != ErrDecrypt {
if _, err = ks.GetKey(k1.Address, account.URL.Path, "bar"); err != ErrDecrypt {
t.Fatalf("wrong error for invalid passphrase\ngot %q\nwant %q", err, ErrDecrypt)
}
}
func TestImportPreSaleKey(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
dir, ks := tmpKeyStoreIface(t, true)
defer os.RemoveAll(dir)
// file content of a presale key file generated with:
@@ -115,8 +115,8 @@ func TestImportPreSaleKey(t *testing.T) {
if account.Address != common.HexToAddress("d4584b5f6229b7be90727b0fc8c6b91bb427821f") {
t.Errorf("imported account has wrong address %x", account.Address)
}
if !strings.HasPrefix(account.File, dir) {
t.Errorf("imported account file not in keystore directory: %q", account.File)
if !strings.HasPrefix(account.URL.Path, dir) {
t.Errorf("imported account file not in keystore directory: %q", account.URL)
}
}
@@ -142,19 +142,19 @@ func TestV3_PBKDF2_1(t *testing.T) {
func TestV3_PBKDF2_2(t *testing.T) {
t.Parallel()
tests := loadKeyStoreTestV3("../tests/files/KeyStoreTests/basic_tests.json", t)
tests := loadKeyStoreTestV3("../../tests/files/KeyStoreTests/basic_tests.json", t)
testDecryptV3(tests["test1"], t)
}
func TestV3_PBKDF2_3(t *testing.T) {
t.Parallel()
tests := loadKeyStoreTestV3("../tests/files/KeyStoreTests/basic_tests.json", t)
tests := loadKeyStoreTestV3("../../tests/files/KeyStoreTests/basic_tests.json", t)
testDecryptV3(tests["python_generated_test_with_odd_iv"], t)
}
func TestV3_PBKDF2_4(t *testing.T) {
t.Parallel()
tests := loadKeyStoreTestV3("../tests/files/KeyStoreTests/basic_tests.json", t)
tests := loadKeyStoreTestV3("../../tests/files/KeyStoreTests/basic_tests.json", t)
testDecryptV3(tests["evilnonce"], t)
}
@@ -166,7 +166,7 @@ func TestV3_Scrypt_1(t *testing.T) {
func TestV3_Scrypt_2(t *testing.T) {
t.Parallel()
tests := loadKeyStoreTestV3("../tests/files/KeyStoreTests/basic_tests.json", t)
tests := loadKeyStoreTestV3("../../tests/files/KeyStoreTests/basic_tests.json", t)
testDecryptV3(tests["test2"], t)
}

View File

@@ -0,0 +1,365 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package keystore
import (
"io/ioutil"
"math/rand"
"os"
"runtime"
"sort"
"strings"
"testing"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/event"
)
var testSigData = make([]byte, 32)
func TestKeyStore(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
a, err := ks.NewAccount("foo")
if err != nil {
t.Fatal(err)
}
if !strings.HasPrefix(a.URL.Path, dir) {
t.Errorf("account file %s doesn't have dir prefix", a.URL)
}
stat, err := os.Stat(a.URL.Path)
if err != nil {
t.Fatalf("account file %s doesn't exist (%v)", a.URL, err)
}
if runtime.GOOS != "windows" && stat.Mode() != 0600 {
t.Fatalf("account file has wrong mode: got %o, want %o", stat.Mode(), 0600)
}
if !ks.HasAddress(a.Address) {
t.Errorf("HasAccount(%x) should've returned true", a.Address)
}
if err := ks.Update(a, "foo", "bar"); err != nil {
t.Errorf("Update error: %v", err)
}
if err := ks.Delete(a, "bar"); err != nil {
t.Errorf("Delete error: %v", err)
}
if common.FileExist(a.URL.Path) {
t.Errorf("account file %s should be gone after Delete", a.URL)
}
if ks.HasAddress(a.Address) {
t.Errorf("HasAccount(%x) should've returned true after Delete", a.Address)
}
}
func TestSign(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
pass := "" // not used but required by API
a1, err := ks.NewAccount(pass)
if err != nil {
t.Fatal(err)
}
if err := ks.Unlock(a1, ""); err != nil {
t.Fatal(err)
}
if _, err := ks.SignHash(accounts.Account{Address: a1.Address}, testSigData); err != nil {
t.Fatal(err)
}
}
func TestSignWithPassphrase(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
pass := "passwd"
acc, err := ks.NewAccount(pass)
if err != nil {
t.Fatal(err)
}
if _, unlocked := ks.unlocked[acc.Address]; unlocked {
t.Fatal("expected account to be locked")
}
_, err = ks.SignHashWithPassphrase(acc, pass, testSigData)
if err != nil {
t.Fatal(err)
}
if _, unlocked := ks.unlocked[acc.Address]; unlocked {
t.Fatal("expected account to be locked")
}
if _, err = ks.SignHashWithPassphrase(acc, "invalid passwd", testSigData); err == nil {
t.Fatal("expected SignHashWithPassphrase to fail with invalid password")
}
}
func TestTimedUnlock(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
pass := "foo"
a1, err := ks.NewAccount(pass)
if err != nil {
t.Fatal(err)
}
// Signing without passphrase fails because account is locked
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != ErrLocked {
t.Fatal("Signing should've failed with ErrLocked before unlocking, got ", err)
}
// Signing with passphrase works
if err = ks.TimedUnlock(a1, pass, 100*time.Millisecond); err != nil {
t.Fatal(err)
}
// Signing without passphrase works because account is temp unlocked
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != nil {
t.Fatal("Signing shouldn't return an error after unlocking, got ", err)
}
// Signing fails again after automatic locking
time.Sleep(250 * time.Millisecond)
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != ErrLocked {
t.Fatal("Signing should've failed with ErrLocked timeout expired, got ", err)
}
}
func TestOverrideUnlock(t *testing.T) {
dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
pass := "foo"
a1, err := ks.NewAccount(pass)
if err != nil {
t.Fatal(err)
}
// Unlock with a long timeout first.
if err = ks.TimedUnlock(a1, pass, 5*time.Minute); err != nil {
t.Fatal(err)
}
// Signing without passphrase works because account is temp unlocked
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != nil {
t.Fatal("Signing shouldn't return an error after unlocking, got ", err)
}
// reset unlock to a shorter period, invalidates the previous unlock
if err = ks.TimedUnlock(a1, pass, 100*time.Millisecond); err != nil {
t.Fatal(err)
}
// Signing without passphrase still works because account is temp unlocked
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != nil {
t.Fatal("Signing shouldn't return an error after unlocking, got ", err)
}
// Signing fails again after automatic locking
time.Sleep(250 * time.Millisecond)
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != ErrLocked {
t.Fatal("Signing should've failed with ErrLocked timeout expired, got ", err)
}
}
// This test should fail under -race if signing races the expiration goroutine.
func TestSignRace(t *testing.T) {
dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
// Create a test account.
a1, err := ks.NewAccount("")
if err != nil {
t.Fatal("could not create the test account", err)
}
if err := ks.TimedUnlock(a1, "", 15*time.Millisecond); err != nil {
t.Fatal("could not unlock the test account", err)
}
end := time.Now().Add(500 * time.Millisecond)
for time.Now().Before(end) {
if _, err := ks.SignHash(accounts.Account{Address: a1.Address}, testSigData); err == ErrLocked {
return
} else if err != nil {
t.Errorf("Sign error: %v", err)
return
}
time.Sleep(1 * time.Millisecond)
}
t.Errorf("Account did not lock within the timeout")
}
// Tests that the wallet notifier loop starts and stops correctly based on the
// addition and removal of wallet event subscriptions.
func TestWalletNotifierLifecycle(t *testing.T) {
// Create a temporary keystore to test with
dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
// Ensure that the notification updater is not running yet
time.Sleep(250 * time.Millisecond)
ks.mu.RLock()
updating := ks.updating
ks.mu.RUnlock()
if updating {
t.Errorf("wallet notifier running without subscribers")
}
// Subscribe to the wallet feed and ensure the updater boots up
updates := make(chan accounts.WalletEvent)
subs := make([]event.Subscription, 2)
for i := 0; i < len(subs); i++ {
// Create a new subscription
subs[i] = ks.Subscribe(updates)
// Ensure the notifier comes online
time.Sleep(250 * time.Millisecond)
ks.mu.RLock()
updating = ks.updating
ks.mu.RUnlock()
if !updating {
t.Errorf("sub %d: wallet notifier not running after subscription", i)
}
}
// Unsubscribe and ensure the updater terminates eventually
for i := 0; i < len(subs); i++ {
// Close an existing subscription
subs[i].Unsubscribe()
// Ensure the notifier shuts down at and only at the last close
for k := 0; k < int(walletRefreshCycle/(250*time.Millisecond))+2; k++ {
ks.mu.RLock()
updating = ks.updating
ks.mu.RUnlock()
if i < len(subs)-1 && !updating {
t.Fatalf("sub %d: event notifier stopped prematurely", i)
}
if i == len(subs)-1 && !updating {
return
}
time.Sleep(250 * time.Millisecond)
}
}
t.Errorf("wallet notifier didn't terminate after unsubscribe")
}
// Tests that wallet notifications are correctly fired when accounts are added
// to or deleted from the keystore.
func TestWalletNotifications(t *testing.T) {
// Create a temporary keystore to test with
dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
// Subscribe to the wallet feed
updates := make(chan accounts.WalletEvent, 1)
sub := ks.Subscribe(updates)
defer sub.Unsubscribe()
// Randomly add and remove accounts and make sure events and wallets stay in sync
live := make(map[common.Address]accounts.Account)
for i := 0; i < 1024; i++ {
// Execute a creation or deletion and ensure event arrival
if create := len(live) == 0 || rand.Int()%4 > 0; create {
// Add a new account and ensure a wallet notification arrives
account, err := ks.NewAccount("")
if err != nil {
t.Fatalf("failed to create test account: %v", err)
}
select {
case event := <-updates:
if !event.Arrive {
t.Errorf("departure event on account creation")
}
if event.Wallet.Accounts()[0] != account {
t.Errorf("account mismatch on created wallet: have %v, want %v", event.Wallet.Accounts()[0], account)
}
default:
t.Errorf("wallet arrival event not fired on account creation")
}
live[account.Address] = account
} else {
// Select a random account to delete (crude, but works)
var account accounts.Account
for _, a := range live {
account = a
break
}
// Remove an account and ensure a wallet notification arrives
if err := ks.Delete(account, ""); err != nil {
t.Fatalf("failed to delete test account: %v", err)
}
select {
case event := <-updates:
if event.Arrive {
t.Errorf("arrival event on account deletion")
}
if event.Wallet.Accounts()[0] != account {
t.Errorf("account mismatch on deleted wallet: have %v, want %v", event.Wallet.Accounts()[0], account)
}
default:
t.Errorf("wallet departure event not fired on account creation")
}
delete(live, account.Address)
}
// Retrieve the list of wallets and ensure it matches with our required live set
liveList := make([]accounts.Account, 0, len(live))
for _, account := range live {
liveList = append(liveList, account)
}
sort.Sort(accountsByURL(liveList))
wallets := ks.Wallets()
if len(liveList) != len(wallets) {
t.Errorf("wallet list doesn't match required accounts: have %v, want %v", wallets, liveList)
} else {
for j, wallet := range wallets {
if accs := wallet.Accounts(); len(accs) != 1 {
t.Errorf("wallet %d: contains invalid number of accounts: have %d, want 1", j, len(accs))
} else if accs[0] != liveList[j] {
t.Errorf("wallet %d: account mismatch: have %v, want %v", j, accs[0], liveList[j])
}
}
}
}
}
func tmpKeyStore(t *testing.T, encrypted bool) (string, *KeyStore) {
d, err := ioutil.TempDir("", "eth-keystore-test")
if err != nil {
t.Fatal(err)
}
new := NewPlaintextKeyStore
if encrypted {
new = func(kd string) *KeyStore { return NewKeyStore(kd, veryLightScryptN, veryLightScryptP) }
}
return d, new(d)
}

View File

@@ -0,0 +1,139 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package keystore
import (
"math/big"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/core/types"
)
// keystoreWallet implements the accounts.Wallet interface for the original
// keystore.
type keystoreWallet struct {
account accounts.Account // Single account contained in this wallet
keystore *KeyStore // Keystore where the account originates from
}
// URL implements accounts.Wallet, returning the URL of the account within.
func (w *keystoreWallet) URL() accounts.URL {
return w.account.URL
}
// Status implements accounts.Wallet, returning whether the account held by the
// keystore wallet is currently locked or unlocked.
func (w *keystoreWallet) Status() string {
w.keystore.mu.RLock()
defer w.keystore.mu.RUnlock()
if _, ok := w.keystore.unlocked[w.account.Address]; ok {
return "Unlocked"
}
return "Locked"
}
// Open implements accounts.Wallet, but is a noop for plain wallets since there
// is no connection or decryption step necessary to access the list of accounts.
func (w *keystoreWallet) Open(passphrase string) error { return nil }
// Close implements accounts.Wallet, but is a noop for plain wallets since there is no
// meaningful open operation.
func (w *keystoreWallet) Close() error { return nil }
// Accounts implements accounts.Wallet, returning an account list consisting of
// a single account that the plain keystore wallet contains.
func (w *keystoreWallet) Accounts() []accounts.Account {
return []accounts.Account{w.account}
}
// Contains implements accounts.Wallet, returning whether a particular account is
// or is not wrapped by this wallet instance.
func (w *keystoreWallet) Contains(account accounts.Account) bool {
return account.Address == w.account.Address && (account.URL == (accounts.URL{}) || account.URL == w.account.URL)
}
// Derive implements accounts.Wallet, but is a noop for plain wallets since there
// is no notion of hierarchical account derivation for plain keystore accounts.
func (w *keystoreWallet) Derive(path accounts.DerivationPath, pin bool) (accounts.Account, error) {
return accounts.Account{}, accounts.ErrNotSupported
}
// SelfDerive implements accounts.Wallet, but is a noop for plain wallets since
// there is no notion of hierarchical account derivation for plain keystore accounts.
func (w *keystoreWallet) SelfDerive(base accounts.DerivationPath, chain ethereum.ChainStateReader) {}
// SignHash implements accounts.Wallet, attempting to sign the given hash with
// the given account. If the wallet does not wrap this particular account, an
// error is returned to avoid account leakage (even though in theory we may be
// able to sign via our shared keystore backend).
func (w *keystoreWallet) SignHash(account accounts.Account, hash []byte) ([]byte, error) {
// Make sure the requested account is contained within
if account.Address != w.account.Address {
return nil, accounts.ErrUnknownAccount
}
if account.URL != (accounts.URL{}) && account.URL != w.account.URL {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign
return w.keystore.SignHash(account, hash)
}
// SignTx implements accounts.Wallet, attempting to sign the given transaction
// with the given account. If the wallet does not wrap this particular account,
// an error is returned to avoid account leakage (even though in theory we may
// be able to sign via our shared keystore backend).
func (w *keystoreWallet) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
// Make sure the requested account is contained within
if account.Address != w.account.Address {
return nil, accounts.ErrUnknownAccount
}
if account.URL != (accounts.URL{}) && account.URL != w.account.URL {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign
return w.keystore.SignTx(account, tx, chainID)
}
// SignHashWithPassphrase implements accounts.Wallet, attempting to sign the
// given hash with the given account using passphrase as extra authentication.
func (w *keystoreWallet) SignHashWithPassphrase(account accounts.Account, passphrase string, hash []byte) ([]byte, error) {
// Make sure the requested account is contained within
if account.Address != w.account.Address {
return nil, accounts.ErrUnknownAccount
}
if account.URL != (accounts.URL{}) && account.URL != w.account.URL {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign
return w.keystore.SignHashWithPassphrase(account, passphrase, hash)
}
// SignTxWithPassphrase implements accounts.Wallet, attempting to sign the given
// transaction with the given account using passphrase as extra authentication.
func (w *keystoreWallet) SignTxWithPassphrase(account accounts.Account, passphrase string, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
// Make sure the requested account is contained within
if account.Address != w.account.Address {
return nil, accounts.ErrUnknownAccount
}
if account.URL != (accounts.URL{}) && account.URL != w.account.URL {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign
return w.keystore.SignTxWithPassphrase(account, passphrase, tx, chainID)
}
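As a usage sketch for the wrapper above, an application typically reaches the wallet through the keystore backend and signs via the generic accounts.Wallet interface; the directory and passphrase below are hypothetical:

package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/accounts"
	"github.com/ethereum/go-ethereum/accounts/keystore"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	ks := keystore.NewKeyStore("/tmp/example-keystore", keystore.LightScryptN, keystore.LightScryptP)

	// Each keystore account is wrapped by exactly one keystoreWallet
	account, err := ks.NewAccount("hypothetical-passphrase")
	if err != nil {
		log.Fatal(err)
	}
	// Locate the wrapping wallet and sign through the generic interface
	var wallet accounts.Wallet
	for _, w := range ks.Wallets() {
		if w.Contains(account) {
			wallet = w
			break
		}
	}
	if wallet == nil {
		log.Fatal("wallet for account not found")
	}
	hash := crypto.Keccak256([]byte("example payload"))
	sig, err := wallet.SignHashWithPassphrase(account, "hypothetical-passphrase", hash)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("signature: %x\n", sig)
}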

View File

@@ -14,7 +14,7 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
package keystore
import (
"crypto/aes"
@@ -22,22 +22,24 @@ import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/crypto"
"github.com/pborman/uuid"
"golang.org/x/crypto/pbkdf2"
)
// creates a Key and stores that in the given KeyStore by decrypting a presale key JSON
func importPreSaleKey(keyStore keyStore, keyJSON []byte, password string) (Account, *Key, error) {
func importPreSaleKey(keyStore keyStore, keyJSON []byte, password string) (accounts.Account, *Key, error) {
key, err := decryptPreSaleKey(keyJSON, password)
if err != nil {
return Account{}, nil, err
return accounts.Account{}, nil, err
}
key.Id = uuid.NewRandom()
a := Account{Address: key.Address, File: keyStore.JoinPath(keyFileName(key.Address))}
err = keyStore.StoreKey(a.File, key, password)
a := accounts.Account{Address: key.Address, URL: accounts.URL{Scheme: KeyStoreScheme, Path: keyStore.JoinPath(keyFileName(key.Address))}}
err = keyStore.StoreKey(a.URL.Path, key, password)
return a, key, err
}
@@ -53,6 +55,9 @@ func decryptPreSaleKey(fileContent []byte, password string) (key *Key, err error
return nil, err
}
encSeedBytes, err := hex.DecodeString(preSaleKeyStruct.EncSeed)
if err != nil {
return nil, errors.New("invalid hex in encSeed")
}
iv := encSeedBytes[:16]
cipherText := encSeedBytes[16:]
/*

View File

@@ -16,7 +16,7 @@
// +build darwin,!ios freebsd linux,!arm64 netbsd solaris
package accounts
package keystore
import (
"time"
@@ -27,14 +27,14 @@ import (
)
type watcher struct {
ac *addrCache
ac *accountCache
starting bool
running bool
ev chan notify.EventInfo
quit chan struct{}
}
func newWatcher(ac *addrCache) *watcher {
func newWatcher(ac *accountCache) *watcher {
return &watcher{
ac: ac,
ev: make(chan notify.EventInfo, 10),

View File

@@ -19,10 +19,10 @@
// This is the fallback implementation of directory watching.
// It is used on unsupported platforms.
package accounts
package keystore
type watcher struct{ running bool }
func newWatcher(*addrCache) *watcher { return new(watcher) }
func newWatcher(*accountCache) *watcher { return new(watcher) }
func (*watcher) start() {}
func (*watcher) close() {}

198
accounts/manager.go Normal file
View File

@@ -0,0 +1,198 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
import (
"reflect"
"sort"
"sync"
"github.com/ethereum/go-ethereum/event"
)
// Manager is an overarching account manager that can communicate with various
// backends for signing transactions.
type Manager struct {
backends map[reflect.Type][]Backend // Index of backends currently registered
updaters []event.Subscription // Wallet update subscriptions for all backends
updates chan WalletEvent // Subscription sink for backend wallet changes
wallets []Wallet // Cache of all wallets from all registered backends
feed event.Feed // Wallet feed notifying of arrivals/departures
quit chan chan error
lock sync.RWMutex
}
// NewManager creates a generic account manager to sign transactions via various
// supported backends.
func NewManager(backends ...Backend) *Manager {
// Subscribe to wallet notifications from all backends
updates := make(chan WalletEvent, 4*len(backends))
subs := make([]event.Subscription, len(backends))
for i, backend := range backends {
subs[i] = backend.Subscribe(updates)
}
// Retrieve the initial list of wallets from the backends and sort by URL
var wallets []Wallet
for _, backend := range backends {
wallets = merge(wallets, backend.Wallets()...)
}
// Assemble the account manager and return
am := &Manager{
backends: make(map[reflect.Type][]Backend),
updaters: subs,
updates: updates,
wallets: wallets,
quit: make(chan chan error),
}
for _, backend := range backends {
kind := reflect.TypeOf(backend)
am.backends[kind] = append(am.backends[kind], backend)
}
go am.update()
return am
}
// Close terminates the account manager's internal notification processes.
func (am *Manager) Close() error {
errc := make(chan error)
am.quit <- errc
return <-errc
}
// update is the wallet event loop listening for notifications from the backends
// and updating the cache of wallets.
func (am *Manager) update() {
// Close all subscriptions when the manager terminates
defer func() {
am.lock.Lock()
for _, sub := range am.updaters {
sub.Unsubscribe()
}
am.updaters = nil
am.lock.Unlock()
}()
// Loop until termination
for {
select {
case event := <-am.updates:
// Wallet event arrived, update local cache
am.lock.Lock()
if event.Arrive {
am.wallets = merge(am.wallets, event.Wallet)
} else {
am.wallets = drop(am.wallets, event.Wallet)
}
am.lock.Unlock()
// Notify any listeners of the event
am.feed.Send(event)
case errc := <-am.quit:
// Manager terminating, return
errc <- nil
return
}
}
}
// Backends retrieves the backend(s) with the given type from the account manager.
func (am *Manager) Backends(kind reflect.Type) []Backend {
return am.backends[kind]
}
// Wallets returns all wallets registered under this account manager.
func (am *Manager) Wallets() []Wallet {
am.lock.RLock()
defer am.lock.RUnlock()
cpy := make([]Wallet, len(am.wallets))
copy(cpy, am.wallets)
return cpy
}
// Wallet retrieves the wallet associated with a particular URL.
func (am *Manager) Wallet(url string) (Wallet, error) {
am.lock.RLock()
defer am.lock.RUnlock()
parsed, err := parseURL(url)
if err != nil {
return nil, err
}
for _, wallet := range am.Wallets() {
if wallet.URL() == parsed {
return wallet, nil
}
}
return nil, ErrUnknownWallet
}
// Find attempts to locate the wallet corresponding to a specific account. Since
// accounts can be dynamically added to and removed from wallets, this method has
// a linear runtime in the number of wallets.
func (am *Manager) Find(account Account) (Wallet, error) {
am.lock.RLock()
defer am.lock.RUnlock()
for _, wallet := range am.wallets {
if wallet.Contains(account) {
return wallet, nil
}
}
return nil, ErrUnknownAccount
}
// Subscribe creates an async subscription to receive notifications when the
// manager detects the arrival or departure of a wallet from any of its backends.
func (am *Manager) Subscribe(sink chan<- WalletEvent) event.Subscription {
return am.feed.Subscribe(sink)
}
// merge is a sorted analogue of append for wallets, where the ordering of the
// origin list is preserved by inserting new wallets at the correct position.
//
// The original slice is assumed to be already sorted by URL.
func merge(slice []Wallet, wallets ...Wallet) []Wallet {
for _, wallet := range wallets {
n := sort.Search(len(slice), func(i int) bool { return slice[i].URL().Cmp(wallet.URL()) >= 0 })
if n == len(slice) {
slice = append(slice, wallet)
continue
}
slice = append(slice[:n], append([]Wallet{wallet}, slice[n:]...)...)
}
return slice
}
// drop is the counterpart of merge, which looks up wallets from within the sorted
// cache and removes the ones specified.
func drop(slice []Wallet, wallets ...Wallet) []Wallet {
for _, wallet := range wallets {
n := sort.Search(len(slice), func(i int) bool { return slice[i].URL().Cmp(wallet.URL()) >= 0 })
if n == len(slice) {
// Wallet not found, may happen during startup
continue
}
slice = append(slice[:n], slice[n+1:]...)
}
return slice
}
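For orientation, here is a hedged sketch of driving the manager end to end, exercising Wallets, Backends and Find; the keystore directory, passphrase and scrypt parameters are placeholders:

package main

import (
	"fmt"
	"log"
	"reflect"

	"github.com/ethereum/go-ethereum/accounts"
	"github.com/ethereum/go-ethereum/accounts/keystore"
)

func main() {
	ks := keystore.NewKeyStore("/tmp/example-keystore", keystore.LightScryptN, keystore.LightScryptP)
	acc, err := ks.NewAccount("hypothetical-passphrase")
	if err != nil {
		log.Fatal(err)
	}
	// The manager aggregates any number of backends; its wallet cache stays sorted by URL
	am := accounts.NewManager(ks)
	defer am.Close()

	for _, wallet := range am.Wallets() {
		fmt.Println("wallet:", wallet.URL(), "status:", wallet.Status())
	}
	// Backends of a concrete type can be fetched back for type-specific operations
	fmt.Println("keystore backends:", len(am.Backends(reflect.TypeOf(&keystore.KeyStore{}))))

	// Find resolves the wallet wrapping a given account (linear in the number of wallets)
	wallet, err := am.Find(acc)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("account", acc.Address.Hex(), "lives in", wallet.URL())
}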

79
accounts/url.go Normal file
View File

@@ -0,0 +1,79 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
import (
"encoding/json"
"errors"
"fmt"
"strings"
)
// URL represents the canonical identification URL of a wallet or account.
//
// It is a simplified version of url.URL, with the important limitations (which
// are considered features here) that it contains value-copyable components only,
// as well as that it doesn't do any URL encoding/decoding of special characters.
//
// The former is important to allow an account to be copied without leaving live
// references to the original version, whereas the latter is important to ensure
// one single canonical form opposed to many allowed ones by the RFC 3986 spec.
//
// As such, these URLs should not be used outside of the scope of an Ethereum
// wallet or account.
type URL struct {
Scheme string // Protocol scheme to identify a capable account backend
Path string // Path for the backend to identify a unique entity
}
// parseURL converts a user supplied URL into the accounts specific structure.
func parseURL(url string) (URL, error) {
parts := strings.Split(url, "://")
if len(parts) != 2 || parts[0] == "" {
return URL{}, errors.New("protocol scheme missing")
}
return URL{
Scheme: parts[0],
Path: parts[1],
}, nil
}
// String implements the stringer interface.
func (u URL) String() string {
if u.Scheme != "" {
return fmt.Sprintf("%s://%s", u.Scheme, u.Path)
}
return u.Path
}
// MarshalJSON implements the json.Marshaler interface.
func (u URL) MarshalJSON() ([]byte, error) {
return json.Marshal(u.String())
}
// Cmp compares u and url and returns:
//
// -1 if u < url
// 0 if u == url
// +1 if u > url
//
func (u URL) Cmp(url URL) int {
if u.Scheme == url.Scheme {
return strings.Compare(u.Path, url.Path)
}
return strings.Compare(u.Scheme, url.Scheme)
}
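A short illustration of the URL semantics (the values are invented); note that Cmp orders by scheme first and by path second, which is exactly what keeps the manager's wallet cache sorted:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts"
)

func main() {
	a := accounts.URL{Scheme: "keystore", Path: "/tmp/example-keystore/UTC--example--a1b2"}
	b := accounts.URL{Scheme: "ledger", Path: "001:004"}

	fmt.Println(a.String()) // keystore:///tmp/example-keystore/UTC--example--a1b2
	fmt.Println(a.Cmp(b))   // -1, since "keystore" sorts before "ledger"
	fmt.Println(b.Cmp(b))   // 0, identical URLs compare equal

	// MarshalJSON emits the canonical string form
	if blob, err := a.MarshalJSON(); err == nil {
		fmt.Println(string(blob))
	}
}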

View File

@@ -0,0 +1,209 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// This file contains the implementation for interacting with the Ledger hardware
// wallets. The wire protocol spec can be found in the Ledger Blue GitHub repo:
// https://raw.githubusercontent.com/LedgerHQ/blue-app-eth/master/doc/ethapp.asc
// +build !ios
package usbwallet
import (
"fmt"
"sync"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/event"
"github.com/karalabe/gousb/usb"
)
// LedgerScheme is the protocol scheme prefixing account and wallet URLs.
var LedgerScheme = "ledger"
// ledgerDeviceIDs are the known device IDs that Ledger wallets use.
var ledgerDeviceIDs = []deviceID{
{Vendor: 0x2c97, Product: 0x0000}, // Ledger Blue
{Vendor: 0x2c97, Product: 0x0001}, // Ledger Nano S
}
// Maximum time between wallet refreshes (if USB hotplug notifications don't work).
const ledgerRefreshCycle = time.Second
// Minimum time between wallet refreshes to avoid USB thrashing.
const ledgerRefreshThrottling = 500 * time.Millisecond
// LedgerHub is an accounts.Backend that can find and handle Ledger hardware wallets.
type LedgerHub struct {
ctx *usb.Context // Context interfacing with a libusb instance
refreshed time.Time // Time instance when the list of wallets was last refreshed
wallets []accounts.Wallet // List of Ledger devices currently being tracked
updateFeed event.Feed // Event feed to notify wallet additions/removals
updateScope event.SubscriptionScope // Subscription scope tracking current live listeners
updating bool // Whether the event notification loop is running
quit chan chan error
lock sync.RWMutex
}
// NewLedgerHub creates a new hardware wallet manager for Ledger devices.
func NewLedgerHub() (*LedgerHub, error) {
// Initialize the USB library to access Ledgers through
ctx, err := usb.NewContext()
if err != nil {
return nil, err
}
// Create the USB hub, start and return it
hub := &LedgerHub{
ctx: ctx,
quit: make(chan chan error),
}
hub.refreshWallets()
return hub, nil
}
// Wallets implements accounts.Backend, returning all the currently tracked USB
// devices that appear to be Ledger hardware wallets.
func (hub *LedgerHub) Wallets() []accounts.Wallet {
// Make sure the list of wallets is up to date
hub.refreshWallets()
hub.lock.RLock()
defer hub.lock.RUnlock()
cpy := make([]accounts.Wallet, len(hub.wallets))
copy(cpy, hub.wallets)
return cpy
}
// refreshWallets scans the USB devices attached to the machine and updates the
// list of wallets based on the found devices.
func (hub *LedgerHub) refreshWallets() {
// Don't scan the USB like crazy if the user fetches wallets in a loop
hub.lock.RLock()
elapsed := time.Since(hub.refreshed)
hub.lock.RUnlock()
if elapsed < ledgerRefreshThrottling {
return
}
// Retrieve the current list of Ledger devices
var devIDs []deviceID
var busIDs []uint16
hub.ctx.ListDevices(func(desc *usb.Descriptor) bool {
// Gather Ledger devices, don't connect any just yet
for _, id := range ledgerDeviceIDs {
if desc.Vendor == id.Vendor && desc.Product == id.Product {
devIDs = append(devIDs, deviceID{Vendor: desc.Vendor, Product: desc.Product})
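// Pack bus and address into one identifier; it doubles as the wallet's URL path and sort key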
busIDs = append(busIDs, uint16(desc.Bus)<<8+uint16(desc.Address))
return false
}
}
// Not a Ledger device, ignore it and don't connect either
return false
})
// Transform the current list of wallets into the new one
hub.lock.Lock()
wallets := make([]accounts.Wallet, 0, len(devIDs))
events := []accounts.WalletEvent{}
for i := 0; i < len(devIDs); i++ {
devID, busID := devIDs[i], busIDs[i]
url := accounts.URL{Scheme: LedgerScheme, Path: fmt.Sprintf("%03d:%03d", busID>>8, busID&0xff)}
// Drop wallets in front of the next device or those that failed for some reason
for len(hub.wallets) > 0 && (hub.wallets[0].URL().Cmp(url) < 0 || hub.wallets[0].(*ledgerWallet).failed()) {
events = append(events, accounts.WalletEvent{Wallet: hub.wallets[0], Arrive: false})
hub.wallets = hub.wallets[1:]
}
// If there are no more wallets or the device is before the next, wrap new wallet
if len(hub.wallets) == 0 || hub.wallets[0].URL().Cmp(url) > 0 {
wallet := &ledgerWallet{context: hub.ctx, hardwareID: devID, locationID: busID, url: &url}
events = append(events, accounts.WalletEvent{Wallet: wallet, Arrive: true})
wallets = append(wallets, wallet)
continue
}
// If the device is the same as the first wallet, keep it
if hub.wallets[0].URL().Cmp(url) == 0 {
wallets = append(wallets, hub.wallets[0])
hub.wallets = hub.wallets[1:]
continue
}
}
// Drop any leftover wallets and set the new batch
for _, wallet := range hub.wallets {
events = append(events, accounts.WalletEvent{Wallet: wallet, Arrive: false})
}
hub.refreshed = time.Now()
hub.wallets = wallets
hub.lock.Unlock()
// Fire all wallet events and return
for _, event := range events {
hub.updateFeed.Send(event)
}
}
// Subscribe implements accounts.Backend, creating an async subscription to
// receive notifications on the addition or removal of Ledger wallets.
func (hub *LedgerHub) Subscribe(sink chan<- accounts.WalletEvent) event.Subscription {
// We need the mutex to reliably start/stop the update loop
hub.lock.Lock()
defer hub.lock.Unlock()
// Subscribe the caller and track the subscriber count
sub := hub.updateScope.Track(hub.updateFeed.Subscribe(sink))
// Subscribers require an active notification loop, start it
if !hub.updating {
hub.updating = true
go hub.updater()
}
return sub
}
// updater is responsible for maintaining an up-to-date list of wallets managed
// by the Ledger hub, and for firing wallet addition/removal events. Since USB
// hotplug notifications are not supported yet, it periodically forces a manual
// refresh of the attached devices.
func (hub *LedgerHub) updater() {
for {
// Wait for a USB hotplug event (not supported yet) or a refresh timeout
select {
//case <-hub.changes: // re-enable when hotplug is implemented
case <-time.After(ledgerRefreshCycle):
}
// Run the wallet refresher
hub.refreshWallets()
// If all our subscribers left, stop the updater
hub.lock.Lock()
if hub.updateScope.Count() == 0 {
hub.updating = false
hub.lock.Unlock()
return
}
hub.lock.Unlock()
}
}
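A hedged consumer sketch for the hub follows; whether any wallets are reported depends on the hardware actually attached, and the derivation path constant is the same accounts.DefaultBaseDerivationPath referenced by the Ledger wallet code:

package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/accounts"
	"github.com/ethereum/go-ethereum/accounts/usbwallet"
)

func main() {
	hub, err := usbwallet.NewLedgerHub()
	if err != nil {
		log.Fatal(err) // libusb context could not be created
	}
	// Subscribing starts the background updater that periodically rescans the USB bus
	events := make(chan accounts.WalletEvent, 4)
	sub := hub.Subscribe(events)
	defer sub.Unsubscribe()

	for _, wallet := range hub.Wallets() {
		// Open connects to the device; the passphrase is unused for Ledgers
		if err := wallet.Open(""); err != nil {
			fmt.Println("cannot open", wallet.URL(), ":", err)
			continue
		}
		fmt.Println(wallet.URL(), "status:", wallet.Status())
		// Derive (and pin) the first account on the default HD path
		if acc, err := wallet.Derive(accounts.DefaultBaseDerivationPath, true); err == nil {
			fmt.Println("first account:", acc.Address.Hex())
		}
		wallet.Close()
	}
}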

View File

@@ -0,0 +1,945 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// This file contains the implementation for interacting with the Ledger hardware
// wallets. The wire protocol spec can be found in the Ledger Blue GitHub repo:
// https://raw.githubusercontent.com/LedgerHQ/blue-app-eth/master/doc/ethapp.asc
// +build !ios
package usbwallet
import (
"encoding/binary"
"encoding/hex"
"errors"
"fmt"
"io"
"math/big"
"sync"
"time"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/logger"
"github.com/ethereum/go-ethereum/logger/glog"
"github.com/ethereum/go-ethereum/rlp"
"github.com/karalabe/gousb/usb"
"golang.org/x/net/context"
)
// Maximum time between wallet health checks to detect USB unplugs.
const ledgerHeartbeatCycle = time.Second
// Minimum time to wait between self derivation attempts, even if the user is
// requesting accounts like crazy.
const ledgerSelfDeriveThrottling = time.Second
// ledgerOpcode is an enumeration encoding the supported Ledger opcodes.
type ledgerOpcode byte
// ledgerParam1 is an enumeration encoding the supported Ledger parameters for
// specific opcodes. The same parameter values may be reused between opcodes.
type ledgerParam1 byte
// ledgerParam2 is an enumeration encoding the supported Ledger parameters for
// specific opcodes. The same parameter values may be reused between opcodes.
type ledgerParam2 byte
const (
ledgerOpRetrieveAddress ledgerOpcode = 0x02 // Returns the public key and Ethereum address for a given BIP 32 path
ledgerOpSignTransaction ledgerOpcode = 0x04 // Signs an Ethereum transaction after having the user validate the parameters
ledgerOpGetConfiguration ledgerOpcode = 0x06 // Returns specific wallet application configuration
ledgerP1DirectlyFetchAddress ledgerParam1 = 0x00 // Return address directly from the wallet
ledgerP1ConfirmFetchAddress ledgerParam1 = 0x01 // Require a user confirmation before returning the address
ledgerP1InitTransactionData ledgerParam1 = 0x00 // First transaction data block for signing
ledgerP1ContTransactionData ledgerParam1 = 0x80 // Subsequent transaction data block for signing
ledgerP2DiscardAddressChainCode ledgerParam2 = 0x00 // Do not return the chain code along with the address
ledgerP2ReturnAddressChainCode ledgerParam2 = 0x01 // Return the chain code along with the address
)
// errReplyInvalidHeader is the error returned by a Ledger data exchange
// if the device replies with a mismatching header. This usually means the device
// is in browser mode.
var errReplyInvalidHeader = errors.New("invalid reply header")
// ledgerWallet represents a live USB Ledger hardware wallet.
type ledgerWallet struct {
context *usb.Context // USB context to interface libusb through
hardwareID deviceID // USB identifiers to identify this device type
locationID uint16 // USB bus and address to identify this device instance
url *accounts.URL // Textual URL uniquely identifying this wallet
device *usb.Device // USB device advertising itself as a Ledger wallet
input usb.Endpoint // Input endpoint to send data to this device
output usb.Endpoint // Output endpoint to receive data from this device
failure error // Any failure that would make the device unusable
version [3]byte // Current version of the Ledger Ethereum app (zero if app is offline)
browser bool // Flag whether the Ledger is in browser mode (reply channel mismatch)
accounts []accounts.Account // List of derived accounts pinned on the Ledger
paths map[common.Address]accounts.DerivationPath // Known derivation paths for signing operations
deriveNextPath accounts.DerivationPath // Next derivation path for account auto-discovery
deriveNextAddr common.Address // Next derived account address for auto-discovery
deriveChain ethereum.ChainStateReader // Blockchain state reader to discover used account with
deriveReq chan chan struct{} // Channel to request a self-derivation on
deriveQuit chan chan error // Channel to terminate the self-deriver with
healthQuit chan chan error
// Locking a hardware wallet is a bit special. Since hardware devices are lower
// performing, any communication with them might take a non-negligible amount of
// time. Worse still, waiting for user confirmation can take arbitrarily long,
// but exclusive communication must be upheld during that time. Locking the
// entire wallet in the meantime, however, would stall any parts of the system
// that don't want to communicate, just read some state (e.g. list the accounts).
//
// As such, a hardware wallet needs two locks to function correctly. A state
// lock can be used to protect the wallet's software-side internal state, which
// must not be held exclusively during hardware communication. A communication
// lock can be used to achieve exclusive access to the device itself; this one,
// however, should allow "skipping" the wait for operations that might want to
// use the device but can live without it too (e.g. account self-derivation).
//
// Since we have two locks, it's important to know how to properly use them:
// - Communication requires the `device` to not change, so obtaining the
// commsLock should be done after having a stateLock.
// - Communication must not disable read access to the wallet state, so it
// must only ever hold a *read* lock to stateLock.
commsLock chan struct{} // Mutex (buf=1) for the USB comms without keeping the state locked
stateLock sync.RWMutex // Protects read and write access to the wallet struct fields
}
// URL implements accounts.Wallet, returning the URL of the Ledger device.
func (w *ledgerWallet) URL() accounts.URL {
return *w.url // Immutable, no need for a lock
}
// Status implements accounts.Wallet, returning whether the Ledger is opened or
// closed, or whether the Ethereum app is not started on it.
func (w *ledgerWallet) Status() string {
w.stateLock.RLock() // No device communication, state lock is enough
defer w.stateLock.RUnlock()
if w.failure != nil {
return fmt.Sprintf("Failed: %v", w.failure)
}
if w.device == nil {
return "Closed"
}
if w.browser {
return "Ethereum app in browser mode"
}
if w.offline() {
return "Ethereum app offline"
}
return fmt.Sprintf("Ethereum app v%d.%d.%d online", w.version[0], w.version[1], w.version[2])
}
// offline returns whether the Ethereum app running on the wallet is offline or not.
//
// The method assumes that the state lock is held!
func (w *ledgerWallet) offline() bool {
return w.version == [3]byte{0, 0, 0}
}
// failed returns whether the USB device wrapped by the wallet failed for some reason.
// This is used by the device scanner to report failed wallets as departed.
//
// The method assumes that the state lock is *not* held!
func (w *ledgerWallet) failed() bool {
w.stateLock.RLock() // No device communication, state lock is enough
defer w.stateLock.RUnlock()
return w.failure != nil
}
// Open implements accounts.Wallet, attempting to open a USB connection to the
// Ledger hardware wallet. The Ledger does not require a user passphrase, so that
// parameter is silently discarded.
func (w *ledgerWallet) Open(passphrase string) error {
w.stateLock.Lock() // State lock is enough since there's no connection yet at this point
defer w.stateLock.Unlock()
// If the wallet was already opened, don't try to open again
if w.device != nil {
return accounts.ErrWalletAlreadyOpen
}
// Otherwise iterate over all attached USB devices and locate this one again,
// since there is no way to address a specific device directly
devices, err := w.context.ListDevices(func(desc *usb.Descriptor) bool {
// Only open this single specific device
return desc.Vendor == w.hardwareID.Vendor && desc.Product == w.hardwareID.Product &&
uint16(desc.Bus)<<8+uint16(desc.Address) == w.locationID
})
if err != nil {
return err
}
if len(devices) == 0 {
return accounts.ErrUnknownWallet
}
// Device opened, attach to the input and output endpoints
device := devices[0]
var invalid string
switch {
case len(device.Descriptor.Configs) == 0:
invalid = "no endpoint config available"
case len(device.Descriptor.Configs[0].Interfaces) == 0:
invalid = "no endpoint interface available"
case len(device.Descriptor.Configs[0].Interfaces[0].Setups) == 0:
invalid = "no endpoint setup available"
case len(device.Descriptor.Configs[0].Interfaces[0].Setups[0].Endpoints) < 2:
invalid = "not enough IO endpoints available"
}
if invalid != "" {
device.Close()
return fmt.Errorf("ledger wallet [%s] invalid: %s", w.url, invalid)
}
// Open the input and output endpoints to the device
input, err := device.OpenEndpoint(
device.Descriptor.Configs[0].Config,
device.Descriptor.Configs[0].Interfaces[0].Number,
device.Descriptor.Configs[0].Interfaces[0].Setups[0].Number,
device.Descriptor.Configs[0].Interfaces[0].Setups[0].Endpoints[1].Address,
)
if err != nil {
device.Close()
return fmt.Errorf("ledger wallet [%s] input open failed: %v", w.url, err)
}
output, err := device.OpenEndpoint(
device.Descriptor.Configs[0].Config,
device.Descriptor.Configs[0].Interfaces[0].Number,
device.Descriptor.Configs[0].Interfaces[0].Setups[0].Number,
device.Descriptor.Configs[0].Interfaces[0].Setups[0].Endpoints[0].Address,
)
if err != nil {
device.Close()
return fmt.Errorf("ledger wallet [%s] output open failed: %v", w.url, err)
}
// Wallet seems to be successfully opened, guess if the Ethereum app is running
w.device, w.input, w.output = device, input, output
w.commsLock = make(chan struct{}, 1)
w.commsLock <- struct{}{} // Enable lock
w.paths = make(map[common.Address]accounts.DerivationPath)
w.deriveReq = make(chan chan struct{})
w.deriveQuit = make(chan chan error)
w.healthQuit = make(chan chan error)
defer func() {
go w.heartbeat()
go w.selfDerive()
}()
if _, err = w.ledgerDerive(accounts.DefaultBaseDerivationPath); err != nil {
// Ethereum app is not running or in browser mode, nothing more to do, return
if err == errReplyInvalidHeader {
w.browser = true
}
return nil
}
// Try to resolve the Ethereum app's version, will fail prior to v1.0.2
if w.version, err = w.ledgerVersion(); err != nil {
w.version = [3]byte{1, 0, 0} // Assume worst case, can't verify if v1.0.0 or v1.0.1
}
return nil
}
// heartbeat is a health check loop for the Ledger wallets to periodically verify
// whether they are still present or if they malfunctioned. It is needed because:
// - libusb on Windows doesn't support hotplug, so we can't detect USB unplugs
// - communication timeout on the Ledger requires a device power cycle to fix
func (w *ledgerWallet) heartbeat() {
glog.V(logger.Debug).Infof("%s health-check started", w.url.String())
defer glog.V(logger.Debug).Infof("%s health-check stopped", w.url.String())
// Execute heartbeat checks until termination or error
var (
errc chan error
err error
)
for errc == nil && err == nil {
// Wait until termination is requested or the heartbeat cycle arrives
select {
case errc = <-w.healthQuit:
// Termination requested
continue
case <-time.After(ledgerHeartbeatCycle):
// Heartbeat time
}
// Execute a tiny data exchange to see responsiveness
w.stateLock.RLock()
if w.device == nil {
// Terminated while waiting for the lock
w.stateLock.RUnlock()
continue
}
<-w.commsLock // Don't lock state while resolving version
_, err = w.ledgerVersion()
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
if err == usb.ERROR_IO || err == usb.ERROR_NO_DEVICE {
w.stateLock.Lock() // Lock state to tear the wallet down
w.failure = err
w.close()
w.stateLock.Unlock()
}
// Ignore uninteresting errors
err = nil
}
// In case of error, wait for termination
if err != nil {
glog.V(logger.Debug).Infof("%s health-check failed: %v", w.url.String(), err)
errc = <-w.healthQuit
}
errc <- err
}
// Close implements accounts.Wallet, closing the USB connection to the Ledger.
func (w *ledgerWallet) Close() error {
// Ensure the wallet was opened
w.stateLock.RLock()
hQuit, dQuit := w.healthQuit, w.deriveQuit
w.stateLock.RUnlock()
// Terminate the health checks
var herr error
if hQuit != nil {
errc := make(chan error)
hQuit <- errc
herr = <-errc // Save for later, we *must* close the USB
}
// Terminate the self-derivations
var derr error
if dQuit != nil {
errc := make(chan error)
dQuit <- errc
derr = <-errc // Save for later, we *must* close the USB
}
// Terminate the device connection
w.stateLock.Lock()
defer w.stateLock.Unlock()
w.healthQuit = nil
w.deriveQuit = nil
w.deriveReq = nil
if err := w.close(); err != nil {
return err
}
if herr != nil {
return herr
}
return derr
}
// close is the internal wallet closer that terminates the USB connection and
// resets all the fields to their defaults.
//
// Note, close assumes the state lock is held!
func (w *ledgerWallet) close() error {
// Allow duplicate closes, especially for health-check failures
if w.device == nil {
return nil
}
// Close the device, clear everything, then return
err := w.device.Close()
w.device, w.input, w.output = nil, nil, nil
w.browser, w.version = false, [3]byte{}
w.accounts, w.paths = nil, nil
return err
}
// Accounts implements accounts.Wallet, returning the list of accounts pinned to
// the Ledger hardware wallet. If self-derivation was enabled, the account list
// is periodically expanded based on current chain state.
func (w *ledgerWallet) Accounts() []accounts.Account {
// Attempt self-derivation if it's running
reqc := make(chan struct{}, 1)
select {
case w.deriveReq <- reqc:
// Self-derivation request accepted, wait for it
<-reqc
default:
// Self-derivation offline, throttled or busy, skip
}
// Return whatever account list we ended up with
w.stateLock.RLock()
defer w.stateLock.RUnlock()
cpy := make([]accounts.Account, len(w.accounts))
copy(cpy, w.accounts)
return cpy
}
// selfDerive is an account derivation loop that upon request attempts to find
// new non-zero accounts.
func (w *ledgerWallet) selfDerive() {
glog.V(logger.Debug).Infof("%s self-derivation started", w.url.String())
defer glog.V(logger.Debug).Infof("%s self-derivation stopped", w.url.String())
// Execute self-derivations until termination or error
var (
reqc chan struct{}
errc chan error
err error
)
for errc == nil && err == nil {
// Wait until either derivation or termination is requested
select {
case errc = <-w.deriveQuit:
// Termination requested
continue
case reqc = <-w.deriveReq:
// Account discovery requested
}
// Derivation needs a chain and device access, skip if either unavailable
w.stateLock.RLock()
if w.device == nil || w.deriveChain == nil || w.offline() {
w.stateLock.RUnlock()
reqc <- struct{}{}
continue
}
select {
case <-w.commsLock:
default:
w.stateLock.RUnlock()
reqc <- struct{}{}
continue
}
// Device lock obtained, derive the next batch of accounts
var (
accs []accounts.Account
paths []accounts.DerivationPath
nextAddr = w.deriveNextAddr
nextPath = w.deriveNextPath
context = context.Background()
)
for empty := false; !empty; {
// Retrieve the next derived Ethereum account
if nextAddr == (common.Address{}) {
if nextAddr, err = w.ledgerDerive(nextPath); err != nil {
glog.V(logger.Warn).Infof("%s self-derivation failed: %v", w.url.String(), err)
break
}
}
// Check the account's status against the current chain state
var (
balance *big.Int
nonce uint64
)
balance, err = w.deriveChain.BalanceAt(context, nextAddr, nil)
if err != nil {
glog.V(logger.Warn).Infof("%s self-derivation balance retrieval failed: %v", w.url.String(), err)
break
}
nonce, err = w.deriveChain.NonceAt(context, nextAddr, nil)
if err != nil {
glog.V(logger.Warn).Infof("%s self-derivation nonce retrieval failed: %v", w.url.String(), err)
break
}
// If the next account is empty, stop self-derivation, but add it nonetheless
if balance.BitLen() == 0 && nonce == 0 {
empty = true
}
// We've just self-derived a new account, start tracking it locally
path := make(accounts.DerivationPath, len(nextPath))
copy(path[:], nextPath[:])
paths = append(paths, path)
account := accounts.Account{
Address: nextAddr,
URL: accounts.URL{Scheme: w.url.Scheme, Path: fmt.Sprintf("%s/%s", w.url.Path, path)},
}
accs = append(accs, account)
// Display a log message to the user for new (or previously empty) accounts
if _, known := w.paths[nextAddr]; !known || (!empty && nextAddr == w.deriveNextAddr) {
glog.V(logger.Info).Infof("%s discovered %s (balance %22v, nonce %4d) at %s", w.url.String(), nextAddr.Hex(), balance, nonce, path)
}
// Fetch the next potential account
if !empty {
nextAddr = common.Address{}
nextPath[len(nextPath)-1]++
}
}
// Self derivation complete, release device lock
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
// Insert any accounts successfully derived
w.stateLock.Lock()
for i := 0; i < len(accs); i++ {
if _, ok := w.paths[accs[i].Address]; !ok {
w.accounts = append(w.accounts, accs[i])
w.paths[accs[i].Address] = paths[i]
}
}
// Shift the self-derivation forward
// TODO(karalabe): don't overwrite changes from wallet.SelfDerive
w.deriveNextAddr = nextAddr
w.deriveNextPath = nextPath
w.stateLock.Unlock()
// Notify the user of termination and loop after a bit of time (to avoid thrashing)
reqc <- struct{}{}
if err == nil {
select {
case errc = <-w.deriveQuit:
// Termination requested, abort
case <-time.After(ledgerSelfDeriveThrottling):
// Waited enough, willing to self-derive again
}
}
}
// In case of error, wait for termination
if err != nil {
glog.V(logger.Debug).Infof("%s self-derivation failed: %s", w.url.String(), err)
errc = <-w.deriveQuit
}
errc <- err
}
// Contains implements accounts.Wallet, returning whether a particular account is
// or is not pinned into this Ledger instance. Although we could attempt to resolve
// unpinned accounts, that would be a non-negligible hardware operation.
func (w *ledgerWallet) Contains(account accounts.Account) bool {
w.stateLock.RLock()
defer w.stateLock.RUnlock()
_, exists := w.paths[account.Address]
return exists
}
// Derive implements accounts.Wallet, deriving a new account at the specific
// derivation path. If pin is set to true, the account will be added to the list
// of tracked accounts.
func (w *ledgerWallet) Derive(path accounts.DerivationPath, pin bool) (accounts.Account, error) {
// Try to derive the actual account and update its URL if successful
w.stateLock.RLock() // Avoid device disappearing during derivation
if w.device == nil || w.offline() {
w.stateLock.RUnlock()
return accounts.Account{}, accounts.ErrWalletClosed
}
<-w.commsLock // Avoid concurrent hardware access
address, err := w.ledgerDerive(path)
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
// If an error occurred or no pinning was requested, return
if err != nil {
return accounts.Account{}, err
}
account := accounts.Account{
Address: address,
URL: accounts.URL{Scheme: w.url.Scheme, Path: fmt.Sprintf("%s/%s", w.url.Path, path)},
}
if !pin {
return account, nil
}
// Pinning needs to modify the state
w.stateLock.Lock()
defer w.stateLock.Unlock()
if _, ok := w.paths[address]; !ok {
w.accounts = append(w.accounts, account)
w.paths[address] = path
}
return account, nil
}
// SelfDerive implements accounts.Wallet, trying to discover accounts that the
// user used previously (based on the chain state), but ones that he/she did not
// explicitly pin to the wallet manually. To avoid chain head monitoring, self
// derivation only runs during account listing (and even then throttled).
func (w *ledgerWallet) SelfDerive(base accounts.DerivationPath, chain ethereum.ChainStateReader) {
w.stateLock.Lock()
defer w.stateLock.Unlock()
w.deriveNextPath = make(accounts.DerivationPath, len(base))
copy(w.deriveNextPath[:], base[:])
w.deriveNextAddr = common.Address{}
w.deriveChain = chain
}
// SignHash implements accounts.Wallet, however signing arbitrary data is not
// supported for Ledger wallets, so this method will always return an error.
func (w *ledgerWallet) SignHash(acc accounts.Account, hash []byte) ([]byte, error) {
return nil, accounts.ErrNotSupported
}
// SignTx implements accounts.Wallet. It sends the transaction over to the Ledger
// wallet to request a confirmation from the user. It returns either the signed
// transaction or a failure if the user denied the transaction.
//
// Note, if the version of the Ethereum application running on the Ledger wallet is
// too old to sign EIP-155 transactions, but such is requested nonetheless, an error
// will be returned opposed to silently signing in Homestead mode.
func (w *ledgerWallet) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
w.stateLock.RLock() // Comms have their own mutex, this is for the state fields
defer w.stateLock.RUnlock()
// If the wallet is closed, or the Ethereum app doesn't run, abort
if w.device == nil || w.offline() {
return nil, accounts.ErrWalletClosed
}
// Make sure the requested account is contained within
path, ok := w.paths[account.Address]
if !ok {
return nil, accounts.ErrUnknownAccount
}
// Ensure the wallet is capable of signing the given transaction
if chainID != nil && w.version[0] <= 1 && w.version[1] <= 0 && w.version[2] <= 2 {
return nil, fmt.Errorf("Ledger v%d.%d.%d doesn't support signing this transaction, please update to v1.0.3 at least", w.version[0], w.version[1], w.version[2])
}
// All infos gathered and metadata checks out, request signing
<-w.commsLock
defer func() { w.commsLock <- struct{}{} }()
return w.ledgerSign(path, account.Address, tx, chainID)
}
// SignHashWithPassphrase implements accounts.Wallet, however signing arbitrary
// data is not supported for Ledger wallets, so this method will always return
// an error.
func (w *ledgerWallet) SignHashWithPassphrase(account accounts.Account, passphrase string, hash []byte) ([]byte, error) {
return nil, accounts.ErrNotSupported
}
// SignTxWithPassphrase implements accounts.Wallet, attempting to sign the given
// transaction with the given account using passphrase as extra authentication.
// Since the Ledger does not support extra passphrases, it is silently ignored.
func (w *ledgerWallet) SignTxWithPassphrase(account accounts.Account, passphrase string, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
return w.SignTx(account, tx, chainID)
}
// ledgerVersion retrieves the current version of the Ethereum wallet app running
// on the Ledger wallet.
//
// The version retrieval protocol is defined as follows:
//
// CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+----+---
// E0 | 06 | 00 | 00 | 00 | 04
//
// With no input data, and the output data being:
//
// Description | Length
// ---------------------------------------------------+--------
// Flags 01: arbitrary data signature enabled by user | 1 byte
// Application major version | 1 byte
// Application minor version | 1 byte
// Application patch version | 1 byte
func (w *ledgerWallet) ledgerVersion() ([3]byte, error) {
// Send the request and wait for the response
reply, err := w.ledgerExchange(ledgerOpGetConfiguration, 0, 0, nil)
if err != nil {
return [3]byte{}, err
}
if len(reply) != 4 {
return [3]byte{}, errors.New("reply not of correct size")
}
// Cache the version for future reference
var version [3]byte
copy(version[:], reply[1:])
return version, nil
}
// ledgerDerive retrieves the currently active Ethereum address from a Ledger
// wallet at the specified derivation path.
//
// The address derivation protocol is defined as follows:
//
// CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+-----+---
// E0 | 02 | 00 return address
// 01 display address and confirm before returning
// | 00: do not return the chain code
// | 01: return the chain code
// | var | 00
//
// Where the input data is:
//
// Description | Length
// -------------------------------------------------+--------
// Number of BIP 32 derivations to perform (max 10) | 1 byte
// First derivation index (big endian) | 4 bytes
// ... | 4 bytes
// Last derivation index (big endian) | 4 bytes
//
// And the output data is:
//
// Description | Length
// ------------------------+-------------------
// Public Key length | 1 byte
// Uncompressed Public Key | arbitrary
// Ethereum address length | 1 byte
// Ethereum address | 40 bytes hex ascii
// Chain code if requested | 32 bytes
func (w *ledgerWallet) ledgerDerive(derivationPath []uint32) (common.Address, error) {
// Flatten the derivation path into the Ledger request
path := make([]byte, 1+4*len(derivationPath))
path[0] = byte(len(derivationPath))
for i, component := range derivationPath {
binary.BigEndian.PutUint32(path[1+4*i:], component)
}
// Send the request and wait for the response
reply, err := w.ledgerExchange(ledgerOpRetrieveAddress, ledgerP1DirectlyFetchAddress, ledgerP2DiscardAddressChainCode, path)
if err != nil {
return common.Address{}, err
}
// Discard the public key, we don't need that for now
if len(reply) < 1 || len(reply) < 1+int(reply[0]) {
return common.Address{}, errors.New("reply lacks public key entry")
}
reply = reply[1+int(reply[0]):]
// Extract the Ethereum hex address string
if len(reply) < 1 || len(reply) < 1+int(reply[0]) {
return common.Address{}, errors.New("reply lacks address entry")
}
hexstr := reply[1 : 1+int(reply[0])]
// Decode the hex string into an Ethereum address and return
var address common.Address
if _, err = hex.Decode(address[:], hexstr); err != nil {
return common.Address{}, err
}
return address, nil
}
// ledgerSign sends the transaction to the Ledger wallet, and waits for the user
// to confirm or deny the transaction.
//
// The transaction signing protocol is defined as follows:
//
// CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+-----+---
// E0 | 04 | 00: first transaction data block
// 80: subsequent transaction data block
// | 00 | variable | variable
//
// Where the input for the first transaction block (first 255 bytes) is:
//
// Description | Length
// -------------------------------------------------+----------
// Number of BIP 32 derivations to perform (max 10) | 1 byte
// First derivation index (big endian) | 4 bytes
// ... | 4 bytes
// Last derivation index (big endian) | 4 bytes
// RLP transaction chunk | arbitrary
//
// And the input for subsequent transaction blocks (first 255 bytes) are:
//
// Description | Length
// ----------------------+----------
// RLP transaction chunk | arbitrary
//
// And the output data is:
//
// Description | Length
// ------------+---------
// signature V | 1 byte
// signature R | 32 bytes
// signature S | 32 bytes
func (w *ledgerWallet) ledgerSign(derivationPath []uint32, address common.Address, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
// We need to modify the timeouts to account for user feedback
defer func(old time.Duration) { w.device.ReadTimeout = old }(w.device.ReadTimeout)
w.device.ReadTimeout = time.Hour * 24 * 30 // Timeout requires a Ledger power cycle, only if you must
// Flatten the derivation path into the Ledger request
path := make([]byte, 1+4*len(derivationPath))
path[0] = byte(len(derivationPath))
for i, component := range derivationPath {
binary.BigEndian.PutUint32(path[1+4*i:], component)
}
// Create the transaction RLP based on whether legacy or EIP155 signing was requested
var (
txrlp []byte
err error
)
if chainID == nil {
if txrlp, err = rlp.EncodeToBytes([]interface{}{tx.Nonce(), tx.GasPrice(), tx.Gas(), tx.To(), tx.Value(), tx.Data()}); err != nil {
return nil, err
}
} else {
if txrlp, err = rlp.EncodeToBytes([]interface{}{tx.Nonce(), tx.GasPrice(), tx.Gas(), tx.To(), tx.Value(), tx.Data(), chainID, big.NewInt(0), big.NewInt(0)}); err != nil {
return nil, err
}
}
payload := append(path, txrlp...)
// Send the request and wait for the response
var (
op = ledgerP1InitTransactionData
reply []byte
)
for len(payload) > 0 {
// Calculate the size of the next data chunk
chunk := 255
if chunk > len(payload) {
chunk = len(payload)
}
// Send the chunk over, ensuring it's processed correctly
reply, err = w.ledgerExchange(ledgerOpSignTransaction, op, 0, payload[:chunk])
if err != nil {
return nil, err
}
// Shift the payload and ensure subsequent chunks are marked as such
payload = payload[chunk:]
op = ledgerP1ContTransactionData
}
// Extract the Ethereum signature and do a sanity validation
if len(reply) != 65 {
return nil, errors.New("reply lacks signature")
}
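// The device replies with V || R || S; reorder to R || S || V as go-ethereum's signers expect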
signature := append(reply[1:], reply[0])
// Create the correct signer and signature transform based on the chain ID
var signer types.Signer
if chainID == nil {
signer = new(types.HomesteadSigner)
} else {
signer = types.NewEIP155Signer(chainID)
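// The Ledger folds the EIP-155 offset (chainID*2 + 35) into V; strip it so the signer receives the raw recovery id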
signature[64] = signature[64] - byte(chainID.Uint64()*2+35)
}
// Inject the final signature into the transaction and sanity check the sender
signed, err := tx.WithSignature(signer, signature)
if err != nil {
return nil, err
}
sender, err := types.Sender(signer, signed)
if err != nil {
return nil, err
}
if sender != address {
return nil, fmt.Errorf("signer mismatch: expected %s, got %s", address.Hex(), sender.Hex())
}
return signed, nil
}
// ledgerExchange performs a data exchange with the Ledger wallet, sending it a
// message and retrieving the response.
//
// The common transport header is defined as follows:
//
// Description | Length
// --------------------------------------+----------
// Communication channel ID (big endian) | 2 bytes
// Command tag | 1 byte
// Packet sequence index (big endian) | 2 bytes
// Payload | arbitrary
//
// The Communication channel ID allows commands multiplexing over the same
// physical link. It is not used for the time being, and should be set to 0101
// to avoid compatibility issues with implementations ignoring a leading 00 byte.
//
// The Command tag describes the message content. Use TAG_APDU (0x05) for standard
// APDU payloads, or TAG_PING (0x02) for a simple link test.
//
// The Packet sequence index describes the current sequence for fragmented payloads.
// The first fragment index is 0x00.
//
// APDU Command payloads are encoded as follows:
//
// Description | Length
// -----------------------------------
// APDU length (big endian) | 2 bytes
// APDU CLA | 1 byte
// APDU INS | 1 byte
// APDU P1 | 1 byte
// APDU P2 | 1 byte
// APDU length | 1 byte
// Optional APDU data | arbitrary
func (w *ledgerWallet) ledgerExchange(opcode ledgerOpcode, p1 ledgerParam1, p2 ledgerParam2, data []byte) ([]byte, error) {
// Construct the message payload, possibly split into multiple chunks
apdu := make([]byte, 2, 7+len(data))
binary.BigEndian.PutUint16(apdu, uint16(5+len(data)))
apdu = append(apdu, []byte{0xe0, byte(opcode), byte(p1), byte(p2), byte(len(data))}...)
apdu = append(apdu, data...)
// Stream all the chunks to the device
header := []byte{0x01, 0x01, 0x05, 0x00, 0x00} // Channel ID and command tag appended
chunk := make([]byte, 64)
space := len(chunk) - len(header)
for i := 0; len(apdu) > 0; i++ {
// Construct the new message to stream
chunk = append(chunk[:0], header...)
binary.BigEndian.PutUint16(chunk[3:], uint16(i))
if len(apdu) > space {
chunk = append(chunk, apdu[:space]...)
apdu = apdu[space:]
} else {
chunk = append(chunk, apdu...)
apdu = nil
}
// Send over to the device
if glog.V(logger.Detail) {
glog.Infof("-> %03d.%03d: %x", w.device.Bus, w.device.Address, chunk)
}
if _, err := w.input.Write(chunk); err != nil {
return nil, err
}
}
// Stream the reply back from the wallet in 64 byte chunks
var reply []byte
chunk = chunk[:64] // Yeah, we surely have enough space
for {
// Read the next chunk from the Ledger wallet
if _, err := io.ReadFull(w.output, chunk); err != nil {
return nil, err
}
if glog.V(logger.Detail) {
glog.Infof("<- %03d.%03d: %x", w.device.Bus, w.device.Address, chunk)
}
// Make sure the transport header matches
if chunk[0] != 0x01 || chunk[1] != 0x01 || chunk[2] != 0x05 {
return nil, errReplyInvalidHeader
}
// If it's the first chunk, retrieve the total message length
var payload []byte
if chunk[3] == 0x00 && chunk[4] == 0x00 {
reply = make([]byte, 0, int(binary.BigEndian.Uint16(chunk[5:7])))
payload = chunk[7:]
} else {
payload = chunk[5:]
}
// Append to the reply and stop when filled up
if left := cap(reply) - len(reply); left > len(payload) {
reply = append(reply, payload...)
} else {
reply = append(reply, payload[:left]...)
break
}
}
return reply[:len(reply)-2], nil
}
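Since the derivation-path flattening above is shared by ledgerDerive and ledgerSign, a tiny standalone illustration (not part of the package) may help; it encodes a typical m/44'/60'/0'/0 path into the length-prefixed, big-endian byte form the APDUs carry:

package main

import (
	"encoding/binary"
	"fmt"
)

// encodeDerivationPath mirrors the flattening done before a Ledger exchange:
// one byte for the component count, then each index as a big-endian uint32.
func encodeDerivationPath(path []uint32) []byte {
	out := make([]byte, 1+4*len(path))
	out[0] = byte(len(path))
	for i, component := range path {
		binary.BigEndian.PutUint32(out[1+4*i:], component)
	}
	return out
}

func main() {
	const hardened = 0x80000000
	// m/44'/60'/0'/0: a typical base path for Ethereum HD wallets
	path := []uint32{hardened + 44, hardened + 60, hardened + 0, 0}
	fmt.Printf("%x\n", encodeDerivationPath(path))
	// Prints: 048000002c8000003c8000000000000000
}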

View File

@@ -0,0 +1,29 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// +build !ios
// Package usbwallet implements support for USB hardware wallets.
package usbwallet
import "github.com/karalabe/gousb/usb"
// deviceID is a combined vendor/product identifier to uniquely identify a USB
// hardware device.
type deviceID struct {
Vendor usb.ID // The Vendor identifier
Product usb.ID // The Product identifier
}
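// A hedged usage sketch, not part of this commit: a backend could declare the
// vendor/product pairs it recognises and match enumerated devices against them.
// The identifiers and helper names below are illustrative assumptions only.
var exampleKnownDevices = []deviceID{
{Vendor: usb.ID(0x2c97), Product: usb.ID(0x0000)}, // placeholder pair, not taken from the source
}
// exampleIsKnown reports whether a discovered device matches a known ID pair.
func exampleIsKnown(dev deviceID) bool {
for _, id := range exampleKnownDevices {
if dev == id {
return true
}
}
return false
}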

View File

@@ -0,0 +1,38 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// This file contains the implementation for interacting with the Ledger hardware
// wallets. The wire protocol spec can be found in the Ledger Blue GitHub repo:
// https://raw.githubusercontent.com/LedgerHQ/blue-app-eth/master/doc/ethapp.asc
// +build ios
package usbwallet
import (
"errors"
"github.com/ethereum/go-ethereum/accounts"
)
// Here be dragons! There is no USB support on iOS.
// ErrIOSNotSupported is returned for all USB hardware backends on iOS.
var ErrIOSNotSupported = errors.New("no USB support on iOS")
func NewLedgerHub() (accounts.Backend, error) {
return nil, ErrIOSNotSupported
}

View File

@@ -24,7 +24,7 @@ Usage: go run ci.go <command> <command flags/arguments>
Available commands are:
install [-arch architecture] [ packages... ] -- builds packages and executables
test [ -coverage ] [ -vet ] [ packages... ] -- runs the tests
test [ -coverage ] [ -vet ] [ -misspell ] [ packages... ] -- runs the tests
archive [-arch architecture] [ -type zip|tar ] [ -signer key-envvar ] [ -upload dest ] -- archives build artefacts
importkeys -- imports signing keys from env
debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package
@@ -236,6 +236,14 @@ func goToolArch(arch string, subcmd string, args ...string) *exec.Cmd {
gocmd := filepath.Join(runtime.GOROOT(), "bin", "go")
cmd := exec.Command(gocmd, subcmd)
cmd.Args = append(cmd.Args, args...)
if subcmd == "build" || subcmd == "install" || subcmd == "test" {
// Go CGO has a Windows linker error prior to 1.8 (https://github.com/golang/go/issues/8756).
// Work around issue by allowing multiple definitions for <1.8 builds.
if runtime.GOOS == "windows" && runtime.Version() < "go1.8" {
cmd.Args = append(cmd.Args, []string{"-ldflags", "-extldflags -Wl,--allow-multiple-definition"}...)
}
}
cmd.Env = []string{
"GO15VENDOREXPERIMENT=1",
"GOPATH=" + build.GOPATH(),
@@ -262,6 +270,7 @@ func goToolArch(arch string, subcmd string, args ...string) *exec.Cmd {
func doTest(cmdline []string) {
var (
vet = flag.Bool("vet", false, "Whether to run go vet")
misspell = flag.Bool("misspell", false, "Whether to run the spell checker")
coverage = flag.Bool("coverage", false, "Whether to record code coverage")
)
flag.CommandLine.Parse(cmdline)
@@ -287,7 +296,9 @@ func doTest(cmdline []string) {
if *vet {
build.MustRun(goTool("vet", packages...))
}
if *misspell {
spellcheck(packages)
}
// Run the actual tests.
gotest := goTool("test")
// Test a single package at a time. CI builders are slow
@@ -300,6 +311,34 @@ func doTest(cmdline []string) {
build.MustRun(gotest)
}
// spellcheck runs the client9/misspell spellchecker package on all Go, Cgo and
// test files in the requested packages.
func spellcheck(packages []string) {
// Ensure the spellchecker is available
build.MustRun(goTool("get", "github.com/client9/misspell/cmd/misspell"))
// Windows chokes on long argument lists, check packages individually
for _, pkg := range packages {
// The spell checker doesn't work on packages, gather all .go files for it
out, err := goTool("list", "-f", "{{.Dir}}{{range .GoFiles}}\n{{.}}{{end}}{{range .CgoFiles}}\n{{.}}{{end}}{{range .TestGoFiles}}\n{{.}}{{end}}", pkg).CombinedOutput()
if err != nil {
log.Fatalf("source file listing failed: %v\n%s", err, string(out))
}
// Retrieve the folder and assemble the source list
lines := strings.Split(string(out), "\n")
root := lines[0]
sources := make([]string, 0, len(lines)-1)
for _, line := range lines[1:] {
if line = strings.TrimSpace(line); line != "" {
sources = append(sources, filepath.Join(root, line))
}
}
// Run the spell checker for this particular package
build.MustRunCommand(filepath.Join(GOBIN, "misspell"), append([]string{"-error"}, sources...)...)
}
}
// Release Packaging
func doArchive(cmdline []string) {
@@ -669,9 +708,16 @@ func doAndroidArchive(cmdline []string) {
flag.CommandLine.Parse(cmdline)
env := build.Env()
// Sanity check that the SDK and NDK are installed and set
if os.Getenv("ANDROID_HOME") == "" {
log.Fatal("Please ensure ANDROID_HOME points to your Android SDK")
}
if os.Getenv("ANDROID_NDK") == "" {
log.Fatal("Please ensure ANDROID_NDK points to your Android NDK")
}
// Build the Android archive and Maven resources
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile"))
build.MustRun(gomobileTool("init"))
build.MustRun(gomobileTool("init", "--ndk", os.Getenv("ANDROID_NDK")))
build.MustRun(gomobileTool("bind", "--target", "android", "--javapkg", "org.ethereum", "-v", "github.com/ethereum/go-ethereum/mobile"))
if *local {

View File

@@ -185,7 +185,7 @@ func getFiles() []string {
files = append(files, line)
})
if err != nil {
log.Fatalf("error getting files:", err)
log.Fatal("error getting files:", err)
}
return files
}

View File

@@ -94,7 +94,9 @@ func main() {
abi, _ := json.Marshal(contract.Info.AbiDefinition) // Flatten the compiler parse
abis = append(abis, string(abi))
bins = append(bins, contract.Code)
types = append(types, name)
nameParts := strings.Split(name, ":")
types = append(types, nameParts[len(nameParts)-1])
}
} else {
// Otherwise load up the ABI, optional bytecode and type name from the parameters

View File

@@ -161,6 +161,7 @@ func run(ctx *cli.Context) error {
Value: common.Big(ctx.GlobalString(ValueFlag.Name)),
EVMConfig: vm.Config{
Tracer: logger,
Debug: ctx.GlobalBool(DebugFlag.Name),
DisableGasMetering: ctx.GlobalBool(DisableGasMeteringFlag.Name),
},
})
@@ -176,6 +177,7 @@ func run(ctx *cli.Context) error {
Value: common.Big(ctx.GlobalString(ValueFlag.Name)),
EVMConfig: vm.Config{
Tracer: logger,
Debug: ctx.GlobalBool(DebugFlag.Name),
DisableGasMetering: ctx.GlobalBool(DisableGasMeteringFlag.Name),
},
})

View File

@@ -21,6 +21,7 @@ import (
"io/ioutil"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/console"
"github.com/ethereum/go-ethereum/crypto"
@@ -180,31 +181,36 @@ nodes.
func accountList(ctx *cli.Context) error {
stack := utils.MakeNode(ctx, clientIdentifier, gitCommit)
for i, acct := range stack.AccountManager().Accounts() {
fmt.Printf("Account #%d: {%x} %s\n", i, acct.Address, acct.File)
var index int
for _, wallet := range stack.AccountManager().Wallets() {
for _, account := range wallet.Accounts() {
fmt.Printf("Account #%d: {%x} %s\n", index, account.Address, &account.URL)
index++
}
}
return nil
}
// tries unlocking the specified account a few times.
func unlockAccount(ctx *cli.Context, accman *accounts.Manager, address string, i int, passwords []string) (accounts.Account, string) {
account, err := utils.MakeAddress(accman, address)
func unlockAccount(ctx *cli.Context, ks *keystore.KeyStore, address string, i int, passwords []string) (accounts.Account, string) {
account, err := utils.MakeAddress(ks, address)
if err != nil {
utils.Fatalf("Could not list accounts: %v", err)
}
for trials := 0; trials < 3; trials++ {
prompt := fmt.Sprintf("Unlocking account %s | Attempt %d/%d", address, trials+1, 3)
password := getPassPhrase(prompt, false, i, passwords)
err = accman.Unlock(account, password)
err = ks.Unlock(account, password)
if err == nil {
glog.V(logger.Info).Infof("Unlocked account %x", account.Address)
return account, password
}
if err, ok := err.(*accounts.AmbiguousAddrError); ok {
if err, ok := err.(*keystore.AmbiguousAddrError); ok {
glog.V(logger.Info).Infof("Unlocked account %x", account.Address)
return ambiguousAddrRecovery(accman, err, password), password
return ambiguousAddrRecovery(ks, err, password), password
}
if err != accounts.ErrDecrypt {
if err != keystore.ErrDecrypt {
// No need to prompt again if the error is not decryption-related.
break
}
@@ -244,15 +250,15 @@ func getPassPhrase(prompt string, confirmation bool, i int, passwords []string)
return password
}
func ambiguousAddrRecovery(am *accounts.Manager, err *accounts.AmbiguousAddrError, auth string) accounts.Account {
func ambiguousAddrRecovery(ks *keystore.KeyStore, err *keystore.AmbiguousAddrError, auth string) accounts.Account {
fmt.Printf("Multiple key files exist for address %x:\n", err.Addr)
for _, a := range err.Matches {
fmt.Println(" ", a.File)
fmt.Println(" ", a.URL)
}
fmt.Println("Testing your passphrase against all of them...")
var match *accounts.Account
for _, a := range err.Matches {
if err := am.Unlock(a, auth); err == nil {
if err := ks.Unlock(a, auth); err == nil {
match = &a
break
}
@@ -260,11 +266,11 @@ func ambiguousAddrRecovery(am *accounts.Manager, err *accounts.AmbiguousAddrErro
if match == nil {
utils.Fatalf("None of the listed files could be unlocked.")
}
fmt.Printf("Your passphrase unlocked %s\n", match.File)
fmt.Printf("Your passphrase unlocked %s\n", match.URL)
fmt.Println("In order to avoid this warning, you need to remove the following duplicate key files:")
for _, a := range err.Matches {
if a != *match {
fmt.Println(" ", a.File)
fmt.Println(" ", a.URL)
}
}
return *match
@@ -275,7 +281,8 @@ func accountCreate(ctx *cli.Context) error {
stack := utils.MakeNode(ctx, clientIdentifier, gitCommit)
password := getPassPhrase("Your new account is locked with a password. Please give a password. Do not forget this password.", true, 0, utils.MakePasswordList(ctx))
account, err := stack.AccountManager().NewAccount(password)
ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
account, err := ks.NewAccount(password)
if err != nil {
utils.Fatalf("Failed to create account: %v", err)
}
@@ -290,9 +297,11 @@ func accountUpdate(ctx *cli.Context) error {
utils.Fatalf("No accounts specified to update")
}
stack := utils.MakeNode(ctx, clientIdentifier, gitCommit)
account, oldPassword := unlockAccount(ctx, stack.AccountManager(), ctx.Args().First(), 0, nil)
ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
account, oldPassword := unlockAccount(ctx, ks, ctx.Args().First(), 0, nil)
newPassword := getPassPhrase("Please give a new password. Do not forget this password.", true, 0, nil)
if err := stack.AccountManager().Update(account, oldPassword, newPassword); err != nil {
if err := ks.Update(account, oldPassword, newPassword); err != nil {
utils.Fatalf("Could not update the account: %v", err)
}
return nil
@@ -310,7 +319,9 @@ func importWallet(ctx *cli.Context) error {
stack := utils.MakeNode(ctx, clientIdentifier, gitCommit)
passphrase := getPassPhrase("", false, 0, utils.MakePasswordList(ctx))
acct, err := stack.AccountManager().ImportPreSaleKey(keyJson, passphrase)
ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
acct, err := ks.ImportPreSaleKey(keyJson, passphrase)
if err != nil {
utils.Fatalf("%v", err)
}
@@ -329,7 +340,9 @@ func accountImport(ctx *cli.Context) error {
}
stack := utils.MakeNode(ctx, clientIdentifier, gitCommit)
passphrase := getPassPhrase("Your new account is locked with a password. Please give a password. Do not forget this password.", true, 0, utils.MakePasswordList(ctx))
acct, err := stack.AccountManager().ImportECDSA(key, passphrase)
ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
acct, err := ks.ImportECDSA(key, passphrase)
if err != nil {
utils.Fatalf("Could not create the account: %v", err)
}

View File

@@ -35,7 +35,7 @@ import (
func tmpDatadirWithKeystore(t *testing.T) string {
datadir := tmpdir(t)
keystore := filepath.Join(datadir, "keystore")
source := filepath.Join("..", "..", "accounts", "testdata", "keystore")
source := filepath.Join("..", "..", "accounts", "keystore", "testdata", "keystore")
if err := cp.CopyAll(keystore, source); err != nil {
t.Fatal(err)
}
@@ -53,15 +53,15 @@ func TestAccountList(t *testing.T) {
defer geth.expectExit()
if runtime.GOOS == "windows" {
geth.expect(`
Account #0: {7ef5a6135f1fd6a02593eedc869c6d41d934aef8} {{.Datadir}}\keystore\UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8
Account #1: {f466859ead1932d743d622cb74fc058882e8648a} {{.Datadir}}\keystore\aaa
Account #2: {289d485d9771714cce91d3393d764e1311907acc} {{.Datadir}}\keystore\zzz
Account #0: {7ef5a6135f1fd6a02593eedc869c6d41d934aef8} keystore://{{.Datadir}}\keystore\UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8
Account #1: {f466859ead1932d743d622cb74fc058882e8648a} keystore://{{.Datadir}}\keystore\aaa
Account #2: {289d485d9771714cce91d3393d764e1311907acc} keystore://{{.Datadir}}\keystore\zzz
`)
} else {
geth.expect(`
Account #0: {7ef5a6135f1fd6a02593eedc869c6d41d934aef8} {{.Datadir}}/keystore/UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8
Account #1: {f466859ead1932d743d622cb74fc058882e8648a} {{.Datadir}}/keystore/aaa
Account #2: {289d485d9771714cce91d3393d764e1311907acc} {{.Datadir}}/keystore/zzz
Account #0: {7ef5a6135f1fd6a02593eedc869c6d41d934aef8} keystore://{{.Datadir}}/keystore/UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8
Account #1: {f466859ead1932d743d622cb74fc058882e8648a} keystore://{{.Datadir}}/keystore/aaa
Account #2: {289d485d9771714cce91d3393d764e1311907acc} keystore://{{.Datadir}}/keystore/zzz
`)
}
}
@@ -230,7 +230,7 @@ Fatal: Failed to unlock account 0 (could not decrypt key with given passphrase)
}
func TestUnlockFlagAmbiguous(t *testing.T) {
store := filepath.Join("..", "..", "accounts", "testdata", "dupes")
store := filepath.Join("..", "..", "accounts", "keystore", "testdata", "dupes")
geth := runGeth(t,
"--keystore", store, "--nat", "none", "--nodiscover", "--dev",
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a",
@@ -247,12 +247,12 @@ Unlocking account f466859ead1932d743d622cb74fc058882e8648a | Attempt 1/3
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "foobar"}}
Multiple key files exist for address f466859ead1932d743d622cb74fc058882e8648a:
{{keypath "1"}}
{{keypath "2"}}
keystore://{{keypath "1"}}
keystore://{{keypath "2"}}
Testing your passphrase against all of them...
Your passphrase unlocked {{keypath "1"}}
Your passphrase unlocked keystore://{{keypath "1"}}
In order to avoid this warning, you need to remove the following duplicate key files:
{{keypath "2"}}
keystore://{{keypath "2"}}
`)
geth.expectExit()
@@ -267,7 +267,7 @@ In order to avoid this warning, you need to remove the following duplicate key f
}
func TestUnlockFlagAmbiguousWrongPassword(t *testing.T) {
store := filepath.Join("..", "..", "accounts", "testdata", "dupes")
store := filepath.Join("..", "..", "accounts", "keystore", "testdata", "dupes")
geth := runGeth(t,
"--keystore", store, "--nat", "none", "--nodiscover", "--dev",
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a")
@@ -283,8 +283,8 @@ Unlocking account f466859ead1932d743d622cb74fc058882e8648a | Attempt 1/3
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "wrong"}}
Multiple key files exist for address f466859ead1932d743d622cb74fc058882e8648a:
{{keypath "1"}}
{{keypath "2"}}
keystore://{{keypath "1"}}
keystore://{{keypath "2"}}
Testing your passphrase against all of them...
Fatal: None of the listed files could be unlocked.
`)

View File

@@ -123,6 +123,7 @@ func initGenesis(ctx *cli.Context) error {
if err != nil {
utils.Fatalf("failed to read genesis file: %v", err)
}
defer genesisFile.Close()
block, err := core.WriteGenesisBlock(chaindb, genesisFile)
if err != nil {

View File

@@ -25,11 +25,14 @@ import (
"strings"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/console"
"github.com/ethereum/go-ethereum/contracts/release"
"github.com/ethereum/go-ethereum/eth"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/ethereum/go-ethereum/internal/debug"
"github.com/ethereum/go-ethereum/logger"
"github.com/ethereum/go-ethereum/logger/glog"
@@ -88,7 +91,6 @@ func init() {
utils.BootnodesFlag,
utils.DataDirFlag,
utils.KeyStoreDirFlag,
utils.OlympicFlag,
utils.FastSyncFlag,
utils.LightModeFlag,
utils.LightServFlag,
@@ -107,7 +109,6 @@ func init() {
utils.AutoDAGFlag,
utils.TargetGasLimitFlag,
utils.NATFlag,
utils.NatspecEnabledFlag,
utils.NoDiscoverFlag,
utils.DiscoveryV5Flag,
utils.NetrestrictFlag,
@@ -133,6 +134,7 @@ func init() {
utils.VMForceJitFlag,
utils.VMJitCacheFlag,
utils.VMEnableJitFlag,
utils.VMEnableDebugFlag,
utils.NetworkIdFlag,
utils.RPCCORSDomainFlag,
utils.EthStatsURLFlag,
@@ -246,14 +248,50 @@ func startNode(ctx *cli.Context, stack *node.Node) {
utils.StartNode(stack)
// Unlock any account specifically requested
accman := stack.AccountManager()
ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
passwords := utils.MakePasswordList(ctx)
accounts := strings.Split(ctx.GlobalString(utils.UnlockedAccountFlag.Name), ",")
for i, account := range accounts {
unlocks := strings.Split(ctx.GlobalString(utils.UnlockedAccountFlag.Name), ",")
for i, account := range unlocks {
if trimmed := strings.TrimSpace(account); trimmed != "" {
unlockAccount(ctx, accman, trimmed, i, passwords)
unlockAccount(ctx, ks, trimmed, i, passwords)
}
}
// Register wallet event handlers to open and auto-derive wallets
events := make(chan accounts.WalletEvent, 16)
stack.AccountManager().Subscribe(events)
go func() {
// Create a chain state reader for self-derivation
rpcClient, err := stack.Attach()
if err != nil {
utils.Fatalf("Failed to attach to self: %v", err)
}
stateReader := ethclient.NewClient(rpcClient)
// Open and self derive any wallets already attached
for _, wallet := range stack.AccountManager().Wallets() {
if err := wallet.Open(""); err != nil {
glog.V(logger.Warn).Infof("Failed to open wallet %s: %v", wallet.URL(), err)
} else {
wallet.SelfDerive(accounts.DefaultBaseDerivationPath, stateReader)
}
}
// Listen for wallet events until termination
for event := range events {
if event.Arrive {
if err := event.Wallet.Open(""); err != nil {
glog.V(logger.Info).Infof("New wallet appeared: %s, failed to open: %s", event.Wallet.URL(), err)
} else {
glog.V(logger.Info).Infof("New wallet appeared: %s, %s", event.Wallet.URL(), event.Wallet.Status())
event.Wallet.SelfDerive(accounts.DefaultBaseDerivationPath, stateReader)
}
} else {
glog.V(logger.Info).Infof("Old wallet dropped: %s", event.Wallet.URL())
event.Wallet.Close()
}
}
}()
// Start auxiliary services if enabled
if ctx.GlobalBool(utils.MiningEnabledFlag.Name) {
var ethereum *eth.Ethereum

View File

@@ -236,8 +236,9 @@ func expandMetrics(metrics map[string]interface{}, path string) []string {
// fetchMetric iterates over the metrics map and retrieves a specific one.
func fetchMetric(metrics map[string]interface{}, metric string) float64 {
parts, found := strings.Split(metric, "/"), true
parts := strings.Split(metric, "/")
for _, part := range parts[:len(parts)-1] {
var found bool
metrics, found = metrics[part].(map[string]interface{})
if !found {
return 0

View File

@@ -67,7 +67,6 @@ var AppHelpFlagGroups = []flagGroup{
utils.DataDirFlag,
utils.KeyStoreDirFlag,
utils.NetworkIdFlag,
utils.OlympicFlag,
utils.TestNetFlag,
utils.DevModeFlag,
utils.IdentityFlag,
@@ -156,6 +155,7 @@ var AppHelpFlagGroups = []flagGroup{
utils.VMEnableJitFlag,
utils.VMForceJitFlag,
utils.VMJitCacheFlag,
utils.VMEnableDebugFlag,
},
},
{
@@ -170,7 +170,6 @@ var AppHelpFlagGroups = []flagGroup{
Name: "EXPERIMENTAL",
Flags: []cli.Flag{
utils.WhisperEnabledFlag,
utils.NatspecEnabledFlag,
},
},
{

View File

@@ -23,6 +23,7 @@ import (
"os"
"os/signal"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth"
"github.com/ethereum/go-ethereum/ethdb"
@@ -99,17 +100,18 @@ func MakeSystemNode(privkey string, test *tests.BlockTest) (*node.Node, error) {
return nil, err
}
// Create the keystore and inject an unlocked account if requested
accman := stack.AccountManager()
ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
if len(privkey) > 0 {
key, err := crypto.HexToECDSA(privkey)
if err != nil {
return nil, err
}
a, err := accman.ImportECDSA(key, "")
a, err := ks.ImportECDSA(key, "")
if err != nil {
return nil, err
}
if err := accman.Unlock(a, ""); err != nil {
if err := ks.Unlock(a, ""); err != nil {
return nil, err
}
}

39
cmd/swarm/cleandb.go Normal file
View File

@@ -0,0 +1,39 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"log"
"github.com/ethereum/go-ethereum/swarm/storage"
"gopkg.in/urfave/cli.v1"
)
func cleandb(ctx *cli.Context) {
args := ctx.Args()
if len(args) != 1 {
log.Fatal("need path to chunks database as the first and only argument")
}
chunkDbPath := args[0]
hash := storage.MakeHashFunc("SHA3")
dbStore, err := storage.NewDbStore(chunkDbPath, hash, 10000000, 0)
if err != nil {
log.Fatalf("cannot initialise dbstore: %v", err)
}
dbStore.Cleanup()
}

View File

@@ -36,6 +36,7 @@ func hash(ctx *cli.Context) {
fmt.Println("Error opening file " + args[1])
os.Exit(1)
}
defer f.Close()
stat, _ := f.Stat()
chunker := storage.NewTreeChunker(storage.NewChunkerParams())

View File

@@ -21,11 +21,14 @@ import (
"fmt"
"io/ioutil"
"os"
"os/signal"
"runtime"
"strconv"
"strings"
"syscall"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/console"
@@ -39,7 +42,6 @@ import (
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/swarm"
bzzapi "github.com/ethereum/go-ethereum/swarm/api"
"github.com/ethereum/go-ethereum/swarm/network"
"gopkg.in/urfave/cli.v1"
)
@@ -76,7 +78,6 @@ var (
SwarmNetworkIdFlag = cli.IntFlag{
Name: "bzznetworkid",
Usage: "Network identifier (integer, default 3=swarm testnet)",
Value: network.NetworkId,
}
SwarmConfigPathFlag = cli.StringFlag{
Name: "bzzconfig",
@@ -154,6 +155,52 @@ The output of this command is supposed to be machine-readable.
ArgsUsage: " <file>",
Description: `
Prints the swarm hash of file or directory.
`,
},
{
Name: "manifest",
Usage: "update a MANIFEST",
ArgsUsage: "manifest COMMAND",
Description: `
Updates a MANIFEST by adding/removing/updating the hash of a path.
`,
Subcommands: []cli.Command{
{
Action: add,
Name: "add",
Usage: "add a new path to the manifest",
ArgsUsage: "<MANIFEST> <path> <hash> [<content-type>]",
Description: `
Adds a new path to the manifest
`,
},
{
Action: update,
Name: "update",
Usage: "update the hash for an already existing path in the manifest",
ArgsUsage: "<MANIFEST> <path> <newhash> [<newcontent-type>]",
Description: `
Update the hash for an already existing path in the manifest
`,
},
{
Action: remove,
Name: "remove",
Usage: "removes a path from the manifest",
ArgsUsage: "<MANIFEST> <path>",
Description: `
Removes a path from the manifest
`,
},
},
},
{
Action: cleandb,
Name: "cleandb",
Usage: "Cleans database of corrupted entries",
ArgsUsage: " ",
Description: `
Cleans database of corrupted entries.
`,
},
}
@@ -226,6 +273,14 @@ func bzzd(ctx *cli.Context) error {
stack := utils.MakeNode(ctx, clientIdentifier, gitCommit)
registerBzzService(ctx, stack)
utils.StartNode(stack)
go func() {
sigc := make(chan os.Signal, 1)
signal.Notify(sigc, syscall.SIGTERM)
defer signal.Stop(sigc)
<-sigc
glog.V(logger.Info).Infoln("Got sigterm, shutting down...")
stack.Stop()
}()
networkId := ctx.GlobalUint64(SwarmNetworkIdFlag.Name)
// Add bootnodes as initial peers.
if ctx.GlobalIsSet(utils.BootnodesFlag.Name) {
@@ -242,6 +297,7 @@ func bzzd(ctx *cli.Context) error {
}
func registerBzzService(ctx *cli.Context, stack *node.Node) {
prvkey := getAccount(ctx, stack)
chbookaddr := common.HexToAddress(ctx.GlobalString(ChequebookAddrFlag.Name))
@@ -249,6 +305,7 @@ func registerBzzService(ctx *cli.Context, stack *node.Node) {
if bzzdir == "" {
bzzdir = stack.InstanceDir()
}
bzzconfig, err := bzzapi.NewConfig(bzzdir, chbookaddr, prvkey, ctx.GlobalUint64(SwarmNetworkIdFlag.Name))
if err != nil {
utils.Fatalf("unable to configure swarm: %v", err)
@@ -280,6 +337,7 @@ func registerBzzService(ctx *cli.Context, stack *node.Node) {
func getAccount(ctx *cli.Context, stack *node.Node) *ecdsa.PrivateKey {
keyid := ctx.GlobalString(SwarmAccountFlag.Name)
if keyid == "" {
utils.Fatalf("Option %q is required", SwarmAccountFlag.Name)
}
@@ -289,29 +347,36 @@ func getAccount(ctx *cli.Context, stack *node.Node) *ecdsa.PrivateKey {
return key
}
// Otherwise try getting it from the keystore.
return decryptStoreAccount(stack.AccountManager(), keyid)
am := stack.AccountManager()
ks := am.Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
return decryptStoreAccount(ks, keyid)
}
func decryptStoreAccount(accman *accounts.Manager, account string) *ecdsa.PrivateKey {
func decryptStoreAccount(ks *keystore.KeyStore, account string) *ecdsa.PrivateKey {
var a accounts.Account
var err error
if common.IsHexAddress(account) {
a, err = accman.Find(accounts.Account{Address: common.HexToAddress(account)})
} else if ix, ixerr := strconv.Atoi(account); ixerr == nil {
a, err = accman.AccountByIndex(ix)
a, err = ks.Find(accounts.Account{Address: common.HexToAddress(account)})
} else if ix, ixerr := strconv.Atoi(account); ixerr == nil && ix > 0 {
if accounts := ks.Accounts(); len(accounts) > ix {
a = accounts[ix]
} else {
err = fmt.Errorf("index %d higher than number of accounts %d", ix, len(accounts))
}
} else {
utils.Fatalf("Can't find swarm account key %s", account)
}
if err != nil {
utils.Fatalf("Can't find swarm account key: %v", err)
}
keyjson, err := ioutil.ReadFile(a.File)
keyjson, err := ioutil.ReadFile(a.URL.Path)
if err != nil {
utils.Fatalf("Can't load swarm account key: %v", err)
}
for i := 1; i <= 3; i++ {
passphrase := promptPassphrase(fmt.Sprintf("Unlocking swarm account %s [%d/3]", a.Address.Hex(), i))
key, err := accounts.DecryptKey(keyjson, passphrase)
key, err := keystore.DecryptKey(keyjson, passphrase)
if err == nil {
return key.PrivateKey
}

360
cmd/swarm/manifest.go Normal file
View File

@@ -0,0 +1,360 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// Command MANIFEST update
package main
import (
"encoding/json"
"fmt"
"log"
"mime"
"path/filepath"
"strings"
"gopkg.in/urfave/cli.v1"
)
func add(ctx *cli.Context) {
args := ctx.Args()
if len(args) < 3 {
log.Fatal("need atleast three arguments <MHASH> <path> <HASH> [<content-type>]")
}
var (
mhash = args[0]
path = args[1]
hash = args[2]
ctype string
wantManifest = ctx.GlobalBoolT(SwarmWantManifestFlag.Name)
mroot manifest
)
if len(args) > 3 {
ctype = args[3]
} else {
ctype = mime.TypeByExtension(filepath.Ext(path))
}
newManifest := addEntryToManifest(ctx, mhash, path, hash, ctype)
fmt.Println(newManifest)
if !wantManifest {
// Print the manifest. This is the only output to stdout.
mrootJSON, _ := json.MarshalIndent(mroot, "", " ")
fmt.Println(string(mrootJSON))
return
}
}
func update(ctx *cli.Context) {
args := ctx.Args()
if len(args) < 3 {
log.Fatal("need atleast three arguments <MHASH> <path> <HASH>")
}
var (
mhash = args[0]
path = args[1]
hash = args[2]
ctype string
wantManifest = ctx.GlobalBoolT(SwarmWantManifestFlag.Name)
mroot manifest
)
if len(args) > 3 {
ctype = args[3]
} else {
ctype = mime.TypeByExtension(filepath.Ext(path))
}
newManifest := updateEntryInManifest(ctx, mhash, path, hash, ctype)
fmt.Println(newManifest)
if !wantManifest {
// Print the manifest. This is the only output to stdout.
mrootJSON, _ := json.MarshalIndent(mroot, "", " ")
fmt.Println(string(mrootJSON))
return
}
}
func remove(ctx *cli.Context) {
args := ctx.Args()
if len(args) < 2 {
log.Fatal("need atleast two arguments <MHASH> <path>")
}
var (
mhash = args[0]
path = args[1]
wantManifest = ctx.GlobalBoolT(SwarmWantManifestFlag.Name)
mroot manifest
)
newManifest := removeEntryFromManifest(ctx, mhash, path)
fmt.Println(newManifest)
if !wantManifest {
// Print the manifest. This is the only output to stdout.
mrootJSON, _ := json.MarshalIndent(mroot, "", " ")
fmt.Println(string(mrootJSON))
return
}
}
func addEntryToManifest(ctx *cli.Context, mhash, path, hash, ctype string) string {
var (
bzzapi = strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
client = &client{api: bzzapi}
longestPathEntry = manifestEntry{
Path: "",
Hash: "",
ContentType: "",
}
)
mroot, err := client.downloadManifest(mhash)
if err != nil {
log.Fatalln("manifest download failed:", err)
}
//TODO: check if the "hash" to add is valid and present in swarm
_, err = client.downloadManifest(hash)
if err != nil {
log.Fatalln("hash to add is not present:", err)
}
// See if the path is already present in this manifest, or whether we have to dig into a child manifest
for _, entry := range mroot.Entries {
if path == entry.Path {
log.Fatal(path, " already present, not adding anything")
} else {
if entry.ContentType == "application/bzz-manifest+json" {
prfxlen := strings.HasPrefix(path, entry.Path)
if prfxlen && len(path) > len(longestPathEntry.Path) {
longestPathEntry = entry
}
}
}
}
if longestPathEntry.Path != "" {
// Load the child Manifest and add the entry there
newPath := path[len(longestPathEntry.Path):]
newHash := addEntryToManifest(ctx, longestPathEntry.Hash, newPath, hash, ctype)
// Replace the hash for parent Manifests
newMRoot := manifest{}
for _, entry := range mroot.Entries {
if longestPathEntry.Path == entry.Path {
entry.Hash = newHash
}
newMRoot.Entries = append(newMRoot.Entries, entry)
}
mroot = newMRoot
} else {
// Add the entry in the leaf Manifest
newEntry := manifestEntry{
Path: path,
Hash: hash,
ContentType: ctype,
}
mroot.Entries = append(mroot.Entries, newEntry)
}
newManifestHash, err := client.uploadManifest(mroot)
if err != nil {
log.Fatalln("manifest upload failed:", err)
}
return newManifestHash
}
func updateEntryInManifest(ctx *cli.Context, mhash, path, hash, ctype string) string {
var (
bzzapi = strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
client = &client{api: bzzapi}
newEntry = manifestEntry{
Path: "",
Hash: "",
ContentType: "",
}
longestPathEntry = manifestEntry{
Path: "",
Hash: "",
ContentType: "",
}
)
mroot, err := client.downloadManifest(mhash)
if err != nil {
log.Fatalln("manifest download failed:", err)
}
//TODO: check if the "hash" with which to update is valid and present in swarm
// See if the path is already present in this manifest, or whether we have to dig into a child manifest
for _, entry := range mroot.Entries {
if path == entry.Path {
newEntry = entry
} else {
if entry.ContentType == "application/bzz-manifest+json" {
prfxlen := strings.HasPrefix(path, entry.Path)
if prfxlen && len(path) > len(longestPathEntry.Path) {
longestPathEntry = entry
}
}
}
}
if longestPathEntry.Path == "" && newEntry.Path == "" {
log.Fatal(path, " Path not present in the Manifest, not setting anything")
}
if longestPathEntry.Path != "" {
// Load the child Manifest and update the entry there
newPath := path[len(longestPathEntry.Path):]
newHash := updateEntryInManifest(ctx, longestPathEntry.Hash, newPath, hash, ctype)
// Replace the hash for parent Manifests
newMRoot := manifest{}
for _, entry := range mroot.Entries {
if longestPathEntry.Path == entry.Path {
entry.Hash = newHash
}
newMRoot.Entries = append(newMRoot.Entries, entry)
}
mroot = newMRoot
}
if newEntry.Path != "" {
// Replace the hash for leaf Manifest
newMRoot := manifest{}
for _, entry := range mroot.Entries {
if newEntry.Path == entry.Path {
myEntry := manifestEntry{
Path: entry.Path,
Hash: hash,
ContentType: ctype,
}
newMRoot.Entries = append(newMRoot.Entries, myEntry)
} else {
newMRoot.Entries = append(newMRoot.Entries, entry)
}
}
mroot = newMRoot
}
newManifestHash, err := client.uploadManifest(mroot)
if err != nil {
log.Fatalln("manifest upload failed:", err)
}
return newManifestHash
}
func removeEntryFromManifest(ctx *cli.Context, mhash, path string) string {
var (
bzzapi = strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
client = &client{api: bzzapi}
entryToRemove = manifestEntry{
Path: "",
Hash: "",
ContentType: "",
}
longestPathEntry = manifestEntry{
Path: "",
Hash: "",
ContentType: "",
}
)
mroot, err := client.downloadManifest(mhash)
if err != nil {
log.Fatalln("manifest download failed:", err)
}
// See if the path is present in this manifest, or whether we have to dig into a child manifest
for _, entry := range mroot.Entries {
if path == entry.Path {
entryToRemove = entry
} else {
if entry.ContentType == "application/bzz-manifest+json" {
prfxlen := strings.HasPrefix(path, entry.Path)
if prfxlen && len(path) > len(longestPathEntry.Path) {
longestPathEntry = entry
}
}
}
}
if longestPathEntry.Path == "" && entryToRemove.Path == "" {
log.Fatal(path, "Path not present in the Manifest, not removing anything")
}
if longestPathEntry.Path != "" {
// Load the child Manifest and remove the entry there
newPath := path[len(longestPathEntry.Path):]
newHash := removeEntryFromManifest(ctx, longestPathEntry.Hash, newPath)
// Replace the hash for parent Manifests
newMRoot := manifest{}
for _, entry := range mroot.Entries {
if longestPathEntry.Path == entry.Path {
entry.Hash = newHash
}
newMRoot.Entries = append(newMRoot.Entries, entry)
}
mroot = newMRoot
}
if entryToRemove.Path != "" {
// remove the entry in this Manifest
newMRoot := manifest{}
for _, entry := range mroot.Entries {
if entryToRemove.Path != entry.Path {
newMRoot.Entries = append(newMRoot.Entries, entry)
}
}
mroot = newMRoot
}
newManifestHash, err := client.uploadManifest(mroot)
if err != nil {
log.Fatalln("manifest upload failed:", err)
}
return newManifestHash
}

View File

@@ -229,3 +229,29 @@ func (c *client) postRaw(mimetype string, size int64, body io.ReadCloser) (strin
content, err := ioutil.ReadAll(resp.Body)
return string(content), err
}
func (c *client) downloadManifest(mhash string) (manifest, error) {
mroot := manifest{}
req, err := http.NewRequest("GET", c.api+"/bzzr:/"+mhash, nil)
if err != nil {
return mroot, err
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
return mroot, err
}
defer resp.Body.Close()
if resp.StatusCode >= 400 {
return mroot, fmt.Errorf("bad status: %s", resp.Status)
}
content, err := ioutil.ReadAll(resp.Body)
err = json.Unmarshal(content, &mroot)
if err != nil {
return mroot, fmt.Errorf("Manifest %v is malformed: %v", mhash, err)
}
return mroot, err
}

View File

@@ -23,8 +23,6 @@ import (
"os/user"
"path"
"strings"
"gopkg.in/urfave/cli.v1"
)
// Custom type which is registered in the flags library which cli uses for
@@ -46,7 +44,6 @@ func (self *DirectoryString) Set(value string) error {
// Custom cli.Flag type which expand the received string to an absolute path.
// e.g. ~/.ethereum -> /home/username/.ethereum
type DirectoryFlag struct {
cli.GenericFlag
Name string
Value DirectoryString
Usage string
@@ -54,15 +51,10 @@ type DirectoryFlag struct {
}
func (self DirectoryFlag) String() string {
var fmtString string
fmtString = "%s %v\t%v"
fmtString := "%s %v\t%v"
if len(self.Value.Value) > 0 {
fmtString = "%s \"%v\"\t%v"
} else {
fmtString = "%s %v\t%v"
}
return withEnvHint(self.EnvVar, fmt.Sprintf(fmtString, prefixedNames(self.Name), self.Value.Value, self.Usage))
}
@@ -122,7 +114,7 @@ func withEnvHint(envVar, str string) string {
return str + envText
}
func (self DirectoryFlag) getName() string {
func (self DirectoryFlag) GetName() string {
return self.Name
}

View File

@@ -21,7 +21,6 @@ import (
"crypto/ecdsa"
"fmt"
"io/ioutil"
"math"
"math/big"
"os"
"path/filepath"
@@ -31,9 +30,11 @@ import (
"github.com/ethereum/ethash"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth"
"github.com/ethereum/go-ethereum/ethdb"
@@ -116,13 +117,9 @@ var (
}
NetworkIdFlag = cli.IntFlag{
Name: "networkid",
Usage: "Network identifier (integer, 0=Olympic (disused), 1=Frontier, 2=Morden (disused), 3=Ropsten)",
Usage: "Network identifier (integer, 1=Frontier, 2=Morden (disused), 3=Ropsten)",
Value: eth.NetworkId,
}
OlympicFlag = cli.BoolFlag{
Name: "olympic",
Usage: "Olympic network: pre-configured pre-release test network",
}
TestNetFlag = cli.BoolFlag{
Name: "testnet",
Usage: "Ropsten network: pre-configured test network",
@@ -135,10 +132,6 @@ var (
Name: "identity",
Usage: "Custom node name",
}
NatspecEnabledFlag = cli.BoolFlag{
Name: "natspec",
Usage: "Enable NatSpec confirmation notice",
}
DocRootFlag = DirectoryFlag{
Name: "docroot",
Usage: "Document Root for HTTPClient file scheme",
@@ -235,6 +228,10 @@ var (
Name: "jitvm",
Usage: "Enable the JIT VM",
}
VMEnableDebugFlag = cli.BoolFlag{
Name: "vmdebug",
Usage: "Record information useful for VM and contract debugging",
}
// Logging and debug settings
EthStatsURLFlag = cli.StringFlag{
Name: "ethstats",
@@ -337,10 +334,10 @@ var (
Usage: "Network listening port",
Value: 30303,
}
BootnodesFlag = cli.StringSliceFlag{
BootnodesFlag = cli.StringFlag{
Name: "bootnodes",
Usage: "Comma separated enode URLs for P2P discovery bootstrap",
Value: nil,
Value: "",
}
NodeKeyFileFlag = cli.StringFlag{
Name: "nodekey",
@@ -487,7 +484,7 @@ func makeNodeUserIdent(ctx *cli.Context) string {
func MakeBootstrapNodes(ctx *cli.Context) []*discover.Node {
urls := params.MainnetBootnodes
if ctx.GlobalIsSet(BootnodesFlag.Name) {
urls = ctx.GlobalStringSlice(BootnodesFlag.Name)
urls = strings.Split(ctx.GlobalString(BootnodesFlag.Name), ",")
} else if ctx.GlobalBool(TestNetFlag.Name) {
urls = params.TestnetBootnodes
}
@@ -509,7 +506,7 @@ func MakeBootstrapNodes(ctx *cli.Context) []*discover.Node {
func MakeBootstrapNodesV5(ctx *cli.Context) []*discv5.Node {
urls := params.DiscoveryV5Bootnodes
if ctx.GlobalIsSet(BootnodesFlag.Name) {
urls = ctx.GlobalStringSlice(BootnodesFlag.Name)
urls = strings.Split(ctx.GlobalString(BootnodesFlag.Name), ",")
}
bootnodes := make([]*discv5.Node, 0, len(urls))
@@ -591,23 +588,27 @@ func MakeDatabaseHandles() int {
// MakeAddress converts an account specified directly as a hex encoded string or
// a key index in the key store to an internal account representation.
func MakeAddress(accman *accounts.Manager, account string) (accounts.Account, error) {
func MakeAddress(ks *keystore.KeyStore, account string) (accounts.Account, error) {
// If the specified account is a valid address, return it
if common.IsHexAddress(account) {
return accounts.Account{Address: common.HexToAddress(account)}, nil
}
// Otherwise try to interpret the account as a keystore index
index, err := strconv.Atoi(account)
if err != nil {
if err != nil || index < 0 {
return accounts.Account{}, fmt.Errorf("invalid account address or index %q", account)
}
return accman.AccountByIndex(index)
accs := ks.Accounts()
if len(accs) <= index {
return accounts.Account{}, fmt.Errorf("index %d higher than number of accounts %d", index, len(accs))
}
return accs[index], nil
}
// MakeEtherbase retrieves the etherbase either from the directly specified
// command line flags or from the keystore if CLI indexed.
func MakeEtherbase(accman *accounts.Manager, ctx *cli.Context) common.Address {
accounts := accman.Accounts()
func MakeEtherbase(ks *keystore.KeyStore, ctx *cli.Context) common.Address {
accounts := ks.Accounts()
if !ctx.GlobalIsSet(EtherbaseFlag.Name) && len(accounts) == 0 {
glog.V(logger.Error).Infoln("WARNING: No etherbase set and no accounts found as default")
return common.Address{}
@@ -617,7 +618,7 @@ func MakeEtherbase(accman *accounts.Manager, ctx *cli.Context) common.Address {
return common.Address{}
}
// If the specified etherbase is a valid address, return it
account, err := MakeAddress(accman, etherbase)
account, err := MakeAddress(ks, etherbase)
if err != nil {
Fatalf("Option %q: %v", EtherbaseFlag.Name, err)
}
@@ -658,6 +659,10 @@ func MakeNode(ctx *cli.Context, name, gitCommit string) *node.Node {
vsn += "-" + gitCommit[:8]
}
// if we're running a light client or server, force enable the v5 peer discovery unless it is explicitly disabled with --nodiscover
// note that explicitly specifying --v5disc overrides --nodiscover, in which case the latter only disables v4 discovery
forceV5Discovery := (ctx.GlobalBool(LightModeFlag.Name) || ctx.GlobalInt(LightServFlag.Name) > 0) && !ctx.GlobalBool(NoDiscoverFlag.Name)
config := &node.Config{
DataDir: MakeDataDir(ctx),
KeyStoreDir: ctx.GlobalString(KeyStoreDirFlag.Name),
@@ -666,8 +671,8 @@ func MakeNode(ctx *cli.Context, name, gitCommit string) *node.Node {
Name: name,
Version: vsn,
UserIdent: makeNodeUserIdent(ctx),
NoDiscovery: ctx.GlobalBool(NoDiscoverFlag.Name) || ctx.GlobalBool(LightModeFlag.Name),
DiscoveryV5: ctx.GlobalBool(DiscoveryV5Flag.Name) || ctx.GlobalBool(LightModeFlag.Name) || ctx.GlobalInt(LightServFlag.Name) > 0,
NoDiscovery: ctx.GlobalBool(NoDiscoverFlag.Name) || ctx.GlobalBool(LightModeFlag.Name), // always disable v4 discovery in light client mode
DiscoveryV5: ctx.GlobalBool(DiscoveryV5Flag.Name) || forceV5Discovery,
DiscoveryV5Addr: MakeDiscoveryV5Address(ctx),
BootstrapNodes: MakeBootstrapNodes(ctx),
BootstrapNodesV5: MakeBootstrapNodesV5(ctx),
@@ -712,7 +717,7 @@ func MakeNode(ctx *cli.Context, name, gitCommit string) *node.Node {
// given node.
func RegisterEthService(ctx *cli.Context, stack *node.Node, extra []byte) {
// Avoid conflicting network flags
networks, netFlags := 0, []cli.BoolFlag{DevModeFlag, TestNetFlag, OlympicFlag}
networks, netFlags := 0, []cli.BoolFlag{DevModeFlag, TestNetFlag}
for _, flag := range netFlags {
if ctx.GlobalBool(flag.Name) {
networks++
@@ -721,9 +726,10 @@ func RegisterEthService(ctx *cli.Context, stack *node.Node, extra []byte) {
if networks > 1 {
Fatalf("The %v flags are mutually exclusive", netFlags)
}
ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
ethConf := &eth.Config{
Etherbase: MakeEtherbase(stack.AccountManager(), ctx),
Etherbase: MakeEtherbase(ks, ctx),
ChainConfig: MakeChainConfig(ctx, stack),
FastSync: ctx.GlobalBool(FastSyncFlag.Name),
LightMode: ctx.GlobalBool(LightModeFlag.Name),
@@ -735,7 +741,6 @@ func RegisterEthService(ctx *cli.Context, stack *node.Node, extra []byte) {
NetworkId: ctx.GlobalInt(NetworkIdFlag.Name),
MinerThreads: ctx.GlobalInt(MinerThreadsFlag.Name),
ExtraData: MakeMinerExtra(extra, ctx),
NatSpec: ctx.GlobalBool(NatspecEnabledFlag.Name),
DocRoot: ctx.GlobalString(DocRootFlag.Name),
GasPrice: common.String2Big(ctx.GlobalString(GasPriceFlag.Name)),
GpoMinGasPrice: common.String2Big(ctx.GlobalString(GpoMinGasPriceFlag.Name)),
@@ -746,16 +751,11 @@ func RegisterEthService(ctx *cli.Context, stack *node.Node, extra []byte) {
GpobaseCorrectionFactor: ctx.GlobalInt(GpobaseCorrectionFactorFlag.Name),
SolcPath: ctx.GlobalString(SolcPathFlag.Name),
AutoDAG: ctx.GlobalBool(AutoDAGFlag.Name) || ctx.GlobalBool(MiningEnabledFlag.Name),
EnablePreimageRecording: ctx.GlobalBool(VMEnableDebugFlag.Name),
}
// Override any default configs in dev mode or the test net
switch {
case ctx.GlobalBool(OlympicFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
ethConf.NetworkId = 1
}
ethConf.Genesis = core.OlympicGenesisBlock()
case ctx.GlobalBool(TestNetFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
ethConf.NetworkId = 3
@@ -763,7 +763,7 @@ func RegisterEthService(ctx *cli.Context, stack *node.Node, extra []byte) {
ethConf.Genesis = core.DefaultTestnetGenesisBlock()
case ctx.GlobalBool(DevModeFlag.Name):
ethConf.Genesis = core.OlympicGenesisBlock()
ethConf.Genesis = core.DevGenesisBlock()
if !ctx.GlobalIsSet(GasPriceFlag.Name) {
ethConf.GasPrice = new(big.Int)
}
@@ -820,16 +820,6 @@ func RegisterEthStatsService(stack *node.Node, url string) {
// SetupNetwork configures the system for either the main net or some test network.
func SetupNetwork(ctx *cli.Context) {
switch {
case ctx.GlobalBool(OlympicFlag.Name):
params.DurationLimit = big.NewInt(8)
params.GenesisGasLimit = big.NewInt(3141592)
params.MinGasLimit = big.NewInt(125000)
params.MaximumExtraDataSize = big.NewInt(1024)
NetworkIdFlag.Value = 0
core.BlockReward = big.NewInt(1.5e+18)
core.ExpDiffPeriod = big.NewInt(math.MaxInt64)
}
params.TargetGasLimit = common.String2Big(ctx.GlobalString(TargetGasLimitFlag.Name))
}
@@ -920,13 +910,6 @@ func MakeChain(ctx *cli.Context, stack *node.Node) (chain *core.BlockChain, chai
var err error
chainDb = MakeChainDatabase(ctx, stack)
if ctx.GlobalBool(OlympicFlag.Name) {
_, err := core.WriteOlympicGenesisBlock(chainDb)
if err != nil {
glog.Fatalln(err)
}
}
if ctx.GlobalBool(TestNetFlag.Name) {
_, err := core.WriteTestNetGenesisBlock(chainDb)
if err != nil {
@@ -940,7 +923,7 @@ func MakeChain(ctx *cli.Context, stack *node.Node) (chain *core.BlockChain, chai
if !ctx.GlobalBool(FakePoWFlag.Name) {
pow = ethash.New()
}
chain, err = core.NewBlockChain(chainDb, chainConfig, pow, new(event.TypeMux))
chain, err = core.NewBlockChain(chainDb, chainConfig, pow, new(event.TypeMux), vm.Config{EnablePreimageRecording: ctx.GlobalBool(VMEnableDebugFlag.Name)})
if err != nil {
Fatalf("Could not start chainmanager: %v", err)
}

542
cmd/wnode/main.go Normal file
View File

@@ -0,0 +1,542 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// This is a simple Whisper node. It could be used as a stand-alone bootstrap node.
// It can also be used for various test and diagnostics purposes.
package main
import (
"bufio"
"crypto/ecdsa"
"crypto/sha1"
"crypto/sha256"
"crypto/sha512"
"encoding/binary"
"encoding/hex"
"flag"
"fmt"
"os"
"strconv"
"strings"
"time"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/console"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/logger"
"github.com/ethereum/go-ethereum/logger/glog"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/nat"
"github.com/ethereum/go-ethereum/whisper/mailserver"
whisper "github.com/ethereum/go-ethereum/whisper/whisperv5"
"golang.org/x/crypto/pbkdf2"
)
const quitCommand = "~Q"
// singletons
var (
server *p2p.Server
shh *whisper.Whisper
done chan struct{}
mailServer mailserver.WMailServer
input = bufio.NewReader(os.Stdin)
)
// encryption
var (
symKey []byte
pub *ecdsa.PublicKey
asymKey *ecdsa.PrivateKey
nodeid *ecdsa.PrivateKey
topic whisper.TopicType
filterID uint32
msPassword string
)
// cmd arguments
var (
echoMode = flag.Bool("e", false, "echo mode: prints some arguments for diagnostics")
bootstrapMode = flag.Bool("b", false, "bootstrap node: don't actively connect to peers, wait for incoming connections")
forwarderMode = flag.Bool("f", false, "forwarder mode: only forward messages, neither send nor decrypt messages")
mailServerMode = flag.Bool("s", false, "mail server mode: delivers expired messages on demand")
requestMail = flag.Bool("r", false, "request expired messages from the bootstrap server")
asymmetricMode = flag.Bool("a", false, "use asymmetric encryption")
testMode = flag.Bool("t", false, "use of predefined parameters for diagnostics")
generateKey = flag.Bool("k", false, "generate and show the private key")
argTTL = flag.Uint("ttl", 30, "time-to-live for messages in seconds")
argWorkTime = flag.Uint("work", 5, "work time in seconds")
argPoW = flag.Float64("pow", whisper.MinimumPoW, "PoW for normal messages in float format (e.g. 2.7)")
argServerPoW = flag.Float64("mspow", whisper.MinimumPoW, "PoW requirement for Mail Server request")
argIP = flag.String("ip", "", "IP address and port of this node (e.g. 127.0.0.1:30303)")
argSalt = flag.String("salt", "", "salt (for topic and key derivation)")
argPub = flag.String("pub", "", "public key for asymmetric encryption")
argDBPath = flag.String("dbpath", "", "path to the server's DB directory")
argIDFile = flag.String("idfile", "", "file name with node id (private key)")
argEnode = flag.String("boot", "", "bootstrap node you want to connect to (e.g. enode://e454......08d50@52.176.211.200:16428)")
argTopic = flag.String("topic", "", "topic in hexadecimal format (e.g. 70a4beef)")
)
func main() {
processArgs()
initialize()
run()
}
func processArgs() {
flag.Parse()
if len(*argIDFile) > 0 {
var err error
nodeid, err = crypto.LoadECDSA(*argIDFile)
if err != nil {
utils.Fatalf("Failed to load file [%s]: %s.", *argIDFile, err)
}
}
const enodePrefix = "enode://"
if len(*argEnode) > 0 {
if (*argEnode)[:len(enodePrefix)] != enodePrefix {
*argEnode = enodePrefix + *argEnode
}
}
if len(*argTopic) > 0 {
x, err := hex.DecodeString(*argTopic)
if err != nil {
utils.Fatalf("Failed to parse the topic: %s", err)
}
topic = whisper.BytesToTopic(x)
}
if *asymmetricMode && len(*argPub) > 0 {
pub = crypto.ToECDSAPub(common.FromHex(*argPub))
if !isKeyValid(pub) {
utils.Fatalf("invalid public key")
}
}
if *echoMode {
echo()
}
}
func echo() {
fmt.Printf("ttl = %d \n", *argTTL)
fmt.Printf("workTime = %d \n", *argWorkTime)
fmt.Printf("pow = %f \n", *argPoW)
fmt.Printf("mspow = %f \n", *argServerPoW)
fmt.Printf("ip = %s \n", *argIP)
fmt.Printf("salt = %s \n", *argSalt)
fmt.Printf("pub = %s \n", common.ToHex(crypto.FromECDSAPub(pub)))
fmt.Printf("idfile = %s \n", *argIDFile)
fmt.Printf("dbpath = %s \n", *argDBPath)
fmt.Printf("boot = %s \n", *argEnode)
}
func initialize() {
glog.SetV(logger.Warn)
glog.SetToStderr(true)
done = make(chan struct{})
var peers []*discover.Node
var err error
if *generateKey {
key, err := crypto.GenerateKey()
if err != nil {
utils.Fatalf("Failed to generate private key: %s", err)
}
k := hex.EncodeToString(crypto.FromECDSA(key))
fmt.Printf("Random private key: %s \n", k)
os.Exit(0)
}
if *testMode {
password := []byte("test password for symmetric encryption")
salt := []byte("test salt for symmetric encryption")
symKey = pbkdf2.Key(password, salt, 64, 32, sha256.New)
topic = whisper.TopicType{0xFF, 0xFF, 0xFF, 0xFF}
msPassword = "mail server test password"
}
if *bootstrapMode {
if len(*argIP) == 0 {
argIP = scanLineA("Please enter your IP and port (e.g. 127.0.0.1:30348): ")
}
} else {
if len(*argEnode) == 0 {
argEnode = scanLineA("Please enter the peer's enode: ")
}
peer := discover.MustParseNode(*argEnode)
peers = append(peers, peer)
}
if *mailServerMode {
if len(msPassword) == 0 {
msPassword, err = console.Stdin.PromptPassword("Please enter the Mail Server password: ")
if err != nil {
utils.Fatalf("Failed to read Mail Server password: %s", err)
}
}
shh = whisper.NewWhisper(&mailServer)
mailServer.Init(shh, *argDBPath, msPassword, *argServerPoW)
} else {
shh = whisper.NewWhisper(nil)
}
asymKey = shh.NewIdentity()
if nodeid == nil {
nodeid = shh.NewIdentity()
}
maxPeers := 80
if *bootstrapMode {
maxPeers = 800
}
server = &p2p.Server{
Config: p2p.Config{
PrivateKey: nodeid,
MaxPeers: maxPeers,
Name: common.MakeName("whisper-go", "5.0"),
Protocols: shh.Protocols(),
ListenAddr: *argIP,
NAT: nat.Any(),
BootstrapNodes: peers,
StaticNodes: peers,
TrustedNodes: peers,
},
}
}
func startServer() {
err := server.Start()
if err != nil {
utils.Fatalf("Failed to start Whisper peer: %s.", err)
}
fmt.Printf("my public key: %s \n", common.ToHex(crypto.FromECDSAPub(&asymKey.PublicKey)))
fmt.Println(server.NodeInfo().Enode)
if *bootstrapMode {
configureNode()
fmt.Println("Bootstrap Whisper node started")
} else {
fmt.Println("Whisper node started")
// first see if we can establish connection, then ask for user input
waitForConnection(true)
configureNode()
}
if !*forwarderMode {
fmt.Printf("Please type the message. To quit type: '%s'\n", quitCommand)
}
}
func isKeyValid(k *ecdsa.PublicKey) bool {
return k.X != nil && k.Y != nil
}
func configureNode() {
var err error
var p2pAccept bool
if *forwarderMode {
return
}
if *asymmetricMode {
if len(*argPub) == 0 {
s := scanLine("Please enter the peer's public key: ")
pub = crypto.ToECDSAPub(common.FromHex(s))
if !isKeyValid(pub) {
utils.Fatalf("Error: invalid public key")
}
}
}
if *requestMail {
p2pAccept = true
if len(msPassword) == 0 {
msPassword, err = console.Stdin.PromptPassword("Please enter the Mail Server password: ")
if err != nil {
utils.Fatalf("Failed to read Mail Server password: %s", err)
}
}
}
if !*asymmetricMode && !*forwarderMode && !*testMode {
pass, err := console.Stdin.PromptPassword("Please enter the password: ")
if err != nil {
utils.Fatalf("Failed to read passphrase: %v", err)
}
if len(*argSalt) == 0 {
argSalt = scanLineA("Please enter the salt: ")
}
symKey = pbkdf2.Key([]byte(pass), []byte(*argSalt), 65356, 32, sha256.New)
if len(*argTopic) == 0 {
generateTopic([]byte(pass), []byte(*argSalt))
}
}
if *mailServerMode {
if len(*argDBPath) == 0 {
argDBPath = scanLineA("Please enter the path to DB file: ")
}
}
filter := whisper.Filter{
KeySym: symKey,
KeyAsym: asymKey,
Topics: []whisper.TopicType{topic},
AcceptP2P: p2pAccept,
}
filterID = shh.Watch(&filter)
fmt.Printf("Filter is configured for the topic: %x \n", topic)
}
func generateTopic(password, salt []byte) {
const rounds = 4000
const size = 128
x1 := pbkdf2.Key(password, salt, rounds, size, sha512.New)
x2 := pbkdf2.Key(password, salt, rounds, size, sha1.New)
x3 := pbkdf2.Key(x1, x2, rounds, size, sha256.New)
for i := 0; i < size; i++ {
topic[i%whisper.TopicLength] ^= x3[i]
}
}
func waitForConnection(timeout bool) {
var cnt int
var connected bool
for !connected {
time.Sleep(time.Millisecond * 50)
connected = server.PeerCount() > 0
if timeout {
cnt++
if cnt > 1000 {
utils.Fatalf("Timeout expired, failed to connect")
}
}
}
fmt.Println("Connected to peer.")
}
func run() {
defer mailServer.Close()
startServer()
defer server.Stop()
shh.Start(nil)
defer shh.Stop()
if !*forwarderMode {
go messageLoop()
}
if *requestMail {
requestExpiredMessagesLoop()
} else {
sendLoop()
}
}
func sendLoop() {
for {
s := scanLine("")
if s == quitCommand {
fmt.Println("Quit command received")
close(done)
break
}
sendMsg([]byte(s))
if *asymmetricMode {
// print your own message for convenience,
// because in asymmetric mode it is impossible to decrypt it
hour, min, sec := time.Now().Clock()
from := crypto.PubkeyToAddress(asymKey.PublicKey)
fmt.Printf("\n%02d:%02d:%02d <%x>: %s\n", hour, min, sec, from, s)
}
}
}
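// scanLine prints the prompt (if any) and reads a single line from stdin,
// stripping the trailing newline.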
func scanLine(prompt string) string {
if len(prompt) > 0 {
fmt.Print(prompt)
}
txt, err := input.ReadString('\n')
if err != nil {
utils.Fatalf("input error: %s", err)
}
txt = strings.TrimRight(txt, "\n\r")
return txt
}
func scanLineA(prompt string) *string {
s := scanLine(prompt)
return &s
}
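// scanUint reads a line from stdin and parses it as an unsigned 32-bit integer.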
func scanUint(prompt string) uint32 {
s := scanLine(prompt)
i, err := strconv.Atoi(s)
if err != nil {
utils.Fatalf("Fail to parse the lower time limit: %s", err)
}
return uint32(i)
}
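// sendMsg wraps the payload into a whisper envelope using the configured keys,
// topic, TTL and PoW settings, and posts it to the whisper service.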
func sendMsg(payload []byte) {
params := whisper.MessageParams{
Src: asymKey,
Dst: pub,
KeySym: symKey,
Payload: payload,
Topic: topic,
TTL: uint32(*argTTL),
PoW: *argPoW,
WorkTime: uint32(*argWorkTime),
}
msg := whisper.NewSentMessage(&params)
envelope, err := msg.Wrap(&params)
if err != nil {
fmt.Printf("failed to seal message: %v \n", err)
return
}
err = shh.Send(envelope)
if err != nil {
fmt.Printf("failed to send message: %v \n", err)
}
}
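// messageLoop polls the installed filter every 50ms and prints any newly
// retrieved messages until the done channel is closed.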
func messageLoop() {
f := shh.GetFilter(filterID)
if f == nil {
utils.Fatalf("filter is not installed")
}
ticker := time.NewTicker(time.Millisecond * 50)
for {
select {
case <-ticker.C:
messages := f.Retrieve()
for _, msg := range messages {
printMessageInfo(msg)
}
case <-done:
return
}
}
}
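// printMessageInfo prints a received message with its timestamp and sender
// address, using <addr> for our own messages and [addr] for peer messages.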
func printMessageInfo(msg *whisper.ReceivedMessage) {
timestamp := fmt.Sprintf("%d", msg.Sent) // unix timestamp for diagnostics
text := string(msg.Payload)
var address common.Address
if msg.Src != nil {
address = crypto.PubkeyToAddress(*msg.Src)
}
if whisper.IsPubKeyEqual(msg.Src, &asymKey.PublicKey) {
fmt.Printf("\n%s <%x>: %s\n", timestamp, address, text) // message from myself
} else {
fmt.Printf("\n%s [%x]: %s\n", timestamp, address, text) // message from a peer
}
}
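// requestExpiredMessagesLoop repeatedly asks the trusted Mail Server peer for
// archived messages in a user-supplied time range (and optional topic) via a
// symmetrically encrypted p2p request envelope.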
func requestExpiredMessagesLoop() {
var key, peerID []byte
var timeLow, timeUpp uint32
var t string
var xt, empty whisper.TopicType
err := shh.AddSymKey(mailserver.MailServerKeyName, []byte(msPassword))
if err != nil {
utils.Fatalf("Failed to create symmetric key for mail request: %s", err)
}
key = shh.GetSymKey(mailserver.MailServerKeyName)
peerID = extractIdFromEnode(*argEnode)
shh.MarkPeerTrusted(peerID)
for {
timeLow = scanUint("Please enter the lower limit of the time range (unix timestamp): ")
timeUpp = scanUint("Please enter the upper limit of the time range (unix timestamp): ")
t = scanLine("Please enter the topic (hexadecimal): ")
if len(t) >= whisper.TopicLength*2 {
x, err := hex.DecodeString(t)
if err != nil {
utils.Fatalf("Failed to parse the topic: %s", err)
}
xt = whisper.BytesToTopic(x)
}
if timeUpp == 0 {
timeUpp = 0xFFFFFFFF
}
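// the request payload is 8 bytes of big-endian time bounds (lower, then upper),
// optionally followed by the 4-byte topic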
data := make([]byte, 8+whisper.TopicLength)
binary.BigEndian.PutUint32(data, timeLow)
binary.BigEndian.PutUint32(data[4:], timeUpp)
copy(data[8:], xt[:])
if xt == empty {
data = data[:8]
}
var params whisper.MessageParams
params.PoW = *argServerPoW
params.Payload = data
params.KeySym = key
params.Src = nodeid
params.WorkTime = 5
msg := whisper.NewSentMessage(&params)
env, err := msg.Wrap(&params)
if err != nil {
utils.Fatalf("Wrap failed: %s", err)
}
err = shh.RequestHistoricMessages(peerID, env)
if err != nil {
utils.Fatalf("Failed to send P2P message: %s", err)
}
time.Sleep(time.Second * 5)
}
}
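// extractIdFromEnode parses an enode URL and returns the raw node ID bytes.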
func extractIdFromEnode(s string) []byte {
n, err := discover.ParseNode(s)
if err != nil {
utils.Fatalf("Failed to parse enode: %s", err)
return nil
}
return n.ID[:]
}

View File

@@ -36,7 +36,7 @@ contract test {
}
}
`
testInfo = `{"source":"\ncontract test {\n /// @notice Will multiply ` + "`a`" + ` by 7.\n function multiply(uint a) returns(uint d) {\n return a * 7;\n }\n}\n","language":"Solidity","languageVersion":"0.1.1","compilerVersion":"0.1.1","compilerOptions":"--binary file --json-abi file --natspec-user file --natspec-dev file --add-std 1","abiDefinition":[{"constant":false,"inputs":[{"name":"a","type":"uint256"}],"name":"multiply","outputs":[{"name":"d","type":"uint256"}],"type":"function"}],"userDoc":{"methods":{"multiply(uint256)":{"notice":"Will multiply ` + "`a`" + ` by 7."}}},"developerDoc":{"methods":{}}}`
testInfo = `{"source":"\ncontract test {\n /// @notice Will multiply ` + "`a`" + ` by 7.\n function multiply(uint a) returns(uint d) {\n return a * 7;\n }\n}\n","language":"Solidity","languageVersion":"0.1.1","compilerVersion":"0.1.1","compilerOptions":"--binary file --json-abi file --add-std 1","abiDefinition":[{"constant":false,"inputs":[{"name":"a","type":"uint256"}],"name":"multiply","outputs":[{"name":"d","type":"uint256"}],"type":"function"}],"userDoc":{"methods":{"multiply(uint256)":{"notice":"Will multiply ` + "`a`" + ` by 7."}}},"developerDoc":{"methods":{}}}`
)
func skipWithoutSolc(t *testing.T) {
@@ -99,7 +99,7 @@ func TestSaveInfo(t *testing.T) {
if string(got) != testInfo {
t.Errorf("incorrect info.json extracted, expected:\n%s\ngot\n%s", testInfo, string(got))
}
wantHash := common.HexToHash("0x9f3803735e7f16120c5a140ab3f02121fd3533a9655c69b33a10e78752cc49b0")
wantHash := common.HexToHash("0x22450a77f0c3ff7a395948d07bc1456881226a1b6325f4189cb5f1254a824080")
if cinfohash != wantHash {
t.Errorf("content hash for info is incorrect. expected %v, got %v", wantHash.Hex(), cinfohash.Hex())
}

View File

@@ -169,12 +169,7 @@ func EncodeBig(bigint *big.Int) string {
if nbits == 0 {
return "0x0"
}
enc := make([]byte, 2, (nbits/8)*2+2)
copy(enc, "0x")
for i := len(bigint.Bits()) - 1; i >= 0; i-- {
enc = strconv.AppendUint(enc, uint64(bigint.Bits()[i]), 16)
}
return string(enc)
return fmt.Sprintf("0x%x", bigint)
}
func has0xPrefix(input string) bool {

View File

@@ -46,6 +46,7 @@ var (
{referenceBig("1"), "0x1"},
{referenceBig("ff"), "0xff"},
{referenceBig("112233445566778899aabbccddeeff"), "0x112233445566778899aabbccddeeff"},
{referenceBig("80a7f2c1bcc396c00"), "0x80a7f2c1bcc396c00"},
}
encodeUint64Tests = []marshalTest{

View File

@@ -109,13 +109,8 @@ func (b *Big) MarshalJSON() ([]byte, error) {
if nbits == 0 {
return jsonZero, nil
}
enc := make([]byte, 3, (nbits/8)*2+4)
copy(enc, `"0x`)
for i := len(bigint.Bits()) - 1; i >= 0; i-- {
enc = strconv.AppendUint(enc, uint64(bigint.Bits()[i]), 16)
}
enc = append(enc, '"')
return enc, nil
enc := fmt.Sprintf(`"0x%x"`, bigint)
return []byte(enc), nil
}
// UnmarshalJSON implements json.Unmarshaler.
@@ -237,7 +232,7 @@ func checkJSON(input []byte) (raw []byte, err error) {
return nil, errNonString
}
if len(input) == 2 {
return nil, ErrEmptyString
return nil, nil // empty strings are allowed
}
if !bytesHave0xPrefix(input[1:]) {
return nil, ErrMissingPrefix
@@ -255,7 +250,7 @@ func checkNumberJSON(input []byte) (raw []byte, err error) {
}
input = input[1 : len(input)-1]
if len(input) == 0 {
return nil, ErrEmptyString
return nil, nil // empty strings are allowed
}
if !bytesHave0xPrefix(input) {
return nil, ErrMissingPrefix

View File

@@ -60,13 +60,13 @@ var unmarshalBytesTests = []unmarshalTest{
{input: "", wantErr: errNonString},
{input: "null", wantErr: errNonString},
{input: "10", wantErr: errNonString},
{input: `""`, wantErr: ErrEmptyString},
{input: `"0"`, wantErr: ErrMissingPrefix},
{input: `"0x0"`, wantErr: ErrOddLength},
{input: `"0xxx"`, wantErr: ErrSyntax},
{input: `"0x01zz01"`, wantErr: ErrSyntax},
// valid encoding
{input: `""`, want: referenceBytes("")},
{input: `"0x"`, want: referenceBytes("")},
{input: `"0x02"`, want: referenceBytes("02")},
{input: `"0X02"`, want: referenceBytes("02")},
@@ -125,7 +125,6 @@ var unmarshalBigTests = []unmarshalTest{
{input: "", wantErr: errNonString},
{input: "null", wantErr: errNonString},
{input: "10", wantErr: errNonString},
{input: `""`, wantErr: ErrEmptyString},
{input: `"0"`, wantErr: ErrMissingPrefix},
{input: `"0x"`, wantErr: ErrEmptyNumber},
{input: `"0x01"`, wantErr: ErrLeadingZero},
@@ -133,6 +132,7 @@ var unmarshalBigTests = []unmarshalTest{
{input: `"0x1zz01"`, wantErr: ErrSyntax},
// valid encoding
{input: `""`, want: big.NewInt(0)},
{input: `"0x0"`, want: big.NewInt(0)},
{input: `"0x2"`, want: big.NewInt(0x2)},
{input: `"0x2F2"`, want: big.NewInt(0x2f2)},
@@ -198,7 +198,6 @@ var unmarshalUint64Tests = []unmarshalTest{
{input: "", wantErr: errNonString},
{input: "null", wantErr: errNonString},
{input: "10", wantErr: errNonString},
{input: `""`, wantErr: ErrEmptyString},
{input: `"0"`, wantErr: ErrMissingPrefix},
{input: `"0x"`, wantErr: ErrEmptyNumber},
{input: `"0x01"`, wantErr: ErrLeadingZero},
@@ -207,6 +206,7 @@ var unmarshalUint64Tests = []unmarshalTest{
{input: `"0x1zz01"`, wantErr: ErrSyntax},
// valid encoding
{input: `""`, want: uint64(0)},
{input: `"0x0"`, want: uint64(0)},
{input: `"0x2"`, want: uint64(0x2)},
{input: `"0x2F2"`, want: uint64(0x2f2)},

View File

@@ -137,10 +137,14 @@ func (c *Console) init(preload []string) error {
continue // manually mapped or ignore
}
if file, ok := web3ext.Modules[api]; ok {
// Load our extension for the module.
if err = c.jsre.Compile(fmt.Sprintf("%s.js", api), file); err != nil {
return fmt.Errorf("%s.js: %v", api, err)
}
flatten += fmt.Sprintf("var %s = web3.%s; ", api, api)
} else if obj, err := c.jsre.Run("web3." + api); err == nil && obj.IsObject() {
// Enable web3.js built-in extension if available.
flatten += fmt.Sprintf("var %s = web3.%s; ", api, api)
}
}
if _, err = c.jsre.Run(flatten); err != nil {
@@ -226,8 +230,8 @@ func (c *Console) AutoCompleteInput(line string, pos int) (string, []string, str
}
// Chunk data to relevant part for autocompletion
// E.g. in case of nested lines eth.getBalance(eth.coinb<tab><tab>
start := 0
for start = pos - 1; start > 0; start-- {
start := pos - 1
for ; start > 0; start-- {
// Skip all methods and namespaces (i.e. including the dot)
if line[start] == '.' || (line[start] >= 'a' && line[start] <= 'z') || (line[start] >= 'A' && line[start] <= 'Z') {
continue

View File

@@ -1,11 +1,11 @@
FROM alpine:3.4
FROM alpine:3.5
RUN \
apk add --update go git make gcc musl-dev && \
apk add --update go git make gcc musl-dev linux-headers ca-certificates && \
git clone --depth 1 https://github.com/ethereum/go-ethereum && \
(cd go-ethereum && make geth) && \
cp go-ethereum/build/bin/geth /geth && \
apk del go git make gcc musl-dev && \
apk del go git make gcc musl-dev linux-headers && \
rm -rf /go-ethereum && rm -rf /var/cache/apk/*
EXPOSE 8545

View File

@@ -1,17 +1,15 @@
FROM ubuntu:wily
MAINTAINER caktux
FROM ubuntu:xenial
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
apt-get upgrade -q -y && \
apt-get dist-upgrade -q -y && \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 923F6CA9 && \
echo "deb http://ppa.launchpad.net/ethereum/ethereum-dev/ubuntu wily main" | tee -a /etc/apt/sources.list.d/ethereum.list && \
apt-get update && \
apt-get install -q -y geth
RUN \
apt-get update && apt-get upgrade -q -y && \
apt-get install -y --no-install-recommends golang git make gcc libc-dev ca-certificates && \
git clone --depth 1 https://github.com/ethereum/go-ethereum && \
(cd go-ethereum && make geth) && \
cp go-ethereum/build/bin/geth /geth && \
apt-get remove -y golang git make gcc libc-dev && apt autoremove -y && apt-get clean && \
rm -rf /go-ethereum
EXPOSE 8545
EXPOSE 30303
ENTRYPOINT ["/usr/bin/geth"]
ENTRYPOINT ["/geth"]

View File

@@ -1,11 +1,11 @@
FROM alpine:3.4
FROM alpine:3.5
RUN \
apk add --update go git make gcc musl-dev && \
apk add --update go git make gcc musl-dev linux-headers ca-certificates && \
git clone --depth 1 --branch release/1.5 https://github.com/ethereum/go-ethereum && \
(cd go-ethereum && make geth) && \
cp go-ethereum/build/bin/geth /geth && \
apk del go git make gcc musl-dev && \
apk del go git make gcc musl-dev linux-headers && \
rm -rf /go-ethereum && rm -rf /var/cache/apk/*
EXPOSE 8545

View File

@@ -1,17 +1,15 @@
FROM ubuntu:wily
MAINTAINER caktux
FROM ubuntu:xenial
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
apt-get upgrade -q -y && \
apt-get dist-upgrade -q -y && \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 923F6CA9 && \
echo "deb http://ppa.launchpad.net/ethereum/ethereum/ubuntu wily main" | tee -a /etc/apt/sources.list.d/ethereum.list && \
apt-get update && \
apt-get install -q -y geth
RUN \
apt-get update && apt-get upgrade -q -y && \
apt-get install -y --no-install-recommends golang git make gcc libc-dev ca-certificates && \
git clone --depth 1 --branch release/1.5 https://github.com/ethereum/go-ethereum && \
(cd go-ethereum && make geth) && \
cp go-ethereum/build/bin/geth /geth && \
apt-get remove -y golang git make gcc libc-dev && apt autoremove -y && apt-get clean && \
rm -rf /go-ethereum
EXPOSE 8545
EXPOSE 30303
ENTRYPOINT ["/usr/bin/geth"]
ENTRYPOINT ["/geth"]

View File

@@ -73,8 +73,8 @@ func TestIssueAndReceive(t *testing.T) {
}
chbook.sent[addr1] = new(big.Int).SetUint64(42)
amount := common.Big1
ch, err := chbook.Issue(addr1, amount)
if err == nil {
if _, err = chbook.Issue(addr1, amount); err == nil {
t.Fatalf("expected insufficient funds error, got none")
}
@@ -83,7 +83,7 @@ func TestIssueAndReceive(t *testing.T) {
t.Fatalf("expected: %v, got %v", "0", chbook.Balance())
}
ch, err = chbook.Issue(addr1, amount)
ch, err := chbook.Issue(addr1, amount)
if err != nil {
t.Fatalf("expected no error, got %v", err)
}
@@ -128,8 +128,8 @@ func TestCheckbookFile(t *testing.T) {
t.Errorf("expected: %v, got %v", "0", chbook.Balance())
}
ch, err := chbook.Issue(addr1, common.Big1)
if err != nil {
var ch *Cheque
if ch, err = chbook.Issue(addr1, common.Big1); err != nil {
t.Fatalf("expected no error, got %v", err)
}
if ch.Amount.Cmp(new(big.Int).SetUint64(43)) != 0 {
@@ -155,7 +155,7 @@ func TestVerifyErrors(t *testing.T) {
}
path1 := filepath.Join(os.TempDir(), "chequebook-test-1.json")
contr1, err := deploy(key1, common.Big2, backend)
contr1, _ := deploy(key1, common.Big2, backend)
chbook1, err := NewChequebook(path1, contr1, key1, backend)
if err != nil {
t.Errorf("expected no error, got %v", err)
@@ -223,7 +223,8 @@ func TestVerifyErrors(t *testing.T) {
func TestDeposit(t *testing.T) {
path0 := filepath.Join(os.TempDir(), "chequebook-test-0.json")
backend := newTestBackend()
contr0, err := deploy(key0, new(big.Int), backend)
contr0, _ := deploy(key0, new(big.Int), backend)
chbook, err := NewChequebook(path0, contr0, key0, backend)
if err != nil {
t.Errorf("expected no error, got %v", err)
@@ -361,7 +362,8 @@ func TestDeposit(t *testing.T) {
func TestCash(t *testing.T) {
path := filepath.Join(os.TempDir(), "chequebook-test.json")
backend := newTestBackend()
contr0, err := deploy(key0, common.Big2, backend)
contr0, _ := deploy(key0, common.Big2, backend)
chbook, err := NewChequebook(path, contr0, key0, backend)
if err != nil {
t.Errorf("expected no error, got %v", err)
@@ -380,11 +382,12 @@ func TestCash(t *testing.T) {
}
// cashing latest cheque
_, err = chbox.Receive(ch)
if err != nil {
if _, err = chbox.Receive(ch); err != nil {
t.Fatalf("expected no error, got %v", err)
}
_, err = ch.Cash(chbook.session)
if _, err = ch.Cash(chbook.session); err != nil {
t.Fatal("Cash failed:", err)
}
backend.Commit()
chbook.balance = new(big.Int).Set(common.Big3)

View File

@@ -29,7 +29,7 @@ import (
var (
key, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
name = "my name on ENS"
hash = crypto.Sha3Hash([]byte("my content"))
hash = crypto.Keccak256Hash([]byte("my content"))
addr = crypto.PubkeyToAddress(key.PublicKey)
)

View File

@@ -25,6 +25,7 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/event"
@@ -168,7 +169,7 @@ func benchInsertChain(b *testing.B, disk bool, gen func(int, *BlockGen)) {
// Time the insertion of the new chain.
// State and blocks are stored in the same DB.
evmux := new(event.TypeMux)
chainman, _ := NewBlockChain(db, &params.ChainConfig{HomesteadBlock: new(big.Int)}, FakePow{}, evmux)
chainman, _ := NewBlockChain(db, &params.ChainConfig{HomesteadBlock: new(big.Int)}, FakePow{}, evmux, vm.Config{})
defer chainman.Stop()
b.ReportAllocs()
b.ResetTimer()
@@ -278,7 +279,7 @@ func benchReadChain(b *testing.B, full bool, count uint64) {
if err != nil {
b.Fatalf("error opening database at %v: %v", dir, err)
}
chain, err := NewBlockChain(db, testChainConfig(), FakePow{}, new(event.TypeMux))
chain, err := NewBlockChain(db, testChainConfig(), FakePow{}, new(event.TypeMux), vm.Config{})
if err != nil {
b.Fatalf("error creating chain: %v", err)
}

View File

@@ -24,6 +24,7 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/params"
@@ -39,7 +40,7 @@ func proc() (Validator, *BlockChain) {
var mux event.TypeMux
WriteTestNetGenesisBlock(db)
blockchain, err := NewBlockChain(db, testChainConfig(), thePow(), &mux)
blockchain, err := NewBlockChain(db, testChainConfig(), thePow(), &mux, vm.Config{})
if err != nil {
fmt.Println(err)
}

View File

@@ -29,6 +29,7 @@ import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/mclock"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
@@ -106,12 +107,13 @@ type BlockChain struct {
pow pow.PoW
processor Processor // block processor interface
validator Validator // block and state validator interface
vmConfig vm.Config
}
// NewBlockChain returns a fully initialised block chain using information
// available in the database. It initialises the default Ethereum Validator and
// Processor.
func NewBlockChain(chainDb ethdb.Database, config *params.ChainConfig, pow pow.PoW, mux *event.TypeMux) (*BlockChain, error) {
func NewBlockChain(chainDb ethdb.Database, config *params.ChainConfig, pow pow.PoW, mux *event.TypeMux, vmConfig vm.Config) (*BlockChain, error) {
bodyCache, _ := lru.New(bodyCacheLimit)
bodyRLPCache, _ := lru.New(bodyCacheLimit)
blockCache, _ := lru.New(blockCacheLimit)
@@ -127,6 +129,7 @@ func NewBlockChain(chainDb ethdb.Database, config *params.ChainConfig, pow pow.P
blockCache: blockCache,
futureBlocks: futureBlocks,
pow: pow,
vmConfig: vmConfig,
}
bc.SetValidator(NewBlockValidator(config, bc, pow))
bc.SetProcessor(NewStateProcessor(config, bc))
@@ -799,7 +802,7 @@ func (self *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain
if stats.ignored > 0 {
ignored = fmt.Sprintf(" (%d ignored)", stats.ignored)
}
glog.V(logger.Info).Infof("imported %d receipts in %9v. #%d [%x… / %x…]%s", stats.processed, common.PrettyDuration(time.Since(start)), last.Number(), first.Hash().Bytes()[:4], last.Hash().Bytes()[:4], ignored)
glog.V(logger.Info).Infof("imported %4d receipts in %9v. #%d [%x… / %x…]%s", stats.processed, common.PrettyDuration(time.Since(start)), last.Number(), first.Hash().Bytes()[:4], last.Hash().Bytes()[:4], ignored)
return 0, nil
}
@@ -850,7 +853,7 @@ func (self *BlockChain) WriteBlock(block *types.Block) (status WriteStatus, err
return
}
// InsertChain will attempt to insert the given chain in to the canonical chain or, otherwise, create a fork. It an error is returned
// InsertChain will attempt to insert the given chain in to the canonical chain or, otherwise, create a fork. If an error is returned
// it will return the index number of the failing block as well an error describing what went wrong (for possible errors see core/errors.go).
func (self *BlockChain) InsertChain(chain types.Blocks) (int, error) {
// Do a sanity check that the provided chain is actually ordered and linked
@@ -875,7 +878,7 @@ func (self *BlockChain) InsertChain(chain types.Blocks) (int, error) {
// faster than direct delivery and requires much less mutex
// acquiring.
var (
stats = insertStats{startTime: time.Now()}
stats = insertStats{startTime: mclock.Now()}
events = make([]interface{}, 0, len(chain))
coalescedLogs []*types.Log
nonceChecked = make([]bool, len(chain))
@@ -953,7 +956,7 @@ func (self *BlockChain) InsertChain(chain types.Blocks) (int, error) {
return i, err
}
// Process block using the parent state as reference point.
receipts, logs, usedGas, err := self.processor.Process(block, self.stateCache, vm.Config{})
receipts, logs, usedGas, err := self.processor.Process(block, self.stateCache, self.vmConfig)
if err != nil {
self.reportBlock(block, receipts, err)
return i, err
@@ -1003,6 +1006,10 @@ func (self *BlockChain) InsertChain(chain types.Blocks) (int, error) {
if err := WriteMipmapBloom(self.chainDb, block.NumberU64(), receipts); err != nil {
return i, err
}
// Write hash preimages
if err := WritePreimages(self.chainDb, block.NumberU64(), self.stateCache.Preimages()); err != nil {
return i, err
}
case SideStatTy:
if glog.V(logger.Detail) {
glog.Infof("inserted forked block #%d [%x…] (TD=%v) in %9v: %3d txs %d uncles.", block.Number(), block.Hash().Bytes()[0:4], block.Difficulty(), common.PrettyDuration(time.Since(bstart)), len(block.Transactions()), len(block.Uncles()))
@@ -1031,7 +1038,7 @@ type insertStats struct {
queued, processed, ignored int
usedGas uint64
lastIndex int
startTime time.Time
startTime mclock.AbsTime
}
// statsReportLimit is the time limit during import after which we always print
@@ -1043,28 +1050,24 @@ const statsReportLimit = 8 * time.Second
func (st *insertStats) report(chain []*types.Block, index int) {
// Fetch the timings for the batch
var (
now = time.Now()
elapsed = now.Sub(st.startTime)
now = mclock.Now()
elapsed = time.Duration(now) - time.Duration(st.startTime)
)
if elapsed == 0 { // Yes Windows, I'm looking at you
elapsed = 1
}
// If we're at the last block of the batch or report period reached, log
if index == len(chain)-1 || elapsed >= statsReportLimit {
start, end := chain[st.lastIndex], chain[index]
txcount := countTransactions(chain[st.lastIndex : index+1])
extra := ""
var hashes, extra string
if st.queued > 0 || st.ignored > 0 {
extra = fmt.Sprintf(" (%d queued %d ignored)", st.queued, st.ignored)
}
hashes := ""
if st.processed > 1 {
hashes = fmt.Sprintf("%x… / %x…", start.Hash().Bytes()[:4], end.Hash().Bytes()[:4])
} else {
hashes = fmt.Sprintf("%x…", end.Hash().Bytes()[:4])
}
glog.Infof("imported %d blocks, %5d txs (%7.3f Mg) in %9v (%6.3f Mg/s). #%v [%s]%s", st.processed, txcount, float64(st.usedGas)/1000000, common.PrettyDuration(elapsed), float64(st.usedGas)*1000/float64(elapsed), end.Number(), hashes, extra)
glog.Infof("imported %4d blocks, %5d txs (%7.3f Mg) in %9v (%6.3f Mg/s). #%v [%s]%s", st.processed, txcount, float64(st.usedGas)/1000000, common.PrettyDuration(elapsed), float64(st.usedGas)*1000/float64(elapsed), end.Number(), hashes, extra)
*st = insertStats{startTime: now, lastIndex: index}
}
@@ -1085,8 +1088,6 @@ func (self *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
newChain types.Blocks
oldChain types.Blocks
commonBlock *types.Block
oldStart = oldBlock
newStart = newBlock
deletedTxs types.Transactions
deletedLogs []*types.Log
// collectLogs collects the logs that were generated during the
@@ -1127,7 +1128,6 @@ func (self *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
return fmt.Errorf("Invalid new chain")
}
numSplit := newBlock.Number()
for {
if oldBlock.Hash() == newBlock.Hash() {
commonBlock = oldBlock
@@ -1148,9 +1148,19 @@ func (self *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
}
}
if glog.V(logger.Debug) {
commonHash := commonBlock.Hash()
glog.Infof("Chain split detected @ %x. Reorganising chain from #%v %x to %x", commonHash[:4], numSplit, oldStart.Hash().Bytes()[:4], newStart.Hash().Bytes()[:4])
if oldLen := len(oldChain); oldLen > 63 || glog.V(logger.Debug) {
newLen := len(newChain)
newLast := newChain[0]
newFirst := newChain[newLen-1]
oldLast := oldChain[0]
oldFirst := oldChain[oldLen-1]
glog.Infof("Chain split detected after #%v [%x…]. Reorganising chain (-%v +%v blocks), rejecting #%v-#%v [%x…/%x…] in favour of #%v-#%v [%x…/%x…]",
commonBlock.Number(), commonBlock.Hash().Bytes()[:4],
oldLen, newLen,
oldFirst.Number(), oldLast.Number(),
oldFirst.Hash().Bytes()[:4], oldLast.Hash().Bytes()[:4],
newFirst.Number(), newLast.Number(),
newFirst.Hash().Bytes()[:4], newLast.Hash().Bytes()[:4])
}
var addedTxs types.Transactions

View File

@@ -53,7 +53,7 @@ func thePow() pow.PoW {
func theBlockChain(db ethdb.Database, t *testing.T) *BlockChain {
var eventMux event.TypeMux
WriteTestNetGenesisBlock(db)
blockchain, err := NewBlockChain(db, testChainConfig(), thePow(), &eventMux)
blockchain, err := NewBlockChain(db, testChainConfig(), thePow(), &eventMux, vm.Config{})
if err != nil {
t.Error("failed creating blockchain:", err)
t.FailNow()
@@ -614,7 +614,7 @@ func testReorgBadHashes(t *testing.T, full bool) {
defer func() { delete(BadHashes, headers[3].Hash()) }()
}
// Create a new chain manager and check it rolled back the state
ncm, err := NewBlockChain(db, testChainConfig(), FakePow{}, new(event.TypeMux))
ncm, err := NewBlockChain(db, testChainConfig(), FakePow{}, new(event.TypeMux), vm.Config{})
if err != nil {
t.Fatalf("failed to create new chain manager: %v", err)
}
@@ -735,7 +735,7 @@ func TestFastVsFullChains(t *testing.T) {
archiveDb, _ := ethdb.NewMemDatabase()
WriteGenesisBlockForTesting(archiveDb, GenesisAccount{address, funds})
archive, _ := NewBlockChain(archiveDb, testChainConfig(), FakePow{}, new(event.TypeMux))
archive, _ := NewBlockChain(archiveDb, testChainConfig(), FakePow{}, new(event.TypeMux), vm.Config{})
if n, err := archive.InsertChain(blocks); err != nil {
t.Fatalf("failed to process block %d: %v", n, err)
@@ -743,7 +743,7 @@ func TestFastVsFullChains(t *testing.T) {
// Fast import the chain as a non-archive node to test
fastDb, _ := ethdb.NewMemDatabase()
WriteGenesisBlockForTesting(fastDb, GenesisAccount{address, funds})
fast, _ := NewBlockChain(fastDb, testChainConfig(), FakePow{}, new(event.TypeMux))
fast, _ := NewBlockChain(fastDb, testChainConfig(), FakePow{}, new(event.TypeMux), vm.Config{})
headers := make([]*types.Header, len(blocks))
for i, block := range blocks {
@@ -819,7 +819,7 @@ func TestLightVsFastVsFullChainHeads(t *testing.T) {
archiveDb, _ := ethdb.NewMemDatabase()
WriteGenesisBlockForTesting(archiveDb, GenesisAccount{address, funds})
archive, _ := NewBlockChain(archiveDb, testChainConfig(), FakePow{}, new(event.TypeMux))
archive, _ := NewBlockChain(archiveDb, testChainConfig(), FakePow{}, new(event.TypeMux), vm.Config{})
if n, err := archive.InsertChain(blocks); err != nil {
t.Fatalf("failed to process block %d: %v", n, err)
@@ -831,7 +831,7 @@ func TestLightVsFastVsFullChainHeads(t *testing.T) {
// Import the chain as a non-archive node and ensure all pointers are updated
fastDb, _ := ethdb.NewMemDatabase()
WriteGenesisBlockForTesting(fastDb, GenesisAccount{address, funds})
fast, _ := NewBlockChain(fastDb, testChainConfig(), FakePow{}, new(event.TypeMux))
fast, _ := NewBlockChain(fastDb, testChainConfig(), FakePow{}, new(event.TypeMux), vm.Config{})
headers := make([]*types.Header, len(blocks))
for i, block := range blocks {
@@ -850,7 +850,7 @@ func TestLightVsFastVsFullChainHeads(t *testing.T) {
// Import the chain as a light node and ensure all pointers are updated
lightDb, _ := ethdb.NewMemDatabase()
WriteGenesisBlockForTesting(lightDb, GenesisAccount{address, funds})
light, _ := NewBlockChain(lightDb, testChainConfig(), FakePow{}, new(event.TypeMux))
light, _ := NewBlockChain(lightDb, testChainConfig(), FakePow{}, new(event.TypeMux), vm.Config{})
if n, err := light.InsertHeaderChain(headers, 1); err != nil {
t.Fatalf("failed to insert header %d: %v", n, err)
@@ -916,7 +916,7 @@ func TestChainTxReorgs(t *testing.T) {
})
// Import the chain. This runs all block validation rules.
evmux := &event.TypeMux{}
blockchain, _ := NewBlockChain(db, testChainConfig(), FakePow{}, evmux)
blockchain, _ := NewBlockChain(db, testChainConfig(), FakePow{}, evmux, vm.Config{})
if i, err := blockchain.InsertChain(chain); err != nil {
t.Fatalf("failed to insert original chain[%d]: %v", i, err)
}
@@ -990,7 +990,7 @@ func TestLogReorgs(t *testing.T) {
)
evmux := &event.TypeMux{}
blockchain, _ := NewBlockChain(db, testChainConfig(), FakePow{}, evmux)
blockchain, _ := NewBlockChain(db, testChainConfig(), FakePow{}, evmux, vm.Config{})
subs := evmux.Subscribe(RemovedLogsEvent{})
chain, _ := GenerateChain(params.TestChainConfig, genesis, db, 2, func(i int, gen *BlockGen) {
@@ -1027,7 +1027,7 @@ func TestReorgSideEvent(t *testing.T) {
)
evmux := &event.TypeMux{}
blockchain, _ := NewBlockChain(db, testChainConfig(), FakePow{}, evmux)
blockchain, _ := NewBlockChain(db, testChainConfig(), FakePow{}, evmux, vm.Config{})
chain, _ := GenerateChain(params.TestChainConfig, genesis, db, 3, func(i int, gen *BlockGen) {})
if _, err := blockchain.InsertChain(chain); err != nil {
@@ -1103,7 +1103,7 @@ func TestCanonicalBlockRetrieval(t *testing.T) {
)
evmux := &event.TypeMux{}
blockchain, _ := NewBlockChain(db, testChainConfig(), FakePow{}, evmux)
blockchain, _ := NewBlockChain(db, testChainConfig(), FakePow{}, evmux, vm.Config{})
chain, _ := GenerateChain(params.TestChainConfig, genesis, db, 10, func(i int, gen *BlockGen) {})
@@ -1146,7 +1146,7 @@ func TestEIP155Transition(t *testing.T) {
mux event.TypeMux
)
blockchain, _ := NewBlockChain(db, config, FakePow{}, &mux)
blockchain, _ := NewBlockChain(db, config, FakePow{}, &mux, vm.Config{})
blocks, _ := GenerateChain(config, genesis, db, 4, func(i int, block *BlockGen) {
var (
tx *types.Transaction
@@ -1250,7 +1250,7 @@ func TestEIP161AccountRemoval(t *testing.T) {
}
mux event.TypeMux
blockchain, _ = NewBlockChain(db, config, FakePow{}, &mux)
blockchain, _ = NewBlockChain(db, config, FakePow{}, &mux, vm.Config{})
)
blocks, _ := GenerateChain(config, genesis, db, 3, func(i int, block *BlockGen) {
var (

View File

@@ -256,7 +256,7 @@ func newCanonical(n int, full bool) (ethdb.Database, *BlockChain, error) {
// Initialize a fresh chain with only a genesis block
genesis, _ := WriteTestNetGenesisBlock(db)
blockchain, _ := NewBlockChain(db, MakeChainConfig(), FakePow{}, evmux)
blockchain, _ := NewBlockChain(db, MakeChainConfig(), FakePow{}, evmux, vm.Config{})
// Create and inject the requested chain
if n == 0 {
return db, blockchain, nil

View File

@@ -21,6 +21,7 @@ import (
"math/big"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/event"
@@ -81,7 +82,7 @@ func ExampleGenerateChain() {
// Import the chain. This runs all block validation rules.
evmux := &event.TypeMux{}
blockchain, _ := NewBlockChain(db, chainConfig, FakePow{}, evmux)
blockchain, _ := NewBlockChain(db, chainConfig, FakePow{}, evmux, vm.Config{})
if i, err := blockchain.InsertChain(chain); err != nil {
fmt.Printf("insert error (block %d): %v\n", chain[i].NumberU64(), err)
return

View File

@@ -20,6 +20,7 @@ import (
"math/big"
"testing"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/params"
@@ -39,12 +40,12 @@ func TestDAOForkRangeExtradata(t *testing.T) {
proDb, _ := ethdb.NewMemDatabase()
WriteGenesisBlockForTesting(proDb)
proConf := &params.ChainConfig{HomesteadBlock: big.NewInt(0), DAOForkBlock: forkBlock, DAOForkSupport: true}
proBc, _ := NewBlockChain(proDb, proConf, new(FakePow), new(event.TypeMux))
proBc, _ := NewBlockChain(proDb, proConf, new(FakePow), new(event.TypeMux), vm.Config{})
conDb, _ := ethdb.NewMemDatabase()
WriteGenesisBlockForTesting(conDb)
conConf := &params.ChainConfig{HomesteadBlock: big.NewInt(0), DAOForkBlock: forkBlock, DAOForkSupport: false}
conBc, _ := NewBlockChain(conDb, conConf, new(FakePow), new(event.TypeMux))
conBc, _ := NewBlockChain(conDb, conConf, new(FakePow), new(event.TypeMux), vm.Config{})
if _, err := proBc.InsertChain(prefix); err != nil {
t.Fatalf("pro-fork: failed to import chain prefix: %v", err)
@@ -57,7 +58,7 @@ func TestDAOForkRangeExtradata(t *testing.T) {
// Create a pro-fork block, and try to feed into the no-fork chain
db, _ = ethdb.NewMemDatabase()
WriteGenesisBlockForTesting(db)
bc, _ := NewBlockChain(db, conConf, new(FakePow), new(event.TypeMux))
bc, _ := NewBlockChain(db, conConf, new(FakePow), new(event.TypeMux), vm.Config{})
blocks := conBc.GetBlocksFromHash(conBc.CurrentBlock().Hash(), int(conBc.CurrentBlock().NumberU64()+1))
for j := 0; j < len(blocks)/2; j++ {
@@ -78,7 +79,7 @@ func TestDAOForkRangeExtradata(t *testing.T) {
// Create a no-fork block, and try to feed into the pro-fork chain
db, _ = ethdb.NewMemDatabase()
WriteGenesisBlockForTesting(db)
bc, _ = NewBlockChain(db, proConf, new(FakePow), new(event.TypeMux))
bc, _ = NewBlockChain(db, proConf, new(FakePow), new(event.TypeMux), vm.Config{})
blocks = proBc.GetBlocksFromHash(proBc.CurrentBlock().Hash(), int(proBc.CurrentBlock().NumberU64()+1))
for j := 0; j < len(blocks)/2; j++ {
@@ -100,7 +101,7 @@ func TestDAOForkRangeExtradata(t *testing.T) {
// Verify that contra-forkers accept pro-fork extra-datas after forking finishes
db, _ = ethdb.NewMemDatabase()
WriteGenesisBlockForTesting(db)
bc, _ := NewBlockChain(db, conConf, new(FakePow), new(event.TypeMux))
bc, _ := NewBlockChain(db, conConf, new(FakePow), new(event.TypeMux), vm.Config{})
blocks := conBc.GetBlocksFromHash(conBc.CurrentBlock().Hash(), int(conBc.CurrentBlock().NumberU64()+1))
for j := 0; j < len(blocks)/2; j++ {
@@ -116,7 +117,7 @@ func TestDAOForkRangeExtradata(t *testing.T) {
// Verify that pro-forkers accept contra-fork extra-datas after forking finishes
db, _ = ethdb.NewMemDatabase()
WriteGenesisBlockForTesting(db)
bc, _ = NewBlockChain(db, proConf, new(FakePow), new(event.TypeMux))
bc, _ = NewBlockChain(db, proConf, new(FakePow), new(event.TypeMux), vm.Config{})
blocks = proBc.GetBlocksFromHash(proBc.CurrentBlock().Hash(), int(proBc.CurrentBlock().NumberU64()+1))
for j := 0; j < len(blocks)/2; j++ {

View File

@@ -30,6 +30,7 @@ import (
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/logger"
"github.com/ethereum/go-ethereum/logger/glog"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
)
@@ -45,6 +46,7 @@ var (
blockHashPrefix = []byte("H") // blockHashPrefix + hash -> num (uint64 big endian)
bodyPrefix = []byte("b") // bodyPrefix + num (uint64 big endian) + hash -> block body
blockReceiptsPrefix = []byte("r") // blockReceiptsPrefix + num (uint64 big endian) + hash -> block receipts
preimagePrefix = "secure-key-" // preimagePrefix + hash -> preimage
txMetaSuffix = []byte{0x01}
receiptsPrefix = []byte("receipts-")
@@ -66,6 +68,9 @@ var (
ChainConfigNotFoundErr = errors.New("ChainConfig not found") // general config not found error
mipmapBloomMu sync.Mutex // protect against race condition when updating mipmap blooms
preimageCounter = metrics.NewCounter("db/preimage/total")
preimageHitCounter = metrics.NewCounter("db/preimage/hits")
)
// encodeBlockNumber encodes a block number as big endian uint64
@@ -595,6 +600,34 @@ func GetMipmapBloom(db ethdb.Database, number, level uint64) types.Bloom {
return types.BytesToBloom(bloomDat)
}
// PreimageTable returns a Database instance with the key prefix for preimage entries.
func PreimageTable(db ethdb.Database) ethdb.Database {
return ethdb.NewTable(db, preimagePrefix)
}
// WritePreimages writes the provided set of preimages to the database. `number` is the
// current block number, and is used for debug messages only.
func WritePreimages(db ethdb.Database, number uint64, preimages map[common.Hash][]byte) error {
table := PreimageTable(db)
batch := table.NewBatch()
hitCount := 0
for hash, preimage := range preimages {
if _, err := table.Get(hash.Bytes()); err != nil {
batch.Put(hash.Bytes(), preimage)
hitCount += 1
}
}
preimageCounter.Inc(int64(len(preimages)))
preimageHitCounter.Inc(int64(hitCount))
if hitCount > 0 {
if err := batch.Write(); err != nil {
return fmt.Errorf("preimage write fail for block %d: %v", number, err)
}
glog.V(logger.Debug).Infof("%d preimages in block %d, including %d new", len(preimages), number, hitCount)
}
return nil
}
// GetBlockChainVersion reads the version number from db.
func GetBlockChainVersion(db ethdb.Database) int {
var vsn uint

View File

@@ -23,3 +23,6 @@ const defaultGenesisBlock = "H4sIAAAJbogA/5S9267gSHOl9y59PRd5Pszb5BEYQJi5sQEbgt/
// defaultTestnetGenesisBlock is a gzip compressed dump of the official default Ethereum
// test network genesis block (currently Ropsten).
const defaultTestnetGenesisBlock = "QlpoOTFBWSZTWezl5v8AGI59gHRQABBcBH/0FGQ/795qUAN+PcGHAoQMVIwm0zCSp+ihiAANB6gqp+mmp6eMmqo0AADRpoABgAGjTQBk0BoMgNAbSqNAyDQyaBoNAyAGBUkhNNNA0JGmgD0g0AekxJtVIgTtRupd52vqSSSVfG2ALQBhhdbCIgmAgUiIJnEQTviQF8kkN1IOKiRVVFVSVXHT1cHLHdiSQBlAAG2JtCtNgCS5IEB6dNtP782Z729Tbt3Z6+uFr7u3w79widyYSMAhtlL5XeGZWdwEBNdFm+Hh5gVRBoslgksAMkUKmYlSJRTxBsQAAKKlnd3WjPWrNd66NR7WAu6AQL40vqlny1We+p3l1QThy4gRd2lSVjEKISLDFrVJ1UtjPEPctVSZGjBTJWRzpdgRmCCprJWGe3DK9wvz4eGvdv1ZZcMvxGgdakeMAVEeFywVQWLWCrUWg60Y9ZLoAuZ0uSJJUELb8N2e7j05d3Tlz7vRwgk+LisedJALLVUgFjLGzfv5SEGPLlhhjpx2aaO8T87whcYDmgPLVDBI3RF1TLnzvymWtN732u93GiAGiBdm5s1q9Haru7+yCLJAk3wBDTtrGeMVh3iJmXHo+OBAsqoDOZwjKM3d3mZmcHdBJiSVouSKVkDg+L4vSzNRqs7u6NEaIiUKoYxERGfTWIhoiHwQFESGzta21qu1qtZ3cAXQshVAUybTLbWttWh3iHLoDJEgkACRVCzbeX774Z6558+mfDTdz058efTpDbXai+tddbQBbn/YA9fwxy8/Pq3HFW7deL8W447duuyI2RGqIAAA9SOqHeXzbRtGiXeHiYBMggAAIyQh/f5vW96Xvvu7b7777yMURRHFE6EkEnxRIiOlY2ijs7RV4gYIB0R4LmgB49xg0Oz0bVr2a95u4AZE6rTjl2w58Cntu1ZV6e3Lw3ne2RegHw7R58NdbtVlJEkuqRV2WKa/hpbPIx3bIyX43aNWmezrCBry2ePzkA8reSwB/xdyRThQkOzl5v8="
// defaultDevnetGenesisBlock is a gzip compressed dump of a dev Ethereum network genesis block.
const defaultDevnetGenesisBlock = "QlpoOTFBWSZTWb66siUAAmldgEERWAR/8AAEP6eOakACGu6ULVwyRNGKeBCbUaAGIJUDJpT2lP1BQAAANqp6jTUxMIxGmAJgFVTAp4mjSaoAxAB4cLqQ6aBgTRPcAECErBIZ9PyZJw5IYvQPWlcXDXnhaC9buNtJaAFApAgwkUIRBmBY6EhTY8/6ZXse6lLD1Eh3sfas7VHnxse2LZeXpME9fADujvYB4Wr9cgQA+Wq0gRnrfVmgVwIiIrgfON+c4EhrApvQT2LjUh6y3x41XXOxriA89W52MDId9R7vasE3pfWWATQF7nAEAWRb861EU6M9SmBTET1UUGuBmgxzewhmUe4Rj4jSvgRFRTYgI8WzzcQn3nNr6x3GW94Yp1xG4kOoC/MxDhbF8uMahuMTmKyEep0SwLaxquoT3F9X0NxmouMWhxTAlxHQoN872vbYJS/Wrls2LuptmbKr4R6BJ/oJfPNpq+H362F5ZyhNQBxXep4ZOr94ylFM6UfOeUq37kFeGzXOle3LKItkvJ70OKqEqzujic72rnpZHym6b3xSONoWjYzRzIUbOKSUgdVtEhtvUg24asA2cy1zFzsfnj8CXe0Bdi7kinChIX11ZEo="

View File

@@ -57,6 +57,7 @@ func WriteGenesisBlock(chainDb ethdb.Database, reader io.Reader) (*types.Block,
Code string
Storage map[string]string
Balance string
Nonce string
}
}
@@ -69,7 +70,8 @@ func WriteGenesisBlock(chainDb ethdb.Database, reader io.Reader) (*types.Block,
for addr, account := range genesis.Alloc {
address := common.HexToAddress(addr)
statedb.AddBalance(address, common.String2Big(account.Balance))
statedb.SetCode(address, common.Hex2Bytes(account.Code))
statedb.SetCode(address, common.FromHex(account.Code))
statedb.SetNonce(address, common.String2Big(account.Nonce).Uint64())
for key, value := range account.Storage {
statedb.SetState(address, common.HexToHash(key), common.HexToHash(value))
}
@@ -178,12 +180,6 @@ func WriteTestNetGenesisBlock(chainDb ethdb.Database) (*types.Block, error) {
return WriteGenesisBlock(chainDb, strings.NewReader(DefaultTestnetGenesisBlock()))
}
// WriteOlympicGenesisBlock assembles the Olympic genesis block and writes it
// along with all associated state into a chain database.
func WriteOlympicGenesisBlock(db ethdb.Database) (*types.Block, error) {
return WriteGenesisBlock(db, strings.NewReader(OlympicGenesisBlock()))
}
// DefaultGenesisBlock assembles a JSON string representing the default Ethereum
// genesis block.
func DefaultGenesisBlock() string {
@@ -209,26 +205,12 @@ func DefaultTestnetGenesisBlock() string {
return string(blob)
}
// OlympicGenesisBlock assembles a JSON string representing the Olympic genesis
// block.
func OlympicGenesisBlock() string {
return fmt.Sprintf(`{
"nonce":"0x%x",
"gasLimit":"0x%x",
"difficulty":"0x%x",
"alloc": {
"0000000000000000000000000000000000000001": {"balance": "1"},
"0000000000000000000000000000000000000002": {"balance": "1"},
"0000000000000000000000000000000000000003": {"balance": "1"},
"0000000000000000000000000000000000000004": {"balance": "1"},
"dbdbdb2cbd23b783741e8d7fcf51e459b497e4a6": {"balance": "1606938044258990275541962092341162602522202993782792835301376"},
"e4157b34ea9615cfbde6b4fda419828124b70c78": {"balance": "1606938044258990275541962092341162602522202993782792835301376"},
"b9c015918bdaba24b4ff057a92a3873d6eb201be": {"balance": "1606938044258990275541962092341162602522202993782792835301376"},
"6c386a4b26f73c802f34673f7248bb118f97424a": {"balance": "1606938044258990275541962092341162602522202993782792835301376"},
"cd2a3d9f938e13cd947ec05abc7fe734df8dd826": {"balance": "1606938044258990275541962092341162602522202993782792835301376"},
"2ef47100e0787b915105fd5e3f4ff6752079d5cb": {"balance": "1606938044258990275541962092341162602522202993782792835301376"},
"e6716f9544a56c530d868e4bfbacb172315bdead": {"balance": "1606938044258990275541962092341162602522202993782792835301376"},
"1a26338f0d905e295fccb71fa9ea849ffa12aaf4": {"balance": "1606938044258990275541962092341162602522202993782792835301376"}
// DevGenesisBlock assembles a JSON string representing a local dev genesis block.
func DevGenesisBlock() string {
reader := bzip2.NewReader(base64.NewDecoder(base64.StdEncoding, strings.NewReader(defaultDevnetGenesisBlock)))
blob, err := ioutil.ReadAll(reader)
if err != nil {
panic(fmt.Sprintf("failed to load dev genesis: %v", err))
}
}`, types.EncodeNonce(42), params.GenesisGasLimit.Bytes(), params.GenesisDifficulty.Bytes())
return string(blob)
}

View File

@@ -339,7 +339,7 @@ func (hc *HeaderChain) InsertHeaderChain(chain []*types.Header, checkFreq int, w
if stats.ignored > 0 {
ignored = fmt.Sprintf(" (%d ignored)", stats.ignored)
}
glog.V(logger.Info).Infof("imported %d headers%s in %9v. #%v [%x… / %x…]", stats.processed, ignored, common.PrettyDuration(time.Since(start)), last.Number, first.Hash().Bytes()[:4], last.Hash().Bytes()[:4])
glog.V(logger.Info).Infof("imported %4d headers%s in %9v. #%v [%x… / %x…]", stats.processed, ignored, common.PrettyDuration(time.Since(start)), last.Number, first.Hash().Bytes()[:4], last.Hash().Bytes()[:4])
return 0, nil
}

View File

@@ -67,6 +67,9 @@ type (
addLogChange struct {
txhash common.Hash
}
addPreimageChange struct {
hash common.Hash
}
touchChange struct {
account *common.Address
prev bool
@@ -127,3 +130,7 @@ func (ch addLogChange) undo(s *StateDB) {
s.logs[ch.txhash] = logs[:len(logs)-1]
}
}
func (ch addPreimageChange) undo(s *StateDB) {
delete(s.preimages, ch.hash)
}

View File

@@ -82,10 +82,12 @@ func (ms *ManagedState) NewNonce(addr common.Address) uint64 {
return uint64(len(account.nonces)-1) + account.nstart
}
// GetNonce returns the canonical nonce for the managed or unmanaged account
// GetNonce returns the canonical nonce for the managed or unmanaged account.
//
// Because GetNonce mutates the DB, we must take a write lock.
func (ms *ManagedState) GetNonce(addr common.Address) uint64 {
ms.mu.RLock()
defer ms.mu.RUnlock()
ms.mu.Lock()
defer ms.mu.Unlock()
if ms.hasAccount(addr) {
account := ms.getAccount(addr)

View File

@@ -88,12 +88,12 @@ func TestRemoteNonceChange(t *testing.T) {
nn[i] = true
}
account.nonces = append(account.nonces, nn...)
nonce := ms.NewNonce(addr)
ms.NewNonce(addr)
ms.StateDB.stateObjects[addr].data.Nonce = 200
nonce = ms.NewNonce(addr)
nonce := ms.NewNonce(addr)
if nonce != 200 {
t.Error("expected nonce after remote update to be", 201, "got", nonce)
t.Error("expected nonce after remote update to be", 200, "got", nonce)
}
ms.NewNonce(addr)
ms.NewNonce(addr)
@@ -101,7 +101,7 @@ func TestRemoteNonceChange(t *testing.T) {
ms.StateDB.stateObjects[addr].data.Nonce = 200
nonce = ms.NewNonce(addr)
if nonce != 204 {
t.Error("expected nonce after remote update to be", 201, "got", nonce)
t.Error("expected nonce after remote update to be", 204, "got", nonce)
}
}

View File

@@ -75,6 +75,8 @@ type StateDB struct {
logs map[common.Hash][]*types.Log
logSize uint
preimages map[common.Hash][]byte
// Journal of state modifications. This is the backbone of
// Snapshot and RevertToSnapshot.
journal journal
@@ -99,6 +101,7 @@ func New(root common.Hash, db ethdb.Database) (*StateDB, error) {
stateObjectsDirty: make(map[common.Address]struct{}),
refund: new(big.Int),
logs: make(map[common.Hash][]*types.Log),
preimages: make(map[common.Hash][]byte),
}, nil
}
@@ -120,6 +123,7 @@ func (self *StateDB) New(root common.Hash) (*StateDB, error) {
stateObjectsDirty: make(map[common.Address]struct{}),
refund: new(big.Int),
logs: make(map[common.Hash][]*types.Log),
preimages: make(map[common.Hash][]byte),
}, nil
}
@@ -141,6 +145,7 @@ func (self *StateDB) Reset(root common.Hash) error {
self.txIndex = 0
self.logs = make(map[common.Hash][]*types.Log)
self.logSize = 0
self.preimages = make(map[common.Hash][]byte)
self.clearJournalAndRefund()
return nil
@@ -199,6 +204,21 @@ func (self *StateDB) Logs() []*types.Log {
return logs
}
// AddPreimage records a SHA3 preimage seen by the VM.
func (self *StateDB) AddPreimage(hash common.Hash, preimage []byte) {
if _, ok := self.preimages[hash]; !ok {
self.journal = append(self.journal, addPreimageChange{hash: hash})
pi := make([]byte, len(preimage))
copy(pi, preimage)
self.preimages[hash] = pi
}
}
// Preimages returns a list of SHA3 preimages that have been submitted.
func (self *StateDB) Preimages() map[common.Hash][]byte {
return self.preimages
}
func (self *StateDB) AddRefund(gas *big.Int) {
self.journal = append(self.journal, refundChange{prev: new(big.Int).Set(self.refund)})
self.refund.Add(self.refund, gas)
@@ -477,8 +497,9 @@ func (self *StateDB) Copy() *StateDB {
refund: new(big.Int).Set(self.refund),
logs: make(map[common.Hash][]*types.Log, len(self.logs)),
logSize: self.logSize,
preimages: make(map[common.Hash][]byte),
}
// Copy the dirty states and logs
// Copy the dirty states, logs, and preimages
for addr := range self.stateObjectsDirty {
state.stateObjects[addr] = self.stateObjects[addr].deepCopy(state, state.MarkStateObjectDirty)
state.stateObjectsDirty[addr] = struct{}{}
@@ -487,6 +508,9 @@ func (self *StateDB) Copy() *StateDB {
state.logs[hash] = make([]*types.Log, len(logs))
copy(state.logs[hash], logs)
}
for hash, preimage := range self.preimages {
state.preimages[hash] = preimage
}
return state
}

View File

@@ -99,7 +99,7 @@ func ApplyTransaction(config *params.ChainConfig, bc *BlockChain, gp *GasPool, s
context := NewEVMContext(msg, header, bc)
// Create a new environment which holds all relevant information
// about the transaction and calling mechanisms.
vmenv := vm.NewEVM(context, statedb, config, vm.Config{})
vmenv := vm.NewEVM(context, statedb, config, cfg)
// Apply the transaction to the current state (included in the env)
_, gas, err := ApplyMessage(vmenv, msg, gp)
if err != nil {

View File

@@ -233,9 +233,6 @@ func (self *StateTransition) TransitionDb() (ret []byte, requiredGas, usedGas *b
)
if contractCreation {
ret, _, vmerr = vmenv.Create(sender, self.data, self.gas, self.value)
if homestead && err == vm.ErrCodeStoreOutOfGas {
self.gas = Big0
}
} else {
// Increment the nonce for the next transaction
self.state.SetNonce(sender.Address(), self.state.GetNonce(sender.Address())+1)

View File

@@ -89,7 +89,7 @@ type TxPool struct {
gasLimit func() *big.Int // The current gas limit function callback
minGasPrice *big.Int
eventMux *event.TypeMux
events event.Subscription
events *event.TypeMuxSubscription
localTx *txSet
signer types.Signer
mu sync.RWMutex

Some files were not shown because too many files have changed in this diff.