Compare commits

...

105 Commits

Author SHA1 Message Date
52f2461774 params: release Geth v1.9.0 2019-07-10 10:41:15 +03:00
54d0eeb494 Merge pull request #19818 from rjl493456442/encap-les
cmd: encapsulate les relative cli options
2019-07-10 09:29:14 +03:00
c705aac826 cmd, eth, les: make les flags conform to dotted style 2019-07-10 09:12:07 +03:00
c6a9616cfd cmd: encapsulate les relative cli options 2019-07-10 08:32:26 +03:00
8c4bc4f7ef Merge pull request #19814 from karalabe/ulc-fixup
cmd, eth, les: fix up ultra light config integration
2019-07-10 08:31:35 +03:00
e970d60d37 Merge pull request #19815 from karalabe/go-1.12.7
appveyor: bump builder to Go 1.12.7
2019-07-09 21:04:31 +03:00
2808bc68b9 appveyor: bump builder to Go 1.12.7 2019-07-09 21:03:37 +03:00
213690cdfd cmd, eth, les: fix up ultra light config integration 2019-07-09 20:34:42 +03:00
7527215a68 core/state: fix random test args (#19255) 2019-07-09 10:32:27 +02:00
8c249cb82f Merge pull request #19810 from karalabe/txpool-noncer
core: kill off managed state, use own tiny noncer for txpool
2019-07-09 11:02:13 +03:00
a966425a1d core: kill off managed state, use own tiny noncer for txpool 2019-07-09 10:42:09 +03:00
5873c01c3d Merge pull request #19807 from karalabe/cht
params: bump all CHTs, deploy all checkpoint oracles
2019-07-09 02:19:49 +03:00
1bed5afd92 params: bump all CHTs, deploy all checkpoint oracles 2019-07-09 02:05:01 +03:00
16e313699f cmd/puppeth: integrate blockscout (#18261)
* cmd/puppeth: integrate blockscout

* cmd/puppeth: expose debug namespace for blockscout

* cmd/puppeth: fix dbdir

* cmd/puppeth: run explorer in archive mode

* cmd/puppeth: ensure node is synced

* cmd/puppeth: fix explorer docker alignment + drop unneeded exec

* cmd/puppeth: polish up config saving and reloading

* cmd/puppeth: check both web and p2p port for explorer service
2019-07-08 20:49:11 +03:00
fa538ee7ed p2p/discover: improve randomness of ReadRandomNodes (#19799)
Make it select from all live nodes instead of selecting the heads of
random buckets.
2019-07-08 18:58:03 +03:00
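A minimal sketch of the behaviour described above, in plain Go with illustrative names (not the discover package's actual types): gather every live node into one slice, shuffle it, and take a prefix, rather than picking the heads of a few random buckets.

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickRandom returns up to max entries chosen uniformly from all live nodes,
// instead of from the heads of a handful of randomly chosen buckets.
func pickRandom(liveNodes []string, max int) []string {
	nodes := append([]string(nil), liveNodes...) // copy so the caller's slice is untouched
	rand.Shuffle(len(nodes), func(i, j int) { nodes[i], nodes[j] = nodes[j], nodes[i] })
	if len(nodes) > max {
		nodes = nodes[:max]
	}
	return nodes
}

func main() {
	fmt.Println(pickRandom([]string{"a", "b", "c", "d", "e"}, 3))
}
```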
983f92368b core/forkid: implement the forkid EIP, announce via ENR (#19738)
* eth: chain config (genesis + fork) ENR entry

* core/forkid, eth: protocol independent fork ID, update to CRC32 spec

* core/forkid, eth: make forkid a struct, next uint64, enr struct, RLP

* core/forkid: change forkhash rlp encoding from int to [4]byte

* eth: fixup eth entry a bit and update it every block

* eth: fix lint

* eth: fix crash in ethclient tests
2019-07-08 18:53:47 +03:00
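The bullet points above sketch the scheme: a compact fork identifier made of a CRC32 checksum plus the next scheduled fork number. Below is a rough, hedged illustration of that idea only (not the geth implementation; the exact inputs, their ordering and the RLP/ENR encoding are simplified assumptions, and the fork numbers in main are just example values).

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

// ForkID pairs a checksum of the chain's past with the next scheduled fork.
type ForkID struct {
	Hash [4]byte // CRC32 over the genesis hash and passed fork block numbers
	Next uint64  // next scheduled fork block, 0 if none is known
}

// computeForkID folds the genesis hash and each already-passed fork block
// number (big-endian uint64) into a single CRC32 checksum.
func computeForkID(genesis [32]byte, passedForks []uint64, next uint64) ForkID {
	sum := crc32.ChecksumIEEE(genesis[:])
	for _, fork := range passedForks {
		var blob [8]byte
		binary.BigEndian.PutUint64(blob[:], fork)
		sum = crc32.Update(sum, crc32.IEEETable, blob[:])
	}
	var id ForkID
	binary.BigEndian.PutUint32(id.Hash[:], sum)
	id.Next = next
	return id
}

func main() {
	var genesis [32]byte // all-zero placeholder genesis hash
	id := computeForkID(genesis, []uint64{1150000, 2463000}, 7280000)
	fmt.Printf("hash=%x next=%d\n", id.Hash, id.Next)
}
```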
cc0f0e27a6 p2p: remove "cap" enr entry (#19800)
This entry was an experiment, but we're moving on to the
entry-per-protocol instead.
2019-07-08 18:41:41 +03:00
22060611fb cmd/abigen: refactor command line interface (#19797)
* cmd, common: refactor abigen command line interface

* cmd/abigen: address comment
2019-07-08 14:59:07 +02:00
cdfe9a3a2a eth, les: add sanity checks for unbounded block fields (#19573)
This PR adds some hardening in the lower levels of the protocol stack, so that we bail early on invalid data. The attacks this PR primarily protects against are at the "annoyance" level: without the checks, they could write a couple of megabytes of data into the log output, which is somewhat resource intensive.
2019-07-08 11:42:22 +02:00
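As a purely illustrative example of this kind of early bail-out (the bound and field name below are invented, not the checks the PR actually adds), a handler can reject a message whose variable-length field exceeds a sane size before doing any further work:

```go
package main

import "fmt"

// maxExtraDataSize is an illustrative bound, not a real protocol limit.
const maxExtraDataSize = 32 * 1024

// sanityCheck rejects obviously oversized input before it reaches more
// expensive processing (or the logs) further down the stack.
func sanityCheck(extraData []byte) error {
	if len(extraData) > maxExtraDataSize {
		return fmt.Errorf("extra data too large: %d > %d bytes", len(extraData), maxExtraDataSize)
	}
	return nil
}

func main() {
	fmt.Println(sanityCheck(make([]byte, 64*1024))) // rejected
	fmt.Println(sanityCheck([]byte("hello")))       // accepted (nil error)
}
```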
5bc9ccfa0a accounts/abi/bind: link dependent libs in deploy (#19718)
* accounts, abigen: link dependent libs in deploy

* abigen: add java generation

* bind: Fix unit tests

* abigen: add unit test

* Fix CI

* Post-rebase fixes

* Fix rebase issue

* accounts/abi: Gary's review feedback

* accounts/abi: More Gary feedback

* accounts/abi: minor fixes
2019-07-08 10:27:05 +02:00
f2eb3b1c56 core: lessen mem-spike during 1.8->1.9 conversion (#19610)
* core/blockchain: lessen mem-spike during 1.8->1.9 conversion

* core/blockchain.go: make levedb->freezer conversion gradually

* core/blockchain: write the batch
2019-07-08 11:24:16 +03:00
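The gist of "lessen mem-spike" and "write the batch" is to move data in bounded chunks rather than accumulating one huge in-memory batch. A generic, hedged sketch of that pattern (names and the batch size are made up, not geth's):

```go
package main

import "fmt"

const batchLimit = 4096 // flush after this many items (illustrative value)

// migrate copies items to the destination in bounded batches so memory use
// stays roughly constant regardless of how much data is being converted.
func migrate(src []string, write func(batch []string)) {
	batch := make([]string, 0, batchLimit)
	for _, item := range src {
		batch = append(batch, item)
		if len(batch) >= batchLimit {
			write(batch)      // flush the full batch...
			batch = batch[:0] // ...and reuse the backing array
		}
	}
	if len(batch) > 0 {
		write(batch) // write the final, partial batch
	}
}

func main() {
	total := 0
	migrate(make([]string, 10000), func(b []string) { total += len(b) })
	fmt.Println("migrated", total, "items")
}
```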
7fd82a0e3e p2p: add address info to peer event reporting (#19716) 2019-07-05 20:27:13 +02:00
dcc4adfcd7 cmd/geth: wrong memory size sanitizing on OpenBSD (#19793) 2019-07-05 13:13:21 +03:00
d9c75cd10e accounts/abi/bind: fix typo in comments (#19791) 2019-07-04 10:00:34 +02:00
6814797173 accounts, cmd, contracts, les: integrate clef for transaction signing (#19783)
* accounts, cmd, contracts, les: integrate clef for transaction signing

* accounts, cmd/checkpoint-admin, signer/core: minor fixups
2019-07-03 22:54:59 +03:00
59a3198382 les: remove half-finished priority pool APIs (#19780)
* les: remove half-finish APIs

* les: remove half-finish APIs
2019-07-03 21:23:06 +03:00
8d2cf028a5 vendor: update karalabe/usb to fix CGO=0 builds (#19790) 2019-07-03 18:22:27 +03:00
5f5de49cd9 accounts/abi: enable struct golang binding generation (#18491)
* accounts/abi, cmd/abigen: support tuple

accounts/abi/bind, cmd/abigen: add objc back

accounts/abi/bind: use byte[24] as function indicator

accounts/abi/bind: resolve struct slice or array

accounts/abi/bind: remove sort logic

accounts: fix issues in abi

* accounts/abi: address comment
2019-07-03 12:17:43 +02:00
ca6c8c2af4 core: fix receipt insertion (#19764) 2019-07-03 11:19:15 +03:00
802074cba9 core: fix chain indexer (#19786)
This PR fixes an issue in the chain indexer. Currently the chain indexer
validates whether the stored data is canonical by comparing the section head
with the canonical hash. But the header of the checkpoint may not exist in
the database, so validation should be skipped for sections below the
checkpoint.
2019-07-03 11:18:48 +03:00
32273df0ea core: fix chain indexer reorg bug (#19748)
* core: fix chain indexer reorg bug

* core: prevent reverting valid section when reorg happens
2019-07-02 15:30:32 +03:00
de6facb966 Merge pull request #19784 from karalabe/fix-constantinople-fix
cmd, eth, les, param: drop --override.constantinople
2019-07-02 14:59:28 +03:00
22411919da cmd, eth, les, param: drop --override.constantinople 2019-07-02 14:14:59 +03:00
a0943b8932 cmd/clef, signer: refresh tutorial, fix noticed issues (#19774)
* cmd/clef, signer: refresh tutorial, fix noticed issues

* cmd/clef, signer: support removing stored keys (delpw + rules)

* cmd/clef: polishes + Geth integration in the tutorial
2019-07-02 14:01:47 +03:00
6bf5555c4f accounts/abi/bind: Accept function ptr parameter (#19755)
* accounts/abi/bind: Accept function ptr parameter

They are translated as [24]byte

* Add Java template version

* accounts/abi/bind: fix merge issue

* Fix CI
2019-07-02 09:52:58 +02:00
0b26a826e9 accounts/abi: Fix method overwritten by same name methods. (#17099)
* accounts/abi: Fix method overwritten by same name methods.

* accounts/abi: Fix method overwritten by same name methods.

* accounts/abi: avoid possible name conflict

Co-authored-by: Guillaume Ballet <gballet@gmail.com>
2019-07-01 16:51:21 +02:00
f7cdea2bdc all: on-chain oracle checkpoint syncing (#19543)
* all: implement simple checkpoint syncing

cmd, les, node: remove callback mechanism

cmd, node: remove callback definition

les: simplify the registrar

les: expose checkpoint rpc services in the light client

les, light: don't store untrusted receipt

cmd, contracts, les: discard stale checkpoint

cmd, contracts/registrar: loose restriction of registeration

cmd, contracts: add replay-protection

all: off-chain multi-signature contract

params: deploy checkpoint contract for rinkeby

cmd/registrar: add raw signing mode for registrar

cmd/registrar, contracts/registrar, les: fixed messages

* cmd/registrar, contracts/registrar: fix lints

* accounts/abi/bind, les: address comments

* cmd, contracts, les, light, params: minor checkpoint sync cleanups

* cmd, eth, les, light: move checkpoint config to config file

* cmd, eth, les, params: address comments

* eth, les, params: address comments

* cmd: polish up the checkpoint admin CLI

* cmd, contracts, params: deploy new version contract

* cmd/checkpoint-admin: add another flag for clef mode signing

* cmd, contracts, les: rename and regen checkpoint oracle with abigen
2019-06-28 10:34:02 +03:00
702f52fb99 les: prefer nil slices over zero-length slices (#19081) 2019-06-27 12:03:56 +03:00
6069b1a5f5 mobile: fix mobile interface (#19180)
* mobile: fix mobile interface

* mobile, accounts: generate correct java binding

* accounts: fix java type binding

* mobile: support integer slice

* accounts/abi/bind, cmd/abigen: implement java binding tests
2019-06-27 11:48:13 +03:00
fd072c2fd1 eth: fix sync bloom panic (#19757)
* eth: fix sync bloom panic

* eth: delete useless test cases
2019-06-26 11:00:21 +03:00
54fd263b40 whisper: PoW calculations as specified in EIP-627 (#19753)
* whisper: PoW calculations as specified in EIP-627

* Fix unit tests
2019-06-25 12:01:34 +02:00
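For context, EIP-627 defines an envelope's PoW roughly as 2 to the number of leading zero bits of the envelope hash, divided by the envelope size times its TTL. A hedged one-function sketch of that formula (how the hash and size are obtained is omitted):

```go
package main

import (
	"fmt"
	"math"
)

// powValue approximates the EIP-627 proof-of-work metric:
// 2^leadingZeroBits / (envelope size in bytes * TTL in seconds).
func powValue(leadingZeroBits int, size, ttl uint64) float64 {
	return math.Pow(2, float64(leadingZeroBits)) / (float64(size) * float64(ttl))
}

func main() {
	fmt.Printf("%.4f\n", powValue(8, 256, 60)) // 256/(256*60) ≈ 0.0167
}
```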
2ca89ea479 cmd/evm: evm input minor fixes (#19740)
* cmd/evm: evm input minor fixes, handle prefix, validate length, fixes #18041

* cmd/evm: remove whitespace
2019-06-25 11:03:04 +03:00
1da5e0ebb0 core/state, cmd/geth: streaming json output for dump command (#15475)
* core/state, cmd/geth: streaming json output dump cmd + optional code+storage

* dump: add option to continue even if preimages are missing

* core, evm: lint nits

* cmd: use local flags for dump, omit empty code/storage

* core/state: fix state dump test
2019-06-24 17:16:44 +03:00
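The streaming part boils down to emitting one JSON object per account as the state is iterated, instead of building a single huge document in memory. A minimal sketch with encoding/json (illustrative types, not the actual dump code):

```go
package main

import (
	"encoding/json"
	"os"
)

type account struct {
	Address string `json:"address"`
	Balance string `json:"balance"`
}

func main() {
	enc := json.NewEncoder(os.Stdout)
	// Each account is written as its own JSON line as it is visited, so memory
	// use does not grow with the size of the state being dumped.
	for _, acc := range []account{
		{Address: "0x0000000000000000000000000000000000000001", Balance: "111111111"},
		{Address: "0x0000000000000000000000000000000000000002", Balance: "222222222"},
	} {
		if err := enc.Encode(acc); err != nil {
			panic(err)
		}
	}
}
```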
e4a1488b2f abi: adding the method EventByID and its test (#19359)
This function searches for an event+parameters in the ABI and returns it if found.

Co-authored-by: Victor Tran <vu.tran54@gmail.com>
Co-authored-by: Guillaume Ballet <gballet@gmail.com>
2019-06-24 12:52:50 +02:00
98099d6fa7 rpc: fix subscription buffer documentation and test (#19747)
This PR updates a comment about the maximum client subscription buffer
to reflect changes made previously, and fixes a test that wouldn't fail
when wantError == true but execution did not return an error.
2019-06-24 12:43:18 +02:00
92a90d7578 graphql: check the integrity of the CDN files (#19742)
* graphql: check the integrity of the cdn files

* graphql: omit go-bindata
2019-06-24 11:56:55 +03:00
f578d41ee6 core/vm, internal/ethapi: fail on eth_call when it times out, fixes #19186 (#19737) 2019-06-24 11:54:06 +03:00
cdadf57bf9 p2p/simulations: Enable access to MsgEvents with execadapter (#19749) 2019-06-21 13:45:32 +02:00
60c062e17d core: move TxPool reorg and events to background goroutine (#19705)
* core: move TxPool reorg and events to background goroutine

This change moves internal queue re-shuffling work in TxPool to a
background goroutine, TxPool.runReorg. Requests to execute runReorg are
accumulated by the new scheduleReorgLoop. The new loop also accumulates
transaction events.

The motivation for this change is making sends to txFeed synchronous
instead of sending them in one-off goroutines launched by 'add' and
'promoteExecutables'. If a downstream consumer of txFeed is blocked for
a while, reorg requests and events will queue up.

* core: remove homestead check in TxPool

This change removes tracking of the homestead block number from TxPool.
The homestead field was used to enforce minimum gas of 53000 for
contract creations after the homestead fork, but not before it. Since
nobody would want to configure a non-homestead chain nowadays and contract
creations usually take more than 53000 gas, the extra correctness is
redundant and can be removed.

* core: fixes for review comments

* core: remove BenchmarkPoolInsert

This is useless now because there is no separate code path for
individual transactions anymore.

* core: fix pending counter metric

* core: fix pool tests

* core: dedup txpool announced events, discard stales

* core: reorg tx promotion/demotion to avoid weird pending gaps
2019-06-21 11:29:14 +03:00
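The description above is essentially a coalescing request loop: callers post reorg requests on a channel, one background goroutine batches whatever arrived while the previous run was in flight, and event sends happen synchronously from that loop. A hedged sketch of the pattern (names and timings are invented; this is not the TxPool code):

```go
package main

import (
	"fmt"
	"time"
)

// scheduleLoop coalesces reorg requests: while one run is in flight, further
// requests are only queued, and the next run processes them all at once.
func scheduleLoop(requests <-chan string, done <-chan struct{}) {
	var (
		pending  []string
		running  bool
		finished = make(chan struct{}, 1)
	)
	launch := func() {
		batch := pending
		pending = nil
		running = true
		go func() {
			fmt.Println("runReorg over", len(batch), "requests")
			time.Sleep(10 * time.Millisecond) // stand-in for the real work
			finished <- struct{}{}
		}()
	}
	for {
		select {
		case req := <-requests:
			pending = append(pending, req)
			if !running {
				launch()
			}
		case <-finished:
			running = false
			if len(pending) > 0 {
				launch() // work piled up while the last run was executing
			}
		case <-done:
			return
		}
	}
}

func main() {
	requests, done := make(chan string), make(chan struct{})
	go scheduleLoop(requests, done)
	for i := 0; i < 5; i++ {
		requests <- fmt.Sprintf("reorg-%d", i)
	}
	time.Sleep(50 * time.Millisecond)
	close(done)
}
```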
25c3282cf1 mobile: fix comment typos (#19741) 2019-06-20 11:38:31 +02:00
1eddd332d2 Merge pull request #19680 from holiman/bootnode_update
params: add new bootnodes
2019-06-20 12:09:37 +03:00
c94068a8bb Merge pull request #19700 from karalabe/cleanup-graphql
cmd, graphql, node: graphql flag polishes, les integration
2019-06-20 09:54:53 +03:00
e3ec77f50e cmd, graphql, node: graphql flag polishes, les integration 2019-06-20 09:40:26 +03:00
8d815e365c rpc: fix rare deadlock when canceling HTTP call context (#19715)
When cancelling the context for a call on a HTTP-based client while the
call is running, the select in requestOp.wait may hit the <-context.Done()
case instead of the <-op.resp case. This doesn't happen often -- our
cancel test hasn't caught this even though it ran thousands of times
on CI since the RPC client was added.

Fixes #19714
2019-06-20 09:36:27 +03:00
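To make the race concrete, here is a self-contained illustration (not geth's rpc package): when cancellation and the response arrive around the same time, both select cases are ready and Go may pick either, so the cancellation branch also has to deal with a response that is already there.

```go
package main

import (
	"context"
	"fmt"
)

// wait mirrors the shape of a request/response select. If the context is
// cancelled while the response has already arrived, select may take either
// branch, so the cancellation branch still checks for a pending response.
func wait(ctx context.Context, resp <-chan string) (string, error) {
	select {
	case <-ctx.Done():
		select {
		case r := <-resp: // the response raced in anyway; use it rather than drop it
			return r, nil
		default:
		}
		return "", ctx.Err()
	case r := <-resp:
		return r, nil
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	resp := make(chan string, 1)
	resp <- "result"
	cancel() // both channels are now ready: select picks one pseudo-randomly
	fmt.Println(wait(ctx, resp))
}
```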
3271a5afa0 miner: don't update pending state when no transactions are added (#19734)
* miner: don't update pending state when no transactions are added

* miner: avoid transaction processing when pending block is already full
2019-06-19 14:09:28 +03:00
2b303e7d62 clef: fix stutter in warning message (#19736) 2019-06-19 11:43:04 +02:00
96f8be36ad Merge pull request #19732 from karalabe/simulated-eip155
accounts/abi/bind/backends: use EIP155 on the simulated chain
2019-06-18 12:14:00 +03:00
38c914be64 accounts/abi/bind/backends: use EIP155 on the simulated chain 2019-06-18 11:49:30 +03:00
cf47ee5339 Merge pull request #19731 from holiman/fix_19707
accounts/keystore: fix #19707, avoid keyword as variable name
2019-06-18 11:24:11 +03:00
2046d66fe5 accounts/keystore: fix #19707, avoid keyword as variable name 2019-06-18 09:46:56 +02:00
d6ccfd92f7 Merge pull request #19725 from karalabe/goroutine-metrics
metrics: gather and export threads and goroutines
2019-06-17 10:58:44 +03:00
5298eb7519 metrics: gather and export threads and goroutines 2019-06-17 10:53:17 +03:00
79c90dce20 appveyor: bump to Go 1.12.6 (#19709)
* appveyor: bump to Go 1.12.6

* vendor/vendor.json: govendor fetch github.com/karalabe/usb/^
2019-06-13 22:48:04 +03:00
2da6d1e047 README.md: update formatting (#19532) 2019-06-13 15:23:22 +02:00
f213ceb83f p2p: add more info to peer addition and removal logs (#19712) 2019-06-13 11:51:11 +02:00
c8c3ebd593 les: reject client if it makes too many invalid requests (#19691)
* les: reject client connection if it makes too much invalid req

* les: address comments

* les: use uint32

* les: fix variable name

* les: add invalid counter for duplicate invalid req
2019-06-12 14:09:40 +03:00
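The mechanism reads like a per-client counter with a disconnect threshold; a hedged sketch with invented names and an invented cutoff (not the les implementation):

```go
package main

import "fmt"

// invalidReqThreshold is an illustrative cutoff, not the value used by les.
const invalidReqThreshold = 16

type clientPeer struct {
	id         string
	invalidCnt uint32
}

// reportInvalid bumps the peer's invalid-request counter and reports whether
// the peer should be dropped for sending too many bad (or duplicate) requests.
func (p *clientPeer) reportInvalid() (drop bool) {
	p.invalidCnt++
	return p.invalidCnt >= invalidReqThreshold
}

func main() {
	p := &clientPeer{id: "peer-1"}
	for i := 0; i < invalidReqThreshold; i++ {
		if p.reportInvalid() {
			fmt.Println("dropping", p.id, "after", p.invalidCnt, "invalid requests")
		}
	}
}
```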
b3f7609d7d accounts/abi/bind: rename NewKeystoreTransactor (#19703)
renamed NewKeyStoreFromTransactor to NewKeystoreTransactor
fixed godoc
2019-06-12 14:06:37 +03:00
32dafea1f0 Merge pull request #19701 from holiman/fixles
les/handler: avoid lookup missing state
2019-06-12 13:08:42 +03:00
3d7d7384ca Merge pull request #19702 from karalabe/txprop-stricter-limiting
eth: enforce stricter known limits on idle peers
2019-06-12 13:07:56 +03:00
fc4fee8649 eth: enforce stricter known limits on idle peers 2019-06-12 12:30:06 +03:00
3675b8545d les/handler: avoid lookup missing state 2019-06-12 11:18:49 +02:00
50e3795eef core/types: document RawSignatureValues (#19695) 2019-06-12 10:22:33 +03:00
c4e8806d9b dashboard: update yarn.lock (#19697) 2019-06-12 10:01:57 +03:00
2b54666018 ethclient, internal/ethapi: add support for EIP-695 (eth_chainId) (#19694)
EIP-695 was written in 2017. Parity and Infura have support for this
method and we should, too.
2019-06-11 14:12:33 +03:00
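For reference, eth_chainId takes no parameters and returns the chain ID as a hex quantity. A hedged example of calling it over HTTP JSON-RPC (it assumes a local node listening on the default 8545 port with the HTTP RPC server enabled):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

func main() {
	// EIP-695: eth_chainId has no parameters and returns e.g. "0x1" for mainnet.
	body := strings.NewReader(`{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}`)
	resp, err := http.Post("http://localhost:8545", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(out)) // {"jsonrpc":"2.0","id":1,"result":"0x1"} or similar
}
```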
c420dcb39c p2p: enforce connection retry limit on server side (#19684)
The dialer limits itself to one attempt every 30s. Apply the same limit
in Server and reject peers which try to connect too eagerly. The check
against the limit happens right after accepting the connection.

Further changes in this commit ensure we pass the Server logger
down to Peer instances, discovery and dialState. Unit test logging now
works in all Server tests.
2019-06-11 12:45:33 +02:00
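The server-side check amounts to remembering the last inbound attempt per IP and rejecting anything more frequent than the dialer's own 30-second spacing. A stripped-down, hedged sketch (not the p2p.Server code):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

const inboundThrottle = 30 * time.Second // mirrors the dialer's retry spacing

// inboundHistory remembers, per remote IP, when the last inbound attempt was
// admitted for checking.
type inboundHistory struct {
	lastSeen map[string]time.Time
}

// allow reports whether a new inbound attempt from addr should be admitted,
// and records the attempt if so.
func (h *inboundHistory) allow(addr net.IP, now time.Time) bool {
	key := addr.String()
	if last, ok := h.lastSeen[key]; ok && now.Sub(last) < inboundThrottle {
		return false // peer is reconnecting too eagerly
	}
	h.lastSeen[key] = now
	return true
}

func main() {
	h := &inboundHistory{lastSeen: make(map[string]time.Time)}
	ip := net.ParseIP("203.0.113.7")
	now := time.Now()
	fmt.Println(h.allow(ip, now))                     // true: first attempt
	fmt.Println(h.allow(ip, now.Add(5*time.Second)))  // false: too soon
	fmt.Println(h.allow(ip, now.Add(31*time.Second))) // true: past the window
}
```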
c0a034ec89 eth, les: reject stale request (#19689)
* eth, les: reject stale request

* les: reuse local head number
2019-06-11 10:40:32 +03:00
3d3e83ecff Merge pull request #19692 from karalabe/metrics-extensions
core, ethdb, metrics, p2p: expose various counter metrics for grafana
2019-06-11 10:07:41 +03:00
b02958b9c5 core, ethdb, metrics, p2p: expose various counter metrics for grafana 2019-06-11 09:49:13 +03:00
f9c0e093ed core/rawdb: avoid O_APPEND (#19676)
* Fix file system access for Windows

* Encapsulate file accesses

* Style fixes
2019-06-10 12:45:12 +03:00
6f80629383 accounts: added transactorFromKeyStore (#19685) 2019-06-08 15:19:26 +02:00
afb9e6513f vendor: remove unused dependencies (#19683)
* vendor: remove unused dependencies

These were used by swarm code, which has now migrated to its own repository.

* travis.yml: remove sudo requirement for test builders

These needed sudo to run FUSE tests for swarm.
2019-06-07 17:51:18 +03:00
e83c3ccc47 p2p/enode: improve IPv6 support, add ENR text representation (#19663)
* p2p/enr: add entries for IPv4/IPv6 separation

This adds entry types for "ip6", "udp6", "tcp6" keys. The IP type stays
around because removing it would break a lot of code and force everyone
to care about the distinction.

* p2p/enode: track IPv4 and IPv6 address separately

LocalNode predicts the local node's UDP endpoint and updates the record.
This change makes it predict IPv4 and IPv6 endpoints separately since
they can now be in the record at the same time.

* p2p/enode: implement base64 text format
* all: switch to enode.Parse(...)

This allows passing base64-encoded node records to all the places that
previously accepted enode:// URLs. The URL format is still supported.

* cmd/bootnode, p2p: log node URL instead of ENR

...and return the base64 record in NodeInfo.
2019-06-07 15:31:00 +02:00
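Per EIP-778, the text form referred to here is the string "enr:" followed by the URL-safe, unpadded base64 encoding of the RLP-encoded record. A hedged sketch that only peels off that outer layer (it deliberately stops short of decoding the RLP):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// decodeENRText strips the "enr:" prefix and base64url-decodes the payload,
// returning the raw RLP bytes of the record.
func decodeENRText(s string) ([]byte, error) {
	if !strings.HasPrefix(s, "enr:") {
		return nil, fmt.Errorf("missing enr: prefix")
	}
	return base64.RawURLEncoding.DecodeString(strings.TrimPrefix(s, "enr:"))
}

func main() {
	// A made-up payload, just to exercise the decoder.
	raw, err := decodeENRText("enr:" + base64.RawURLEncoding.EncodeToString([]byte{0xc0}))
	fmt.Println(raw, err)
}
```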
896322bf88 cmd/devp2p: add devp2p debug tool (#19657)
* p2p/discover: export Ping and RequestENR

These two are useful for checking the status of a node.

* cmd/devp2p: add devp2p debug tool

This is a new tool for debugging p2p issues. It supports a few
basic tasks for now, but many more things can and will be added
in the near future.

   devp2p enrdump            -- prints ENRs readably
   devp2p discv4 ping        -- checks if a node is up
   devp2p discv4 requestenr  -- gets a node's record
   devp2p discv4 resolve     -- finds a node through the DHT
2019-06-07 15:29:16 +02:00
1b15c6da33 accounts/scwallet: Disable macos support (#19679) 2019-06-07 12:41:54 +03:00
ce43c2366d Merge pull request #19681 from karalabe/fix-libusb-docker
vendor: pull in USB fix for docker (alpine/musl)
2019-06-07 12:37:32 +03:00
9805288cdd vendor: pull in USB fix for docker (alpine/musl) 2019-06-07 12:30:50 +03:00
a4eeeedb37 params: add new bootnodes 2019-06-07 10:55:45 +02:00
19fa9064d7 SECURITY.md: create security policy (#19666)
GitHub has started supporting SECURITY.md to contain a project's security policy. Adding this information to the repository makes it easier to determine how to disclose a vulnerability, as SECURITY.md becomes a standard.

The PGP fingerprint and key are taken from bounty.ethereum.org.
2019-06-06 14:40:52 +02:00
f01f3f266c Merge pull request #19674 from karalabe/usb-ios-fixup
vendor: pull fixed usb library for nocgo builds
2019-06-06 12:50:03 +03:00
c52390d078 vendor: pull fixed usb library for nocgo builds 2019-06-06 12:25:32 +03:00
2823ce0086 eth: check for DefaultConfig.NetworkId in test (#17599)
This makes the test work if NetworkId is changed in forks of go-ethereum.
2019-06-06 10:49:35 +02:00
30c2b1b06d Merge pull request #19671 from holiman/usbfix
account/usbwallet: abort usb enumeration after failures
2019-06-05 16:52:31 +03:00
2fae1bde42 account/usbwallet: abort usb enumeration after failures 2019-06-05 16:48:09 +03:00
b8ca3cb7d2 cmd/clef: enable smartcard hub (#19649)
* cmd/clef: Enable smartcard hub

* clef: don't error if pcsc is not installed
2019-06-05 15:27:37 +02:00
7c9307c683 cmd: Add retesteth command (to support execution and generation of tests via retesteth) (#19631)
* Add retesteth command

* Remove label and insert full version

* mineBlock - break the inner loop when the block is full

* Fixes for touched non-reward accounts, gas limit issues

* Not fail when SendTx has transaction with incorrect RLP

* Fix linter (unnecessary conversion)

* retesteth: add usage string to flag
2019-06-05 14:03:23 +02:00
7641bbe535 eth/downloader: make syncing error more obvious (#19413) 2019-06-05 14:00:46 +02:00
645756cda5 cmd/utils: close quote (#19665) 2019-06-04 21:17:12 +02:00
d97f0372b1 accounts/scwallet: don't error when pcsc socket is missing (#19662)
* scwallet: don't error when pcsc socket is missing

* review feedback

* more review feedback
2019-06-04 18:40:34 +03:00
de38a1dbd4 Merge pull request #19588 from gballet/trezor-fix-ownlib
accounts/usbwallet: add webusb trezor support
2019-06-04 18:06:11 +03:00
5d68400cad accounts/usbwallet, vendor: switch from HID to generic USB lib 2019-06-04 18:04:55 +03:00
42b81f94ad swarm: code cleanup, move to ethersphere/swarm (#19661) 2019-06-04 16:35:36 +03:00
15f24ff189 ethclient: ensure tx json is not nil before accessing it (#19653)
TransactionInBlock crashed if json was nil and there was an error
because it tried to access fields `From` and `BlockHash` of the nil object.
2019-06-03 17:52:02 +02:00
17381ecc66 core/signer, clef: improve ui-test flow, fix errors in uint handling (#19584)
* core/signer, clef: improve ui-test flow, fix errors in uint handling for eip-712

* core/signer: add fuzzer testcases + crashfixes

* signer: address review concerns, check sign in integer parsing
2019-06-03 16:56:05 +02:00
b4cc7b660c accounts/usbwallet: recreate Trezor protocol, support old and new 2019-06-03 16:08:04 +03:00
4799b5abd4 accounts/usbwallet: support webusb for Trezor wallets 2019-06-03 16:08:03 +03:00
848 changed files with 37292 additions and 137377 deletions

.travis.yml

@@ -5,35 +5,23 @@ matrix:
include:
- os: linux
dist: xenial
sudo: required
go: 1.10.x
script:
- sudo modprobe fuse
- sudo chmod 666 /dev/fuse
- sudo chown root:$USER /etc/fuse.conf
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
- os: linux
dist: xenial
sudo: required
go: 1.11.x
script:
- sudo modprobe fuse
- sudo chmod 666 /dev/fuse
- sudo chown root:$USER /etc/fuse.conf
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
# These are the latest Go versions.
- os: linux
dist: xenial
sudo: required
go: 1.12.x
script:
- sudo modprobe fuse
- sudo chmod 666 /dev/fuse
- sudo chown root:$USER /etc/fuse.conf
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
@@ -64,7 +52,7 @@ matrix:
- go run build/ci.go lint
# This builder does the Ubuntu PPA upload
- if: repo = ethereum/go-ethereum AND type = push
- if: type = push
os: linux
dist: xenial
go: 1.12.x
@@ -86,7 +74,7 @@ matrix:
- go run build/ci.go debsrc -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"
# This builder does the Linux Azure uploads
- if: repo = ethereum/go-ethereum AND type = push
- if: type = push
os: linux
dist: xenial
sudo: required
@@ -120,7 +108,7 @@ matrix:
- go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
# This builder does the Linux Azure MIPS xgo uploads
- if: repo = ethereum/go-ethereum AND type = push
- if: type = push
os: linux
dist: xenial
services:
@@ -148,7 +136,7 @@ matrix:
- go run build/ci.go archive -arch mips64le -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
# This builder does the Android Maven and Azure uploads
- if: repo = ethereum/go-ethereum AND type = push
- if: type = push
os: linux
dist: xenial
addons:
@@ -185,7 +173,7 @@ matrix:
- go run build/ci.go aar -signer ANDROID_SIGNING_KEY -deploy https://oss.sonatype.org -upload gethstore/builds
# This builder does the OSX Azure, iOS CocoaPods and iOS Azure uploads
- if: repo = ethereum/go-ethereum AND type = push
- if: type = push
os: osx
go: 1.12.x
env:
@@ -214,7 +202,7 @@ matrix:
- go run build/ci.go xcode -signer IOS_SIGNING_KEY -deploy trunk -upload gethstore/builds
# This builder does the Azure archive purges to avoid accumulating junk
- if: repo = ethereum/go-ethereum AND type = cron
- if: type = cron
os: linux
dist: xenial
go: 1.12.x
@@ -224,12 +212,3 @@ matrix:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go purge -store gethstore/builds -days 14
- name: Race Detector for Swarm
if: repo = ethersphere/go-ethereum
os: linux
dist: xenial
go: 1.12.x
git:
submodules: false # avoid cloning ethereum/tests
script: ./build/travis_keepalive.sh go test -timeout 20m -race ./swarm... ./p2p/{protocols,simulations,testing}/...

Makefile

@@ -2,7 +2,7 @@
# with Go source code. If you know what GOPATH is then you probably
# don't need to bother with make.
.PHONY: geth android ios geth-cross swarm evm all test clean
.PHONY: geth android ios geth-cross evm all test clean
.PHONY: geth-linux geth-linux-386 geth-linux-amd64 geth-linux-mips64 geth-linux-mips64le
.PHONY: geth-linux-arm geth-linux-arm-5 geth-linux-arm-6 geth-linux-arm-7 geth-linux-arm64
.PHONY: geth-darwin geth-darwin-386 geth-darwin-amd64
@@ -16,11 +16,6 @@ geth:
@echo "Done building."
@echo "Run \"$(GOBIN)/geth\" to launch geth."
swarm:
build/env.sh go run build/ci.go install ./cmd/swarm
@echo "Done building."
@echo "Run \"$(GOBIN)/swarm\" to launch swarm."
all:
build/env.sh go run build/ci.go install
@@ -57,9 +52,6 @@ devtools:
@type "solc" 2> /dev/null || echo 'Please install solc'
@type "protoc" 2> /dev/null || echo 'Please install protoc'
swarm-devtools:
env GOBIN= go install ./cmd/swarm/mimegen
# Cross Compilation Targets (xgo)
geth-cross: geth-linux geth-darwin geth-windows geth-android geth-ios

README.md

@@ -9,303 +9,336 @@ https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/6874
[![Travis](https://travis-ci.org/ethereum/go-ethereum.svg?branch=master)](https://travis-ci.org/ethereum/go-ethereum)
[![Discord](https://img.shields.io/badge/discord-join%20chat-blue.svg)](https://discord.gg/nthXNEv)
Automated builds are available for stable releases and the unstable master branch.
Binary archives are published at https://geth.ethereum.org/downloads/.
Automated builds are available for stable releases and the unstable master branch. Binary
archives are published at https://geth.ethereum.org/downloads/.
## Building the source
For prerequisites and detailed build instructions please read the
[Installation Instructions](https://github.com/ethereum/go-ethereum/wiki/Building-Ethereum)
on the wiki.
For prerequisites and detailed build instructions please read the [Installation Instructions](https://github.com/ethereum/go-ethereum/wiki/Building-Ethereum) on the wiki.
Building geth requires both a Go (version 1.10 or later) and a C compiler.
You can install them using your favourite package manager.
Once the dependencies are installed, run
Building `geth` requires both a Go (version 1.10 or later) and a C compiler. You can install
them using your favourite package manager. Once the dependencies are installed, run
make geth
```shell
make geth
```
or, to build the full suite of utilities:
make all
```shell
make all
```
## Executables
The go-ethereum project comes with several wrappers/executables found in the `cmd` directory.
The go-ethereum project comes with several wrappers/executables found in the `cmd`
directory.
| Command | Description |
|:----------:|-------------|
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options) for command line options. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI) with expanded functionality if the contract bytecode is also available. However, it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://github.com/ethereum/go-ethereum/wiki/Native-DApps:-Go-bindings-to-Ethereum-contracts) wiki page for details. |
| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug`). |
| `gethrpctest` | Developer utility tool to support our [ethereum/rpc-test](https://github.com/ethereum/rpc-tests) test suite which validates baseline conformity to the [Ethereum JSON RPC](https://github.com/ethereum/wiki/wiki/JSON-RPC) specs. Please see the [test suite's readme](https://github.com/ethereum/rpc-tests/blob/master/README.md) for details. |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://github.com/ethereum/wiki/wiki/RLP)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user-friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
| `swarm` | Swarm daemon and tools. This is the entry point for the Swarm network. `swarm --help` for command line options and subcommands. See [Swarm README](https://github.com/ethereum/go-ethereum/tree/master/swarm) for more information. |
| `puppeth` | a CLI wizard that aids in creating a new Ethereum network. |
| Command | Description |
| :-----------: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options) for command line options. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI) with expanded functionality if the contract bytecode is also available. However, it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://github.com/ethereum/go-ethereum/wiki/Native-DApps:-Go-bindings-to-Ethereum-contracts) wiki page for details. |
| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug`). |
| `gethrpctest` | Developer utility tool to support our [ethereum/rpc-test](https://github.com/ethereum/rpc-tests) test suite which validates baseline conformity to the [Ethereum JSON RPC](https://github.com/ethereum/wiki/wiki/JSON-RPC) specs. Please see the [test suite's readme](https://github.com/ethereum/rpc-tests/blob/master/README.md) for details. |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://github.com/ethereum/wiki/wiki/RLP)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user-friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
| `puppeth` | a CLI wizard that aids in creating a new Ethereum network. |
## Running geth
## Running `geth`
Going through all the possible command line flags is out of scope here (please consult our
[CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options)), but we've
enumerated a few common parameter combos to get you up to speed quickly on how you can run your
own Geth instance.
[CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options)),
but we've enumerated a few common parameter combos to get you up to speed quickly
on how you can run your own `geth` instance.
### Full node on the main Ethereum network
By far the most common scenario is people wanting to simply interact with the Ethereum network:
create accounts; transfer funds; deploy and interact with contracts. For this particular use-case
the user doesn't care about years-old historical data, so we can fast-sync quickly to the current
state of the network. To do so:
By far the most common scenario is people wanting to simply interact with the Ethereum
network: create accounts; transfer funds; deploy and interact with contracts. For this
particular use-case the user doesn't care about years-old historical data, so we can
fast-sync quickly to the current state of the network. To do so:
```
```shell
$ geth console
```
This command will:
* Start geth in fast sync mode (default, can be changed with the `--syncmode` flag), causing it to
download more data in exchange for avoiding processing the entire history of the Ethereum network,
which is very CPU intensive.
* Start up Geth's built-in interactive [JavaScript console](https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console),
* Start `geth` in fast sync mode (default, can be changed with the `--syncmode` flag),
causing it to download more data in exchange for avoiding processing the entire history
of the Ethereum network, which is very CPU intensive.
* Start up `geth`'s built-in interactive [JavaScript console](https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console),
(via the trailing `console` subcommand) through which you can invoke all official [`web3` methods](https://github.com/ethereum/wiki/wiki/JavaScript-API)
as well as Geth's own [management APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs).
This tool is optional and if you leave it out you can always attach to an already running Geth instance
with `geth attach`.
as well as `geth`'s own [management APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs).
This tool is optional and if you leave it out you can always attach to an already running
`geth` instance with `geth attach`.
### A Full node on the Ethereum test network
Transitioning towards developers, if you'd like to play around with creating Ethereum contracts, you
almost certainly would like to do that without any real money involved until you get the hang of the
entire system. In other words, instead of attaching to the main network, you want to join the **test**
network with your node, which is fully equivalent to the main network, but with play-Ether only.
Transitioning towards developers, if you'd like to play around with creating Ethereum
contracts, you almost certainly would like to do that without any real money involved until
you get the hang of the entire system. In other words, instead of attaching to the main
network, you want to join the **test** network with your node, which is fully equivalent to
the main network, but with play-Ether only.
```
```shell
$ geth --testnet console
```
The `console` subcommand has the exact same meaning as above and they are equally useful on the
testnet too. Please see above for their explanations if you've skipped here.
The `console` subcommand has the exact same meaning as above and they are equally
useful on the testnet too. Please see above for their explanations if you've skipped here.
Specifying the `--testnet` flag, however, will reconfigure your Geth instance a bit:
Specifying the `--testnet` flag, however, will reconfigure your `geth` instance a bit:
* Instead of using the default data directory (`~/.ethereum` on Linux for example), Geth will nest
itself one level deeper into a `testnet` subfolder (`~/.ethereum/testnet` on Linux). Note, on OSX
and Linux this also means that attaching to a running testnet node requires the use of a custom
endpoint since `geth attach` will try to attach to a production node endpoint by default. E.g.
`geth attach <datadir>/testnet/geth.ipc`. Windows users are not affected by this.
* Instead of connecting the main Ethereum network, the client will connect to the test network,
which uses different P2P bootnodes, different network IDs and genesis states.
* Instead of using the default data directory (`~/.ethereum` on Linux for example), `geth`
will nest itself one level deeper into a `testnet` subfolder (`~/.ethereum/testnet` on
Linux). Note, on OSX and Linux this also means that attaching to a running testnet node
requires the use of a custom endpoint since `geth attach` will try to attach to a
production node endpoint by default. E.g.
`geth attach <datadir>/testnet/geth.ipc`. Windows users are not affected by
this.
* Instead of connecting the main Ethereum network, the client will connect to the test
network, which uses different P2P bootnodes, different network IDs and genesis states.
*Note: Although there are some internal protective measures to prevent transactions from crossing
over between the main network and test network, you should make sure to always use separate accounts
for play-money and real-money. Unless you manually move accounts, Geth will by default correctly
separate the two networks and will not make any accounts available between them.*
*Note: Although there are some internal protective measures to prevent transactions from
crossing over between the main network and test network, you should make sure to always
use separate accounts for play-money and real-money. Unless you manually move
accounts, `geth` will by default correctly separate the two networks and will not make any
accounts available between them.*
### Full node on the Rinkeby test network
The above test network is a cross-client one based on the ethash proof-of-work consensus algorithm. As such, it has certain extra overhead and is more susceptible to reorganization attacks due to the network's low difficulty/security. Go Ethereum also supports connecting to a proof-of-authority based test network called [*Rinkeby*](https://www.rinkeby.io) (operated by members of the community). This network is lighter, more secure, but is only supported by go-ethereum.
The above test network is a cross-client one based on the ethash proof-of-work consensus
algorithm. As such, it has certain extra overhead and is more susceptible to reorganization
attacks due to the network's low difficulty/security. Go Ethereum also supports connecting
to a proof-of-authority based test network called [*Rinkeby*](https://www.rinkeby.io)
(operated by members of the community). This network is lighter, more secure, but is only
supported by go-ethereum.
```
```shell
$ geth --rinkeby console
```
### Configuration
As an alternative to passing the numerous flags to the `geth` binary, you can also pass a configuration file via:
As an alternative to passing the numerous flags to the `geth` binary, you can also pass a
configuration file via:
```
```shell
$ geth --config /path/to/your_config.toml
```
To get an idea how the file should look like you can use the `dumpconfig` subcommand to export your existing configuration:
To get an idea how the file should look like you can use the `dumpconfig` subcommand to
export your existing configuration:
```
```shell
$ geth --your-favourite-flags dumpconfig
```
*Note: This works only with geth v1.6.0 and above.*
*Note: This works only with `geth` v1.6.0 and above.*
#### Docker quick start
One of the quickest ways to get Ethereum up and running on your machine is by using Docker:
One of the quickest ways to get Ethereum up and running on your machine is by using
Docker:
```
```shell
docker run -d --name ethereum-node -v /Users/alice/ethereum:/root \
-p 8545:8545 -p 30303:30303 \
ethereum/client-go
```
This will start geth in fast-sync mode with a DB memory allowance of 1GB just as the above command does. It will also create a persistent volume in your home directory for saving your blockchain as well as map the default ports. There is also an `alpine` tag available for a slim version of the image.
This will start `geth` in fast-sync mode with a DB memory allowance of 1GB just as the
above command does. It will also create a persistent volume in your home directory for
saving your blockchain as well as map the default ports. There is also an `alpine` tag
available for a slim version of the image.
Do not forget `--rpcaddr 0.0.0.0`, if you want to access RPC from other containers and/or hosts. By default, `geth` binds to the local interface and RPC endpoints is not accessible from the outside.
Do not forget `--rpcaddr 0.0.0.0`, if you want to access RPC from other containers
and/or hosts. By default, `geth` binds to the local interface and RPC endpoints is not
accessible from the outside.
### Programmatically interfacing Geth nodes
### Programmatically interfacing `geth` nodes
As a developer, sooner rather than later you'll want to start interacting with Geth and the Ethereum
network via your own programs and not manually through the console. To aid this, Geth has built-in
support for a JSON-RPC based APIs ([standard APIs](https://github.com/ethereum/wiki/wiki/JSON-RPC) and
[Geth specific APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs)). These can be
exposed via HTTP, WebSockets and IPC (UNIX sockets on UNIX based platforms, and named pipes on Windows).
As a developer, sooner rather than later you'll want to start interacting with `geth` and the
Ethereum network via your own programs and not manually through the console. To aid
this, `geth` has built-in support for a JSON-RPC based APIs ([standard APIs](https://github.com/ethereum/wiki/wiki/JSON-RPC)
and [`geth` specific APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs)).
These can be exposed via HTTP, WebSockets and IPC (UNIX sockets on UNIX based
platforms, and named pipes on Windows).
The IPC interface is enabled by default and exposes all the APIs supported by Geth, whereas the HTTP
and WS interfaces need to manually be enabled and only expose a subset of APIs due to security reasons.
These can be turned on/off and configured as you'd expect.
The IPC interface is enabled by default and exposes all the APIs supported by `geth`,
whereas the HTTP and WS interfaces need to manually be enabled and only expose a
subset of APIs due to security reasons. These can be turned on/off and configured as
you'd expect.
HTTP based JSON-RPC API options:
* `--rpc` Enable the HTTP-RPC server
* `--rpcaddr` HTTP-RPC server listening interface (default: "localhost")
* `--rpcport` HTTP-RPC server listening port (default: 8545)
* `--rpcapi` API's offered over the HTTP-RPC interface (default: "eth,net,web3")
* `--rpcaddr` HTTP-RPC server listening interface (default: `localhost`)
* `--rpcport` HTTP-RPC server listening port (default: `8545`)
* `--rpcapi` API's offered over the HTTP-RPC interface (default: `eth,net,web3`)
* `--rpccorsdomain` Comma separated list of domains from which to accept cross origin requests (browser enforced)
* `--ws` Enable the WS-RPC server
* `--wsaddr` WS-RPC server listening interface (default: "localhost")
* `--wsport` WS-RPC server listening port (default: 8546)
* `--wsapi` API's offered over the WS-RPC interface (default: "eth,net,web3")
* `--wsaddr` WS-RPC server listening interface (default: `localhost`)
* `--wsport` WS-RPC server listening port (default: `8546`)
* `--wsapi` API's offered over the WS-RPC interface (default: `eth,net,web3`)
* `--wsorigins` Origins from which to accept websockets requests
* `--ipcdisable` Disable the IPC-RPC server
* `--ipcapi` API's offered over the IPC-RPC interface (default: "admin,debug,eth,miner,net,personal,shh,txpool,web3")
* `--ipcapi` API's offered over the IPC-RPC interface (default: `admin,debug,eth,miner,net,personal,shh,txpool,web3`)
* `--ipcpath` Filename for IPC socket/pipe within the datadir (explicit paths escape it)
You'll need to use your own programming environments' capabilities (libraries, tools, etc) to connect
via HTTP, WS or IPC to a Geth node configured with the above flags and you'll need to speak [JSON-RPC](https://www.jsonrpc.org/specification)
on all transports. You can reuse the same connection for multiple requests!
You'll need to use your own programming environments' capabilities (libraries, tools, etc) to
connect via HTTP, WS or IPC to a `geth` node configured with the above flags and you'll
need to speak [JSON-RPC](https://www.jsonrpc.org/specification) on all transports. You
can reuse the same connection for multiple requests!
**Note: Please understand the security implications of opening up an HTTP/WS based transport before
doing so! Hackers on the internet are actively trying to subvert Ethereum nodes with exposed APIs!
Further, all browser tabs can access locally running web servers, so malicious web pages could try to
subvert locally available APIs!**
**Note: Please understand the security implications of opening up an HTTP/WS based
transport before doing so! Hackers on the internet are actively trying to subvert
Ethereum nodes with exposed APIs! Further, all browser tabs can access locally
running web servers, so malicious web pages could try to subvert locally available
APIs!**
### Operating a private network
Maintaining your own private network is more involved as a lot of configurations taken for granted in
the official networks need to be manually set up.
Maintaining your own private network is more involved as a lot of configurations taken for
granted in the official networks need to be manually set up.
#### Defining the private genesis state
First, you'll need to create the genesis state of your networks, which all nodes need to be aware of
and agree upon. This consists of a small JSON file (e.g. call it `genesis.json`):
First, you'll need to create the genesis state of your networks, which all nodes need to be
aware of and agree upon. This consists of a small JSON file (e.g. call it `genesis.json`):
```json
{
"config": {
"chainId": 0,
"homesteadBlock": 0,
"eip155Block": 0,
"eip158Block": 0
},
"alloc" : {},
"coinbase" : "0x0000000000000000000000000000000000000000",
"difficulty" : "0x20000",
"extraData" : "",
"gasLimit" : "0x2fefd8",
"nonce" : "0x0000000000000042",
"mixhash" : "0x0000000000000000000000000000000000000000000000000000000000000000",
"parentHash" : "0x0000000000000000000000000000000000000000000000000000000000000000",
"timestamp" : "0x00"
"chainId": 0,
"homesteadBlock": 0,
"eip155Block": 0,
"eip158Block": 0
},
"alloc": {},
"coinbase": "0x0000000000000000000000000000000000000000",
"difficulty": "0x20000",
"extraData": "",
"gasLimit": "0x2fefd8",
"nonce": "0x0000000000000042",
"mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"timestamp": "0x00"
}
```
The above fields should be fine for most purposes, although we'd recommend changing the `nonce` to
some random value so you prevent unknown remote nodes from being able to connect to you. If you'd
like to pre-fund some accounts for easier testing, you can populate the `alloc` field with account
configs:
The above fields should be fine for most purposes, although we'd recommend changing
the `nonce` to some random value so you prevent unknown remote nodes from being able
to connect to you. If you'd like to pre-fund some accounts for easier testing, you can
populate the `alloc` field with account configs:
```json
"alloc": {
"0x0000000000000000000000000000000000000001": {"balance": "111111111"},
"0x0000000000000000000000000000000000000002": {"balance": "222222222"}
"0x0000000000000000000000000000000000000001": {
"balance": "111111111"
},
"0x0000000000000000000000000000000000000002": {
"balance": "222222222"
}
}
```
With the genesis state defined in the above JSON file, you'll need to initialize **every** Geth node
with it prior to starting it up to ensure all blockchain parameters are correctly set:
With the genesis state defined in the above JSON file, you'll need to initialize **every**
`geth` node with it prior to starting it up to ensure all blockchain parameters are correctly
set:
```
```shell
$ geth init path/to/genesis.json
```
#### Creating the rendezvous point
With all nodes that you want to run initialized to the desired genesis state, you'll need to start a
bootstrap node that others can use to find each other in your network and/or over the internet. The
clean way is to configure and run a dedicated bootnode:
With all nodes that you want to run initialized to the desired genesis state, you'll need to
start a bootstrap node that others can use to find each other in your network and/or over
the internet. The clean way is to configure and run a dedicated bootnode:
```
```shell
$ bootnode --genkey=boot.key
$ bootnode --nodekey=boot.key
```
With the bootnode online, it will display an [`enode` URL](https://github.com/ethereum/wiki/wiki/enode-url-format)
that other nodes can use to connect to it and exchange peer information. Make sure to replace the
displayed IP address information (most probably `[::]`) with your externally accessible IP to get the
actual `enode` URL.
that other nodes can use to connect to it and exchange peer information. Make sure to
replace the displayed IP address information (most probably `[::]`) with your externally
accessible IP to get the actual `enode` URL.
*Note: You could also use a full-fledged Geth node as a bootnode, but it's the less recommended way.*
*Note: You could also use a full-fledged `geth` node as a bootnode, but it's the less
recommended way.*
#### Starting up your member nodes
With the bootnode operational and externally reachable (you can try `telnet <ip> <port>` to ensure
it's indeed reachable), start every subsequent Geth node pointed to the bootnode for peer discovery
via the `--bootnodes` flag. It will probably also be desirable to keep the data directory of your
private network separated, so do also specify a custom `--datadir` flag.
With the bootnode operational and externally reachable (you can try
`telnet <ip> <port>` to ensure it's indeed reachable), start every subsequent `geth`
node pointed to the bootnode for peer discovery via the `--bootnodes` flag. It will
probably also be desirable to keep the data directory of your private network separated, so
do also specify a custom `--datadir` flag.
```
```shell
$ geth --datadir=path/to/custom/data/folder --bootnodes=<bootnode-enode-url-from-above>
```
*Note: Since your network will be completely cut off from the main and test networks, you'll also
need to configure a miner to process transactions and create new blocks for you.*
*Note: Since your network will be completely cut off from the main and test networks, you'll
also need to configure a miner to process transactions and create new blocks for you.*
#### Running a private miner
Mining on the public Ethereum network is a complex task as it's only feasible using GPUs, requiring
an OpenCL or CUDA enabled `ethminer` instance. For information on such a setup, please consult the
[EtherMining subreddit](https://www.reddit.com/r/EtherMining/) and the [Genoil miner](https://github.com/Genoil/cpp-ethereum)
repository.
Mining on the public Ethereum network is a complex task as it's only feasible using GPUs,
requiring an OpenCL or CUDA enabled `ethminer` instance. For information on such a
setup, please consult the [EtherMining subreddit](https://www.reddit.com/r/EtherMining/)
and the [Genoil miner](https://github.com/Genoil/cpp-ethereum) repository.
In a private network setting, however a single CPU miner instance is more than enough for practical
purposes as it can produce a stable stream of blocks at the correct intervals without needing heavy
resources (consider running on a single thread, no need for multiple ones either). To start a Geth
instance for mining, run it with all your usual flags, extended by:
In a private network setting, however a single CPU miner instance is more than enough for
practical purposes as it can produce a stable stream of blocks at the correct intervals
without needing heavy resources (consider running on a single thread, no need for multiple
ones either). To start a `geth` instance for mining, run it with all your usual flags, extended
by:
```
```shell
$ geth <usual-flags> --mine --minerthreads=1 --etherbase=0x0000000000000000000000000000000000000000
```
Which will start mining blocks and transactions on a single CPU thread, crediting all proceedings to
the account specified by `--etherbase`. You can further tune the mining by changing the default gas
limit blocks converge to (`--targetgaslimit`) and the price transactions are accepted at (`--gasprice`).
Which will start mining blocks and transactions on a single CPU thread, crediting all
proceedings to the account specified by `--etherbase`. You can further tune the mining
by changing the default gas limit blocks converge to (`--targetgaslimit`) and the price
transactions are accepted at (`--gasprice`).
## Contribution
Thank you for considering to help out with the source code! We welcome contributions from
anyone on the internet, and are grateful for even the smallest of fixes!
Thank you for considering to help out with the source code! We welcome contributions
from anyone on the internet, and are grateful for even the smallest of fixes!
If you'd like to contribute to go-ethereum, please fork, fix, commit and send a pull request
for the maintainers to review and merge into the main code base. If you wish to submit more
complex changes though, please check up with the core devs first on [our gitter channel](https://gitter.im/ethereum/go-ethereum)
to ensure those changes are in line with the general philosophy of the project and/or get some
early feedback which can make both your efforts much lighter as well as our review and merge
procedures quick and simple.
for the maintainers to review and merge into the main code base. If you wish to submit
more complex changes though, please check up with the core devs first on [our gitter channel](https://gitter.im/ethereum/go-ethereum)
to ensure those changes are in line with the general philosophy of the project and/or get
some early feedback which can make both your efforts much lighter as well as our review
and merge procedures quick and simple.
Please make sure your contributions adhere to our coding guidelines:
* Code must adhere to the official Go [formatting](https://golang.org/doc/effective_go.html#formatting) guidelines (i.e. uses [gofmt](https://golang.org/cmd/gofmt/)).
* Code must be documented adhering to the official Go [commentary](https://golang.org/doc/effective_go.html#commentary) guidelines.
* Code must adhere to the official Go [formatting](https://golang.org/doc/effective_go.html#formatting)
guidelines (i.e. uses [gofmt](https://golang.org/cmd/gofmt/)).
* Code must be documented adhering to the official Go [commentary](https://golang.org/doc/effective_go.html#commentary)
guidelines.
* Pull requests need to be based on and opened against the `master` branch.
* Commit messages should be prefixed with the package(s) they modify.
* E.g. "eth, rpc: make trace configs optional"
Please see the [Developers' Guide](https://github.com/ethereum/go-ethereum/wiki/Developers'-Guide)
for more details on configuring your environment, managing project dependencies, and testing procedures.
for more details on configuring your environment, managing project dependencies, and
testing procedures.
## License
The go-ethereum library (i.e. all code outside of the `cmd` directory) is licensed under the
[GNU Lesser General Public License v3.0](https://www.gnu.org/licenses/lgpl-3.0.en.html), also
included in our repository in the `COPYING.LESSER` file.
The go-ethereum binaries (i.e. all code inside of the `cmd` directory) are licensed under the
[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html), also included
in our repository in the `COPYING` file.

SECURITY.md

@ -0,0 +1,120 @@
# Security Policy
## Supported Versions
Please see Releases. We recommend using the most recent released version.
## Audit reports
Audit reports are published in the `docs` folder: https://github.com/ethereum/go-ethereum/tree/master/docs/audits
| Scope | Date | Report Link |
| ------- | ------- | ----------- |
| `geth` | 20170425 | [pdf](https://github.com/ethereum/go-ethereum/blob/master/docs/audits/2017-04-25_Geth-audit_Truesec.pdf) |
| `clef` | 20180914 | [pdf](https://github.com/ethereum/go-ethereum/blob/master/docs/audits/2018-09-14_Clef-audit_NCC.pdf) |
## Reporting a Vulnerability
**Please do not file a public ticket** mentioning the vulnerability.
To find out how to disclose a vulnerability in Ethereum visit [https://bounty.ethereum.org](https://bounty.ethereum.org) or email bounty@ethereum.org.
The following key may be used to communicate sensitive information to developers.
Fingerprint: `AE96 ED96 9E47 9B00 84F3 E17F E88D 3334 FA5F 6A0A`
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mQINBFgl3tgBEAC8A1tUBkD9YV+eLrOmtgy+/JS/H9RoZvkg3K1WZ8IYfj6iIRaY
neAk3Bp182GUPVz/zhKr2g0tMXIScDR3EnaDsY+Qg+JqQl8NOG+Cikr1nnkG2on9
L8c8yiqry1ZTCmYMqCa2acTFqnyuXJ482aZNtB4QG2BpzfhW4k8YThpegk/EoRUi
m+y7buJDtoNf7YILlhDQXN8qlHB02DWOVUihph9tUIFsPK6BvTr9SIr/eG6j6k0b
fUo9pexOn7LS4SojoJmsm/5dp6AoKlac48cZU5zwR9AYcq/nvkrfmf2WkObg/xRd
EvKZzn05jRopmAIwmoC3CiLmqCHPmT5a29vEob/yPFE335k+ujjZCPOu7OwjzDk7
M0zMSfnNfDq8bXh16nn+ueBxJ0NzgD1oC6c2PhM+XRQCXChoyI8vbfp4dGvCvYqv
QAE1bWjqnumZ/7vUPgZN6gDfiAzG2mUxC2SeFBhacgzDvtQls+uuvm+FnQOUgg2H
h8x2zgoZ7kqV29wjaUPFREuew7e+Th5BxielnzOfVycVXeSuvvIn6cd3g/s8mX1c
2kLSXJR7+KdWDrIrR5Az0kwAqFZt6B6QTlDrPswu3mxsm5TzMbny0PsbL/HBM+GZ
EZCjMXxB8bqV2eSaktjnSlUNX1VXxyOxXA+ZG2jwpr51egi57riVRXokrQARAQAB
tDlFdGhlcmV1bSBGb3VuZGF0aW9uIFNlY3VyaXR5IFRlYW0gPHNlY3VyaXR5QGV0
aGVyZXVtLm9yZz6JAj4EEwECACgCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheA
BQJaCWH6BQkFo2BYAAoJEOiNMzT6X2oK+DEP/3H6dxkm0hvHZKoHLVuuxcu3EHYo
k5sd3MMWPrZSN8qzZnY7ayEDMxnarWOizc+2jfOxfJlzX/g8lR1/fsHdWPFPhPoV
Qk8ygrHn1H8U8+rpw/U03BqmqHpYCDzJ+CIis9UWROniqXw1nuqu/FtWOsdWxNKh
jUo6k/0EsaXsxRPzgJv7fEUcVcQ7as/C3x9sy3muc2gvgA4/BKoGPb1/U0GuA8lV
fDIDshAggmnSUAg+TuYSAAdoFQ1sKwFMPigcLJF2eyKuK3iUyixJrec/c4LSf3wA
cGghbeuqI8INP0Y2zvXDQN2cByxsFAuoZG+m0cyKGaDH2MVUvOKKYqn/03qvrf15
AWAsW0l0yQwOTCo3FbsNzemClm5Bj/xH0E4XuwXwChcMCMOWJrFoxyvCEI+keoQc
c08/a8/MtS7vBAABXwOziSmm6CNqmzpWrh/fDrjlJlba9U3MxzvqU3IFlTdMratv
6V+SgX+L25lCzW4NxxUavoB8fAlvo8lxpHKo24FP+RcLQ8XqkU3RiUsgRjQRFOqQ
TaJcsp8mimmiYyf24mNu6b48pi+a5c/eQR9w59emeEUZqsJU+nqv8BWIIp7o4Agh
NYnKjkhPlY5e1fLVfAHIADZFynWwRPkPMJSrBiP5EtcOFxQGHGjRxU/KjXkvE0hV
xYb1PB8pWMTu/beeiQI+BBMBAgAoBQJYJd7YAhsDBQkB4TOABgsJCAcDAgYVCAIJ
CgsEFgIDAQIeAQIXgAAKCRDojTM0+l9qCplDD/9IZ2i+m1cnqQKtiyHbyFGx32oL
fzqPylX2bOG5DPsSTorSUdJMGVfT04oVxXc4S/2DVnNvi7RAbSiLapCWSplgtBOj
j1xlblOoXxT3m7s1XHGCX5tENxI9fVSSPVKJn+fQaWpPB2MhBA+1lUI6GJ+11T7K
J8LrP/fiw1/nOb7rW61HW44Gtyox23sA/d1+DsFVaF8hxJlNj5coPKr8xWzQ8pQl
juzdjHDukjevuw4rRmRq9vozvj9keEU9XJ5dldyEVXFmdDk7KT0p0Rla9nxYhzf/
r/Bv8Bzy0HCWRb2D31BjXXGG05oVnYmNGxGFxYja4MwgrMmne3ilEVjfUJsapsqi
w41BAyQgIdfREulYN7ahsF5PrjVAqBd9IGtE8ULelF2SQxEBQBngEkP0ahP6tRAL
i7/CBjPKOyKijtqVny7qrGOnU2ygcA88/WDibexDhrjz0Gx8WmErU7rIWZiZ5u4Y
vJYVRo0+6rBCXRPeSJfiP5h1p17Anr2l42boAYslfcrzquB8MHtrNcyn650OLtHG
nbxgIdniKrpuzGN6Opw+O2id2JhD1/1p4SOemwAmthplr1MIyOHNP3q93rEj2J7h
5zPS/AJuKkMDFUpslPNLQjCOwPXtdzL7/kUZGBSyez1T3TaW1uY6l9XaJJRaSn+v
1zPgfp4GJ3lPs4AlAbQ0RXRoZXJldW0gRm91bmRhdGlvbiBCdWcgQm91bnR5IDxi
b3VudHlAZXRoZXJldW0ub3JnPokCPgQTAQIAKAIbAwYLCQgHAwIGFQgCCQoLBBYC
AwECHgECF4AFAloJYfoFCQWjYFgACgkQ6I0zNPpfagoENg/+LnSaVeMxiGVtcjWl
b7Xd73yrEy4uxiESS1AalW9mMf7oZzfI05f7QIQlaLAkNac74vZDJbPKjtb7tpMO
RFhRZMCveq6CPKU6pd1SI8IUVUKwpEe6AJP3lHdVP57dquieFE2HlYKm6uHbCGWU
0cjyTA+uu2KbgCHGmofsPY/xOcZLGEHTHqa5w60JJAQm+BSDKnw8wTyrxGvA3EK/
ePSvOZMYa+iw6vYuZeBIMbdiXR/A2keBi3GuvqB8tDMj7P22TrH5mVDm3zNqGYD6
amDPeiWp4cztY3aZyLcgYotqXPpDceZzDn+HopBPzAb/llCdE7bVswKRhphVMw4b
bhL0R/TQY7Sf6TK2LKSBrjv0DWOSijikE71SJcBnJvHU7EpKrQQ0lMGclm3ynyji
Nf0YTPXQt4I+fwTmOew2GFeK3UytNWbWI7oXX7Nm4bj9bhf3IJ0kmZb/Gs73+xII
e7Rz52Mby436tWyQIQiF9ITYNGvNf53TwBBZMn0pKPiTyr3Ur7FHEotkEOFNh1//
4zQY10XxuBdLrYGyZ4V8xHJM+oKre8Eg2R9qHXVbjvErHE+7CvgnV7YUip0criPr
BlKRvuoJaSliH2JFhSjWVrkPmFGrWN0BAx10yIqMnEplfKeHf4P9Elek3oInS8WP
G1zJG6s/t5+hQK0X37+TB+6rd3GJAj4EEwECACgFAlgl4TsCGwMFCQHhM4AGCwkI
BwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEOiNMzT6X2oKzf8P/iIKd77WHTbp4pMN
8h52HyZJtDJmjA1DPZrbGl1TesW/Z9uTd12txlgqZnbG2GfN9+LSP6EOPzR6v2xC
OVhR+RdWhZDJJuQCVS7lJIqQrZgmeTZG0TyQPZdLjVFBOrrhVwYX+HXbu429IzHr
URf5InyR1QgqOXyElDYS6e28HFqvaoA0DWTWDDqOLPVl+U5fuceIE2XXdv3AGLeP
Yf8J5MPobjPiZtBqI6S6iENY2Yn35qLX+axeC/iYSCHVtFuCCIdb/QYR1ZZV8Ps/
aI9DwC7LU+YfPw7iqCIoqxSeA3o1PORkdSigEg3jtfRv5UqVo9a0oBb9jdoADsat
F/gW0E7mto3XGOiaR0eB9SSdsM3x7Bz4A0HIGNaxpZo1RWqlO91leP4c13Px7ISv
5OGXfLg+M8qb+qxbGd1HpitGi9s1y1aVfEj1kOtZ0tN8eu+Upg5WKwPNBDX3ar7J
9NCULgVSL+E79FG+zXw62gxiQrLfKzm4wU/9L5wVkwQnm29hLJ0tokrSBZFnc/1l
7OC+GM63tYicKkY4rqmoWUeYx7IwFH9mtDtvR1RxO85RbQhZizwpZpdpRkH0DqZu
ZJRmRa5r7rPqmfa7d+VIFhz2Xs8pJMLVqxTsLKcLglmjw7aOrYG0SWeH7YraXWGD
N3SlvSBiVwcK7QUKzLLvpadLwxfsuQINBFgl3tgBEACbgq6HTN5gEBi0lkD/MafI
nmNi+59U5gRGYqk46WlfRjhHudXjDpgD0lolGb4hYontkMaKRlCg2Rvgjvk3Zve0
PKWjKw7gr8YBa9fMFY8BhAXI32OdyI9rFhxEZFfWAfwKVmT19BdeAQRFvcfd+8w8
f1XVc+zddULMJFBTr+xKDlIRWwTkdLPQeWbjo0eHl/g4tuLiLrTxVbnj26bf+2+1
DbM/w5VavzPrkviHqvKe/QP/gay4QDViWvFgLb90idfAHIdsPgflp0VDS5rVHFL6
D73rSRdIRo3I8c8mYoNjSR4XDuvgOkAKW9LR3pvouFHHjp6Fr0GesRbrbb2EG66i
PsR99MQ7FqIL9VMHPm2mtR+XvbnKkH2rYyEqaMbSdk29jGapkAWle4sIhSKk749A
4tGkHl08KZ2N9o6GrfUehP/V2eJLaph2DioFL1HxRryrKy80QQKLMJRekxigq8gr
eW8xB4zuf9Mkuou+RHNmo8PebHjFstLigiD6/zP2e+4tUmrT0/JTGOShoGMl8Rt0
VRxdPImKun+4LOXbfOxArOSkY6i35+gsgkkSy1gTJE0BY3S9auT6+YrglY/TWPQ9
IJxWVOKlT+3WIp5wJu2bBKQ420VLqDYzkoWytel/bM1ACUtipMiIVeUs2uFiRjpz
A1Wy0QHKPTdSuGlJPRrfcQARAQABiQIlBBgBAgAPAhsMBQJaCWIIBQkFo2BYAAoJ
EOiNMzT6X2oKgSwQAKKs7BGF8TyZeIEO2EUK7R2bdQDCdSGZY06tqLFg3IHMGxDM
b/7FVoa2AEsFgv6xpoebxBB5zkhUk7lslgxvKiSLYjxfNjTBltfiFJ+eQnf+OTs8
KeR51lLa66rvIH2qUzkNDCCTF45H4wIDpV05AXhBjKYkrDCrtey1rQyFp5fxI+0I
Q1UKKXvzZK4GdxhxDbOUSd38MYy93nqcmclGSGK/gF8XiyuVjeifDCM6+T1NQTX0
K9lneidcqtBDvlggJTLJtQPO33o5EHzXSiud+dKth1uUhZOFEaYRZoye1YE3yB0T
NOOE8fXlvu8iuIAMBSDL9ep6sEIaXYwoD60I2gHdWD0lkP0DOjGQpi4ouXM3Edsd
5MTi0MDRNTij431kn8T/D0LCgmoUmYYMBgbwFhXr67axPZlKjrqR0z3F/Elv0ZPP
cVg1tNznsALYQ9Ovl6b5M3cJ5GapbbvNWC7yEE1qScl9HiMxjt/H6aPastH63/7w
cN0TslW+zRBy05VNJvpWGStQXcngsSUeJtI1Gd992YNjUJq4/Lih6Z1TlwcFVap+
cTcDptoUvXYGg/9mRNNPZwErSfIJ0Ibnx9wPVuRN6NiCLOt2mtKp2F1pM6AOQPpZ
85vEh6I8i6OaO0w/Z0UHBwvpY6jDUliaROsWUQsqz78Z34CVj4cy6vPW2EF4
=r6KK
-----END PGP PUBLIC KEY BLOCK-----
```


@ -21,6 +21,8 @@ import (
"encoding/json"
"fmt"
"io"
"github.com/ethereum/go-ethereum/common"
)
// The ABI holds information about a contract's context and available
@ -134,15 +136,27 @@ func (abi *ABI) UnmarshalJSON(data []byte) error {
}
// empty defaults to function according to the abi spec
case "function", "":
abi.Methods[field.Name] = Method{
Name: field.Name,
name := field.Name
_, ok := abi.Methods[name]
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", field.Name, idx)
_, ok = abi.Methods[name]
}
abi.Methods[name] = Method{
Name: name,
Const: field.Constant,
Inputs: field.Inputs,
Outputs: field.Outputs,
}
case "event":
abi.Events[field.Name] = Event{
Name: field.Name,
name := field.Name
_, ok := abi.Events[name]
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", field.Name, idx)
_, ok = abi.Events[name]
}
abi.Events[name] = Event{
Name: name,
Anonymous: field.Anonymous,
Inputs: field.Inputs,
}
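For readers skimming the diff: the loops above resolve name collisions in the `Methods` and `Events` maps by appending an incrementing index to duplicate identifiers. A minimal sketch of the observable behaviour, using a hypothetical ABI with two `transfer` overloads (not taken from this change):
```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	// Hypothetical ABI in which two functions share the name "transfer".
	const dup = `[
	  {"type":"function","name":"transfer","inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"}],"outputs":[{"name":"ok","type":"bool"}]},
	  {"type":"function","name":"transfer","inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"},{"name":"data","type":"bytes"}],"outputs":[{"name":"ok","type":"bool"}]}
	]`
	parsed, err := abi.JSON(strings.NewReader(dup))
	if err != nil {
		log.Fatal(err)
	}
	// The first declaration keeps its name; the second is exposed as "transfer0".
	_, hasOriginal := parsed.Methods["transfer"]
	_, hasRenamed := parsed.Methods["transfer0"]
	fmt.Println(hasOriginal, hasRenamed) // true true
}
```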
@ -165,3 +179,14 @@ func (abi *ABI) MethodById(sigdata []byte) (*Method, error) {
}
return nil, fmt.Errorf("no method with id: %#x", sigdata[:4])
}
// EventByID looks an event up by its topic hash in the
// ABI and returns nil if none found.
func (abi *ABI) EventByID(topic common.Hash) (*Event, error) {
for _, event := range abi.Events {
if bytes.Equal(event.Id().Bytes(), topic.Bytes()) {
return &event, nil
}
}
return nil, fmt.Errorf("no event with id: %#x", topic.Hex())
}
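Since `EventByID` is new public API, a short usage sketch may help; the ERC20 `Transfer` event used as input here is only an illustration, not part of this change:
```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	const erc20 = `[{"anonymous":false,"inputs":[{"indexed":true,"name":"from","type":"address"},{"indexed":true,"name":"to","type":"address"},{"indexed":false,"name":"value","type":"uint256"}],"name":"Transfer","type":"event"}]`

	parsed, err := abi.JSON(strings.NewReader(erc20))
	if err != nil {
		log.Fatal(err)
	}
	// The first topic of a non-anonymous log is the keccak256 hash of the
	// event signature; EventByID maps it back onto the parsed event.
	topic := crypto.Keccak256Hash([]byte("Transfer(address,address,uint256)"))
	event, err := parsed.EventByID(topic)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(event.Name) // "Transfer"
}
```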


@ -832,9 +832,6 @@ func TestUnpackIntoMapNamingConflict(t *testing.T) {
if err = abi.UnpackIntoMap(receivedMap, "received", data); err != nil {
t.Error("naming conflict between two events; no error expected")
}
if len(receivedMap) != 1 {
t.Error("naming conflict between two events; event defined latest in the abi expected to be used")
}
// Method and event have the same name
abiJSON = `[{"constant":false,"inputs":[{"name":"memo","type":"bytes"}],"name":"received","outputs":[],"payable":true,"stateMutability":"payable","type":"function"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"}],"name":"receivedAddr","type":"event"}]`
@ -931,3 +928,113 @@ func TestABI_MethodById(t *testing.T) {
t.Errorf("Expected error, nil is short to decode data")
}
}
func TestABI_EventById(t *testing.T) {
tests := []struct {
name string
json string
event string
}{
{
name: "",
json: `[
{"type":"event","name":"received","anonymous":false,"inputs":[
{"indexed":false,"name":"sender","type":"address"},
{"indexed":false,"name":"amount","type":"uint256"},
{"indexed":false,"name":"memo","type":"bytes"}
]
}]`,
event: "received(address,uint256,bytes)",
}, {
name: "",
json: `[
{ "constant": true, "inputs": [], "name": "name", "outputs": [ { "name": "", "type": "string" } ], "payable": false, "stateMutability": "view", "type": "function" },
{ "constant": false, "inputs": [ { "name": "_spender", "type": "address" }, { "name": "_value", "type": "uint256" } ], "name": "approve", "outputs": [ { "name": "", "type": "bool" } ], "payable": false, "stateMutability": "nonpayable", "type": "function" },
{ "constant": true, "inputs": [], "name": "totalSupply", "outputs": [ { "name": "", "type": "uint256" } ], "payable": false, "stateMutability": "view", "type": "function" },
{ "constant": false, "inputs": [ { "name": "_from", "type": "address" }, { "name": "_to", "type": "address" }, { "name": "_value", "type": "uint256" } ], "name": "transferFrom", "outputs": [ { "name": "", "type": "bool" } ], "payable": false, "stateMutability": "nonpayable", "type": "function" },
{ "constant": true, "inputs": [], "name": "decimals", "outputs": [ { "name": "", "type": "uint8" } ], "payable": false, "stateMutability": "view", "type": "function" },
{ "constant": true, "inputs": [ { "name": "_owner", "type": "address" } ], "name": "balanceOf", "outputs": [ { "name": "balance", "type": "uint256" } ], "payable": false, "stateMutability": "view", "type": "function" },
{ "constant": true, "inputs": [], "name": "symbol", "outputs": [ { "name": "", "type": "string" } ], "payable": false, "stateMutability": "view", "type": "function" },
{ "constant": false, "inputs": [ { "name": "_to", "type": "address" }, { "name": "_value", "type": "uint256" } ], "name": "transfer", "outputs": [ { "name": "", "type": "bool" } ], "payable": false, "stateMutability": "nonpayable", "type": "function" },
{ "constant": true, "inputs": [ { "name": "_owner", "type": "address" }, { "name": "_spender", "type": "address" } ], "name": "allowance", "outputs": [ { "name": "", "type": "uint256" } ], "payable": false, "stateMutability": "view", "type": "function" },
{ "payable": true, "stateMutability": "payable", "type": "fallback" },
{ "anonymous": false, "inputs": [ { "indexed": true, "name": "owner", "type": "address" }, { "indexed": true, "name": "spender", "type": "address" }, { "indexed": false, "name": "value", "type": "uint256" } ], "name": "Approval", "type": "event" },
{ "anonymous": false, "inputs": [ { "indexed": true, "name": "from", "type": "address" }, { "indexed": true, "name": "to", "type": "address" }, { "indexed": false, "name": "value", "type": "uint256" } ], "name": "Transfer", "type": "event" }
]`,
event: "Transfer(address,address,uint256)",
},
}
for testnum, test := range tests {
abi, err := JSON(strings.NewReader(test.json))
if err != nil {
t.Error(err)
}
topic := test.event
topicID := crypto.Keccak256Hash([]byte(topic))
event, err := abi.EventByID(topicID)
if err != nil {
t.Fatalf("Failed to look up ABI method: %v, test #%d", err, testnum)
}
if event == nil {
t.Errorf("We should find a event for topic %s, test #%d", topicID.Hex(), testnum)
}
if event.Id() != topicID {
t.Errorf("Event id %s does not match topic %s, test #%d", event.Id().Hex(), topicID.Hex(), testnum)
}
unknowntopicID := crypto.Keccak256Hash([]byte("unknownEvent"))
unknownEvent, err := abi.EventByID(unknowntopicID)
if err == nil {
t.Errorf("EventByID should return an error if a topic is not found, test #%d", testnum)
}
if unknownEvent != nil {
t.Errorf("We should not find any event for topic %s, test #%d", unknowntopicID.Hex(), testnum)
}
}
}
func TestDuplicateMethodNames(t *testing.T) {
abiJSON := `[{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"}],"name":"transfer","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"},{"name":"data","type":"bytes"}],"name":"transfer","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"},{"name":"data","type":"bytes"},{"name":"customFallback","type":"string"}],"name":"transfer","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"}]`
contractAbi, err := JSON(strings.NewReader(abiJSON))
if err != nil {
t.Fatal(err)
}
if _, ok := contractAbi.Methods["transfer"]; !ok {
t.Fatalf("Could not find original method")
}
if _, ok := contractAbi.Methods["transfer0"]; !ok {
t.Fatalf("Could not find duplicate method")
}
if _, ok := contractAbi.Methods["transfer1"]; !ok {
t.Fatalf("Could not find duplicate method")
}
if _, ok := contractAbi.Methods["transfer2"]; ok {
t.Fatalf("Should not have found extra method")
}
}
// TestDoubleDuplicateMethodNames checks that if transfer0 already exists, there won't be a name
// conflict and that the second transfer method will be renamed transfer1.
func TestDoubleDuplicateMethodNames(t *testing.T) {
abiJSON := `[{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"}],"name":"transfer","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"},{"name":"data","type":"bytes"}],"name":"transfer0","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"},{"name":"data","type":"bytes"},{"name":"customFallback","type":"string"}],"name":"transfer","outputs":[{"name":"ok","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"}]`
contractAbi, err := JSON(strings.NewReader(abiJSON))
if err != nil {
t.Fatal(err)
}
if _, ok := contractAbi.Methods["transfer"]; !ok {
t.Fatalf("Could not find original method")
}
if _, ok := contractAbi.Methods["transfer0"]; !ok {
t.Fatalf("Could not find duplicate method")
}
if _, ok := contractAbi.Methods["transfer1"]; !ok {
t.Fatalf("Could not find duplicate method")
}
if _, ok := contractAbi.Methods["transfer2"]; ok {
t.Fatalf("Should not have found extra method")
}
}


@ -119,11 +119,22 @@ func unpack(t *Type, dst interface{}, src interface{}) error {
dstVal = reflect.ValueOf(dst).Elem()
srcVal = reflect.ValueOf(src)
)
if t.T != TupleTy && !((t.T == SliceTy || t.T == ArrayTy) && t.Elem.T == TupleTy) {
tuple, typ := false, t
for {
if typ.T == SliceTy || typ.T == ArrayTy {
typ = typ.Elem
continue
}
tuple = typ.T == TupleTy
break
}
if !tuple {
return set(dstVal, srcVal)
}
// Dereferences interface or pointer wrapper
dstVal = indirectInterfaceOrPtr(dstVal)
switch t.T {
case TupleTy:
if dstVal.Kind() != reflect.Struct {
@ -191,7 +202,7 @@ func (arguments Arguments) unpackAtomic(v interface{}, marshalledValues interfac
argument := arguments.NonIndexed()[0]
elem := reflect.ValueOf(v).Elem()
if elem.Kind() == reflect.Struct {
if elem.Kind() == reflect.Struct && argument.Type.T != TupleTy {
fieldmap, err := mapArgNamesToStructFields([]string{argument.Name}, elem)
if err != nil {
return err


@ -22,6 +22,8 @@ import (
"io"
"io/ioutil"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/external"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
@ -42,6 +44,24 @@ func NewTransactor(keyin io.Reader, passphrase string) (*TransactOpts, error) {
return NewKeyedTransactor(key.PrivateKey), nil
}
// NewKeyStoreTransactor is a utility method to easily create a transaction signer from
// a decrypted key from a keystore
func NewKeyStoreTransactor(keystore *keystore.KeyStore, account accounts.Account) (*TransactOpts, error) {
return &TransactOpts{
From: account.Address,
Signer: func(signer types.Signer, address common.Address, tx *types.Transaction) (*types.Transaction, error) {
if address != account.Address {
return nil, errors.New("not authorized to sign this account")
}
signature, err := keystore.SignHash(account, signer.Hash(tx).Bytes())
if err != nil {
return nil, err
}
return tx.WithSignature(signer, signature)
},
}, nil
}
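A hedged sketch of wiring the new `NewKeyStoreTransactor` into contract bindings; the keystore directory and passphrase below are placeholders:
```go
package main

import (
	"log"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/accounts/keystore"
)

func main() {
	// Placeholder keystore directory and passphrase, for illustration only.
	ks := keystore.NewKeyStore("/tmp/keystore", keystore.StandardScryptN, keystore.StandardScryptP)
	account, err := ks.NewAccount("password")
	if err != nil {
		log.Fatal(err)
	}
	// SignHash only works on unlocked accounts, so unlock before transacting.
	if err := ks.Unlock(account, "password"); err != nil {
		log.Fatal(err)
	}
	// The returned TransactOpts signs with the keystore-held key and refuses
	// to sign for any other address.
	opts, err := bind.NewKeyStoreTransactor(ks, account)
	if err != nil {
		log.Fatal(err)
	}
	_ = opts // pass to DeployXxx / bound contract transact methods
}
```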
// NewKeyedTransactor is a utility method to easily create a transaction signer
// from a single private key.
func NewKeyedTransactor(key *ecdsa.PrivateKey) *TransactOpts {
@ -60,3 +80,17 @@ func NewKeyedTransactor(key *ecdsa.PrivateKey) *TransactOpts {
},
}
}
// NewClefTransactor is a utility method to easily create a transaction signer
// with a clef backend.
func NewClefTransactor(clef *external.ExternalSigner, account accounts.Account) *TransactOpts {
return &TransactOpts{
From: account.Address,
Signer: func(signer types.Signer, address common.Address, transaction *types.Transaction) (*types.Transaction, error) {
if address != account.Address {
return nil, errors.New("not authorized to sign this account")
}
return clef.SignTx(account, transaction, nil) // Clef enforces its own chain id
},
}
}
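And a companion sketch for `NewClefTransactor`, assuming Clef is running with its HTTP API reachable on `localhost:8550` (both the endpoint and the account selection are assumptions; Clef still prompts for approval of every signing request):
```go
package main

import (
	"log"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/accounts/external"
)

func main() {
	// Assumed local Clef endpoint; adjust to your own setup.
	clef, err := external.NewExternalSigner("http://localhost:8550")
	if err != nil {
		log.Fatal(err)
	}
	accs := clef.Accounts()
	if len(accs) == 0 {
		log.Fatal("no accounts exposed by clef")
	}
	// Transactions made through these opts are forwarded to Clef for signing;
	// as noted above, Clef enforces its own chain id.
	opts := bind.NewClefTransactor(clef, accs[0])
	_ = opts
}
```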


@ -45,8 +45,10 @@ import (
// This nil assignment ensures compile time that SimulatedBackend implements bind.ContractBackend.
var _ bind.ContractBackend = (*SimulatedBackend)(nil)
var errBlockNumberUnsupported = errors.New("SimulatedBackend cannot access blocks other than the latest block")
var errGasEstimationFailed = errors.New("gas required exceeds allowance or always failing transaction")
var (
errBlockNumberUnsupported = errors.New("simulatedBackend cannot access blocks other than the latest block")
errGasEstimationFailed = errors.New("gas required exceeds allowance or always failing transaction")
)
// SimulatedBackend implements bind.ContractBackend, simulating a blockchain in
// the background. Its main purpose is to allow easily testing contract bindings.
@ -63,10 +65,9 @@ type SimulatedBackend struct {
config *params.ChainConfig
}
// NewSimulatedBackend creates a new binding backend using a simulated blockchain
// for testing purposes.
func NewSimulatedBackend(alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend {
database := rawdb.NewMemoryDatabase()
// NewSimulatedBackendWithDatabase creates a new binding backend based on the given database
// and uses a simulated blockchain for testing purposes.
func NewSimulatedBackendWithDatabase(database ethdb.Database, alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend {
genesis := core.Genesis{Config: params.AllEthashProtocolChanges, GasLimit: gasLimit, Alloc: alloc}
genesis.MustCommit(database)
blockchain, _ := core.NewBlockChain(database, nil, genesis.Config, ethash.NewFaker(), vm.Config{}, nil)
@ -81,6 +82,12 @@ func NewSimulatedBackend(alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBac
return backend
}
// NewSimulatedBackend creates a new binding backend using a simulated blockchain
// for testing purposes.
func NewSimulatedBackend(alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend {
return NewSimulatedBackendWithDatabase(rawdb.NewMemoryDatabase(), alloc, gasLimit)
}
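As a usage note, `NewSimulatedBackend` now simply delegates to the database-aware constructor. A brief sketch (the funding amount and gas limit are arbitrary placeholders):
```go
package main

import (
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, err := crypto.GenerateKey()
	if err != nil {
		log.Fatal(err)
	}
	addr := crypto.PubkeyToAddress(key.PublicKey)
	alloc := core.GenesisAlloc{addr: {Balance: big.NewInt(1000000000000000000)}}

	// In-memory backend, exactly as before.
	sim := backends.NewSimulatedBackend(alloc, 8000000)

	// Same backend, but with an explicitly supplied database (still an
	// in-memory one here, purely to show the new constructor).
	sim2 := backends.NewSimulatedBackendWithDatabase(rawdb.NewMemoryDatabase(), alloc, 8000000)

	// The new Blockchain accessor exposes the underlying chain, e.g. for tests.
	fmt.Println(sim.Blockchain().CurrentHeader().Number, sim2.Blockchain().CurrentHeader().Number)
}
```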
// Commit imports all the pending transactions as a single block and starts a
// fresh new state.
func (b *SimulatedBackend) Commit() {
@ -316,7 +323,7 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
b.mu.Lock()
defer b.mu.Unlock()
sender, err := types.Sender(types.HomesteadSigner{}, tx)
sender, err := types.Sender(types.NewEIP155Signer(b.config.ChainID), tx)
if err != nil {
panic(fmt.Errorf("invalid transaction: %v", err))
}
@ -424,6 +431,11 @@ func (b *SimulatedBackend) AdjustTime(adjustment time.Duration) error {
return nil
}
// Blockchain returns the underlying blockchain.
func (b *SimulatedBackend) Blockchain() *core.BlockChain {
return b.blockchain
}
// callmsg implements core.Message to allow passing it as a transaction simulator.
type callmsg struct {
ethereum.CallMsg


@ -345,29 +345,3 @@ func TestUnpackIndexedBytesTyLogIntoMap(t *testing.T) {
t.Error("unpacked map does not match expected map")
}
}
func TestUnpackIntoMapNamingConflict(t *testing.T) {
hash := crypto.Keccak256Hash([]byte("testName"))
mockLog := types.Log{
Address: common.HexToAddress("0x0"),
Topics: []common.Hash{
common.HexToHash("0x0"),
hash,
},
Data: hexutil.MustDecode(hexData),
BlockNumber: uint64(26),
TxHash: common.HexToHash("0x0"),
TxIndex: 111,
BlockHash: common.BytesToHash([]byte{1, 2, 3, 4, 5}),
Index: 7,
Removed: false,
}
abiString := `[{"anonymous":false,"inputs":[{"indexed":true,"name":"name","type":"string"},{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
receivedMap := make(map[string]interface{})
if err := bc.UnpackLogIntoMap(receivedMap, "received", mockLog); err == nil {
t.Error("naming conflict between two events; error expected")
}
}


@ -22,6 +22,7 @@ package bind
import (
"bytes"
"errors"
"fmt"
"go/format"
"regexp"
@ -30,6 +31,7 @@ import (
"unicode"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/log"
)
// Lang is a target programming language selector to generate bindings for.
@ -45,10 +47,13 @@ const (
// to be used as is in client code, but rather as an intermediate struct which
// enforces compile time type safety and naming convention opposed to having to
// manually maintain hard coded strings that break on runtime.
func Bind(types []string, abis []string, bytecodes []string, pkg string, lang Lang) (string, error) {
func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]string, pkg string, lang Lang, libs map[string]string) (string, error) {
// Process each individual contract requested binding
contracts := make(map[string]*tmplContract)
// Map used to flag each encountered library as such
isLib := make(map[string]struct{})
for i := 0; i < len(types); i++ {
// Parse the actual ABI to generate the binding for
evmABI, err := abi.JSON(strings.NewReader(abis[i]))
@ -63,11 +68,12 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
return r
}, abis[i])
// Extract the call and transact methods; events; and sort them alphabetically
// Extract the call and transact methods; events, struct definitions; and sort them alphabetically
var (
calls = make(map[string]*tmplMethod)
transacts = make(map[string]*tmplMethod)
events = make(map[string]*tmplEvent)
structs = make(map[string]*tmplStruct)
)
for _, original := range evmABI.Methods {
// Normalize the method for capital cases and non-anonymous inputs/outputs
@ -80,6 +86,9 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
if input.Name == "" {
normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j)
}
if _, exist := structs[input.Type.String()]; input.Type.T == abi.TupleTy && !exist {
bindStructType[lang](input.Type, structs)
}
}
normalized.Outputs = make([]abi.Argument, len(original.Outputs))
copy(normalized.Outputs, original.Outputs)
@ -87,6 +96,9 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
if output.Name != "" {
normalized.Outputs[j].Name = capitalise(output.Name)
}
if _, exist := structs[output.Type.String()]; output.Type.T == abi.TupleTy && !exist {
bindStructType[lang](output.Type, structs)
}
}
// Append the methods to the call or transact lists
if original.Const {
@ -112,25 +124,61 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
if input.Name == "" {
normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j)
}
if _, exist := structs[input.Type.String()]; input.Type.T == abi.TupleTy && !exist {
bindStructType[lang](input.Type, structs)
}
}
}
// Append the event to the accumulator list
events[original.Name] = &tmplEvent{Original: original, Normalized: normalized}
}
// There is no easy way to pass arbitrary java objects to the Go side.
if len(structs) > 0 && lang == LangJava {
return "", errors.New("java binding for tuple arguments is not supported yet")
}
contracts[types[i]] = &tmplContract{
Type: capitalise(types[i]),
InputABI: strings.Replace(strippedABI, "\"", "\\\"", -1),
InputBin: strings.TrimSpace(bytecodes[i]),
InputBin: strings.TrimPrefix(strings.TrimSpace(bytecodes[i]), "0x"),
Constructor: evmABI.Constructor,
Calls: calls,
Transacts: transacts,
Events: events,
Libraries: make(map[string]string),
Structs: structs,
}
// Function 4-byte signatures are stored in the same sequence
// as types, if available.
if len(fsigs) > i {
contracts[types[i]].FuncSigs = fsigs[i]
}
// Parse library references.
for pattern, name := range libs {
matched, err := regexp.Match("__\\$"+pattern+"\\$__", []byte(contracts[types[i]].InputBin))
if err != nil {
log.Error("Could not search for pattern", "pattern", pattern, "contract", contracts[types[i]], "err", err)
}
if matched {
contracts[types[i]].Libraries[pattern] = name
// keep track that this type is a library
if _, ok := isLib[name]; !ok {
isLib[name] = struct{}{}
}
}
}
}
// Check if that type has already been identified as a library
for i := 0; i < len(types); i++ {
_, ok := isLib[types[i]]
contracts[types[i]].Library = ok
}
// Generate the contract template data content and render it
data := &tmplData{
Package: pkg,
Contracts: contracts,
Libraries: libs,
}
buffer := new(bytes.Buffer)
@ -138,6 +186,8 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
"bindtype": bindType[lang],
"bindtopictype": bindTopicType[lang],
"namedtype": namedType[lang],
"formatmethod": formatMethod,
"formatevent": formatEvent,
"capitalise": capitalise,
"decapitalise": decapitalise,
}
@ -159,129 +209,67 @@ func Bind(types []string, abis []string, bytecodes []string, pkg string, lang La
// bindType is a set of type binders that convert Solidity types to some supported
// programming language types.
var bindType = map[Lang]func(kind abi.Type) string{
var bindType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct) string{
LangGo: bindTypeGo,
LangJava: bindTypeJava,
}
// Helper function for the binding generators.
// It reads the unmatched characters after the inner type-match,
// (since the inner type is a prefix of the total type declaration),
// looks for valid arrays (possibly a dynamic one) wrapping the inner type,
// and returns the sizes of these arrays.
//
// Returned array sizes are in the same order as solidity signatures; inner array size first.
// Array sizes may also be "", indicating a dynamic array.
func wrapArray(stringKind string, innerLen int, innerMapping string) (string, []string) {
remainder := stringKind[innerLen:]
//find all the sizes
matches := regexp.MustCompile(`\[(\d*)\]`).FindAllStringSubmatch(remainder, -1)
parts := make([]string, 0, len(matches))
for _, match := range matches {
//get group 1 from the regex match
parts = append(parts, match[1])
}
return innerMapping, parts
}
// Translates the array sizes to a Go-lang declaration of a (nested) array of the inner type.
// Simply returns the inner type if arraySizes is empty.
func arrayBindingGo(inner string, arraySizes []string) string {
out := ""
//prepend all array sizes, from outer (end arraySizes) to inner (start arraySizes)
for i := len(arraySizes) - 1; i >= 0; i-- {
out += "[" + arraySizes[i] + "]"
}
out += inner
return out
}
// bindTypeGo converts a Solidity type to a Go one. Since there is no clear mapping
// from all Solidity types to Go ones (e.g. uint17), those that cannot be exactly
// mapped will use an upscaled type (e.g. *big.Int).
func bindTypeGo(kind abi.Type) string {
stringKind := kind.String()
innerLen, innerMapping := bindUnnestedTypeGo(stringKind)
return arrayBindingGo(wrapArray(stringKind, innerLen, innerMapping))
}
// The inner function of bindTypeGo, this finds the inner type of stringKind.
// (Or just the type itself if it is not an array or slice)
// The length of the matched part is returned, with the translated type.
func bindUnnestedTypeGo(stringKind string) (int, string) {
switch {
case strings.HasPrefix(stringKind, "address"):
return len("address"), "common.Address"
case strings.HasPrefix(stringKind, "bytes"):
parts := regexp.MustCompile(`bytes([0-9]*)`).FindStringSubmatch(stringKind)
return len(parts[0]), fmt.Sprintf("[%s]byte", parts[1])
case strings.HasPrefix(stringKind, "int") || strings.HasPrefix(stringKind, "uint"):
parts := regexp.MustCompile(`(u)?int([0-9]*)`).FindStringSubmatch(stringKind)
// bindBasicTypeGo converts basic solidity types (except array, slice and tuple) to Go ones.
func bindBasicTypeGo(kind abi.Type) string {
switch kind.T {
case abi.AddressTy:
return "common.Address"
case abi.IntTy, abi.UintTy:
parts := regexp.MustCompile(`(u)?int([0-9]*)`).FindStringSubmatch(kind.String())
switch parts[2] {
case "8", "16", "32", "64":
return len(parts[0]), fmt.Sprintf("%sint%s", parts[1], parts[2])
return fmt.Sprintf("%sint%s", parts[1], parts[2])
}
return len(parts[0]), "*big.Int"
case strings.HasPrefix(stringKind, "bool"):
return len("bool"), "bool"
case strings.HasPrefix(stringKind, "string"):
return len("string"), "string"
return "*big.Int"
case abi.FixedBytesTy:
return fmt.Sprintf("[%d]byte", kind.Size)
case abi.BytesTy:
return "[]byte"
case abi.FunctionTy:
return "[24]byte"
default:
return len(stringKind), stringKind
// string, bool types
return kind.String()
}
}
// Translates the array sizes to a Java declaration of a (nested) array of the inner type.
// Simply returns the inner type if arraySizes is empty.
func arrayBindingJava(inner string, arraySizes []string) string {
// Java array type declarations do not include the length.
return inner + strings.Repeat("[]", len(arraySizes))
}
// bindTypeJava converts a Solidity type to a Java one. Since there is no clear mapping
// from all Solidity types to Java ones (e.g. uint17), those that cannot be exactly
// bindTypeGo converts solidity types to Go ones. Since there is no clear mapping
// from all Solidity types to Go ones (e.g. uint17), those that cannot be exactly
// mapped will use an upscaled type (e.g. BigDecimal).
func bindTypeJava(kind abi.Type) string {
stringKind := kind.String()
innerLen, innerMapping := bindUnnestedTypeJava(stringKind)
return arrayBindingJava(wrapArray(stringKind, innerLen, innerMapping))
func bindTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
return structs[kind.String()].Name
case abi.ArrayTy:
return fmt.Sprintf("[%d]", kind.Size) + bindTypeGo(*kind.Elem, structs)
case abi.SliceTy:
return "[]" + bindTypeGo(*kind.Elem, structs)
default:
return bindBasicTypeGo(kind)
}
}
// The inner function of bindTypeJava, this finds the inner type of stringKind.
// (Or just the type itself if it is not an array or slice)
// The length of the matched part is returned, with the translated type.
func bindUnnestedTypeJava(stringKind string) (int, string) {
switch {
case strings.HasPrefix(stringKind, "address"):
parts := regexp.MustCompile(`address(\[[0-9]*\])?`).FindStringSubmatch(stringKind)
if len(parts) != 2 {
return len(stringKind), stringKind
}
if parts[1] == "" {
return len("address"), "Address"
}
return len(parts[0]), "Addresses"
case strings.HasPrefix(stringKind, "bytes"):
parts := regexp.MustCompile(`bytes([0-9]*)`).FindStringSubmatch(stringKind)
if len(parts) != 2 {
return len(stringKind), stringKind
}
return len(parts[0]), "byte[]"
case strings.HasPrefix(stringKind, "int") || strings.HasPrefix(stringKind, "uint"):
//Note that uint and int (without digits) are also matched,
// bindBasicTypeJava converts basic solidity types (except array, slice and tuple) to Java ones.
func bindBasicTypeJava(kind abi.Type) string {
switch kind.T {
case abi.AddressTy:
return "Address"
case abi.IntTy, abi.UintTy:
// Note that uint and int (without digits) are also matched,
// these are size 256, and will translate to BigInt (the default).
parts := regexp.MustCompile(`(u)?int([0-9]*)`).FindStringSubmatch(stringKind)
parts := regexp.MustCompile(`(u)?int([0-9]*)`).FindStringSubmatch(kind.String())
if len(parts) != 3 {
return len(stringKind), stringKind
return kind.String()
}
// All unsigned integers should be translated to BigInt since gomobile doesn't
// support them.
if parts[1] == "u" {
return "BigInt"
}
namedSize := map[string]string{
@ -291,50 +279,146 @@ func bindUnnestedTypeJava(stringKind string) (int, string) {
"64": "long",
}[parts[2]]
//default to BigInt
// default to BigInt
if namedSize == "" {
namedSize = "BigInt"
}
return len(parts[0]), namedSize
case strings.HasPrefix(stringKind, "bool"):
return len("bool"), "boolean"
case strings.HasPrefix(stringKind, "string"):
return len("string"), "String"
return namedSize
case abi.FixedBytesTy, abi.BytesTy:
return "byte[]"
case abi.BoolTy:
return "boolean"
case abi.StringTy:
return "String"
case abi.FunctionTy:
return "byte[24]"
default:
return len(stringKind), stringKind
return kind.String()
}
}
// pluralizeJavaType explicitly converts multidimensional types to the predefined
// types on the Go side.
func pluralizeJavaType(typ string) string {
switch typ {
case "boolean":
return "Bools"
case "String":
return "Strings"
case "Address":
return "Addresses"
case "byte[]":
return "Binaries"
case "BigInt":
return "BigInts"
}
return typ + "[]"
}
// bindTypeJava converts a Solidity type to a Java one. Since there is no clear mapping
// from all Solidity types to Java ones (e.g. uint17), those that cannot be exactly
// mapped will use an upscaled type (e.g. BigDecimal).
func bindTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
return structs[kind.String()].Name
case abi.ArrayTy, abi.SliceTy:
return pluralizeJavaType(bindTypeJava(*kind.Elem, structs))
default:
return bindBasicTypeJava(kind)
}
}
// bindTopicType is a set of type binders that convert Solidity types to some
// supported programming language topic types.
var bindTopicType = map[Lang]func(kind abi.Type) string{
var bindTopicType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct) string{
LangGo: bindTopicTypeGo,
LangJava: bindTopicTypeJava,
}
// bindTypeGo converts a Solidity topic type to a Go one. It is almost the same
// bindTopicTypeGo converts a Solidity topic type to a Go one. It is almost the same
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeGo(kind abi.Type) string {
bound := bindTypeGo(kind)
func bindTopicTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
bound := bindTypeGo(kind, structs)
if bound == "string" || bound == "[]byte" {
bound = "common.Hash"
}
return bound
}
// bindTypeGo converts a Solidity topic type to a Java one. It is almost the same
// bindTopicTypeJava converts a Solidity topic type to a Java one. It is almost the same
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeJava(kind abi.Type) string {
bound := bindTypeJava(kind)
if bound == "String" || bound == "Bytes" {
func bindTopicTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
bound := bindTypeJava(kind, structs)
if bound == "String" || bound == "byte[]" {
bound = "Hash"
}
return bound
}
// bindStructType is a set of type binders that convert Solidity tuple types to some supported
// programming language struct definition.
var bindStructType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct) string{
LangGo: bindStructTypeGo,
LangJava: bindStructTypeJava,
}
// bindStructTypeGo converts a Solidity tuple type to a Go one and records the mapping
// in the given map.
// Notably, this function will resolve and record nested struct recursively.
func bindStructTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
if s, exist := structs[kind.String()]; exist {
return s.Name
}
var fields []*tmplField
for i, elem := range kind.TupleElems {
field := bindStructTypeGo(*elem, structs)
fields = append(fields, &tmplField{Type: field, Name: capitalise(kind.TupleRawNames[i]), SolKind: *elem})
}
name := fmt.Sprintf("Struct%d", len(structs))
structs[kind.String()] = &tmplStruct{
Name: name,
Fields: fields,
}
return name
case abi.ArrayTy:
return fmt.Sprintf("[%d]", kind.Size) + bindStructTypeGo(*kind.Elem, structs)
case abi.SliceTy:
return "[]" + bindStructTypeGo(*kind.Elem, structs)
default:
return bindBasicTypeGo(kind)
}
}
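To make the tuple-to-struct mapping concrete: a hypothetical Solidity struct with two `uint256` fields `x` and `y`, encountered as the first tuple during binding, would come out roughly as the Go type below (auto-named per the `Struct%d` scheme, field names capitalised, `uint256` upscaled to `*big.Int`):
```go
package bindings

import "math/big"

// Struct0 is an illustration of the auto generated binding that
// bindStructTypeGo would record for the first tuple it encounters:
// raw field names "x" and "y" are capitalised, uint256 maps to *big.Int.
type Struct0 struct {
	X *big.Int
	Y *big.Int
}
```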
// bindStructTypeJava converts a Solidity tuple type to a Java one and records the mapping
// in the given map.
// Notably, this function will resolve and record nested struct recursively.
func bindStructTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
if s, exist := structs[kind.String()]; exist {
return s.Name
}
var fields []*tmplField
for i, elem := range kind.TupleElems {
field := bindStructTypeJava(*elem, structs)
fields = append(fields, &tmplField{Type: field, Name: decapitalise(kind.TupleRawNames[i]), SolKind: *elem})
}
name := fmt.Sprintf("Class%d", len(structs))
structs[kind.String()] = &tmplStruct{
Name: name,
Fields: fields,
}
return name
case abi.ArrayTy, abi.SliceTy:
return pluralizeJavaType(bindStructTypeJava(*kind.Elem, structs))
default:
return bindBasicTypeJava(kind)
}
}
// namedType is a set of functions that transform language specific types to
// named versions that may be used inside method names.
var namedType = map[Lang]func(string, abi.Type) string{
@ -348,18 +432,8 @@ func namedTypeJava(javaKind string, solKind abi.Type) string {
switch javaKind {
case "byte[]":
return "Binary"
case "byte[][]":
return "Binaries"
case "string":
return "String"
case "string[]":
return "Strings"
case "boolean":
return "Bool"
case "boolean[]":
return "Bools"
case "BigInt[]":
return "BigInts"
default:
parts := regexp.MustCompile(`(u)?int([0-9]*)(\[[0-9]*\])?`).FindStringSubmatch(solKind.String())
if len(parts) != 4 {
@ -422,3 +496,63 @@ func structured(args abi.Arguments) bool {
}
return true
}
// resolveArgName converts a raw argument representation into a user friendly format.
func resolveArgName(arg abi.Argument, structs map[string]*tmplStruct) string {
var (
prefix string
embedded string
typ = &arg.Type
)
loop:
for {
switch typ.T {
case abi.SliceTy:
prefix += "[]"
case abi.ArrayTy:
prefix += fmt.Sprintf("[%d]", typ.Size)
default:
embedded = typ.String()
break loop
}
typ = typ.Elem
}
if s, exist := structs[embedded]; exist {
return prefix + s.Name
} else {
return arg.Type.String()
}
}
// formatMethod transforms raw method representation into a user friendly one.
func formatMethod(method abi.Method, structs map[string]*tmplStruct) string {
inputs := make([]string, len(method.Inputs))
for i, input := range method.Inputs {
inputs[i] = fmt.Sprintf("%v %v", resolveArgName(input, structs), input.Name)
}
outputs := make([]string, len(method.Outputs))
for i, output := range method.Outputs {
outputs[i] = resolveArgName(output, structs)
if len(output.Name) > 0 {
outputs[i] += fmt.Sprintf(" %v", output.Name)
}
}
constant := ""
if method.Const {
constant = "constant "
}
return fmt.Sprintf("function %v(%v) %sreturns(%v)", method.Name, strings.Join(inputs, ", "), constant, strings.Join(outputs, ", "))
}
// formatEvent transforms raw event representation into a user friendly one.
func formatEvent(event abi.Event, structs map[string]*tmplStruct) string {
inputs := make([]string, len(event.Inputs))
for i, input := range event.Inputs {
if input.Indexed {
inputs[i] = fmt.Sprintf("%v indexed %v", resolveArgName(input, structs), input.Name)
} else {
inputs[i] = fmt.Sprintf("%v %v", resolveArgName(input, structs), input.Name)
}
}
return fmt.Sprintf("event %v(%v)", event.Name, strings.Join(inputs, ", "))
}

File diff suppressed because one or more lines are too long


@ -22,6 +22,7 @@ import "github.com/ethereum/go-ethereum/accounts/abi"
type tmplData struct {
Package string // Name of the package to place the generated file in
Contracts map[string]*tmplContract // List of contracts to generate into this file
Libraries map[string]string // Map the bytecode's link pattern to the library name
}
// tmplContract contains the data needed to generate an individual contract binding.
@ -29,10 +30,14 @@ type tmplContract struct {
Type string // Type name of the main contract binding
InputABI string // JSON ABI used as the input to generate the binding from
InputBin string // Optional EVM bytecode used to generate deploy code from
FuncSigs map[string]string // Optional map: string signature -> 4-byte signature
Constructor abi.Method // Contract constructor for deploy parametrization
Calls map[string]*tmplMethod // Contract calls that only read state data
Transacts map[string]*tmplMethod // Contract calls that write state data
Events map[string]*tmplEvent // Contract events accessors
Libraries map[string]string // Same as tmplData, but filtered to only keep what the contract needs
Structs map[string]*tmplStruct // Contract struct type definitions
Library bool
}
// tmplMethod is a wrapper around an abi.Method that contains a few preprocessed
@ -49,6 +54,21 @@ type tmplEvent struct {
Normalized abi.Event // Normalized version of the parsed fields
}
// tmplField is a wrapper around a struct field with binding language
// struct type definition and the corresponding field name.
type tmplField struct {
Type string // Field type representation depends on target binding language
Name string // Field name converted from the raw user-defined field name
SolKind abi.Type // Raw abi type information
}
// tmplStruct is a wrapper around an abi.tuple that contains an auto-generated
// struct name.
type tmplStruct struct {
Name string // Auto-generated struct name(We can't obtain the raw struct name through abi)
Fields []*tmplField // Struct fields definition depends on the binding language.
}
// tmplSource is language to template mapping containing all the supported
// programming languages the package can generate to.
var tmplSource = map[Lang]string{
@ -89,19 +109,32 @@ var (
)
{{range $contract := .Contracts}}
{{$structs := $contract.Structs}}
// {{.Type}}ABI is the input ABI used to generate the binding from.
const {{.Type}}ABI = "{{.InputABI}}"
{{if $contract.FuncSigs}}
// {{.Type}}FuncSigs maps the 4-byte function signature to its string representation.
var {{.Type}}FuncSigs = map[string]string{
{{range $strsig, $binsig := .FuncSigs}}"{{$binsig}}": "{{$strsig}}",
{{end}}
}
{{end}}
{{if .InputBin}}
// {{.Type}}Bin is the compiled bytecode used for deploying new contracts.
const {{.Type}}Bin = ` + "`" + `{{.InputBin}}` + "`" + `
var {{.Type}}Bin = "0x{{.InputBin}}"
// Deploy{{.Type}} deploys a new Ethereum contract, binding an instance of {{.Type}} to it.
func Deploy{{.Type}}(auth *bind.TransactOpts, backend bind.ContractBackend {{range .Constructor.Inputs}}, {{.Name}} {{bindtype .Type}}{{end}}) (common.Address, *types.Transaction, *{{.Type}}, error) {
func Deploy{{.Type}}(auth *bind.TransactOpts, backend bind.ContractBackend {{range .Constructor.Inputs}}, {{.Name}} {{bindtype .Type $structs}}{{end}}) (common.Address, *types.Transaction, *{{.Type}}, error) {
parsed, err := abi.JSON(strings.NewReader({{.Type}}ABI))
if err != nil {
return common.Address{}, nil, nil, err
}
{{range $pattern, $name := .Libraries}}
{{decapitalise $name}}Addr, _, _, _ := Deploy{{capitalise $name}}(auth, backend)
{{$contract.Type}}Bin = strings.Replace({{$contract.Type}}Bin, "__${{$pattern}}$__", {{decapitalise $name}}Addr.String()[2:], -1)
{{end}}
address, tx, contract, err := bind.DeployContract(auth, parsed, common.FromHex({{.Type}}Bin), backend {{range .Constructor.Inputs}}, {{.Name}}{{end}})
if err != nil {
return common.Address{}, nil, nil, err
@ -114,7 +147,7 @@ var (
type {{.Type}} struct {
{{.Type}}Caller // Read-only binding to the contract
{{.Type}}Transactor // Write-only binding to the contract
{{.Type}}Filterer // Log filterer for contract events
{{.Type}}Filterer // Log filterer for contract events
}
// {{.Type}}Caller is an auto generated read-only Go binding around an Ethereum contract.
@ -252,16 +285,24 @@ var (
return _{{$contract.Type}}.Contract.contract.Transact(opts, method, params...)
}
{{range .Structs}}
// {{.Name}} is an auto generated low-level Go binding around an user-defined struct.
type {{.Name}} struct {
{{range $field := .Fields}}
{{$field.Name}} {{$field.Type}}{{end}}
}
{{end}}
{{range .Calls}}
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Caller) {{.Normalized.Name}}(opts *bind.CallOpts {{range .Normalized.Inputs}}, {{.Name}} {{bindtype .Type}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type}};{{end}} },{{else}}{{range .Normalized.Outputs}}{{bindtype .Type}},{{end}}{{end}} error) {
// Solidity: {{formatmethod .Original $structs}}
func (_{{$contract.Type}} *{{$contract.Type}}Caller) {{.Normalized.Name}}(opts *bind.CallOpts {{range .Normalized.Inputs}}, {{.Name}} {{bindtype .Type $structs}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}};{{end}} },{{else}}{{range .Normalized.Outputs}}{{bindtype .Type $structs}},{{end}}{{end}} error) {
{{if .Structured}}ret := new(struct{
{{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type}}
{{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}}
{{end}}
}){{else}}var (
{{range $i, $_ := .Normalized.Outputs}}ret{{$i}} = new({{bindtype .Type}})
{{range $i, $_ := .Normalized.Outputs}}ret{{$i}} = new({{bindtype .Type $structs}})
{{end}}
){{end}}
out := {{if .Structured}}ret{{else}}{{if eq (len .Normalized.Outputs) 1}}ret0{{else}}&[]interface{}{
@ -274,15 +315,15 @@ var (
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type}};{{end}} }, {{else}} {{range .Normalized.Outputs}}{{bindtype .Type}},{{end}} {{end}} error) {
// Solidity: {{formatmethod .Original $structs}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}};{{end}} }, {{else}} {{range .Normalized.Outputs}}{{bindtype .Type $structs}},{{end}} {{end}} error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.CallOpts {{range .Normalized.Inputs}}, {{.Name}}{{end}})
}
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}CallerSession) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type}};{{end}} }, {{else}} {{range .Normalized.Outputs}}{{bindtype .Type}},{{end}} {{end}} error) {
// Solidity: {{formatmethod .Original $structs}}
func (_{{$contract.Type}} *{{$contract.Type}}CallerSession) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) ({{if .Structured}}struct{ {{range .Normalized.Outputs}}{{.Name}} {{bindtype .Type $structs}};{{end}} }, {{else}} {{range .Normalized.Outputs}}{{bindtype .Type $structs}},{{end}} {{end}} error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.CallOpts {{range .Normalized.Inputs}}, {{.Name}}{{end}})
}
{{end}}
@ -290,22 +331,22 @@ var (
{{range .Transacts}}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Transactor) {{.Normalized.Name}}(opts *bind.TransactOpts {{range .Normalized.Inputs}}, {{.Name}} {{bindtype .Type}} {{end}}) (*types.Transaction, error) {
// Solidity: {{formatmethod .Original $structs}}
func (_{{$contract.Type}} *{{$contract.Type}}Transactor) {{.Normalized.Name}}(opts *bind.TransactOpts {{range .Normalized.Inputs}}, {{.Name}} {{bindtype .Type $structs}} {{end}}) (*types.Transaction, error) {
return _{{$contract.Type}}.contract.Transact(opts, "{{.Original.Name}}" {{range .Normalized.Inputs}}, {{.Name}}{{end}})
}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type}} {{end}}) (*types.Transaction, error) {
// Solidity: {{formatmethod .Original $structs}}
func (_{{$contract.Type}} *{{$contract.Type}}Session) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.TransactOpts {{range $i, $_ := .Normalized.Inputs}}, {{.Name}}{{end}})
}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}TransactorSession) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type}} {{end}}) (*types.Transaction, error) {
// Solidity: {{formatmethod .Original $structs}}
func (_{{$contract.Type}} *{{$contract.Type}}TransactorSession) {{.Normalized.Name}}({{range $i, $_ := .Normalized.Inputs}}{{if ne $i 0}},{{end}} {{.Name}} {{bindtype .Type $structs}} {{end}}) (*types.Transaction, error) {
return _{{$contract.Type}}.Contract.{{.Normalized.Name}}(&_{{$contract.Type}}.TransactOpts {{range $i, $_ := .Normalized.Inputs}}, {{.Name}}{{end}})
}
{{end}}
@ -377,14 +418,14 @@ var (
// {{$contract.Type}}{{.Normalized.Name}} represents a {{.Normalized.Name}} event raised by the {{$contract.Type}} contract.
type {{$contract.Type}}{{.Normalized.Name}} struct { {{range .Normalized.Inputs}}
{{capitalise .Name}} {{if .Indexed}}{{bindtopictype .Type}}{{else}}{{bindtype .Type}}{{end}}; {{end}}
{{capitalise .Name}} {{if .Indexed}}{{bindtopictype .Type $structs}}{{else}}{{bindtype .Type $structs}}{{end}}; {{end}}
Raw types.Log // Blockchain specific contextual infos
}
// Filter{{.Normalized.Name}} is a free log retrieval operation binding the contract event 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Filter{{.Normalized.Name}}(opts *bind.FilterOpts{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}} []{{bindtype .Type}}{{end}}{{end}}) (*{{$contract.Type}}{{.Normalized.Name}}Iterator, error) {
// Solidity: {{formatevent .Original $structs}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Filter{{.Normalized.Name}}(opts *bind.FilterOpts{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}} []{{bindtype .Type $structs}}{{end}}{{end}}) (*{{$contract.Type}}{{.Normalized.Name}}Iterator, error) {
{{range .Normalized.Inputs}}
{{if .Indexed}}var {{.Name}}Rule []interface{}
for _, {{.Name}}Item := range {{.Name}} {
@ -400,8 +441,8 @@ var (
// Watch{{.Normalized.Name}} is a free log subscription operation binding the contract event 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Watch{{.Normalized.Name}}(opts *bind.WatchOpts, sink chan<- *{{$contract.Type}}{{.Normalized.Name}}{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}} []{{bindtype .Type}}{{end}}{{end}}) (event.Subscription, error) {
// Solidity: {{formatevent .Original $structs}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Watch{{.Normalized.Name}}(opts *bind.WatchOpts, sink chan<- *{{$contract.Type}}{{.Normalized.Name}}{{range .Normalized.Inputs}}{{if .Indexed}}, {{.Name}} []{{bindtype .Type $structs}}{{end}}{{end}}) (event.Subscription, error) {
{{range .Normalized.Inputs}}
{{if .Indexed}}var {{.Name}}Rule []interface{}
for _, {{.Name}}Item := range {{.Name}} {
@ -439,6 +480,18 @@ var (
}
}), nil
}
// Parse{{.Normalized.Name}} is a log parse operation binding the contract event 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
func (_{{$contract.Type}} *{{$contract.Type}}Filterer) Parse{{.Normalized.Name}}(log types.Log) (*{{$contract.Type}}{{.Normalized.Name}}, error) {
event := new({{$contract.Type}}{{.Normalized.Name}})
if err := _{{$contract.Type}}.contract.UnpackLog(event, "{{.Original.Name}}", log); err != nil {
return nil, err
}
return event, nil
}
{{end}}
{{end}}
`
@ -452,95 +505,112 @@ const tmplSourceJava = `
package {{.Package}};
import org.ethereum.geth.*;
import org.ethereum.geth.internal.*;
import java.util.*;
{{range $contract := .Contracts}}
public class {{.Type}} {
// ABI is the input ABI used to generate the binding from.
public final static String ABI = "{{.InputABI}}";
{{if .InputBin}}
// BYTECODE is the compiled bytecode used for deploying new contracts.
public final static byte[] BYTECODE = "{{.InputBin}}".getBytes();
// deploy deploys a new Ethereum contract, binding an instance of {{.Type}} to it.
public static {{.Type}} deploy(TransactOpts auth, EthereumClient client{{range .Constructor.Inputs}}, {{bindtype .Type}} {{.Name}}{{end}}) throws Exception {
Interfaces args = Geth.newInterfaces({{(len .Constructor.Inputs)}});
{{range $index, $element := .Constructor.Inputs}}
args.set({{$index}}, Geth.newInterface()); args.get({{$index}}).set{{namedtype (bindtype .Type) .Type}}({{.Name}});
{{end}}
return new {{.Type}}(Geth.deployContract(auth, ABI, BYTECODE, client, args));
}
// Internal constructor used by contract deployment.
private {{.Type}}(BoundContract deployment) {
this.Address = deployment.getAddress();
this.Deployer = deployment.getDeployer();
this.Contract = deployment;
}
{{end}}
// Ethereum address where this contract is located at.
public final Address Address;
// Ethereum transaction in which this contract was deployed (if known!).
public final Transaction Deployer;
// Contract instance bound to a blockchain address.
private final BoundContract Contract;
// Creates a new instance of {{.Type}}, bound to a specific deployed contract.
public {{.Type}}(Address address, EthereumClient client) throws Exception {
this(Geth.bindContract(address, ABI, client));
}
{{range .Calls}}
{{if gt (len .Normalized.Outputs) 1}}
// {{capitalise .Normalized.Name}}Results is the output of a call to {{.Normalized.Name}}.
public class {{capitalise .Normalized.Name}}Results {
{{range $index, $item := .Normalized.Outputs}}public {{bindtype .Type}} {{if ne .Name ""}}{{.Name}}{{else}}Return{{$index}}{{end}};
{{end}}
}
{{$structs := $contract.Structs}}
{{if not .Library}}public {{end}}class {{.Type}} {
// ABI is the input ABI used to generate the binding from.
public final static String ABI = "{{.InputABI}}";
{{if $contract.FuncSigs}}
// {{.Type}}FuncSigs maps the 4-byte function signature to its string representation.
public final static Map<String, String> {{.Type}}FuncSigs;
static {
Hashtable<String, String> temp = new Hashtable<String, String>();
{{range $strsig, $binsig := .FuncSigs}}temp.put("{{$binsig}}", "{{$strsig}}");
{{end}}
{{.Type}}FuncSigs = Collections.unmodifiableMap(temp);
}
{{end}}
{{if .InputBin}}
// BYTECODE is the compiled bytecode used for deploying new contracts.
public final static String BYTECODE = "0x{{.InputBin}}";
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
public {{if gt (len .Normalized.Outputs) 1}}{{capitalise .Normalized.Name}}Results{{else}}{{range .Normalized.Outputs}}{{bindtype .Type}}{{end}}{{end}} {{.Normalized.Name}}(CallOpts opts{{range .Normalized.Inputs}}, {{bindtype .Type}} {{.Name}}{{end}}) throws Exception {
Interfaces args = Geth.newInterfaces({{(len .Normalized.Inputs)}});
{{range $index, $item := .Normalized.Inputs}}args.set({{$index}}, Geth.newInterface()); args.get({{$index}}).set{{namedtype (bindtype .Type) .Type}}({{.Name}});
{{end}}
// deploy deploys a new Ethereum contract, binding an instance of {{.Type}} to it.
public static {{.Type}} deploy(TransactOpts auth, EthereumClient client{{range .Constructor.Inputs}}, {{bindtype .Type $structs}} {{.Name}}{{end}}) throws Exception {
Interfaces args = Geth.newInterfaces({{(len .Constructor.Inputs)}});
String bytecode = BYTECODE;
{{if .Libraries}}
Interfaces results = Geth.newInterfaces({{(len .Normalized.Outputs)}});
{{range $index, $item := .Normalized.Outputs}}Interface result{{$index}} = Geth.newInterface(); result{{$index}}.setDefault{{namedtype (bindtype .Type) .Type}}(); results.set({{$index}}, result{{$index}});
{{end}}
if (opts == null) {
opts = Geth.newCallOpts();
}
this.Contract.call(opts, results, "{{.Original.Name}}", args);
{{if gt (len .Normalized.Outputs) 1}}
{{capitalise .Normalized.Name}}Results result = new {{capitalise .Normalized.Name}}Results();
{{range $index, $item := .Normalized.Outputs}}result.{{if ne .Name ""}}{{.Name}}{{else}}Return{{$index}}{{end}} = results.get({{$index}}).get{{namedtype (bindtype .Type) .Type}}();
{{end}}
return result;
{{else}}{{range .Normalized.Outputs}}return results.get(0).get{{namedtype (bindtype .Type) .Type}}();{{end}}
{{end}}
}
// Link the contract to its dependent libraries by deploying them first.
{{range $pattern, $name := .Libraries}}
{{capitalise $name}} {{decapitalise $name}}Inst = {{capitalise $name}}.deploy(auth, client);
bytecode = bytecode.replace("__${{$pattern}}$__", {{decapitalise $name}}Inst.Address.getHex().substring(2));
{{end}}
{{end}}
{{range $index, $element := .Constructor.Inputs}}Interface arg{{$index}} = Geth.newInterface();arg{{$index}}.set{{namedtype (bindtype .Type $structs) .Type}}({{.Name}});args.set({{$index}},arg{{$index}});
{{end}}
return new {{.Type}}(Geth.deployContract(auth, ABI, Geth.decodeFromHex(bytecode), client, args));
}
{{range .Transacts}}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
public Transaction {{.Normalized.Name}}(TransactOpts opts{{range .Normalized.Inputs}}, {{bindtype .Type}} {{.Name}}{{end}}) throws Exception {
Interfaces args = Geth.newInterfaces({{(len .Normalized.Inputs)}});
{{range $index, $item := .Normalized.Inputs}}args.set({{$index}}, Geth.newInterface()); args.get({{$index}}).set{{namedtype (bindtype .Type) .Type}}({{.Name}});
{{end}}
// Internal constructor used by contract deployment.
private {{.Type}}(BoundContract deployment) {
this.Address = deployment.getAddress();
this.Deployer = deployment.getDeployer();
this.Contract = deployment;
}
{{end}}
return this.Contract.transact(opts, "{{.Original.Name}}" , args);
}
// Ethereum address where this contract is located at.
public final Address Address;
// Ethereum transaction in which this contract was deployed (if known!).
public final Transaction Deployer;
// Contract instance bound to a blockchain address.
private final BoundContract Contract;
// Creates a new instance of {{.Type}}, bound to a specific deployed contract.
public {{.Type}}(Address address, EthereumClient client) throws Exception {
this(Geth.bindContract(address, ABI, client));
}
{{range .Calls}}
{{if gt (len .Normalized.Outputs) 1}}
// {{capitalise .Normalized.Name}}Results is the output of a call to {{.Normalized.Name}}.
public class {{capitalise .Normalized.Name}}Results {
{{range $index, $item := .Normalized.Outputs}}public {{bindtype .Type $structs}} {{if ne .Name ""}}{{.Name}}{{else}}Return{{$index}}{{end}};
{{end}}
}
{{end}}
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
public {{if gt (len .Normalized.Outputs) 1}}{{capitalise .Normalized.Name}}Results{{else}}{{range .Normalized.Outputs}}{{bindtype .Type $structs}}{{end}}{{end}} {{.Normalized.Name}}(CallOpts opts{{range .Normalized.Inputs}}, {{bindtype .Type $structs}} {{.Name}}{{end}}) throws Exception {
Interfaces args = Geth.newInterfaces({{(len .Normalized.Inputs)}});
{{range $index, $item := .Normalized.Inputs}}Interface arg{{$index}} = Geth.newInterface();arg{{$index}}.set{{namedtype (bindtype .Type $structs) .Type}}({{.Name}});args.set({{$index}},arg{{$index}});
{{end}}
Interfaces results = Geth.newInterfaces({{(len .Normalized.Outputs)}});
{{range $index, $item := .Normalized.Outputs}}Interface result{{$index}} = Geth.newInterface(); result{{$index}}.setDefault{{namedtype (bindtype .Type $structs) .Type}}(); results.set({{$index}}, result{{$index}});
{{end}}
if (opts == null) {
opts = Geth.newCallOpts();
}
this.Contract.call(opts, results, "{{.Original.Name}}", args);
{{if gt (len .Normalized.Outputs) 1}}
{{capitalise .Normalized.Name}}Results result = new {{capitalise .Normalized.Name}}Results();
{{range $index, $item := .Normalized.Outputs}}result.{{if ne .Name ""}}{{.Name}}{{else}}Return{{$index}}{{end}} = results.get({{$index}}).get{{namedtype (bindtype .Type $structs) .Type}}();
{{end}}
return result;
{{else}}{{range .Normalized.Outputs}}return results.get(0).get{{namedtype (bindtype .Type $structs) .Type}}();{{end}}
{{end}}
}
{{end}}
{{range .Transacts}}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.Id}}.
//
// Solidity: {{.Original.String}}
public Transaction {{.Normalized.Name}}(TransactOpts opts{{range .Normalized.Inputs}}, {{bindtype .Type $structs}} {{.Name}}{{end}}) throws Exception {
Interfaces args = Geth.newInterfaces({{(len .Normalized.Inputs)}});
{{range $index, $item := .Normalized.Inputs}}Interface arg{{$index}} = Geth.newInterface();arg{{$index}}.set{{namedtype (bindtype .Type $structs) .Type}}({{.Name}});args.set({{$index}},arg{{$index}});
{{end}}
return this.Contract.transact(opts, "{{.Original.Name}}" , args);
}
{{end}}
}
{{end}}
`
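
Both the Go and the Java deploy templates perform the same link step: every __$pattern$__ placeholder in the creation bytecode is replaced with the freshly deployed library's address, minus the 0x prefix. A standalone sketch of that substitution; the pattern and bytecode values below are made up for illustration:

package main

import (
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/common"
)

// linkBytecode splices a deployed library address into unlinked creation code.
func linkBytecode(bytecode, pattern string, lib common.Address) string {
	// Address.Hex() is 0x-prefixed; the placeholder expects the bare 40 hex characters.
	return strings.Replace(bytecode, "__$"+pattern+"$__", lib.Hex()[2:], -1)
}

func main() {
	unlinked := "6080604052__$1234deadbeef$__5050" // made-up snippet containing a placeholder
	lib := common.HexToAddress("0x0000000000000000000000000000000000000042")
	fmt.Println(linkBytecode(unlinked, "1234deadbeef", lib))
}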

View File

@ -56,7 +56,8 @@ func TestWaitDeployed(t *testing.T) {
backend := backends.NewSimulatedBackend(
core.GenesisAlloc{
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000)},
}, 10000000,
},
10000000,
)
// Create the transaction.

View File

@ -23,9 +23,13 @@ import (
const methoddata = `
[
{ "type" : "function", "name" : "balance", "constant" : true },
{ "type" : "function", "name" : "send", "constant" : false, "inputs" : [ { "name" : "amount", "type" : "uint256" } ] },
{ "type" : "function", "name" : "transfer", "constant" : false, "inputs" : [ { "name" : "from", "type" : "address" }, { "name" : "to", "type" : "address" }, { "name" : "value", "type" : "uint256" } ], "outputs" : [ { "name" : "success", "type" : "bool" } ] }
{"type": "function", "name": "balance", "constant": true },
{"type": "function", "name": "send", "constant": false, "inputs": [{ "name": "amount", "type": "uint256" }]},
{"type": "function", "name": "transfer", "constant": false, "inputs": [{"name": "from", "type": "address"}, {"name": "to", "type": "address"}, {"name": "value", "type": "uint256"}], "outputs": [{"name": "success", "type": "bool"}]},
{"constant":false,"inputs":[{"components":[{"name":"x","type":"uint256"},{"name":"y","type":"uint256"}],"name":"a","type":"tuple"}],"name":"tuple","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},
{"constant":false,"inputs":[{"components":[{"name":"x","type":"uint256"},{"name":"y","type":"uint256"}],"name":"a","type":"tuple[]"}],"name":"tupleSlice","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},
{"constant":false,"inputs":[{"components":[{"name":"x","type":"uint256"},{"name":"y","type":"uint256"}],"name":"a","type":"tuple[5]"}],"name":"tupleArray","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},
{"constant":false,"inputs":[{"components":[{"name":"x","type":"uint256"},{"name":"y","type":"uint256"}],"name":"a","type":"tuple[5][]"}],"name":"complexTuple","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"}
]`
func TestMethodString(t *testing.T) {
@ -45,6 +49,22 @@ func TestMethodString(t *testing.T) {
method: "transfer",
expectation: "function transfer(address from, address to, uint256 value) returns(bool success)",
},
{
method: "tuple",
expectation: "function tuple((uint256,uint256) a) returns()",
},
{
method: "tupleArray",
expectation: "function tupleArray((uint256,uint256)[5] a) returns()",
},
{
method: "tupleSlice",
expectation: "function tupleSlice((uint256,uint256)[] a) returns()",
},
{
method: "complexTuple",
expectation: "function complexTuple((uint256,uint256)[5][] a) returns()",
},
}
abi, err := JSON(strings.NewReader(methoddata))
@ -59,3 +79,50 @@ func TestMethodString(t *testing.T) {
}
}
}
func TestMethodSig(t *testing.T) {
var cases = []struct {
method string
expect string
}{
{
method: "balance",
expect: "balance()",
},
{
method: "send",
expect: "send(uint256)",
},
{
method: "transfer",
expect: "transfer(address,address,uint256)",
},
{
method: "tuple",
expect: "tuple((uint256,uint256))",
},
{
method: "tupleArray",
expect: "tupleArray((uint256,uint256)[5])",
},
{
method: "tupleSlice",
expect: "tupleSlice((uint256,uint256)[])",
},
{
method: "complexTuple",
expect: "complexTuple((uint256,uint256)[5][])",
},
}
abi, err := JSON(strings.NewReader(methoddata))
if err != nil {
t.Fatal(err)
}
for _, test := range cases {
got := abi.Methods[test.method].Sig()
if got != test.expect {
t.Errorf("expected string to be %s, got %s", test.expect, got)
}
}
}
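
The 0x-prefixed method IDs referenced in the generated binding comments come from exactly these signatures: the 4-byte selector is the first four bytes of the Keccak-256 hash of the canonical signature string that Sig() returns. A quick sketch:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

// selector computes the 4-byte function selector for a canonical signature.
func selector(sig string) []byte {
	return crypto.Keccak256([]byte(sig))[:4]
}

func main() {
	// One of the signatures exercised by TestMethodSig above.
	fmt.Printf("%x\n", selector("transfer(address,address,uint256)"))
}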

View File

@ -31,6 +31,14 @@ func indirect(v reflect.Value) reflect.Value {
return v
}
// indirectInterfaceOrPtr recursively dereferences the value until value is not interface.
func indirectInterfaceOrPtr(v reflect.Value) reflect.Value {
if (v.Kind() == reflect.Interface || v.Kind() == reflect.Ptr) && v.Elem().IsValid() {
return indirect(v.Elem())
}
return v
}
// reflectIntKindAndType returns the reflect.Kind and reflect.Type for an
// integer of the given size and signedness.
func reflectIntKindAndType(unsigned bool, size int) (reflect.Kind, reflect.Type) {

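The new indirectInterfaceOrPtr helper exists, as its comment says, to peel away interface (and then pointer) wrappers before the decoder touches the concrete value. A tiny standalone illustration of that double dereference, not the decoder itself:

package main

import (
	"fmt"
	"reflect"
)

func main() {
	x := 42
	var boxed interface{} = &x // an interface wrapping a pointer

	v := reflect.ValueOf(&boxed).Elem() // Kind() == reflect.Interface
	for (v.Kind() == reflect.Interface || v.Kind() == reflect.Ptr) && v.Elem().IsValid() {
		v = v.Elem() // peel the interface, then the pointer
	}
	fmt.Println(v.Kind(), v.Interface()) // int 42
}
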
View File

@ -68,7 +68,6 @@ func NewType(t string, components []ArgumentMarshaling) (typ Type, err error) {
if strings.Count(t, "[") != strings.Count(t, "]") {
return Type{}, fmt.Errorf("invalid arg type in abi")
}
typ.stringKind = t
// if there are brackets, get ready to go into slice/array mode and
@ -92,9 +91,7 @@ func NewType(t string, components []ArgumentMarshaling) (typ Type, err error) {
typ.Kind = reflect.Slice
typ.Elem = &embeddedType
typ.Type = reflect.SliceOf(embeddedType.Type)
if embeddedType.T == TupleTy {
typ.stringKind = embeddedType.stringKind + sliced
}
typ.stringKind = embeddedType.stringKind + sliced
} else if len(intz) == 1 {
// is an array
typ.T = ArrayTy
@ -105,9 +102,7 @@ func NewType(t string, components []ArgumentMarshaling) (typ Type, err error) {
return Type{}, fmt.Errorf("abi: error parsing variable size: %v", err)
}
typ.Type = reflect.ArrayOf(typ.Size, embeddedType.Type)
if embeddedType.T == TupleTy {
typ.stringKind = embeddedType.stringKind + sliced
}
typ.stringKind = embeddedType.stringKind + sliced
} else {
return Type{}, fmt.Errorf("invalid formatting of array type")
}
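
The effect of composing stringKind unconditionally above is that nested tuple container types now stringify to their canonical form, which is what the new tuple cases in TestMethodString and TestMethodSig above expect. A hedged sketch against the two-argument NewType shown in this hunk:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	// (uint256,uint256) tuple components, as in the methoddata test fixture above.
	point := []abi.ArgumentMarshaling{
		{Name: "x", Type: "uint256"},
		{Name: "y", Type: "uint256"},
	}
	typ, err := abi.NewType("tuple[5][]", point)
	if err != nil {
		panic(err)
	}
	fmt.Println(typ.String()) // (uint256,uint256)[5][]
}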

View File

@ -965,25 +965,21 @@ func TestUnpackTuple(t *testing.T) {
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // ret[a] = 1
buff.Write(common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff")) // ret[b] = -1
// If the result is single tuple, use struct as return value container directly.
v := struct {
Ret struct {
A *big.Int
B *big.Int
}
}{Ret: struct {
A *big.Int
B *big.Int
}{new(big.Int), new(big.Int)}}
}{new(big.Int), new(big.Int)}
err = abi.Unpack(&v, "tuple", buff.Bytes())
if err != nil {
t.Error(err)
} else {
if v.Ret.A.Cmp(big.NewInt(1)) != 0 {
t.Errorf("unexpected value unpacked: want %x, got %x", 1, v.Ret.A)
if v.A.Cmp(big.NewInt(1)) != 0 {
t.Errorf("unexpected value unpacked: want %x, got %x", 1, v.A)
}
if v.Ret.B.Cmp(big.NewInt(-1)) != 0 {
t.Errorf("unexpected value unpacked: want %x, got %x", v.Ret.B, -1)
if v.B.Cmp(big.NewInt(-1)) != 0 {
t.Errorf("unexpected value unpacked: want %x, got %x", v.B, -1)
}
}

View File

@ -182,18 +182,21 @@ func (api *ExternalSigner) SignText(account accounts.Account, text []byte) ([]by
func (api *ExternalSigner) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
res := ethapi.SignTransactionResult{}
to := common.NewMixedcaseAddress(*tx.To())
data := hexutil.Bytes(tx.Data())
var to *common.MixedcaseAddress
if tx.To() != nil {
t := common.NewMixedcaseAddress(*tx.To())
to = &t
}
args := &core.SendTxArgs{
Data: &data,
Nonce: hexutil.Uint64(tx.Nonce()),
Value: hexutil.Big(*tx.Value()),
Gas: hexutil.Uint64(tx.Gas()),
GasPrice: hexutil.Big(*tx.GasPrice()),
To: &to,
To: to,
From: common.NewMixedcaseAddress(account.Address),
}
if err := api.client.Call(&res, "account_signTransaction", args); err != nil {
return nil, err
}
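
The nil check above matters because contract-creation transactions carry no recipient, so tx.To() legitimately returns nil. A small sketch constructing such a transaction with the types package; the gas and bytecode values are made up:

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

func main() {
	code := []byte{0x60, 0x80, 0x60, 0x40} // truncated, illustrative creation bytecode
	tx := types.NewContractCreation(0, big.NewInt(0), 3000000, big.NewInt(1000000000), code)
	fmt.Println(tx.To() == nil) // true: the signer must not dereference To() blindly
}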

View File

@ -379,9 +379,9 @@ func tmpKeyStore(t *testing.T, encrypted bool) (string, *KeyStore) {
if err != nil {
t.Fatal(err)
}
new := NewPlaintextKeyStore
newKs := NewPlaintextKeyStore
if encrypted {
new = func(kd string) *KeyStore { return NewKeyStore(kd, veryLightScryptN, veryLightScryptP) }
newKs = func(kd string) *KeyStore { return NewKeyStore(kd, veryLightScryptN, veryLightScryptP) }
}
return d, new(d)
return d, newKs(d)
}

View File

@ -20,12 +20,13 @@ import (
"errors"
"runtime"
"sync"
"sync/atomic"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
"github.com/karalabe/hid"
"github.com/karalabe/usb"
)
// LedgerScheme is the protocol scheme prefixing account and wallet URLs.
@ -64,6 +65,7 @@ type Hub struct {
// TODO(karalabe): remove if hotplug lands on Windows
commsPend int // Number of operations blocking enumeration
commsLock sync.Mutex // Lock protecting the pending counter and enumeration
enumFails uint32 // Number of times enumeration has failed
}
// NewLedgerHub creates a new hardware wallet manager for Ledger devices.
@ -84,14 +86,20 @@ func NewLedgerHub() (*Hub, error) {
}, 0xffa0, 0, newLedgerDriver)
}
// NewTrezorHub creates a new hardware wallet manager for Trezor devices.
func NewTrezorHub() (*Hub, error) {
return newHub(TrezorScheme, 0x534c, []uint16{0x0001 /* Trezor 1 */}, 0xff00, 0, newTrezorDriver)
// NewTrezorHubWithHID creates a new hardware wallet manager for Trezor devices.
func NewTrezorHubWithHID() (*Hub, error) {
return newHub(TrezorScheme, 0x534c, []uint16{0x0001 /* Trezor HID */}, 0xff00, 0, newTrezorDriver)
}
// NewTrezorHubWithWebUSB creates a new hardware wallet manager for Trezor devices with
// firmware version > 1.8.0
func NewTrezorHubWithWebUSB() (*Hub, error) {
return newHub(TrezorScheme, 0x1209, []uint16{0x53c1 /* Trezor WebUSB */}, 0xffff /* No usage id on webusb, don't match unset (0) */, 0, newTrezorDriver)
}
// newHub creates a new hardware wallet manager for generic USB devices.
func newHub(scheme string, vendorID uint16, productIDs []uint16, usageID uint16, endpointID int, makeDriver func(log.Logger) driver) (*Hub, error) {
if !hid.Supported() {
if !usb.Supported() {
return nil, errors.New("unsupported platform")
}
hub := &Hub{
@ -132,8 +140,12 @@ func (hub *Hub) refreshWallets() {
if elapsed < refreshThrottling {
return
}
// If USB enumeration is continually failing, don't keep trying indefinitely
if atomic.LoadUint32(&hub.enumFails) > 2 {
return
}
// Retrieve the current list of USB wallet devices
var devices []hid.DeviceInfo
var devices []usb.DeviceInfo
if runtime.GOOS == "linux" {
// hidapi on Linux opens the device during enumeration to retrieve some info,
@ -148,8 +160,22 @@ func (hub *Hub) refreshWallets() {
return
}
}
for _, info := range hid.Enumerate(hub.vendorID, 0) {
infos, err := usb.Enumerate(hub.vendorID, 0)
if err != nil {
failcount := atomic.AddUint32(&hub.enumFails, 1)
if runtime.GOOS == "linux" {
// See the rationale above the enumeration for why this is needed, and why only on Linux.
hub.commsLock.Unlock()
}
log.Error("Failed to enumerate USB devices", "hub", hub.scheme,
"vendor", hub.vendorID, "failcount", failcount, "err", err)
return
}
atomic.StoreUint32(&hub.enumFails, 0)
for _, info := range infos {
for _, id := range hub.productIDs {
// Windows and macOS use UsageID matching, Linux uses Interface matching
if info.ProductID == id && (info.UsagePage == hub.usageID || info.Interface == hub.endpointID) {
devices = append(devices, info)
break

View File
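The enumFails guard above is a simple atomic trip switch: bail out once enumeration has failed a few times in a row, and clear the counter on the first success. The same pattern in isolation, not the Hub code itself:

package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

type enumerator struct {
	fails uint32 // consecutive enumeration failures
}

func (e *enumerator) refresh(enumerate func() error) {
	if atomic.LoadUint32(&e.fails) > 2 {
		return // repeatedly failing, stop hammering the backend
	}
	if err := enumerate(); err != nil {
		fmt.Println("enumeration failed:", atomic.AddUint32(&e.fails, 1), err)
		return
	}
	atomic.StoreUint32(&e.fails, 0) // healthy again, reset the trip switch
}

func main() {
	e := new(enumerator)
	for i := 0; i < 5; i++ {
		e.refresh(func() error { return errors.New("no usb backend") })
	}
}
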

@ -192,7 +192,13 @@ func (w *trezorDriver) trezorDerive(derivationPath []uint32) (common.Address, er
if _, err := w.trezorExchange(&trezor.EthereumGetAddress{AddressN: derivationPath}, address); err != nil {
return common.Address{}, err
}
return common.BytesToAddress(address.GetAddress()), nil
if addr := address.GetAddressBin(); len(addr) > 0 { // Older firmwares use binary formats
return common.BytesToAddress(addr), nil
}
if addr := address.GetAddressHex(); len(addr) > 0 { // Newer firmwares use hexadecimal formats
return common.HexToAddress(addr), nil
}
return common.Address{}, errors.New("missing derived address")
}
// trezorSign sends the transaction to the Trezor wallet, and waits for the user
@ -211,7 +217,10 @@ func (w *trezorDriver) trezorSign(derivationPath []uint32, tx *types.Transaction
DataLength: &length,
}
if to := tx.To(); to != nil {
request.To = (*to)[:] // Non contract deploy, set recipient explicitly
// Non contract deploy, set recipient explicitly
hex := to.Hex()
request.ToHex = &hex // Newer firmwares (old will ignore)
request.ToBin = (*to)[:] // Older firmwares (new will ignore)
}
if length > 1024 { // Send the data chunked if that was requested
request.DataInitialChunk, data = data[:1024], data[1024:]

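The two branches above exist because older Trezor firmware reports the derived address as raw bytes while newer firmware reports a hex string; both decode to the same common.Address. In miniature, with illustrative values:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	raw := make([]byte, 20) // 20-byte binary address, as older firmware returns
	raw[19] = 0x42
	fromBin := common.BytesToAddress(raw)
	fromHex := common.HexToAddress("0x0000000000000000000000000000000000000042")
	fmt.Println(fromBin == fromHex) // true
}
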
View File

@ -0,0 +1,811 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: messages-common.proto
package trezor
import (
fmt "fmt"
math "math"
proto "github.com/golang/protobuf/proto"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
type Failure_FailureType int32
const (
Failure_Failure_UnexpectedMessage Failure_FailureType = 1
Failure_Failure_ButtonExpected Failure_FailureType = 2
Failure_Failure_DataError Failure_FailureType = 3
Failure_Failure_ActionCancelled Failure_FailureType = 4
Failure_Failure_PinExpected Failure_FailureType = 5
Failure_Failure_PinCancelled Failure_FailureType = 6
Failure_Failure_PinInvalid Failure_FailureType = 7
Failure_Failure_InvalidSignature Failure_FailureType = 8
Failure_Failure_ProcessError Failure_FailureType = 9
Failure_Failure_NotEnoughFunds Failure_FailureType = 10
Failure_Failure_NotInitialized Failure_FailureType = 11
Failure_Failure_PinMismatch Failure_FailureType = 12
Failure_Failure_FirmwareError Failure_FailureType = 99
)
var Failure_FailureType_name = map[int32]string{
1: "Failure_UnexpectedMessage",
2: "Failure_ButtonExpected",
3: "Failure_DataError",
4: "Failure_ActionCancelled",
5: "Failure_PinExpected",
6: "Failure_PinCancelled",
7: "Failure_PinInvalid",
8: "Failure_InvalidSignature",
9: "Failure_ProcessError",
10: "Failure_NotEnoughFunds",
11: "Failure_NotInitialized",
12: "Failure_PinMismatch",
99: "Failure_FirmwareError",
}
var Failure_FailureType_value = map[string]int32{
"Failure_UnexpectedMessage": 1,
"Failure_ButtonExpected": 2,
"Failure_DataError": 3,
"Failure_ActionCancelled": 4,
"Failure_PinExpected": 5,
"Failure_PinCancelled": 6,
"Failure_PinInvalid": 7,
"Failure_InvalidSignature": 8,
"Failure_ProcessError": 9,
"Failure_NotEnoughFunds": 10,
"Failure_NotInitialized": 11,
"Failure_PinMismatch": 12,
"Failure_FirmwareError": 99,
}
func (x Failure_FailureType) Enum() *Failure_FailureType {
p := new(Failure_FailureType)
*p = x
return p
}
func (x Failure_FailureType) String() string {
return proto.EnumName(Failure_FailureType_name, int32(x))
}
func (x *Failure_FailureType) UnmarshalJSON(data []byte) error {
value, err := proto.UnmarshalJSONEnum(Failure_FailureType_value, data, "Failure_FailureType")
if err != nil {
return err
}
*x = Failure_FailureType(value)
return nil
}
func (Failure_FailureType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{1, 0}
}
//*
// Type of button request
type ButtonRequest_ButtonRequestType int32
const (
ButtonRequest_ButtonRequest_Other ButtonRequest_ButtonRequestType = 1
ButtonRequest_ButtonRequest_FeeOverThreshold ButtonRequest_ButtonRequestType = 2
ButtonRequest_ButtonRequest_ConfirmOutput ButtonRequest_ButtonRequestType = 3
ButtonRequest_ButtonRequest_ResetDevice ButtonRequest_ButtonRequestType = 4
ButtonRequest_ButtonRequest_ConfirmWord ButtonRequest_ButtonRequestType = 5
ButtonRequest_ButtonRequest_WipeDevice ButtonRequest_ButtonRequestType = 6
ButtonRequest_ButtonRequest_ProtectCall ButtonRequest_ButtonRequestType = 7
ButtonRequest_ButtonRequest_SignTx ButtonRequest_ButtonRequestType = 8
ButtonRequest_ButtonRequest_FirmwareCheck ButtonRequest_ButtonRequestType = 9
ButtonRequest_ButtonRequest_Address ButtonRequest_ButtonRequestType = 10
ButtonRequest_ButtonRequest_PublicKey ButtonRequest_ButtonRequestType = 11
ButtonRequest_ButtonRequest_MnemonicWordCount ButtonRequest_ButtonRequestType = 12
ButtonRequest_ButtonRequest_MnemonicInput ButtonRequest_ButtonRequestType = 13
ButtonRequest_ButtonRequest_PassphraseType ButtonRequest_ButtonRequestType = 14
ButtonRequest_ButtonRequest_UnknownDerivationPath ButtonRequest_ButtonRequestType = 15
)
var ButtonRequest_ButtonRequestType_name = map[int32]string{
1: "ButtonRequest_Other",
2: "ButtonRequest_FeeOverThreshold",
3: "ButtonRequest_ConfirmOutput",
4: "ButtonRequest_ResetDevice",
5: "ButtonRequest_ConfirmWord",
6: "ButtonRequest_WipeDevice",
7: "ButtonRequest_ProtectCall",
8: "ButtonRequest_SignTx",
9: "ButtonRequest_FirmwareCheck",
10: "ButtonRequest_Address",
11: "ButtonRequest_PublicKey",
12: "ButtonRequest_MnemonicWordCount",
13: "ButtonRequest_MnemonicInput",
14: "ButtonRequest_PassphraseType",
15: "ButtonRequest_UnknownDerivationPath",
}
var ButtonRequest_ButtonRequestType_value = map[string]int32{
"ButtonRequest_Other": 1,
"ButtonRequest_FeeOverThreshold": 2,
"ButtonRequest_ConfirmOutput": 3,
"ButtonRequest_ResetDevice": 4,
"ButtonRequest_ConfirmWord": 5,
"ButtonRequest_WipeDevice": 6,
"ButtonRequest_ProtectCall": 7,
"ButtonRequest_SignTx": 8,
"ButtonRequest_FirmwareCheck": 9,
"ButtonRequest_Address": 10,
"ButtonRequest_PublicKey": 11,
"ButtonRequest_MnemonicWordCount": 12,
"ButtonRequest_MnemonicInput": 13,
"ButtonRequest_PassphraseType": 14,
"ButtonRequest_UnknownDerivationPath": 15,
}
func (x ButtonRequest_ButtonRequestType) Enum() *ButtonRequest_ButtonRequestType {
p := new(ButtonRequest_ButtonRequestType)
*p = x
return p
}
func (x ButtonRequest_ButtonRequestType) String() string {
return proto.EnumName(ButtonRequest_ButtonRequestType_name, int32(x))
}
func (x *ButtonRequest_ButtonRequestType) UnmarshalJSON(data []byte) error {
value, err := proto.UnmarshalJSONEnum(ButtonRequest_ButtonRequestType_value, data, "ButtonRequest_ButtonRequestType")
if err != nil {
return err
}
*x = ButtonRequest_ButtonRequestType(value)
return nil
}
func (ButtonRequest_ButtonRequestType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{2, 0}
}
//*
// Type of PIN request
type PinMatrixRequest_PinMatrixRequestType int32
const (
PinMatrixRequest_PinMatrixRequestType_Current PinMatrixRequest_PinMatrixRequestType = 1
PinMatrixRequest_PinMatrixRequestType_NewFirst PinMatrixRequest_PinMatrixRequestType = 2
PinMatrixRequest_PinMatrixRequestType_NewSecond PinMatrixRequest_PinMatrixRequestType = 3
)
var PinMatrixRequest_PinMatrixRequestType_name = map[int32]string{
1: "PinMatrixRequestType_Current",
2: "PinMatrixRequestType_NewFirst",
3: "PinMatrixRequestType_NewSecond",
}
var PinMatrixRequest_PinMatrixRequestType_value = map[string]int32{
"PinMatrixRequestType_Current": 1,
"PinMatrixRequestType_NewFirst": 2,
"PinMatrixRequestType_NewSecond": 3,
}
func (x PinMatrixRequest_PinMatrixRequestType) Enum() *PinMatrixRequest_PinMatrixRequestType {
p := new(PinMatrixRequest_PinMatrixRequestType)
*p = x
return p
}
func (x PinMatrixRequest_PinMatrixRequestType) String() string {
return proto.EnumName(PinMatrixRequest_PinMatrixRequestType_name, int32(x))
}
func (x *PinMatrixRequest_PinMatrixRequestType) UnmarshalJSON(data []byte) error {
value, err := proto.UnmarshalJSONEnum(PinMatrixRequest_PinMatrixRequestType_value, data, "PinMatrixRequest_PinMatrixRequestType")
if err != nil {
return err
}
*x = PinMatrixRequest_PinMatrixRequestType(value)
return nil
}
func (PinMatrixRequest_PinMatrixRequestType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{4, 0}
}
//*
// Response: Success of the previous request
// @end
type Success struct {
Message *string `protobuf:"bytes,1,opt,name=message" json:"message,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Success) Reset() { *m = Success{} }
func (m *Success) String() string { return proto.CompactTextString(m) }
func (*Success) ProtoMessage() {}
func (*Success) Descriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{0}
}
func (m *Success) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Success.Unmarshal(m, b)
}
func (m *Success) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Success.Marshal(b, m, deterministic)
}
func (m *Success) XXX_Merge(src proto.Message) {
xxx_messageInfo_Success.Merge(m, src)
}
func (m *Success) XXX_Size() int {
return xxx_messageInfo_Success.Size(m)
}
func (m *Success) XXX_DiscardUnknown() {
xxx_messageInfo_Success.DiscardUnknown(m)
}
var xxx_messageInfo_Success proto.InternalMessageInfo
func (m *Success) GetMessage() string {
if m != nil && m.Message != nil {
return *m.Message
}
return ""
}
//*
// Response: Failure of the previous request
// @end
type Failure struct {
Code *Failure_FailureType `protobuf:"varint,1,opt,name=code,enum=hw.trezor.messages.common.Failure_FailureType" json:"code,omitempty"`
Message *string `protobuf:"bytes,2,opt,name=message" json:"message,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Failure) Reset() { *m = Failure{} }
func (m *Failure) String() string { return proto.CompactTextString(m) }
func (*Failure) ProtoMessage() {}
func (*Failure) Descriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{1}
}
func (m *Failure) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Failure.Unmarshal(m, b)
}
func (m *Failure) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Failure.Marshal(b, m, deterministic)
}
func (m *Failure) XXX_Merge(src proto.Message) {
xxx_messageInfo_Failure.Merge(m, src)
}
func (m *Failure) XXX_Size() int {
return xxx_messageInfo_Failure.Size(m)
}
func (m *Failure) XXX_DiscardUnknown() {
xxx_messageInfo_Failure.DiscardUnknown(m)
}
var xxx_messageInfo_Failure proto.InternalMessageInfo
func (m *Failure) GetCode() Failure_FailureType {
if m != nil && m.Code != nil {
return *m.Code
}
return Failure_Failure_UnexpectedMessage
}
func (m *Failure) GetMessage() string {
if m != nil && m.Message != nil {
return *m.Message
}
return ""
}
//*
// Response: Device is waiting for HW button press.
// @auxstart
// @next ButtonAck
type ButtonRequest struct {
Code *ButtonRequest_ButtonRequestType `protobuf:"varint,1,opt,name=code,enum=hw.trezor.messages.common.ButtonRequest_ButtonRequestType" json:"code,omitempty"`
Data *string `protobuf:"bytes,2,opt,name=data" json:"data,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ButtonRequest) Reset() { *m = ButtonRequest{} }
func (m *ButtonRequest) String() string { return proto.CompactTextString(m) }
func (*ButtonRequest) ProtoMessage() {}
func (*ButtonRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{2}
}
func (m *ButtonRequest) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_ButtonRequest.Unmarshal(m, b)
}
func (m *ButtonRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_ButtonRequest.Marshal(b, m, deterministic)
}
func (m *ButtonRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_ButtonRequest.Merge(m, src)
}
func (m *ButtonRequest) XXX_Size() int {
return xxx_messageInfo_ButtonRequest.Size(m)
}
func (m *ButtonRequest) XXX_DiscardUnknown() {
xxx_messageInfo_ButtonRequest.DiscardUnknown(m)
}
var xxx_messageInfo_ButtonRequest proto.InternalMessageInfo
func (m *ButtonRequest) GetCode() ButtonRequest_ButtonRequestType {
if m != nil && m.Code != nil {
return *m.Code
}
return ButtonRequest_ButtonRequest_Other
}
func (m *ButtonRequest) GetData() string {
if m != nil && m.Data != nil {
return *m.Data
}
return ""
}
//*
// Request: Computer agrees to wait for HW button press
// @auxend
type ButtonAck struct {
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ButtonAck) Reset() { *m = ButtonAck{} }
func (m *ButtonAck) String() string { return proto.CompactTextString(m) }
func (*ButtonAck) ProtoMessage() {}
func (*ButtonAck) Descriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{3}
}
func (m *ButtonAck) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_ButtonAck.Unmarshal(m, b)
}
func (m *ButtonAck) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_ButtonAck.Marshal(b, m, deterministic)
}
func (m *ButtonAck) XXX_Merge(src proto.Message) {
xxx_messageInfo_ButtonAck.Merge(m, src)
}
func (m *ButtonAck) XXX_Size() int {
return xxx_messageInfo_ButtonAck.Size(m)
}
func (m *ButtonAck) XXX_DiscardUnknown() {
xxx_messageInfo_ButtonAck.DiscardUnknown(m)
}
var xxx_messageInfo_ButtonAck proto.InternalMessageInfo
//*
// Response: Device is asking computer to show PIN matrix and awaits PIN encoded using this matrix scheme
// @auxstart
// @next PinMatrixAck
type PinMatrixRequest struct {
Type *PinMatrixRequest_PinMatrixRequestType `protobuf:"varint,1,opt,name=type,enum=hw.trezor.messages.common.PinMatrixRequest_PinMatrixRequestType" json:"type,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *PinMatrixRequest) Reset() { *m = PinMatrixRequest{} }
func (m *PinMatrixRequest) String() string { return proto.CompactTextString(m) }
func (*PinMatrixRequest) ProtoMessage() {}
func (*PinMatrixRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{4}
}
func (m *PinMatrixRequest) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_PinMatrixRequest.Unmarshal(m, b)
}
func (m *PinMatrixRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_PinMatrixRequest.Marshal(b, m, deterministic)
}
func (m *PinMatrixRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_PinMatrixRequest.Merge(m, src)
}
func (m *PinMatrixRequest) XXX_Size() int {
return xxx_messageInfo_PinMatrixRequest.Size(m)
}
func (m *PinMatrixRequest) XXX_DiscardUnknown() {
xxx_messageInfo_PinMatrixRequest.DiscardUnknown(m)
}
var xxx_messageInfo_PinMatrixRequest proto.InternalMessageInfo
func (m *PinMatrixRequest) GetType() PinMatrixRequest_PinMatrixRequestType {
if m != nil && m.Type != nil {
return *m.Type
}
return PinMatrixRequest_PinMatrixRequestType_Current
}
//*
// Request: Computer responds with encoded PIN
// @auxend
type PinMatrixAck struct {
Pin *string `protobuf:"bytes,1,req,name=pin" json:"pin,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *PinMatrixAck) Reset() { *m = PinMatrixAck{} }
func (m *PinMatrixAck) String() string { return proto.CompactTextString(m) }
func (*PinMatrixAck) ProtoMessage() {}
func (*PinMatrixAck) Descriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{5}
}
func (m *PinMatrixAck) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_PinMatrixAck.Unmarshal(m, b)
}
func (m *PinMatrixAck) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_PinMatrixAck.Marshal(b, m, deterministic)
}
func (m *PinMatrixAck) XXX_Merge(src proto.Message) {
xxx_messageInfo_PinMatrixAck.Merge(m, src)
}
func (m *PinMatrixAck) XXX_Size() int {
return xxx_messageInfo_PinMatrixAck.Size(m)
}
func (m *PinMatrixAck) XXX_DiscardUnknown() {
xxx_messageInfo_PinMatrixAck.DiscardUnknown(m)
}
var xxx_messageInfo_PinMatrixAck proto.InternalMessageInfo
func (m *PinMatrixAck) GetPin() string {
if m != nil && m.Pin != nil {
return *m.Pin
}
return ""
}
//*
// Response: Device awaits encryption passphrase
// @auxstart
// @next PassphraseAck
type PassphraseRequest struct {
OnDevice *bool `protobuf:"varint,1,opt,name=on_device,json=onDevice" json:"on_device,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *PassphraseRequest) Reset() { *m = PassphraseRequest{} }
func (m *PassphraseRequest) String() string { return proto.CompactTextString(m) }
func (*PassphraseRequest) ProtoMessage() {}
func (*PassphraseRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{6}
}
func (m *PassphraseRequest) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_PassphraseRequest.Unmarshal(m, b)
}
func (m *PassphraseRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_PassphraseRequest.Marshal(b, m, deterministic)
}
func (m *PassphraseRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_PassphraseRequest.Merge(m, src)
}
func (m *PassphraseRequest) XXX_Size() int {
return xxx_messageInfo_PassphraseRequest.Size(m)
}
func (m *PassphraseRequest) XXX_DiscardUnknown() {
xxx_messageInfo_PassphraseRequest.DiscardUnknown(m)
}
var xxx_messageInfo_PassphraseRequest proto.InternalMessageInfo
func (m *PassphraseRequest) GetOnDevice() bool {
if m != nil && m.OnDevice != nil {
return *m.OnDevice
}
return false
}
//*
// Request: Send passphrase back
// @next PassphraseStateRequest
type PassphraseAck struct {
Passphrase *string `protobuf:"bytes,1,opt,name=passphrase" json:"passphrase,omitempty"`
State []byte `protobuf:"bytes,2,opt,name=state" json:"state,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *PassphraseAck) Reset() { *m = PassphraseAck{} }
func (m *PassphraseAck) String() string { return proto.CompactTextString(m) }
func (*PassphraseAck) ProtoMessage() {}
func (*PassphraseAck) Descriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{7}
}
func (m *PassphraseAck) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_PassphraseAck.Unmarshal(m, b)
}
func (m *PassphraseAck) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_PassphraseAck.Marshal(b, m, deterministic)
}
func (m *PassphraseAck) XXX_Merge(src proto.Message) {
xxx_messageInfo_PassphraseAck.Merge(m, src)
}
func (m *PassphraseAck) XXX_Size() int {
return xxx_messageInfo_PassphraseAck.Size(m)
}
func (m *PassphraseAck) XXX_DiscardUnknown() {
xxx_messageInfo_PassphraseAck.DiscardUnknown(m)
}
var xxx_messageInfo_PassphraseAck proto.InternalMessageInfo
func (m *PassphraseAck) GetPassphrase() string {
if m != nil && m.Passphrase != nil {
return *m.Passphrase
}
return ""
}
func (m *PassphraseAck) GetState() []byte {
if m != nil {
return m.State
}
return nil
}
//*
// Response: Device awaits passphrase state
// @next PassphraseStateAck
type PassphraseStateRequest struct {
State []byte `protobuf:"bytes,1,opt,name=state" json:"state,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *PassphraseStateRequest) Reset() { *m = PassphraseStateRequest{} }
func (m *PassphraseStateRequest) String() string { return proto.CompactTextString(m) }
func (*PassphraseStateRequest) ProtoMessage() {}
func (*PassphraseStateRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{8}
}
func (m *PassphraseStateRequest) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_PassphraseStateRequest.Unmarshal(m, b)
}
func (m *PassphraseStateRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_PassphraseStateRequest.Marshal(b, m, deterministic)
}
func (m *PassphraseStateRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_PassphraseStateRequest.Merge(m, src)
}
func (m *PassphraseStateRequest) XXX_Size() int {
return xxx_messageInfo_PassphraseStateRequest.Size(m)
}
func (m *PassphraseStateRequest) XXX_DiscardUnknown() {
xxx_messageInfo_PassphraseStateRequest.DiscardUnknown(m)
}
var xxx_messageInfo_PassphraseStateRequest proto.InternalMessageInfo
func (m *PassphraseStateRequest) GetState() []byte {
if m != nil {
return m.State
}
return nil
}
//*
// Request: Send passphrase state back
// @auxend
type PassphraseStateAck struct {
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *PassphraseStateAck) Reset() { *m = PassphraseStateAck{} }
func (m *PassphraseStateAck) String() string { return proto.CompactTextString(m) }
func (*PassphraseStateAck) ProtoMessage() {}
func (*PassphraseStateAck) Descriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{9}
}
func (m *PassphraseStateAck) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_PassphraseStateAck.Unmarshal(m, b)
}
func (m *PassphraseStateAck) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_PassphraseStateAck.Marshal(b, m, deterministic)
}
func (m *PassphraseStateAck) XXX_Merge(src proto.Message) {
xxx_messageInfo_PassphraseStateAck.Merge(m, src)
}
func (m *PassphraseStateAck) XXX_Size() int {
return xxx_messageInfo_PassphraseStateAck.Size(m)
}
func (m *PassphraseStateAck) XXX_DiscardUnknown() {
xxx_messageInfo_PassphraseStateAck.DiscardUnknown(m)
}
var xxx_messageInfo_PassphraseStateAck proto.InternalMessageInfo
//*
// Structure representing BIP32 (hierarchical deterministic) node
// Used for imports of private key into the device and exporting public key out of device
// @embed
type HDNodeType struct {
Depth *uint32 `protobuf:"varint,1,req,name=depth" json:"depth,omitempty"`
Fingerprint *uint32 `protobuf:"varint,2,req,name=fingerprint" json:"fingerprint,omitempty"`
ChildNum *uint32 `protobuf:"varint,3,req,name=child_num,json=childNum" json:"child_num,omitempty"`
ChainCode []byte `protobuf:"bytes,4,req,name=chain_code,json=chainCode" json:"chain_code,omitempty"`
PrivateKey []byte `protobuf:"bytes,5,opt,name=private_key,json=privateKey" json:"private_key,omitempty"`
PublicKey []byte `protobuf:"bytes,6,opt,name=public_key,json=publicKey" json:"public_key,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *HDNodeType) Reset() { *m = HDNodeType{} }
func (m *HDNodeType) String() string { return proto.CompactTextString(m) }
func (*HDNodeType) ProtoMessage() {}
func (*HDNodeType) Descriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{10}
}
func (m *HDNodeType) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_HDNodeType.Unmarshal(m, b)
}
func (m *HDNodeType) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_HDNodeType.Marshal(b, m, deterministic)
}
func (m *HDNodeType) XXX_Merge(src proto.Message) {
xxx_messageInfo_HDNodeType.Merge(m, src)
}
func (m *HDNodeType) XXX_Size() int {
return xxx_messageInfo_HDNodeType.Size(m)
}
func (m *HDNodeType) XXX_DiscardUnknown() {
xxx_messageInfo_HDNodeType.DiscardUnknown(m)
}
var xxx_messageInfo_HDNodeType proto.InternalMessageInfo
func (m *HDNodeType) GetDepth() uint32 {
if m != nil && m.Depth != nil {
return *m.Depth
}
return 0
}
func (m *HDNodeType) GetFingerprint() uint32 {
if m != nil && m.Fingerprint != nil {
return *m.Fingerprint
}
return 0
}
func (m *HDNodeType) GetChildNum() uint32 {
if m != nil && m.ChildNum != nil {
return *m.ChildNum
}
return 0
}
func (m *HDNodeType) GetChainCode() []byte {
if m != nil {
return m.ChainCode
}
return nil
}
func (m *HDNodeType) GetPrivateKey() []byte {
if m != nil {
return m.PrivateKey
}
return nil
}
func (m *HDNodeType) GetPublicKey() []byte {
if m != nil {
return m.PublicKey
}
return nil
}
func init() {
proto.RegisterEnum("hw.trezor.messages.common.Failure_FailureType", Failure_FailureType_name, Failure_FailureType_value)
proto.RegisterEnum("hw.trezor.messages.common.ButtonRequest_ButtonRequestType", ButtonRequest_ButtonRequestType_name, ButtonRequest_ButtonRequestType_value)
proto.RegisterEnum("hw.trezor.messages.common.PinMatrixRequest_PinMatrixRequestType", PinMatrixRequest_PinMatrixRequestType_name, PinMatrixRequest_PinMatrixRequestType_value)
proto.RegisterType((*Success)(nil), "hw.trezor.messages.common.Success")
proto.RegisterType((*Failure)(nil), "hw.trezor.messages.common.Failure")
proto.RegisterType((*ButtonRequest)(nil), "hw.trezor.messages.common.ButtonRequest")
proto.RegisterType((*ButtonAck)(nil), "hw.trezor.messages.common.ButtonAck")
proto.RegisterType((*PinMatrixRequest)(nil), "hw.trezor.messages.common.PinMatrixRequest")
proto.RegisterType((*PinMatrixAck)(nil), "hw.trezor.messages.common.PinMatrixAck")
proto.RegisterType((*PassphraseRequest)(nil), "hw.trezor.messages.common.PassphraseRequest")
proto.RegisterType((*PassphraseAck)(nil), "hw.trezor.messages.common.PassphraseAck")
proto.RegisterType((*PassphraseStateRequest)(nil), "hw.trezor.messages.common.PassphraseStateRequest")
proto.RegisterType((*PassphraseStateAck)(nil), "hw.trezor.messages.common.PassphraseStateAck")
proto.RegisterType((*HDNodeType)(nil), "hw.trezor.messages.common.HDNodeType")
}
func init() { proto.RegisterFile("messages-common.proto", fileDescriptor_aaf30d059fdbc38d) }
var fileDescriptor_aaf30d059fdbc38d = []byte{
// 846 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x54, 0xcd, 0x52, 0x23, 0x37,
0x10, 0x2e, 0xff, 0x80, 0xed, 0xb6, 0xd9, 0x08, 0xc5, 0x80, 0x09, 0xb0, 0x38, 0xc3, 0x21, 0x5c,
0xe2, 0x4a, 0xe5, 0x98, 0x53, 0x58, 0x83, 0x2b, 0xd4, 0x16, 0x86, 0x1a, 0xd8, 0xda, 0xa3, 0x4b,
0xd1, 0xf4, 0x32, 0x2a, 0xcf, 0x48, 0x13, 0x8d, 0x06, 0xf0, 0x5e, 0xf2, 0x6a, 0x79, 0x89, 0xbc,
0x42, 0xaa, 0x52, 0xb9, 0xe4, 0x11, 0xb6, 0x34, 0x3f, 0x78, 0xc6, 0x66, 0x39, 0xcd, 0xe8, 0xfb,
0xbe, 0xee, 0x96, 0xba, 0x3f, 0x09, 0x76, 0x42, 0x8c, 0x63, 0x76, 0x8f, 0xf1, 0x8f, 0x5c, 0x85,
0xa1, 0x92, 0xa3, 0x48, 0x2b, 0xa3, 0xe8, 0xbe, 0xff, 0x38, 0x32, 0x1a, 0x3f, 0x2b, 0x3d, 0x2a,
0x04, 0xa3, 0x4c, 0xe0, 0x9c, 0x40, 0xeb, 0x36, 0xe1, 0x1c, 0xe3, 0x98, 0x0e, 0xa0, 0x95, 0xb3,
0x83, 0xda, 0xb0, 0x76, 0xda, 0x71, 0x8b, 0xa5, 0xf3, 0x77, 0x03, 0x5a, 0x13, 0x26, 0x82, 0x44,
0x23, 0x7d, 0x07, 0x4d, 0xae, 0xbc, 0x4c, 0xf2, 0xe6, 0xe7, 0xd1, 0xe8, 0xab, 0xa9, 0x47, 0x79,
0x44, 0xf1, 0xbd, 0x5b, 0x44, 0xe8, 0xa6, 0xb1, 0xe5, 0x4a, 0xf5, 0x6a, 0xa5, 0xff, 0xea, 0xd0,
0x2d, 0xe9, 0xe9, 0x11, 0xec, 0xe7, 0xcb, 0xd9, 0x07, 0x89, 0x4f, 0x11, 0x72, 0x83, 0xde, 0x55,
0x26, 0x26, 0x35, 0xfa, 0x1d, 0xec, 0x16, 0xf4, 0xbb, 0xc4, 0x18, 0x25, 0x2f, 0x72, 0x09, 0xa9,
0xd3, 0x1d, 0xd8, 0x2e, 0xb8, 0x73, 0x66, 0xd8, 0x85, 0xd6, 0x4a, 0x93, 0x06, 0x3d, 0x80, 0xbd,
0x02, 0x3e, 0xe3, 0x46, 0x28, 0x39, 0x66, 0x92, 0x63, 0x10, 0xa0, 0x47, 0x9a, 0x74, 0x0f, 0xbe,
0x2d, 0xc8, 0x1b, 0xb1, 0x4c, 0xb6, 0x41, 0x07, 0xd0, 0x2f, 0x11, 0xcb, 0x90, 0x4d, 0xba, 0x0b,
0xb4, 0xc4, 0x5c, 0xca, 0x07, 0x16, 0x08, 0x8f, 0xb4, 0xe8, 0x21, 0x0c, 0x0a, 0x3c, 0x07, 0x6f,
0xc5, 0xbd, 0x64, 0x26, 0xd1, 0x48, 0xda, 0x95, 0x7c, 0x5a, 0xd9, 0xf6, 0x67, 0xfb, 0xeb, 0x94,
0x8f, 0x34, 0x55, 0xe6, 0x42, 0xaa, 0xe4, 0xde, 0x9f, 0x24, 0xd2, 0x8b, 0x09, 0xac, 0x70, 0x97,
0x52, 0x18, 0xc1, 0x02, 0xf1, 0x19, 0x3d, 0xd2, 0x5d, 0xd9, 0xfa, 0x95, 0x88, 0x43, 0x66, 0xb8,
0x4f, 0x7a, 0x74, 0x1f, 0x76, 0x0a, 0x62, 0x22, 0x74, 0xf8, 0xc8, 0x34, 0x66, 0xb5, 0xb8, 0xf3,
0x4f, 0x13, 0xb6, 0xb2, 0xbe, 0xb9, 0xf8, 0x47, 0x82, 0xb1, 0xa1, 0xd3, 0xca, 0x74, 0x7f, 0x79,
0x65, 0xba, 0x95, 0xb8, 0xea, 0xaa, 0x34, 0x69, 0x0a, 0x4d, 0x8f, 0x19, 0x96, 0x8f, 0x39, 0xfd,
0x77, 0xfe, 0x6f, 0xc0, 0xf6, 0x9a, 0xde, 0xee, 0xbf, 0x02, 0xce, 0xae, 0x8d, 0x8f, 0x9a, 0xd4,
0xa8, 0x03, 0x6f, 0xab, 0xc4, 0x04, 0xf1, 0xfa, 0x01, 0xf5, 0x9d, 0xaf, 0x31, 0xf6, 0x55, 0x60,
0x67, 0x7d, 0x0c, 0x07, 0x55, 0xcd, 0x58, 0xc9, 0x4f, 0x42, 0x87, 0xd7, 0x89, 0x89, 0x12, 0x43,
0x1a, 0xd6, 0x47, 0x55, 0x81, 0x8b, 0x31, 0x9a, 0x73, 0x7c, 0x10, 0x1c, 0x49, 0x73, 0x9d, 0xce,
0xe3, 0x3f, 0x2a, 0x6d, 0xa7, 0x7f, 0x08, 0x83, 0x2a, 0xfd, 0x51, 0x44, 0x98, 0x07, 0x6f, 0xae,
0x07, 0xdf, 0x68, 0x65, 0x90, 0x9b, 0x31, 0x0b, 0x02, 0xd2, 0xb2, 0xa3, 0xae, 0xd2, 0xd6, 0x07,
0x77, 0x4f, 0xa4, 0xbd, 0xbe, 0xeb, 0x62, 0x3e, 0x63, 0x1f, 0xf9, 0x9c, 0x74, 0xec, 0xe8, 0xaa,
0x82, 0x33, 0xcf, 0xd3, 0x18, 0x5b, 0x2b, 0x1c, 0xc0, 0xde, 0x4a, 0xd1, 0xe4, 0xf7, 0x40, 0xf0,
0xf7, 0xb8, 0x20, 0x5d, 0x7a, 0x02, 0xc7, 0x55, 0xf2, 0x4a, 0x62, 0xa8, 0xa4, 0xe0, 0xf6, 0x3c,
0x63, 0x95, 0x48, 0x43, 0x7a, 0xeb, 0xd5, 0x0b, 0xd1, 0xa5, 0xb4, 0x3d, 0xdb, 0xa2, 0x43, 0x38,
0x5c, 0x29, 0xc1, 0xe2, 0x38, 0xf2, 0x35, 0x8b, 0xd3, 0xbb, 0x49, 0xde, 0xd0, 0x1f, 0xe0, 0xa4,
0xaa, 0xf8, 0x20, 0xe7, 0x52, 0x3d, 0xca, 0x73, 0xd4, 0xe2, 0x81, 0xd9, 0xcb, 0x75, 0xc3, 0x8c,
0x4f, 0xbe, 0x71, 0xba, 0xd0, 0xc9, 0x84, 0x67, 0x7c, 0xee, 0xfc, 0x5b, 0x03, 0x62, 0x2d, 0xca,
0x8c, 0x16, 0x4f, 0x85, 0xf1, 0xee, 0xa0, 0x69, 0x16, 0x51, 0x61, 0xbc, 0x5f, 0x5f, 0x31, 0xde,
0x6a, 0xe8, 0x1a, 0x90, 0xd9, 0xcf, 0x66, 0x73, 0xfe, 0x84, 0xfe, 0x4b, 0xac, 0x3d, 0xda, 0x4b,
0xf8, 0x6c, 0x9c, 0x68, 0x8d, 0xd2, 0x90, 0x1a, 0xfd, 0x1e, 0x8e, 0x5e, 0x54, 0x4c, 0xf1, 0x71,
0x22, 0x74, 0x6c, 0x48, 0xdd, 0x1a, 0xf3, 0x6b, 0x92, 0x5b, 0xe4, 0x4a, 0x7a, 0xa4, 0xe1, 0x0c,
0xa1, 0xf7, 0xac, 0x39, 0xe3, 0x73, 0x4a, 0xa0, 0x11, 0x09, 0x39, 0xa8, 0x0d, 0xeb, 0xa7, 0x1d,
0xd7, 0xfe, 0x3a, 0x3f, 0xc1, 0xf6, 0xb2, 0xaf, 0x45, 0x37, 0x0e, 0xa0, 0xa3, 0xe4, 0xcc, 0x4b,
0x1d, 0x96, 0xb6, 0xa4, 0xed, 0xb6, 0x95, 0xcc, 0x1c, 0xe7, 0x5c, 0xc0, 0xd6, 0x32, 0xc2, 0x26,
0x7d, 0x0b, 0x10, 0x3d, 0x03, 0xf9, 0xdb, 0x5d, 0x42, 0x68, 0x1f, 0x36, 0x62, 0xc3, 0x4c, 0xf6,
0xd8, 0xf6, 0xdc, 0x6c, 0xe1, 0x8c, 0x60, 0x77, 0x99, 0xe6, 0xd6, 0x42, 0x45, 0xf5, 0x67, 0x7d,
0xad, 0xac, 0xef, 0x03, 0x5d, 0xd1, 0xdb, 0x61, 0xfe, 0x55, 0x03, 0xf8, 0xed, 0x7c, 0xaa, 0xbc,
0xec, 0xbd, 0xee, 0xc3, 0x86, 0x87, 0x91, 0xf1, 0xd3, 0x13, 0x6e, 0xb9, 0xd9, 0x82, 0x0e, 0xa1,
0xfb, 0x49, 0xc8, 0x7b, 0xd4, 0x91, 0x16, 0xd2, 0x0c, 0xea, 0x29, 0x57, 0x86, 0xec, 0x81, 0xb9,
0x2f, 0x02, 0x6f, 0x26, 0x93, 0x70, 0xd0, 0x48, 0xf9, 0x76, 0x0a, 0x4c, 0x93, 0x90, 0x1e, 0x01,
0x70, 0x9f, 0x09, 0x39, 0x4b, 0x9f, 0xa6, 0xe6, 0xb0, 0x7e, 0xda, 0x73, 0x3b, 0x29, 0x32, 0xb6,
0x6f, 0xcc, 0x31, 0x74, 0xa3, 0xd4, 0x6f, 0x38, 0x9b, 0xe3, 0x62, 0xb0, 0x91, 0x6e, 0x1a, 0x72,
0xe8, 0x3d, 0x2e, 0x6c, 0x7c, 0x94, 0xde, 0x8e, 0x94, 0xdf, 0x4c, 0xf9, 0x4e, 0x54, 0xdc, 0x97,
0x2f, 0x01, 0x00, 0x00, 0xff, 0xff, 0xb2, 0x7d, 0x20, 0xa6, 0x35, 0x07, 0x00, 0x00,
}
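
For orientation, these generated proto2 types are used like any golang/protobuf message: optional fields are pointers, so values come from the proto helpers or the generated Enum() methods. A sketch, assuming the usual accounts/usbwallet/trezor import path for this package:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/usbwallet/trezor"
	"github.com/golang/protobuf/proto"
)

func main() {
	msg := &trezor.Failure{
		Code:    trezor.Failure_Failure_ActionCancelled.Enum(),
		Message: proto.String("user rejected on device"),
	}
	blob, err := proto.Marshal(msg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d bytes, code=%s\n", len(blob), msg.GetCode())
}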

View File

@ -0,0 +1,147 @@
// This file originates from the SatoshiLabs Trezor `common` repository at:
// https://github.com/trezor/trezor-common/blob/master/protob/messages-common.proto
// dated 28.05.2019, commit 893fd219d4a01bcffa0cd9cfa631856371ec5aa9.
syntax = "proto2";
package hw.trezor.messages.common;
/**
* Response: Success of the previous request
* @end
*/
message Success {
optional string message = 1; // human readable description of action or request-specific payload
}
/**
* Response: Failure of the previous request
* @end
*/
message Failure {
optional FailureType code = 1; // computer-readable definition of the error state
optional string message = 2; // human-readable message of the error state
enum FailureType {
Failure_UnexpectedMessage = 1;
Failure_ButtonExpected = 2;
Failure_DataError = 3;
Failure_ActionCancelled = 4;
Failure_PinExpected = 5;
Failure_PinCancelled = 6;
Failure_PinInvalid = 7;
Failure_InvalidSignature = 8;
Failure_ProcessError = 9;
Failure_NotEnoughFunds = 10;
Failure_NotInitialized = 11;
Failure_PinMismatch = 12;
Failure_FirmwareError = 99;
}
}
/**
* Response: Device is waiting for HW button press.
* @auxstart
* @next ButtonAck
*/
message ButtonRequest {
optional ButtonRequestType code = 1;
optional string data = 2;
/**
* Type of button request
*/
enum ButtonRequestType {
ButtonRequest_Other = 1;
ButtonRequest_FeeOverThreshold = 2;
ButtonRequest_ConfirmOutput = 3;
ButtonRequest_ResetDevice = 4;
ButtonRequest_ConfirmWord = 5;
ButtonRequest_WipeDevice = 6;
ButtonRequest_ProtectCall = 7;
ButtonRequest_SignTx = 8;
ButtonRequest_FirmwareCheck = 9;
ButtonRequest_Address = 10;
ButtonRequest_PublicKey = 11;
ButtonRequest_MnemonicWordCount = 12;
ButtonRequest_MnemonicInput = 13;
ButtonRequest_PassphraseType = 14;
ButtonRequest_UnknownDerivationPath = 15;
}
}
/**
* Request: Computer agrees to wait for HW button press
* @auxend
*/
message ButtonAck {
}
/**
* Response: Device is asking computer to show PIN matrix and awaits PIN encoded using this matrix scheme
* @auxstart
* @next PinMatrixAck
*/
message PinMatrixRequest {
optional PinMatrixRequestType type = 1;
/**
* Type of PIN request
*/
enum PinMatrixRequestType {
PinMatrixRequestType_Current = 1;
PinMatrixRequestType_NewFirst = 2;
PinMatrixRequestType_NewSecond = 3;
}
}
/**
* Request: Computer responds with encoded PIN
* @auxend
*/
message PinMatrixAck {
required string pin = 1; // matrix encoded PIN entered by user
}
/**
* Response: Device awaits encryption passphrase
* @auxstart
* @next PassphraseAck
*/
message PassphraseRequest {
optional bool on_device = 1; // passphrase is being entered on the device
}
/**
* Request: Send passphrase back
* @next PassphraseStateRequest
*/
message PassphraseAck {
optional string passphrase = 1;
optional bytes state = 2; // expected device state
}
/**
* Response: Device awaits passphrase state
* @next PassphraseStateAck
*/
message PassphraseStateRequest {
optional bytes state = 1; // actual device state
}
/**
* Request: Send passphrase state back
* @auxend
*/
message PassphraseStateAck {
}
/**
* Structure representing BIP32 (hierarchical deterministic) node
* Used for imports of private key into the device and exporting public key out of device
* @embed
*/
message HDNodeType {
required uint32 depth = 1;
required uint32 fingerprint = 2;
required uint32 child_num = 3;
required bytes chain_code = 4;
optional bytes private_key = 5;
optional bytes public_key = 6;
}

View File

@ -0,0 +1,698 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: messages-ethereum.proto
package trezor
import (
fmt "fmt"
math "math"
proto "github.com/golang/protobuf/proto"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
//*
// Request: Ask device for public key corresponding to address_n path
// @start
// @next EthereumPublicKey
// @next Failure
type EthereumGetPublicKey struct {
AddressN []uint32 `protobuf:"varint,1,rep,name=address_n,json=addressN" json:"address_n,omitempty"`
ShowDisplay *bool `protobuf:"varint,2,opt,name=show_display,json=showDisplay" json:"show_display,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *EthereumGetPublicKey) Reset() { *m = EthereumGetPublicKey{} }
func (m *EthereumGetPublicKey) String() string { return proto.CompactTextString(m) }
func (*EthereumGetPublicKey) ProtoMessage() {}
func (*EthereumGetPublicKey) Descriptor() ([]byte, []int) {
return fileDescriptor_cb33f46ba915f15c, []int{0}
}
func (m *EthereumGetPublicKey) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_EthereumGetPublicKey.Unmarshal(m, b)
}
func (m *EthereumGetPublicKey) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_EthereumGetPublicKey.Marshal(b, m, deterministic)
}
func (m *EthereumGetPublicKey) XXX_Merge(src proto.Message) {
xxx_messageInfo_EthereumGetPublicKey.Merge(m, src)
}
func (m *EthereumGetPublicKey) XXX_Size() int {
return xxx_messageInfo_EthereumGetPublicKey.Size(m)
}
func (m *EthereumGetPublicKey) XXX_DiscardUnknown() {
xxx_messageInfo_EthereumGetPublicKey.DiscardUnknown(m)
}
var xxx_messageInfo_EthereumGetPublicKey proto.InternalMessageInfo
func (m *EthereumGetPublicKey) GetAddressN() []uint32 {
if m != nil {
return m.AddressN
}
return nil
}
func (m *EthereumGetPublicKey) GetShowDisplay() bool {
if m != nil && m.ShowDisplay != nil {
return *m.ShowDisplay
}
return false
}
//*
// Response: Contains public key derived from device private seed
// @end
type EthereumPublicKey struct {
Node *HDNodeType `protobuf:"bytes,1,opt,name=node" json:"node,omitempty"`
Xpub *string `protobuf:"bytes,2,opt,name=xpub" json:"xpub,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *EthereumPublicKey) Reset() { *m = EthereumPublicKey{} }
func (m *EthereumPublicKey) String() string { return proto.CompactTextString(m) }
func (*EthereumPublicKey) ProtoMessage() {}
func (*EthereumPublicKey) Descriptor() ([]byte, []int) {
return fileDescriptor_cb33f46ba915f15c, []int{1}
}
func (m *EthereumPublicKey) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_EthereumPublicKey.Unmarshal(m, b)
}
func (m *EthereumPublicKey) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_EthereumPublicKey.Marshal(b, m, deterministic)
}
func (m *EthereumPublicKey) XXX_Merge(src proto.Message) {
xxx_messageInfo_EthereumPublicKey.Merge(m, src)
}
func (m *EthereumPublicKey) XXX_Size() int {
return xxx_messageInfo_EthereumPublicKey.Size(m)
}
func (m *EthereumPublicKey) XXX_DiscardUnknown() {
xxx_messageInfo_EthereumPublicKey.DiscardUnknown(m)
}
var xxx_messageInfo_EthereumPublicKey proto.InternalMessageInfo
func (m *EthereumPublicKey) GetNode() *HDNodeType {
if m != nil {
return m.Node
}
return nil
}
func (m *EthereumPublicKey) GetXpub() string {
if m != nil && m.Xpub != nil {
return *m.Xpub
}
return ""
}
//*
// Request: Ask device for Ethereum address corresponding to address_n path
// @start
// @next EthereumAddress
// @next Failure
type EthereumGetAddress struct {
AddressN []uint32 `protobuf:"varint,1,rep,name=address_n,json=addressN" json:"address_n,omitempty"`
ShowDisplay *bool `protobuf:"varint,2,opt,name=show_display,json=showDisplay" json:"show_display,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *EthereumGetAddress) Reset() { *m = EthereumGetAddress{} }
func (m *EthereumGetAddress) String() string { return proto.CompactTextString(m) }
func (*EthereumGetAddress) ProtoMessage() {}
func (*EthereumGetAddress) Descriptor() ([]byte, []int) {
return fileDescriptor_cb33f46ba915f15c, []int{2}
}
func (m *EthereumGetAddress) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_EthereumGetAddress.Unmarshal(m, b)
}
func (m *EthereumGetAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_EthereumGetAddress.Marshal(b, m, deterministic)
}
func (m *EthereumGetAddress) XXX_Merge(src proto.Message) {
xxx_messageInfo_EthereumGetAddress.Merge(m, src)
}
func (m *EthereumGetAddress) XXX_Size() int {
return xxx_messageInfo_EthereumGetAddress.Size(m)
}
func (m *EthereumGetAddress) XXX_DiscardUnknown() {
xxx_messageInfo_EthereumGetAddress.DiscardUnknown(m)
}
var xxx_messageInfo_EthereumGetAddress proto.InternalMessageInfo
func (m *EthereumGetAddress) GetAddressN() []uint32 {
if m != nil {
return m.AddressN
}
return nil
}
func (m *EthereumGetAddress) GetShowDisplay() bool {
if m != nil && m.ShowDisplay != nil {
return *m.ShowDisplay
}
return false
}
//*
// Response: Contains an Ethereum address derived from device private seed
// @end
type EthereumAddress struct {
AddressBin []byte `protobuf:"bytes,1,opt,name=addressBin" json:"addressBin,omitempty"`
AddressHex *string `protobuf:"bytes,2,opt,name=addressHex" json:"addressHex,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *EthereumAddress) Reset() { *m = EthereumAddress{} }
func (m *EthereumAddress) String() string { return proto.CompactTextString(m) }
func (*EthereumAddress) ProtoMessage() {}
func (*EthereumAddress) Descriptor() ([]byte, []int) {
return fileDescriptor_cb33f46ba915f15c, []int{3}
}
func (m *EthereumAddress) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_EthereumAddress.Unmarshal(m, b)
}
func (m *EthereumAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_EthereumAddress.Marshal(b, m, deterministic)
}
func (m *EthereumAddress) XXX_Merge(src proto.Message) {
xxx_messageInfo_EthereumAddress.Merge(m, src)
}
func (m *EthereumAddress) XXX_Size() int {
return xxx_messageInfo_EthereumAddress.Size(m)
}
func (m *EthereumAddress) XXX_DiscardUnknown() {
xxx_messageInfo_EthereumAddress.DiscardUnknown(m)
}
var xxx_messageInfo_EthereumAddress proto.InternalMessageInfo
func (m *EthereumAddress) GetAddressBin() []byte {
if m != nil {
return m.AddressBin
}
return nil
}
func (m *EthereumAddress) GetAddressHex() string {
if m != nil && m.AddressHex != nil {
return *m.AddressHex
}
return ""
}
//*
// Request: Ask device to sign transaction
// All fields are optional from the protocol's point of view. Each field defaults to value `0` if missing.
// Note: at most the first 1024 bytes of data MUST be transmitted as part of this message.
// @start
// @next EthereumTxRequest
// @next Failure
type EthereumSignTx struct {
AddressN []uint32 `protobuf:"varint,1,rep,name=address_n,json=addressN" json:"address_n,omitempty"`
Nonce []byte `protobuf:"bytes,2,opt,name=nonce" json:"nonce,omitempty"`
GasPrice []byte `protobuf:"bytes,3,opt,name=gas_price,json=gasPrice" json:"gas_price,omitempty"`
GasLimit []byte `protobuf:"bytes,4,opt,name=gas_limit,json=gasLimit" json:"gas_limit,omitempty"`
ToBin []byte `protobuf:"bytes,5,opt,name=toBin" json:"toBin,omitempty"`
ToHex *string `protobuf:"bytes,11,opt,name=toHex" json:"toHex,omitempty"`
Value []byte `protobuf:"bytes,6,opt,name=value" json:"value,omitempty"`
DataInitialChunk []byte `protobuf:"bytes,7,opt,name=data_initial_chunk,json=dataInitialChunk" json:"data_initial_chunk,omitempty"`
DataLength *uint32 `protobuf:"varint,8,opt,name=data_length,json=dataLength" json:"data_length,omitempty"`
ChainId *uint32 `protobuf:"varint,9,opt,name=chain_id,json=chainId" json:"chain_id,omitempty"`
TxType *uint32 `protobuf:"varint,10,opt,name=tx_type,json=txType" json:"tx_type,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *EthereumSignTx) Reset() { *m = EthereumSignTx{} }
func (m *EthereumSignTx) String() string { return proto.CompactTextString(m) }
func (*EthereumSignTx) ProtoMessage() {}
func (*EthereumSignTx) Descriptor() ([]byte, []int) {
return fileDescriptor_cb33f46ba915f15c, []int{4}
}
func (m *EthereumSignTx) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_EthereumSignTx.Unmarshal(m, b)
}
func (m *EthereumSignTx) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_EthereumSignTx.Marshal(b, m, deterministic)
}
func (m *EthereumSignTx) XXX_Merge(src proto.Message) {
xxx_messageInfo_EthereumSignTx.Merge(m, src)
}
func (m *EthereumSignTx) XXX_Size() int {
return xxx_messageInfo_EthereumSignTx.Size(m)
}
func (m *EthereumSignTx) XXX_DiscardUnknown() {
xxx_messageInfo_EthereumSignTx.DiscardUnknown(m)
}
var xxx_messageInfo_EthereumSignTx proto.InternalMessageInfo
func (m *EthereumSignTx) GetAddressN() []uint32 {
if m != nil {
return m.AddressN
}
return nil
}
func (m *EthereumSignTx) GetNonce() []byte {
if m != nil {
return m.Nonce
}
return nil
}
func (m *EthereumSignTx) GetGasPrice() []byte {
if m != nil {
return m.GasPrice
}
return nil
}
func (m *EthereumSignTx) GetGasLimit() []byte {
if m != nil {
return m.GasLimit
}
return nil
}
func (m *EthereumSignTx) GetToBin() []byte {
if m != nil {
return m.ToBin
}
return nil
}
func (m *EthereumSignTx) GetToHex() string {
if m != nil && m.ToHex != nil {
return *m.ToHex
}
return ""
}
func (m *EthereumSignTx) GetValue() []byte {
if m != nil {
return m.Value
}
return nil
}
func (m *EthereumSignTx) GetDataInitialChunk() []byte {
if m != nil {
return m.DataInitialChunk
}
return nil
}
func (m *EthereumSignTx) GetDataLength() uint32 {
if m != nil && m.DataLength != nil {
return *m.DataLength
}
return 0
}
func (m *EthereumSignTx) GetChainId() uint32 {
if m != nil && m.ChainId != nil {
return *m.ChainId
}
return 0
}
func (m *EthereumSignTx) GetTxType() uint32 {
if m != nil && m.TxType != nil {
return *m.TxType
}
return 0
}
//*
// Response: Device asks for more data from transaction payload, or returns the signature.
// If data_length is set, device awaits that many more bytes of payload.
// Otherwise, the signature_* fields contain the computed transaction signature. All three fields will be present.
// @end
// @next EthereumTxAck
type EthereumTxRequest struct {
DataLength *uint32 `protobuf:"varint,1,opt,name=data_length,json=dataLength" json:"data_length,omitempty"`
SignatureV *uint32 `protobuf:"varint,2,opt,name=signature_v,json=signatureV" json:"signature_v,omitempty"`
SignatureR []byte `protobuf:"bytes,3,opt,name=signature_r,json=signatureR" json:"signature_r,omitempty"`
SignatureS []byte `protobuf:"bytes,4,opt,name=signature_s,json=signatureS" json:"signature_s,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *EthereumTxRequest) Reset() { *m = EthereumTxRequest{} }
func (m *EthereumTxRequest) String() string { return proto.CompactTextString(m) }
func (*EthereumTxRequest) ProtoMessage() {}
func (*EthereumTxRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_cb33f46ba915f15c, []int{5}
}
func (m *EthereumTxRequest) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_EthereumTxRequest.Unmarshal(m, b)
}
func (m *EthereumTxRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_EthereumTxRequest.Marshal(b, m, deterministic)
}
func (m *EthereumTxRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_EthereumTxRequest.Merge(m, src)
}
func (m *EthereumTxRequest) XXX_Size() int {
return xxx_messageInfo_EthereumTxRequest.Size(m)
}
func (m *EthereumTxRequest) XXX_DiscardUnknown() {
xxx_messageInfo_EthereumTxRequest.DiscardUnknown(m)
}
var xxx_messageInfo_EthereumTxRequest proto.InternalMessageInfo
func (m *EthereumTxRequest) GetDataLength() uint32 {
if m != nil && m.DataLength != nil {
return *m.DataLength
}
return 0
}
func (m *EthereumTxRequest) GetSignatureV() uint32 {
if m != nil && m.SignatureV != nil {
return *m.SignatureV
}
return 0
}
func (m *EthereumTxRequest) GetSignatureR() []byte {
if m != nil {
return m.SignatureR
}
return nil
}
func (m *EthereumTxRequest) GetSignatureS() []byte {
if m != nil {
return m.SignatureS
}
return nil
}
//*
// Request: Transaction payload data.
// @next EthereumTxRequest
type EthereumTxAck struct {
DataChunk []byte `protobuf:"bytes,1,opt,name=data_chunk,json=dataChunk" json:"data_chunk,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *EthereumTxAck) Reset() { *m = EthereumTxAck{} }
func (m *EthereumTxAck) String() string { return proto.CompactTextString(m) }
func (*EthereumTxAck) ProtoMessage() {}
func (*EthereumTxAck) Descriptor() ([]byte, []int) {
return fileDescriptor_cb33f46ba915f15c, []int{6}
}
func (m *EthereumTxAck) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_EthereumTxAck.Unmarshal(m, b)
}
func (m *EthereumTxAck) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_EthereumTxAck.Marshal(b, m, deterministic)
}
func (m *EthereumTxAck) XXX_Merge(src proto.Message) {
xxx_messageInfo_EthereumTxAck.Merge(m, src)
}
func (m *EthereumTxAck) XXX_Size() int {
return xxx_messageInfo_EthereumTxAck.Size(m)
}
func (m *EthereumTxAck) XXX_DiscardUnknown() {
xxx_messageInfo_EthereumTxAck.DiscardUnknown(m)
}
var xxx_messageInfo_EthereumTxAck proto.InternalMessageInfo
func (m *EthereumTxAck) GetDataChunk() []byte {
if m != nil {
return m.DataChunk
}
return nil
}
//*
// Request: Ask device to sign message
// @start
// @next EthereumMessageSignature
// @next Failure
type EthereumSignMessage struct {
AddressN []uint32 `protobuf:"varint,1,rep,name=address_n,json=addressN" json:"address_n,omitempty"`
Message []byte `protobuf:"bytes,2,opt,name=message" json:"message,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *EthereumSignMessage) Reset() { *m = EthereumSignMessage{} }
func (m *EthereumSignMessage) String() string { return proto.CompactTextString(m) }
func (*EthereumSignMessage) ProtoMessage() {}
func (*EthereumSignMessage) Descriptor() ([]byte, []int) {
return fileDescriptor_cb33f46ba915f15c, []int{7}
}
func (m *EthereumSignMessage) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_EthereumSignMessage.Unmarshal(m, b)
}
func (m *EthereumSignMessage) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_EthereumSignMessage.Marshal(b, m, deterministic)
}
func (m *EthereumSignMessage) XXX_Merge(src proto.Message) {
xxx_messageInfo_EthereumSignMessage.Merge(m, src)
}
func (m *EthereumSignMessage) XXX_Size() int {
return xxx_messageInfo_EthereumSignMessage.Size(m)
}
func (m *EthereumSignMessage) XXX_DiscardUnknown() {
xxx_messageInfo_EthereumSignMessage.DiscardUnknown(m)
}
var xxx_messageInfo_EthereumSignMessage proto.InternalMessageInfo
func (m *EthereumSignMessage) GetAddressN() []uint32 {
if m != nil {
return m.AddressN
}
return nil
}
func (m *EthereumSignMessage) GetMessage() []byte {
if m != nil {
return m.Message
}
return nil
}
//*
// Response: Signed message
// @end
type EthereumMessageSignature struct {
AddressBin []byte `protobuf:"bytes,1,opt,name=addressBin" json:"addressBin,omitempty"`
Signature []byte `protobuf:"bytes,2,opt,name=signature" json:"signature,omitempty"`
AddressHex *string `protobuf:"bytes,3,opt,name=addressHex" json:"addressHex,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *EthereumMessageSignature) Reset() { *m = EthereumMessageSignature{} }
func (m *EthereumMessageSignature) String() string { return proto.CompactTextString(m) }
func (*EthereumMessageSignature) ProtoMessage() {}
func (*EthereumMessageSignature) Descriptor() ([]byte, []int) {
return fileDescriptor_cb33f46ba915f15c, []int{8}
}
func (m *EthereumMessageSignature) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_EthereumMessageSignature.Unmarshal(m, b)
}
func (m *EthereumMessageSignature) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_EthereumMessageSignature.Marshal(b, m, deterministic)
}
func (m *EthereumMessageSignature) XXX_Merge(src proto.Message) {
xxx_messageInfo_EthereumMessageSignature.Merge(m, src)
}
func (m *EthereumMessageSignature) XXX_Size() int {
return xxx_messageInfo_EthereumMessageSignature.Size(m)
}
func (m *EthereumMessageSignature) XXX_DiscardUnknown() {
xxx_messageInfo_EthereumMessageSignature.DiscardUnknown(m)
}
var xxx_messageInfo_EthereumMessageSignature proto.InternalMessageInfo
func (m *EthereumMessageSignature) GetAddressBin() []byte {
if m != nil {
return m.AddressBin
}
return nil
}
func (m *EthereumMessageSignature) GetSignature() []byte {
if m != nil {
return m.Signature
}
return nil
}
func (m *EthereumMessageSignature) GetAddressHex() string {
if m != nil && m.AddressHex != nil {
return *m.AddressHex
}
return ""
}
//*
// Request: Ask device to verify message
// @start
// @next Success
// @next Failure
type EthereumVerifyMessage struct {
AddressBin []byte `protobuf:"bytes,1,opt,name=addressBin" json:"addressBin,omitempty"`
Signature []byte `protobuf:"bytes,2,opt,name=signature" json:"signature,omitempty"`
Message []byte `protobuf:"bytes,3,opt,name=message" json:"message,omitempty"`
AddressHex *string `protobuf:"bytes,4,opt,name=addressHex" json:"addressHex,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *EthereumVerifyMessage) Reset() { *m = EthereumVerifyMessage{} }
func (m *EthereumVerifyMessage) String() string { return proto.CompactTextString(m) }
func (*EthereumVerifyMessage) ProtoMessage() {}
func (*EthereumVerifyMessage) Descriptor() ([]byte, []int) {
return fileDescriptor_cb33f46ba915f15c, []int{9}
}
func (m *EthereumVerifyMessage) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_EthereumVerifyMessage.Unmarshal(m, b)
}
func (m *EthereumVerifyMessage) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_EthereumVerifyMessage.Marshal(b, m, deterministic)
}
func (m *EthereumVerifyMessage) XXX_Merge(src proto.Message) {
xxx_messageInfo_EthereumVerifyMessage.Merge(m, src)
}
func (m *EthereumVerifyMessage) XXX_Size() int {
return xxx_messageInfo_EthereumVerifyMessage.Size(m)
}
func (m *EthereumVerifyMessage) XXX_DiscardUnknown() {
xxx_messageInfo_EthereumVerifyMessage.DiscardUnknown(m)
}
var xxx_messageInfo_EthereumVerifyMessage proto.InternalMessageInfo
func (m *EthereumVerifyMessage) GetAddressBin() []byte {
if m != nil {
return m.AddressBin
}
return nil
}
func (m *EthereumVerifyMessage) GetSignature() []byte {
if m != nil {
return m.Signature
}
return nil
}
func (m *EthereumVerifyMessage) GetMessage() []byte {
if m != nil {
return m.Message
}
return nil
}
func (m *EthereumVerifyMessage) GetAddressHex() string {
if m != nil && m.AddressHex != nil {
return *m.AddressHex
}
return ""
}
func init() {
proto.RegisterType((*EthereumGetPublicKey)(nil), "hw.trezor.messages.ethereum.EthereumGetPublicKey")
proto.RegisterType((*EthereumPublicKey)(nil), "hw.trezor.messages.ethereum.EthereumPublicKey")
proto.RegisterType((*EthereumGetAddress)(nil), "hw.trezor.messages.ethereum.EthereumGetAddress")
proto.RegisterType((*EthereumAddress)(nil), "hw.trezor.messages.ethereum.EthereumAddress")
proto.RegisterType((*EthereumSignTx)(nil), "hw.trezor.messages.ethereum.EthereumSignTx")
proto.RegisterType((*EthereumTxRequest)(nil), "hw.trezor.messages.ethereum.EthereumTxRequest")
proto.RegisterType((*EthereumTxAck)(nil), "hw.trezor.messages.ethereum.EthereumTxAck")
proto.RegisterType((*EthereumSignMessage)(nil), "hw.trezor.messages.ethereum.EthereumSignMessage")
proto.RegisterType((*EthereumMessageSignature)(nil), "hw.trezor.messages.ethereum.EthereumMessageSignature")
proto.RegisterType((*EthereumVerifyMessage)(nil), "hw.trezor.messages.ethereum.EthereumVerifyMessage")
}
func init() { proto.RegisterFile("messages-ethereum.proto", fileDescriptor_cb33f46ba915f15c) }
var fileDescriptor_cb33f46ba915f15c = []byte{
// 593 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x54, 0x4d, 0x6f, 0xd3, 0x40,
0x10, 0x95, 0x9b, 0xb4, 0x49, 0x26, 0x0d, 0x1f, 0xa6, 0x55, 0x17, 0x0a, 0x34, 0x18, 0x21, 0xe5,
0x00, 0x3e, 0x70, 0x43, 0xe2, 0xd2, 0x52, 0x44, 0x2b, 0x4a, 0x55, 0xdc, 0xa8, 0x57, 0x6b, 0x63,
0x6f, 0xe3, 0x55, 0x9d, 0xdd, 0xe0, 0x5d, 0xb7, 0x0e, 0x7f, 0x82, 0x23, 0xff, 0x87, 0x5f, 0x86,
0xf6, 0x2b, 0x71, 0x52, 0x54, 0x0e, 0xbd, 0x65, 0xde, 0xbc, 0x7d, 0xf3, 0x66, 0xf4, 0x62, 0xd8,
0x99, 0x10, 0x21, 0xf0, 0x98, 0x88, 0x77, 0x44, 0x66, 0xa4, 0x20, 0xe5, 0x24, 0x9c, 0x16, 0x5c,
0x72, 0x7f, 0x37, 0xbb, 0x09, 0x65, 0x41, 0x7e, 0xf2, 0x22, 0x74, 0x94, 0xd0, 0x51, 0x9e, 0x6d,
0xcf, 0x5f, 0x25, 0x7c, 0x32, 0xe1, 0xcc, 0xbc, 0x09, 0x2e, 0x60, 0xeb, 0xb3, 0xa5, 0x7c, 0x21,
0xf2, 0xac, 0x1c, 0xe5, 0x34, 0xf9, 0x4a, 0x66, 0xfe, 0x2e, 0x74, 0x70, 0x9a, 0x16, 0x44, 0x88,
0x98, 0x21, 0xaf, 0xdf, 0x18, 0xf4, 0xa2, 0xb6, 0x05, 0x4e, 0xfd, 0x57, 0xb0, 0x29, 0x32, 0x7e,
0x13, 0xa7, 0x54, 0x4c, 0x73, 0x3c, 0x43, 0x6b, 0x7d, 0x6f, 0xd0, 0x8e, 0xba, 0x0a, 0x3b, 0x34,
0x50, 0x30, 0x82, 0xc7, 0x4e, 0x77, 0x21, 0xfa, 0x01, 0x9a, 0x8c, 0xa7, 0x04, 0x79, 0x7d, 0x6f,
0xd0, 0x7d, 0xff, 0x26, 0xfc, 0x87, 0x5f, 0x6b, 0xee, 0xe8, 0xf0, 0x94, 0xa7, 0x64, 0x38, 0x9b,
0x92, 0x48, 0x3f, 0xf1, 0x7d, 0x68, 0x56, 0xd3, 0x72, 0xa4, 0x47, 0x75, 0x22, 0xfd, 0x3b, 0x18,
0x82, 0x5f, 0xf3, 0xbe, 0x6f, 0xdc, 0xdd, 0xdb, 0xf9, 0x77, 0x78, 0xe8, 0x54, 0x9d, 0xe4, 0x4b,
0x00, 0xab, 0x70, 0x40, 0x99, 0x76, 0xbf, 0x19, 0xd5, 0x90, 0x5a, 0xff, 0x88, 0x54, 0xd6, 0x62,
0x0d, 0x09, 0xfe, 0xac, 0xc1, 0x03, 0xa7, 0x79, 0x4e, 0xc7, 0x6c, 0x58, 0xdd, 0xed, 0x72, 0x0b,
0xd6, 0x19, 0x67, 0x09, 0xd1, 0x52, 0x9b, 0x91, 0x29, 0xd4, 0x93, 0x31, 0x16, 0xf1, 0xb4, 0xa0,
0x09, 0x41, 0x0d, 0xdd, 0x69, 0x8f, 0xb1, 0x38, 0x53, 0xb5, 0x6b, 0xe6, 0x74, 0x42, 0x25, 0x6a,
0xce, 0x9b, 0x27, 0xaa, 0x56, 0x7a, 0x92, 0x2b, 0xeb, 0xeb, 0x46, 0x4f, 0x17, 0x06, 0x55, 0x86,
0xbb, 0xda, 0xb0, 0x29, 0x14, 0x7a, 0x8d, 0xf3, 0x92, 0xa0, 0x0d, 0xc3, 0xd5, 0x85, 0xff, 0x16,
0xfc, 0x14, 0x4b, 0x1c, 0x53, 0x46, 0x25, 0xc5, 0x79, 0x9c, 0x64, 0x25, 0xbb, 0x42, 0x2d, 0x4d,
0x79, 0xa4, 0x3a, 0xc7, 0xa6, 0xf1, 0x49, 0xe1, 0xfe, 0x1e, 0x74, 0x35, 0x3b, 0x27, 0x6c, 0x2c,
0x33, 0xd4, 0xee, 0x7b, 0x83, 0x5e, 0x04, 0x0a, 0x3a, 0xd1, 0x88, 0xff, 0x14, 0xda, 0x49, 0x86,
0x29, 0x8b, 0x69, 0x8a, 0x3a, 0xba, 0xdb, 0xd2, 0xf5, 0x71, 0xea, 0xef, 0x40, 0x4b, 0x56, 0xb1,
0x9c, 0x4d, 0x09, 0x02, 0xdd, 0xd9, 0x90, 0x95, 0xca, 0x41, 0xf0, 0xdb, 0x5b, 0x44, 0x6a, 0x58,
0x45, 0xe4, 0x47, 0x49, 0x84, 0x5c, 0x1d, 0xe5, 0xdd, 0x1a, 0xb5, 0x07, 0x5d, 0x41, 0xc7, 0x0c,
0xcb, 0xb2, 0x20, 0xf1, 0xb5, 0xbe, 0x68, 0x2f, 0x82, 0x39, 0x74, 0xb1, 0x4c, 0x28, 0xec, 0x61,
0x17, 0x84, 0x68, 0x99, 0x20, 0xec, 0x71, 0x17, 0x84, 0xf3, 0x20, 0x84, 0xde, 0xc2, 0xd8, 0x7e,
0x72, 0xe5, 0xbf, 0x00, 0xed, 0xc0, 0x5e, 0xc9, 0xe4, 0xa5, 0xa3, 0x10, 0x7d, 0x9e, 0xe0, 0x04,
0x9e, 0xd4, 0xd3, 0xf0, 0xcd, 0x64, 0xff, 0xee, 0x48, 0x20, 0x68, 0xd9, 0xff, 0x88, 0x0d, 0x85,
0x2b, 0x83, 0x0a, 0x90, 0x53, 0xb3, 0x4a, 0xe7, 0xce, 0xda, 0x7f, 0x83, 0xfb, 0x1c, 0x3a, 0xf3,
0x3d, 0xac, 0xee, 0x02, 0x58, 0x89, 0x75, 0xe3, 0x56, 0xac, 0x7f, 0x79, 0xb0, 0xed, 0x46, 0x5f,
0x90, 0x82, 0x5e, 0xce, 0xdc, 0x2a, 0xf7, 0x9b, 0x5b, 0xdb, 0xb5, 0xb1, 0xb4, 0xeb, 0x8a, 0xa3,
0xe6, 0xaa, 0xa3, 0x83, 0x8f, 0xf0, 0x3a, 0xe1, 0x93, 0x50, 0x60, 0xc9, 0x45, 0x46, 0x73, 0x3c,
0x12, 0xee, 0x03, 0x93, 0xd3, 0x91, 0xf9, 0xe2, 0x8d, 0xca, 0xcb, 0x83, 0xed, 0xa1, 0x06, 0xad,
0x5b, 0xb7, 0xc2, 0xdf, 0x00, 0x00, 0x00, 0xff, 0xff, 0x8a, 0xce, 0x81, 0xc8, 0x59, 0x05, 0x00,
0x00,
}
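
The generated accessors above are deliberately nil-safe: every getter checks both the receiver and the optional-field pointer before dereferencing, so callers read zero values from partially populated or even nil messages. The sketch below shows how a host might build a request and read a reply with these types; it assumes the package is importable as github.com/ethereum/go-ethereum/accounts/usbwallet/trezor, and the derivation path is purely illustrative.

package main

import (
    "fmt"

    "github.com/golang/protobuf/proto"

    "github.com/ethereum/go-ethereum/accounts/usbwallet/trezor"
)

func main() {
    // BIP-44 style Ethereum path m/44'/60'/0'/0/0; 0x80000000 marks hardened components.
    path := []uint32{0x80000000 + 44, 0x80000000 + 60, 0x80000000, 0, 0}

    req := &trezor.EthereumGetAddress{
        AddressN:    path,
        ShowDisplay: proto.Bool(true), // optional fields are pointers; proto.Bool allocates one
    }
    _ = req // in the real driver this would be encoded and exchanged with the device

    // Getters tolerate unset optionals and even nil receivers, returning zero values.
    var resp *trezor.EthereumAddress
    fmt.Printf("hex=%q bin=%x\n", resp.GetAddressHex(), resp.GetAddressBin())
}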

View File

@ -0,0 +1,131 @@
// This file originates from the SatoshiLabs Trezor `common` repository at:
// https://github.com/trezor/trezor-common/blob/master/protob/messages-ethereum.proto
// dated 28.05.2019, commit 893fd219d4a01bcffa0cd9cfa631856371ec5aa9.
syntax = "proto2";
package hw.trezor.messages.ethereum;
// Sugar for easier handling in Java
option java_package = "com.satoshilabs.trezor.lib.protobuf";
option java_outer_classname = "TrezorMessageEthereum";
import "messages-common.proto";
/**
* Request: Ask device for public key corresponding to address_n path
* @start
* @next EthereumPublicKey
* @next Failure
*/
message EthereumGetPublicKey {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
optional bool show_display = 2; // optionally show on display before sending the result
}
/**
* Response: Contains public key derived from device private seed
* @end
*/
message EthereumPublicKey {
optional hw.trezor.messages.common.HDNodeType node = 1; // BIP32 public node
optional string xpub = 2; // serialized form of public node
}
/**
* Request: Ask device for Ethereum address corresponding to address_n path
* @start
* @next EthereumAddress
* @next Failure
*/
message EthereumGetAddress {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
optional bool show_display = 2; // optionally show on display before sending the result
}
/**
* Response: Contains an Ethereum address derived from device private seed
* @end
*/
message EthereumAddress {
optional bytes addressBin = 1; // Ethereum address as 20 bytes (legacy firmwares)
optional string addressHex = 2; // Ethereum address as hex string (newer firmwares)
}
/**
* Request: Ask device to sign transaction
* All fields are optional from the protocol's point of view. Each field defaults to value `0` if missing.
* Note: at most the first 1024 bytes of data MUST be transmitted as part of this message.
* @start
* @next EthereumTxRequest
* @next Failure
*/
message EthereumSignTx {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
optional bytes nonce = 2; // <=256 bit unsigned big endian
optional bytes gas_price = 3; // <=256 bit unsigned big endian (in wei)
optional bytes gas_limit = 4; // <=256 bit unsigned big endian
optional bytes toBin = 5; // recipient address (20 bytes, legacy firmware)
optional string toHex = 11; // recipient address (hex string, newer firmware)
optional bytes value = 6; // <=256 bit unsigned big endian (in wei)
optional bytes data_initial_chunk = 7; // The initial data chunk (<= 1024 bytes)
optional uint32 data_length = 8; // Length of transaction payload
optional uint32 chain_id = 9; // Chain Id for EIP 155
optional uint32 tx_type = 10; // (only for Wanchain)
}
/**
* Response: Device asks for more data from transaction payload, or returns the signature.
* If data_length is set, device awaits that many more bytes of payload.
* Otherwise, the signature_* fields contain the computed transaction signature. All three fields will be present.
* @end
* @next EthereumTxAck
*/
message EthereumTxRequest {
optional uint32 data_length = 1; // Number of bytes being requested (<= 1024)
optional uint32 signature_v = 2; // Computed signature (recovery parameter, limited to 27 or 28)
optional bytes signature_r = 3; // Computed signature R component (256 bit)
optional bytes signature_s = 4; // Computed signature S component (256 bit)
}
/**
* Request: Transaction payload data.
* @next EthereumTxRequest
*/
message EthereumTxAck {
optional bytes data_chunk = 1; // Bytes from transaction payload (<= 1024 bytes)
}
/**
* Request: Ask device to sign message
* @start
* @next EthereumMessageSignature
* @next Failure
*/
message EthereumSignMessage {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
optional bytes message = 2; // message to be signed
}
/**
* Response: Signed message
* @end
*/
message EthereumMessageSignature {
optional bytes addressBin = 1; // address used to sign the message (20 bytes, legacy firmware)
optional bytes signature = 2; // signature of the message
optional string addressHex = 3; // address used to sign the message (hex string, newer firmware)
}
/**
* Request: Ask device to verify message
* @start
* @next Success
* @next Failure
*/
message EthereumVerifyMessage {
optional bytes addressBin = 1; // address to verify (20 bytes, legacy firmware)
optional bytes signature = 2; // signature to verify
optional bytes message = 3; // message to verify
optional string addressHex = 4; // address to verify (hex string, newer firmware)
}
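
Taken together, EthereumSignTx, EthereumTxRequest and EthereumTxAck describe a simple streaming handshake for large calldata: the host puts at most the first 1024 bytes into data_initial_chunk together with the total data_length, the device answers with EthereumTxRequest messages whose data_length says how many more bytes it wants (fed back via EthereumTxAck), and once data_length is no longer requested the three signature_* fields carry the result. A minimal sketch of that loop follows; the exchange callback standing in for the real USB framing is hypothetical.

package trezorsign

import (
    "github.com/golang/protobuf/proto"

    "github.com/ethereum/go-ethereum/accounts/usbwallet/trezor"
)

// signTx sketches the chunked signing exchange. exchange is a hypothetical helper
// that writes one protobuf request and decodes the device's EthereumTxRequest reply.
func signTx(exchange func(req proto.Message) (*trezor.EthereumTxRequest, error),
    base *trezor.EthereumSignTx, data []byte) (v uint32, r, s []byte, err error) {

    if len(data) > 0 {
        base.DataLength = proto.Uint32(uint32(len(data))) // total payload length
        first := data
        if len(first) > 1024 {
            first = first[:1024] // only the first 1024 bytes ride along with EthereumSignTx
        }
        base.DataInitialChunk, data = first, data[len(first):]
    }
    resp, err := exchange(base)
    for err == nil && resp.GetDataLength() > 0 { // device still wants more payload bytes
        n := int(resp.GetDataLength())
        if n > len(data) {
            n = len(data)
        }
        resp, err = exchange(&trezor.EthereumTxAck{DataChunk: data[:n]})
        data = data[n:]
    }
    if err != nil {
        return 0, nil, nil, err
    }
    // No further data requested: all three signature_* fields are present.
    return resp.GetSignatureV(), resp.GetSignatureR(), resp.GetSignatureS(), nil
}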

File diff suppressed because it is too large

View File

@ -0,0 +1,289 @@
// This file originates from the SatoshiLabs Trezor `common` repository at:
// https://github.com/trezor/trezor-common/blob/master/protob/messages-management.proto
// dated 28.05.2019, commit 893fd219d4a01bcffa0cd9cfa631856371ec5aa9.
syntax = "proto2";
package hw.trezor.messages.management;
// Sugar for easier handling in Java
option java_package = "com.satoshilabs.trezor.lib.protobuf";
option java_outer_classname = "TrezorMessageManagement";
import "messages-common.proto";
/**
* Request: Reset device to default state and ask for device details
* @start
* @next Features
*/
message Initialize {
optional bytes state = 1; // assumed device state, clear session if set and different
optional bool skip_passphrase = 2; // this session should always assume empty passphrase
}
/**
* Request: Ask for device details (no device reset)
* @start
* @next Features
*/
message GetFeatures {
}
/**
* Response: Reports various information about the device
* @end
*/
message Features {
optional string vendor = 1; // name of the manufacturer, e.g. "trezor.io"
optional uint32 major_version = 2; // major version of the firmware/bootloader, e.g. 1
optional uint32 minor_version = 3; // minor version of the firmware/bootloader, e.g. 0
optional uint32 patch_version = 4; // patch version of the firmware/bootloader, e.g. 0
optional bool bootloader_mode = 5; // is device in bootloader mode?
optional string device_id = 6; // device's unique identifier
optional bool pin_protection = 7; // is device protected by PIN?
optional bool passphrase_protection = 8; // is node/mnemonic encrypted using passphrase?
optional string language = 9; // device language
optional string label = 10; // device description label
optional bool initialized = 12; // does device contain seed?
optional bytes revision = 13; // SCM revision of firmware
optional bytes bootloader_hash = 14; // hash of the bootloader
optional bool imported = 15; // was storage imported from an external source?
optional bool pin_cached = 16; // is PIN already cached in session?
optional bool passphrase_cached = 17; // is passphrase already cached in session?
optional bool firmware_present = 18; // is valid firmware loaded?
optional bool needs_backup = 19; // does storage need backup? (equals to Storage.needs_backup)
optional uint32 flags = 20; // device flags (equals to Storage.flags)
optional string model = 21; // device hardware model
optional uint32 fw_major = 22; // reported firmware version if in bootloader mode
optional uint32 fw_minor = 23; // reported firmware version if in bootloader mode
optional uint32 fw_patch = 24; // reported firmware version if in bootloader mode
optional string fw_vendor = 25; // reported firmware vendor if in bootloader mode
optional bytes fw_vendor_keys = 26; // reported firmware vendor keys (their hash)
optional bool unfinished_backup = 27; // report unfinished backup (equals to Storage.unfinished_backup)
optional bool no_backup = 28; // report no backup (equals to Storage.no_backup)
}
/**
* Request: clear session (removes cached PIN, passphrase, etc).
* @start
* @next Success
*/
message ClearSession {
}
/**
* Request: change language and/or label of the device
* @start
* @next Success
* @next Failure
*/
message ApplySettings {
optional string language = 1;
optional string label = 2;
optional bool use_passphrase = 3;
optional bytes homescreen = 4;
optional PassphraseSourceType passphrase_source = 5;
optional uint32 auto_lock_delay_ms = 6;
optional uint32 display_rotation = 7; // in degrees from North
/**
* Structure representing passphrase source
*/
enum PassphraseSourceType {
ASK = 0;
DEVICE = 1;
HOST = 2;
}
}
/**
* Request: set flags of the device
* @start
* @next Success
* @next Failure
*/
message ApplyFlags {
optional uint32 flags = 1; // bitmask, can only set bits, not unset
}
/**
* Request: Starts workflow for setting/changing/removing the PIN
* @start
* @next Success
* @next Failure
*/
message ChangePin {
optional bool remove = 1; // is PIN removal requested?
}
/**
* Request: Test if the device is alive; the device sends back the message in a Success response
* @start
* @next Success
*/
message Ping {
optional string message = 1; // message to send back in Success message
optional bool button_protection = 2; // ask for button press
optional bool pin_protection = 3; // ask for PIN if set in device
optional bool passphrase_protection = 4; // ask for passphrase if set in device
}
/**
* Request: Abort last operation that required user interaction
* @start
* @next Failure
*/
message Cancel {
}
/**
* Request: Request a sample of random data generated by hardware RNG. May be used for testing.
* @start
* @next Entropy
* @next Failure
*/
message GetEntropy {
required uint32 size = 1; // size of requested entropy
}
/**
* Response: Reply with random data generated by internal RNG
* @end
*/
message Entropy {
required bytes entropy = 1; // chunk of random generated bytes
}
/**
* Request: Request device to wipe all sensitive data and settings
* @start
* @next Success
* @next Failure
*/
message WipeDevice {
}
/**
* Request: Load seed and related internal settings from the computer
* @start
* @next Success
* @next Failure
*/
message LoadDevice {
optional string mnemonic = 1; // seed encoded as BIP-39 mnemonic (12, 18 or 24 words)
optional hw.trezor.messages.common.HDNodeType node = 2; // BIP-32 node
optional string pin = 3; // set PIN protection
optional bool passphrase_protection = 4; // enable master node encryption using passphrase
optional string language = 5 [default='english']; // device language
optional string label = 6; // device label
optional bool skip_checksum = 7; // do not test mnemonic for valid BIP-39 checksum
optional uint32 u2f_counter = 8; // U2F counter
}
/**
* Request: Ask device to do initialization involving user interaction
* @start
* @next EntropyRequest
* @next Failure
*/
message ResetDevice {
optional bool display_random = 1; // display entropy generated by the device before asking for additional entropy
optional uint32 strength = 2 [default=256]; // strength of seed in bits
optional bool passphrase_protection = 3; // enable master node encryption using passphrase
optional bool pin_protection = 4; // enable PIN protection
optional string language = 5 [default='english']; // device language
optional string label = 6; // device label
optional uint32 u2f_counter = 7; // U2F counter
optional bool skip_backup = 8; // postpone seed backup to BackupDevice workflow
optional bool no_backup = 9; // indicate that no backup is going to be made
}
/**
* Request: Perform backup of the device seed if not backed up using ResetDevice
* @start
* @next Success
*/
message BackupDevice {
}
/**
* Response: Ask for additional entropy from host computer
* @next EntropyAck
*/
message EntropyRequest {
}
/**
* Request: Provide additional entropy for seed generation function
* @next Success
*/
message EntropyAck {
optional bytes entropy = 1; // 256 bits (32 bytes) of random data
}
/**
* Request: Start recovery workflow asking user for specific words of mnemonic
* Used to recover the device safely, even on an untrusted computer.
* @start
* @next WordRequest
*/
message RecoveryDevice {
optional uint32 word_count = 1; // number of words in BIP-39 mnemonic
optional bool passphrase_protection = 2; // enable master node encryption using passphrase
optional bool pin_protection = 3; // enable PIN protection
optional string language = 4 [default='english']; // device language
optional string label = 5; // device label
optional bool enforce_wordlist = 6; // enforce BIP-39 wordlist during the process
// 7 reserved for unused recovery method
optional RecoveryDeviceType type = 8; // supported recovery type
optional uint32 u2f_counter = 9; // U2F counter
optional bool dry_run = 10; // perform dry-run recovery workflow (for safe mnemonic validation)
/**
* Type of recovery procedure. These should be used as bitmask, e.g.,
* `RecoveryDeviceType_ScrambledWords | RecoveryDeviceType_Matrix`
* listing every method supported by the host computer.
*
* Note that ScrambledWords must be supported by every implementation
* for backward compatibility; there is no way to not support it.
*/
enum RecoveryDeviceType {
// use powers of two when extending this field
RecoveryDeviceType_ScrambledWords = 0; // words in scrambled order
RecoveryDeviceType_Matrix = 1; // matrix recovery type
}
}
/**
* Response: Device is waiting for user to enter word of the mnemonic
* Its position is shown only on the device's internal display.
* @next WordAck
*/
message WordRequest {
optional WordRequestType type = 1;
/**
* Type of Recovery Word request
*/
enum WordRequestType {
WordRequestType_Plain = 0;
WordRequestType_Matrix9 = 1;
WordRequestType_Matrix6 = 2;
}
}
/**
* Request: Computer replies with word from the mnemonic
* @next WordRequest
* @next Success
* @next Failure
*/
message WordAck {
required string word = 1; // one word of mnemonic on asked position
}
/**
* Request: Set U2F counter
* @start
* @next Success
*/
message SetU2FCounter {
optional uint32 u2f_counter = 1; // counter
}
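
The Initialize/Features pair above is the basic device handshake. The generated Go bindings for these messages live in the suppressed messages-management.pb.go and follow the same protoc getter conventions as the ethereum messages earlier in this diff, so a hedged sketch of reading a few Features fields could look like this (exchange is again a hypothetical transport helper):

package trezorinfo

import (
    "fmt"

    "github.com/golang/protobuf/proto"

    "github.com/ethereum/go-ethereum/accounts/usbwallet/trezor"
)

// describeDevice resets the session and summarizes the reported Features.
func describeDevice(exchange func(req, reply proto.Message) error) (string, error) {
    features := new(trezor.Features)
    if err := exchange(&trezor.Initialize{}, features); err != nil {
        return "", err
    }
    // Getters are nil-safe and return zero values for unset optional fields.
    return fmt.Sprintf("%s %s v%d.%d.%d (label %q)",
        features.GetVendor(), features.GetModel(),
        features.GetMajorVersion(), features.GetMinorVersion(), features.GetPatchVersion(),
        features.GetLabel()), nil
}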

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,4 +1,4 @@
// Copyright 2017 The go-ethereum Authors
// Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
@ -16,11 +16,35 @@
// This file contains the implementation for interacting with the Trezor hardware
// wallets. The wire protocol spec can be found on the SatoshiLabs website:
// https://doc.satoshilabs.com/trezor-tech/api-protobuf.html
// https://wiki.trezor.io/Developers_guide-Message_Workflows
//go:generate protoc --go_out=import_path=trezor:. types.proto messages.proto
// !!! STAHP !!!
//
// Before you touch the protocol files, you need to be aware of a breaking change
// that occurred between firmware versions 1.7.3->1.8.0 (Model One) and 2.0.10->
// 2.1.0 (Model T). The Ethereum address representation was changed from the 20
// byte binary blob to a 42 byte hex string. The upstream protocol buffer files
// only support the new format, so blindly pulling in a new spec will break old
// devices!
//
// The Trezor devs had the foresight to add the string version as a new message
// code instead of replacing the binary one. This means that the proto file can
// actually define both the old and the new versions as optional. Please ensure
// that you add back the old addresses everywhere (to avoid name clash. use the
// addressBin and addressHex names).
//
// If in doubt, reach out to @karalabe.
// Package trezor contains the wire protocol wrapper in Go.
// To regenerate the protocol files in this package:
// - Download the latest protoc https://github.com/protocolbuffers/protobuf/releases
// - Build with the usual `./configure && make` and ensure it's on your $PATH
// - Delete all the .proto and .pb.go files, pull in fresh ones from Trezor
// - Grab the latest Go plugin `go get -u github.com/golang/protobuf/protoc-gen-go`
// - Vendor in the latest Go plugin `govendor fetch github.com/golang/protobuf/...`
//go:generate protoc -I/usr/local/include:. --go_out=import_path=trezor:. messages.proto messages-common.proto messages-management.proto messages-ethereum.proto
// Package trezor contains the wire protocol.
package trezor
import (
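
Given the compatibility note in the header above, code consuming EthereumAddress (and EthereumMessageSignature) should accept either encoding: addressHex from firmware 1.8.0 / 2.1.0 and newer, addressBin from older devices. A hedged sketch of such a normalization helper (the function name is made up for illustration):

package trezoraddr

import (
    "errors"

    "github.com/ethereum/go-ethereum/accounts/usbwallet/trezor"
    "github.com/ethereum/go-ethereum/common"
)

// normalizeAddress folds the two wire encodings into a single common.Address.
func normalizeAddress(resp *trezor.EthereumAddress) (common.Address, error) {
    if hex := resp.GetAddressHex(); len(hex) > 0 {
        return common.HexToAddress(hex), nil // newer firmware: 0x-prefixed 42 character string
    }
    if bin := resp.GetAddressBin(); len(bin) == common.AddressLength {
        return common.BytesToAddress(bin), nil // legacy firmware: raw 20 byte blob
    }
    return common.Address{}, errors.New("trezor: response carries no address")
}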

File diff suppressed because it is too large

View File

@ -1,278 +0,0 @@
// This file originates from the SatoshiLabs Trezor `common` repository at:
// https://github.com/trezor/trezor-common/blob/master/protob/types.proto
// dated 28.07.2017, commit dd8ec3231fb5f7992360aff9bdfe30bb58130f4b.
syntax = "proto2";
/**
* Types for TREZOR communication
*
* @author Marek Palatinus <slush@satoshilabs.com>
* @version 1.2
*/
// Sugar for easier handling in Java
option java_package = "com.satoshilabs.trezor.lib.protobuf";
option java_outer_classname = "TrezorType";
import "google/protobuf/descriptor.proto";
/**
* Options for specifying message direction and type of wire (normal/debug)
*/
extend google.protobuf.EnumValueOptions {
optional bool wire_in = 50002; // message can be transmitted via wire from PC to TREZOR
optional bool wire_out = 50003; // message can be transmitted via wire from TREZOR to PC
optional bool wire_debug_in = 50004; // message can be transmitted via debug wire from PC to TREZOR
optional bool wire_debug_out = 50005; // message can be transmitted via debug wire from TREZOR to PC
optional bool wire_tiny = 50006; // message is handled by TREZOR when the USB stack is in tiny mode
optional bool wire_bootloader = 50007; // message is only handled by TREZOR Bootloader
}
/**
* Type of failures returned by Failure message
* @used_in Failure
*/
enum FailureType {
Failure_UnexpectedMessage = 1;
Failure_ButtonExpected = 2;
Failure_DataError = 3;
Failure_ActionCancelled = 4;
Failure_PinExpected = 5;
Failure_PinCancelled = 6;
Failure_PinInvalid = 7;
Failure_InvalidSignature = 8;
Failure_ProcessError = 9;
Failure_NotEnoughFunds = 10;
Failure_NotInitialized = 11;
Failure_FirmwareError = 99;
}
/**
* Type of script which will be used for transaction output
* @used_in TxOutputType
*/
enum OutputScriptType {
PAYTOADDRESS = 0; // used for all addresses (bitcoin, p2sh, witness)
PAYTOSCRIPTHASH = 1; // p2sh address (deprecated; use PAYTOADDRESS)
PAYTOMULTISIG = 2; // only for change output
PAYTOOPRETURN = 3; // op_return
PAYTOWITNESS = 4; // only for change output
PAYTOP2SHWITNESS = 5; // only for change output
}
/**
* Type of script which will be used for transaction output
* @used_in TxInputType
*/
enum InputScriptType {
SPENDADDRESS = 0; // standard p2pkh address
SPENDMULTISIG = 1; // p2sh multisig address
EXTERNAL = 2; // reserved for external inputs (coinjoin)
SPENDWITNESS = 3; // native segwit
SPENDP2SHWITNESS = 4; // segwit over p2sh (backward compatible)
}
/**
* Type of information required by transaction signing process
* @used_in TxRequest
*/
enum RequestType {
TXINPUT = 0;
TXOUTPUT = 1;
TXMETA = 2;
TXFINISHED = 3;
TXEXTRADATA = 4;
}
/**
* Type of button request
* @used_in ButtonRequest
*/
enum ButtonRequestType {
ButtonRequest_Other = 1;
ButtonRequest_FeeOverThreshold = 2;
ButtonRequest_ConfirmOutput = 3;
ButtonRequest_ResetDevice = 4;
ButtonRequest_ConfirmWord = 5;
ButtonRequest_WipeDevice = 6;
ButtonRequest_ProtectCall = 7;
ButtonRequest_SignTx = 8;
ButtonRequest_FirmwareCheck = 9;
ButtonRequest_Address = 10;
ButtonRequest_PublicKey = 11;
}
/**
* Type of PIN request
* @used_in PinMatrixRequest
*/
enum PinMatrixRequestType {
PinMatrixRequestType_Current = 1;
PinMatrixRequestType_NewFirst = 2;
PinMatrixRequestType_NewSecond = 3;
}
/**
* Type of recovery procedure. These should be used as bitmask, e.g.,
* `RecoveryDeviceType_ScrambledWords | RecoveryDeviceType_Matrix`
* listing every method supported by the host computer.
*
* Note that ScrambledWords must be supported by every implementation
* for backward compatibility; there is no way to not support it.
*
* @used_in RecoveryDevice
*/
enum RecoveryDeviceType {
// use powers of two when extending this field
RecoveryDeviceType_ScrambledWords = 0; // words in scrambled order
RecoveryDeviceType_Matrix = 1; // matrix recovery type
}
/**
* Type of Recovery Word request
* @used_in WordRequest
*/
enum WordRequestType {
WordRequestType_Plain = 0;
WordRequestType_Matrix9 = 1;
WordRequestType_Matrix6 = 2;
}
/**
* Structure representing BIP32 (hierarchical deterministic) node
* Used for imports of private key into the device and exporting public key out of device
* @used_in PublicKey
* @used_in LoadDevice
* @used_in DebugLinkState
* @used_in Storage
*/
message HDNodeType {
required uint32 depth = 1;
required uint32 fingerprint = 2;
required uint32 child_num = 3;
required bytes chain_code = 4;
optional bytes private_key = 5;
optional bytes public_key = 6;
}
message HDNodePathType {
required HDNodeType node = 1; // BIP-32 node in deserialized form
repeated uint32 address_n = 2; // BIP-32 path to derive the key from node
}
/**
* Structure representing Coin
* @used_in Features
*/
message CoinType {
optional string coin_name = 1;
optional string coin_shortcut = 2;
optional uint32 address_type = 3 [default=0];
optional uint64 maxfee_kb = 4;
optional uint32 address_type_p2sh = 5 [default=5];
optional string signed_message_header = 8;
optional uint32 xpub_magic = 9 [default=76067358]; // default=0x0488b21e
optional uint32 xprv_magic = 10 [default=76066276]; // default=0x0488ade4
optional bool segwit = 11;
optional uint32 forkid = 12;
}
/**
* Type of redeem script used in input
* @used_in TxInputType
*/
message MultisigRedeemScriptType {
repeated HDNodePathType pubkeys = 1; // pubkeys from multisig address (sorted lexicographically)
repeated bytes signatures = 2; // existing signatures for partially signed input
optional uint32 m = 3; // "m" from n, how many valid signatures are necessary for spending
}
/**
* Structure representing transaction input
* @used_in SimpleSignTx
* @used_in TransactionType
*/
message TxInputType {
repeated uint32 address_n = 1; // BIP-32 path to derive the key from master node
required bytes prev_hash = 2; // hash of previous transaction output to spend by this input
required uint32 prev_index = 3; // index of previous output to spend
optional bytes script_sig = 4; // script signature, unset for tx to sign
optional uint32 sequence = 5 [default=4294967295]; // sequence (default=0xffffffff)
optional InputScriptType script_type = 6 [default=SPENDADDRESS]; // defines template of input script
optional MultisigRedeemScriptType multisig = 7; // Filled if input is going to spend multisig tx
optional uint64 amount = 8; // amount of previous transaction output (for segwit only)
}
/**
* Structure representing transaction output
* @used_in SimpleSignTx
* @used_in TransactionType
*/
message TxOutputType {
optional string address = 1; // target coin address in Base58 encoding
repeated uint32 address_n = 2; // BIP-32 path to derive the key from master node; has higher priority than "address"
required uint64 amount = 3; // amount to spend in satoshis
required OutputScriptType script_type = 4; // output script type
optional MultisigRedeemScriptType multisig = 5; // defines multisig address; script_type must be PAYTOMULTISIG
optional bytes op_return_data = 6; // defines op_return data; script_type must be PAYTOOPRETURN, amount must be 0
}
/**
* Structure representing compiled transaction output
* @used_in TransactionType
*/
message TxOutputBinType {
required uint64 amount = 1;
required bytes script_pubkey = 2;
}
/**
* Structure representing transaction
* @used_in SimpleSignTx
*/
message TransactionType {
optional uint32 version = 1;
repeated TxInputType inputs = 2;
repeated TxOutputBinType bin_outputs = 3;
repeated TxOutputType outputs = 5;
optional uint32 lock_time = 4;
optional uint32 inputs_cnt = 6;
optional uint32 outputs_cnt = 7;
optional bytes extra_data = 8;
optional uint32 extra_data_len = 9;
}
/**
* Structure representing request details
* @used_in TxRequest
*/
message TxRequestDetailsType {
optional uint32 request_index = 1; // device expects TxAck message from the computer
optional bytes tx_hash = 2; // tx_hash of requested transaction
optional uint32 extra_data_len = 3; // length of requested extra data
optional uint32 extra_data_offset = 4; // offset of requested extra data
}
/**
* Structure representing serialized data
* @used_in TxRequest
*/
message TxRequestSerializedType {
optional uint32 signature_index = 1; // 'signature' field contains signed input of this index
optional bytes signature = 2; // signature of the signature_index input
optional bytes serialized_tx = 3; // part of serialized and signed transaction
}
/**
* Structure representing identity data
* @used_in IdentityType
*/
message IdentityType {
optional string proto = 1; // proto part of URI
optional string user = 2; // user part of URI
optional string host = 3; // host part of URI
optional string port = 4; // port part of URI
optional string path = 5; // path part of URI
optional uint32 index = 6 [default=0]; // identity index
}

View File

@ -31,7 +31,7 @@ import (
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/karalabe/hid"
"github.com/karalabe/usb"
)
// Maximum time between wallet health checks to detect USB unplugs.
@ -77,8 +77,8 @@ type wallet struct {
driver driver // Hardware implementation of the low level device operations
url *accounts.URL // Textual URL uniquely identifying this wallet
info hid.DeviceInfo // Known USB device infos about the wallet
device *hid.Device // USB device advertising itself as a hardware wallet
info usb.DeviceInfo // Known USB device infos about the wallet
device usb.Device // USB device advertising itself as a hardware wallet
accounts []accounts.Account // List of derive accounts pinned on the hardware wallet
paths map[common.Address]accounts.DerivationPath // Known derivation paths for signing operations

View File

@ -23,8 +23,8 @@ environment:
install:
- git submodule update --init
- rmdir C:\go /s /q
- appveyor DownloadFile https://dl.google.com/go/go1.12.5.windows-%GETH_ARCH%.zip
- 7z x go1.12.5.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- appveyor DownloadFile https://dl.google.com/go/go1.12.7.windows-%GETH_ARCH%.zip
- 7z x go1.12.7.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- go version
- gcc --version

View File

@ -1,5 +0,0 @@
{{.Name}} ({{.VersionString}}) {{.Distro}}; urgency=low
* git build of {{.Env.Commit}}
-- {{.Author}} {{.Time}}

View File

@ -1,19 +0,0 @@
Source: {{.Name}}
Section: science
Priority: extra
Maintainer: {{.Author}}
Build-Depends: debhelper (>= 8.0.0), golang-1.11
Standards-Version: 3.9.5
Homepage: https://ethereum.org
Vcs-Git: git://github.com/ethereum/go-ethereum.git
Vcs-Browser: https://github.com/ethereum/go-ethereum
{{range .Executables}}
Package: {{$.ExeName .}}
Conflicts: {{$.ExeConflicts .}}
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Built-Using: ${misc:Built-Using}
Description: {{.Description}}
{{.Description}}
{{end}}

View File

@ -1,14 +0,0 @@
Copyright 2018 The go-ethereum Authors
go-ethereum is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
go-ethereum is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.

View File

@ -1 +0,0 @@
AUTHORS

View File

@ -1 +0,0 @@
build/bin/{{.BinaryName}} usr/bin

View File

@ -1,16 +0,0 @@
#!/usr/bin/make -f
# -*- makefile -*-
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
# Launchpad rejects Go's access to $HOME/.cache, use custom folder
export GOCACHE=/tmp/go-build
override_dh_auto_build:
build/env.sh /usr/lib/go-1.11/bin/go run build/ci.go install -git-commit={{.Env.Commit}} -git-branch={{.Env.Branch}} -git-tag={{.Env.Tag}} -buildnum={{.Env.Buildnum}} -pull-request={{.Env.IsPullRequest}}
override_dh_auto_test:
%:
dh $@

View File

@ -11,8 +11,8 @@ Vcs-Browser: https://github.com/ethereum/go-ethereum
Package: {{.Name}}
Architecture: any
Depends: ${misc:Depends}, {{.ExeList}}
Description: Meta-package to install geth, swarm, and other tools
Meta-package to install geth, swarm and other tools
Description: Meta-package to install geth and other tools
Meta-package to install geth and other tools
{{range .Executables}}
Package: {{$.ExeName .}}

View File

@ -72,8 +72,6 @@ var (
"internal/jsre/deps",
"log/",
"common/bitutil/bitutil",
// don't license generated files
"contracts/chequebook/contract/code.go",
}
// paths with this prefix are licensed as GPL. all other files are LGPL.

View File

@ -18,97 +18,199 @@ package main
import (
"encoding/json"
"flag"
"fmt"
"io/ioutil"
"os"
"strings"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common/compiler"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"gopkg.in/urfave/cli.v1"
)
const (
commandHelperTemplate = `{{.Name}}{{if .Subcommands}} command{{end}}{{if .Flags}} [command options]{{end}} [arguments...]
{{if .Description}}{{.Description}}
{{end}}{{if .Subcommands}}
SUBCOMMANDS:
{{range .Subcommands}}{{.Name}}{{with .ShortName}}, {{.}}{{end}}{{ "\t" }}{{.Usage}}
{{end}}{{end}}{{if .Flags}}
OPTIONS:
{{range $.Flags}}{{"\t"}}{{.}}
{{end}}
{{end}}`
)
var (
abiFlag = flag.String("abi", "", "Path to the Ethereum contract ABI json to bind, - for STDIN")
binFlag = flag.String("bin", "", "Path to the Ethereum contract bytecode (generate deploy method)")
typFlag = flag.String("type", "", "Struct name for the binding (default = package name)")
// Git SHA1 commit hash of the release (set via linker flags)
gitCommit = ""
gitDate = ""
solFlag = flag.String("sol", "", "Path to the Ethereum contract Solidity source to build and bind")
solcFlag = flag.String("solc", "solc", "Solidity compiler to use if source builds are requested")
excFlag = flag.String("exc", "", "Comma separated types to exclude from binding")
app *cli.App
vyFlag = flag.String("vy", "", "Path to the Ethereum contract Vyper source to build and bind")
vyperFlag = flag.String("vyper", "vyper", "Vyper compiler to use if source builds are requested")
pkgFlag = flag.String("pkg", "", "Package name to generate the binding into")
outFlag = flag.String("out", "", "Output file for the generated binding (default = stdout)")
langFlag = flag.String("lang", "go", "Destination language for the bindings (go, java, objc)")
// Flags needed by abigen
abiFlag = cli.StringFlag{
Name: "abi",
Usage: "Path to the Ethereum contract ABI json to bind, - for STDIN",
}
binFlag = cli.StringFlag{
Name: "bin",
Usage: "Path to the Ethereum contract bytecode (generate deploy method)",
}
typeFlag = cli.StringFlag{
Name: "type",
Usage: "Struct name for the binding (default = package name)",
}
jsonFlag = cli.StringFlag{
Name: "combined-json",
Usage: "Path to the combined-json file generated by compiler",
}
solFlag = cli.StringFlag{
Name: "sol",
Usage: "Path to the Ethereum contract Solidity source to build and bind",
}
solcFlag = cli.StringFlag{
Name: "solc",
Usage: "Solidity compiler to use if source builds are requested",
Value: "solc",
}
vyFlag = cli.StringFlag{
Name: "vy",
Usage: "Path to the Ethereum contract Vyper source to build and bind",
}
vyperFlag = cli.StringFlag{
Name: "vyper",
Usage: "Vyper compiler to use if source builds are requested",
Value: "vyper",
}
excFlag = cli.StringFlag{
Name: "exc",
Usage: "Comma separated types to exclude from binding",
}
pkgFlag = cli.StringFlag{
Name: "pkg",
Usage: "Package name to generate the binding into",
}
outFlag = cli.StringFlag{
Name: "out",
Usage: "Output file for the generated binding (default = stdout)",
}
langFlag = cli.StringFlag{
Name: "lang",
Usage: "Destination language for the bindings (go, java, objc)",
Value: "go",
}
)
func main() {
// Parse and ensure all needed inputs are specified
flag.Parse()
if *abiFlag == "" && *solFlag == "" && *vyFlag == "" {
fmt.Printf("No contract ABI (--abi), Solidity source (--sol), or Vyper source (--vy) specified\n")
os.Exit(-1)
} else if (*abiFlag != "" || *binFlag != "" || *typFlag != "") && (*solFlag != "" || *vyFlag != "") {
fmt.Printf("Contract ABI (--abi), bytecode (--bin) and type (--type) flags are mutually exclusive with the Solidity (--sol) and Vyper (--vy) flags\n")
os.Exit(-1)
} else if *solFlag != "" && *vyFlag != "" {
fmt.Printf("Solidity (--sol) and Vyper (--vy) flags are mutually exclusive\n")
os.Exit(-1)
func init() {
app = utils.NewApp(gitCommit, gitDate, "ethereum checkpoint helper tool")
app.Flags = []cli.Flag{
abiFlag,
binFlag,
typeFlag,
jsonFlag,
solFlag,
solcFlag,
vyFlag,
vyperFlag,
excFlag,
pkgFlag,
outFlag,
langFlag,
}
if *pkgFlag == "" {
fmt.Printf("No destination package specified (--pkg)\n")
os.Exit(-1)
app.Action = utils.MigrateFlags(abigen)
cli.CommandHelpTemplate = commandHelperTemplate
}
func abigen(c *cli.Context) error {
utils.CheckExclusive(c, abiFlag, jsonFlag, solFlag, vyFlag) // Only one source can be selected.
if c.GlobalString(pkgFlag.Name) == "" {
utils.Fatalf("No destination package specified (--pkg)")
}
var lang bind.Lang
switch *langFlag {
switch c.GlobalString(langFlag.Name) {
case "go":
lang = bind.LangGo
case "java":
lang = bind.LangJava
case "objc":
lang = bind.LangObjC
utils.Fatalf("Objc binding generation is uncompleted")
default:
fmt.Printf("Unsupported destination language \"%s\" (--lang)\n", *langFlag)
os.Exit(-1)
utils.Fatalf("Unsupported destination language \"%s\" (--lang)", c.GlobalString(langFlag.Name))
}
// If the entire solidity code was specified, build and bind based on that
var (
abis []string
bins []string
types []string
sigs []map[string]string
libs = make(map[string]string)
)
if *solFlag != "" || *vyFlag != "" || *abiFlag == "-" {
if c.GlobalString(abiFlag.Name) != "" {
// Load up the ABI, optional bytecode and type name from the parameters
var (
abi []byte
err error
)
input := c.GlobalString(abiFlag.Name)
if input == "-" {
abi, err = ioutil.ReadAll(os.Stdin)
} else {
abi, err = ioutil.ReadFile(input)
}
if err != nil {
utils.Fatalf("Failed to read input ABI: %v", err)
}
abis = append(abis, string(abi))
var bin []byte
if binFile := c.GlobalString(binFlag.Name); binFile != "" {
if bin, err = ioutil.ReadFile(binFile); err != nil {
utils.Fatalf("Failed to read input bytecode: %v", err)
}
if strings.Contains(string(bin), "//") {
utils.Fatalf("Contract has additional library references, please use other mode(e.g. --combined-json) to catch library infos")
}
}
bins = append(bins, string(bin))
kind := c.GlobalString(typeFlag.Name)
if kind == "" {
kind = c.GlobalString(pkgFlag.Name)
}
types = append(types, kind)
} else {
// Generate the list of types to exclude from binding
exclude := make(map[string]bool)
for _, kind := range strings.Split(*excFlag, ",") {
for _, kind := range strings.Split(c.GlobalString(excFlag.Name), ",") {
exclude[strings.ToLower(kind)] = true
}
var contracts map[string]*compiler.Contract
var err error
var contracts map[string]*compiler.Contract
switch {
case *solFlag != "":
contracts, err = compiler.CompileSolidity(*solcFlag, *solFlag)
case c.GlobalIsSet(solFlag.Name):
contracts, err = compiler.CompileSolidity(c.GlobalString(solcFlag.Name), c.GlobalString(solFlag.Name))
if err != nil {
fmt.Printf("Failed to build Solidity contract: %v\n", err)
os.Exit(-1)
utils.Fatalf("Failed to build Solidity contract: %v", err)
}
case *vyFlag != "":
contracts, err = compiler.CompileVyper(*vyperFlag, *vyFlag)
case c.GlobalIsSet(vyFlag.Name):
contracts, err = compiler.CompileVyper(c.GlobalString(vyperFlag.Name), c.GlobalString(vyFlag.Name))
if err != nil {
fmt.Printf("Failed to build Vyper contract: %v\n", err)
os.Exit(-1)
utils.Fatalf("Failed to build Vyper contract: %v", err)
}
default:
contracts, err = contractsFromStdin()
case c.GlobalIsSet(jsonFlag.Name):
jsonOutput, err := ioutil.ReadFile(c.GlobalString(jsonFlag.Name))
if err != nil {
fmt.Printf("Failed to read input ABIs from STDIN: %v\n", err)
os.Exit(-1)
utils.Fatalf("Failed to read combined-json from compiler: %v", err)
}
contracts, err = compiler.ParseCombinedJSON(jsonOutput, "", "", "", "")
if err != nil {
utils.Fatalf("Failed to read contract information from json output: %v", err)
}
}
// Gather all non-excluded contract for binding
@ -118,61 +220,39 @@ func main() {
}
abi, err := json.Marshal(contract.Info.AbiDefinition) // Flatten the compiler parse
if err != nil {
fmt.Printf("Failed to parse ABIs from compiler output: %v\n", err)
os.Exit(-1)
utils.Fatalf("Failed to parse ABIs from compiler output: %v", err)
}
abis = append(abis, string(abi))
bins = append(bins, contract.Code)
sigs = append(sigs, contract.Hashes)
nameParts := strings.Split(name, ":")
types = append(types, nameParts[len(nameParts)-1])
}
} else {
// Otherwise load up the ABI, optional bytecode and type name from the parameters
abi, err := ioutil.ReadFile(*abiFlag)
if err != nil {
fmt.Printf("Failed to read input ABI: %v\n", err)
os.Exit(-1)
libPattern := crypto.Keccak256Hash([]byte(name)).String()[2:36]
libs[libPattern] = nameParts[len(nameParts)-1]
}
abis = append(abis, string(abi))
var bin []byte
if *binFlag != "" {
if bin, err = ioutil.ReadFile(*binFlag); err != nil {
fmt.Printf("Failed to read input bytecode: %v\n", err)
os.Exit(-1)
}
}
bins = append(bins, string(bin))
kind := *typFlag
if kind == "" {
kind = *pkgFlag
}
types = append(types, kind)
}
// Generate the contract binding
code, err := bind.Bind(types, abis, bins, *pkgFlag, lang)
code, err := bind.Bind(types, abis, bins, sigs, c.GlobalString(pkgFlag.Name), lang, libs)
if err != nil {
fmt.Printf("Failed to generate ABI binding: %v\n", err)
os.Exit(-1)
utils.Fatalf("Failed to generate ABI binding: %v", err)
}
// Either flush it out to a file or display on the standard output
if *outFlag == "" {
if !c.GlobalIsSet(outFlag.Name) {
fmt.Printf("%s\n", code)
return
return nil
}
if err := ioutil.WriteFile(*outFlag, []byte(code), 0600); err != nil {
fmt.Printf("Failed to write ABI binding: %v\n", err)
os.Exit(-1)
if err := ioutil.WriteFile(c.GlobalString(outFlag.Name), []byte(code), 0600); err != nil {
utils.Fatalf("Failed to write ABI binding: %v", err)
}
return nil
}
func contractsFromStdin() (map[string]*compiler.Contract, error) {
bytes, err := ioutil.ReadAll(os.Stdin)
if err != nil {
return nil, err
func main() {
log.Root().SetHandler(log.LvlFilterHandler(log.LvlInfo, log.StreamHandler(os.Stderr, log.TerminalFormat(true))))
if err := app.Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
return compiler.ParseCombinedJSON(bytes, "", "", "", "")
}


@ -143,7 +143,7 @@ func printNotice(nodeKey *ecdsa.PublicKey, addr net.UDPAddr) {
addr.IP = net.IP{127, 0, 0, 1}
}
n := enode.NewV4(nodeKey, addr.IP, 0, addr.Port)
fmt.Println(n.String())
fmt.Println(n.URLv4())
fmt.Println("Note: you're using cmd/bootnode, a developer tool.")
fmt.Println("We recommend using a regular node as bootstrap node for production deployments.")
}


@ -0,0 +1,120 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"strconv"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/accounts/external"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/contracts/checkpointoracle"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rpc"
"gopkg.in/urfave/cli.v1"
)
// newClient creates a client with specified remote URL.
func newClient(ctx *cli.Context) *ethclient.Client {
client, err := ethclient.Dial(ctx.GlobalString(nodeURLFlag.Name))
if err != nil {
utils.Fatalf("Failed to connect to Ethereum node: %v", err)
}
return client
}
// newRPCClient creates a rpc client with specified node URL.
func newRPCClient(url string) *rpc.Client {
client, err := rpc.Dial(url)
if err != nil {
utils.Fatalf("Failed to connect to Ethereum node: %v", err)
}
return client
}
// getContractAddr retrieves the registrar contract address through
// an RPC request.
func getContractAddr(client *rpc.Client) common.Address {
var addr string
if err := client.Call(&addr, "les_getCheckpointContractAddress"); err != nil {
utils.Fatalf("Failed to fetch checkpoint oracle address: %v", err)
}
return common.HexToAddress(addr)
}
// getCheckpoint retrieves the specified checkpoint or the latest one
// through rpc request.
func getCheckpoint(ctx *cli.Context, client *rpc.Client) *params.TrustedCheckpoint {
var checkpoint *params.TrustedCheckpoint
if ctx.GlobalIsSet(indexFlag.Name) {
var result [3]string
index := uint64(ctx.GlobalInt64(indexFlag.Name))
if err := client.Call(&result, "les_getCheckpoint", index); err != nil {
utils.Fatalf("Failed to get local checkpoint %v, please ensure the les API is exposed", err)
}
checkpoint = &params.TrustedCheckpoint{
SectionIndex: index,
SectionHead: common.HexToHash(result[0]),
CHTRoot: common.HexToHash(result[1]),
BloomRoot: common.HexToHash(result[2]),
}
} else {
var result [4]string
err := client.Call(&result, "les_latestCheckpoint")
if err != nil {
utils.Fatalf("Failed to get local checkpoint %v, please ensure the les API is exposed", err)
}
index, err := strconv.ParseUint(result[0], 0, 64)
if err != nil {
utils.Fatalf("Failed to parse checkpoint index %v", err)
}
checkpoint = &params.TrustedCheckpoint{
SectionIndex: index,
SectionHead: common.HexToHash(result[1]),
CHTRoot: common.HexToHash(result[2]),
BloomRoot: common.HexToHash(result[3]),
}
}
return checkpoint
}
// newContract creates a registrar contract instance with specified
// contract address or the default contracts for mainnet or testnet.
func newContract(client *rpc.Client) (common.Address, *checkpointoracle.CheckpointOracle) {
addr := getContractAddr(client)
if addr == (common.Address{}) {
utils.Fatalf("No specified registrar contract address")
}
contract, err := checkpointoracle.NewCheckpointOracle(addr, ethclient.NewClient(client))
if err != nil {
utils.Fatalf("Failed to setup registrar contract %s: %v", addr, err)
}
return addr, contract
}
// newClefSigner sets up a clef backend and returns a clef transaction signer.
func newClefSigner(ctx *cli.Context) *bind.TransactOpts {
clef, err := external.NewExternalSigner(ctx.String(clefURLFlag.Name))
if err != nil {
utils.Fatalf("Failed to create clef signer %v", err)
}
return bind.NewClefTransactor(clef, accounts.Account{Address: common.HexToAddress(ctx.String(signerFlag.Name))})
}


@ -0,0 +1,311 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"context"
"encoding/binary"
"fmt"
"math/big"
"strings"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/contracts/checkpointoracle"
"github.com/ethereum/go-ethereum/contracts/checkpointoracle/contract"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rpc"
"gopkg.in/urfave/cli.v1"
)
var commandDeploy = cli.Command{
Name: "deploy",
Usage: "Deploy a new checkpoint oracle contract",
Flags: []cli.Flag{
nodeURLFlag,
clefURLFlag,
signerFlag,
signersFlag,
thresholdFlag,
},
Action: utils.MigrateFlags(deploy),
}
var commandSign = cli.Command{
Name: "sign",
Usage: "Sign the checkpoint with the specified key",
Flags: []cli.Flag{
nodeURLFlag,
clefURLFlag,
signerFlag,
indexFlag,
hashFlag,
oracleFlag,
},
Action: utils.MigrateFlags(sign),
}
var commandPublish = cli.Command{
Name: "publish",
Usage: "Publish a checkpoint into the oracle",
Flags: []cli.Flag{
nodeURLFlag,
clefURLFlag,
signerFlag,
indexFlag,
signaturesFlag,
},
Action: utils.MigrateFlags(publish),
}
// deploy deploys the checkpoint registrar contract.
//
// Note the network where the contract is deployed depends on
// the network where the connected node is located.
func deploy(ctx *cli.Context) error {
// Gather all the addresses that should be permitted to sign
var addrs []common.Address
for _, account := range strings.Split(ctx.String(signersFlag.Name), ",") {
if trimmed := strings.TrimSpace(account); !common.IsHexAddress(trimmed) {
utils.Fatalf("Invalid account in --signers: '%s'", trimmed)
}
addrs = append(addrs, common.HexToAddress(account))
}
// Retrieve and validate the signing threshold
needed := ctx.Int(thresholdFlag.Name)
if needed == 0 || needed > len(addrs) {
utils.Fatalf("Invalid signature threshold %d", needed)
}
// Print a summary to ensure the user understands what they're signing
fmt.Printf("Deploying new checkpoint oracle:\n\n")
for i, addr := range addrs {
fmt.Printf("Admin %d => %s\n", i+1, addr.Hex())
}
fmt.Printf("\nSignatures needed to publish: %d\n", needed)
// setup clef signer, create an abigen transactor and an RPC client
transactor, client := newClefSigner(ctx), newClient(ctx)
// Deploy the checkpoint oracle
fmt.Println("Sending deploy request to Clef...")
oracle, tx, _, err := contract.DeployCheckpointOracle(transactor, client, addrs, big.NewInt(int64(params.CheckpointFrequency)),
big.NewInt(int64(params.CheckpointProcessConfirmations)), big.NewInt(int64(needed)))
if err != nil {
utils.Fatalf("Failed to deploy checkpoint oracle %v", err)
}
log.Info("Deployed checkpoint oracle", "address", oracle, "tx", tx.Hash().Hex())
return nil
}
// sign creates the signature for a specific checkpoint
// with a local key. Only contract admins have the permission to
// sign checkpoints.
func sign(ctx *cli.Context) error {
var (
offline bool // Whether the checkpoint is being signed offline
chash common.Hash
cindex uint64
address common.Address
node *rpc.Client
oracle *checkpointoracle.CheckpointOracle
)
if !ctx.GlobalIsSet(nodeURLFlag.Name) {
// Offline mode signing
offline = true
if !ctx.IsSet(hashFlag.Name) {
utils.Fatalf("Please specify the checkpoint hash (--hash) to sign in offline mode")
}
chash = common.HexToHash(ctx.String(hashFlag.Name))
if !ctx.IsSet(indexFlag.Name) {
utils.Fatalf("Please specify checkpoint index (--index) to sign in offline mode")
}
cindex = ctx.Uint64(indexFlag.Name)
if !ctx.IsSet(oracleFlag.Name) {
utils.Fatalf("Please specify oracle address (--oracle) to sign in offline mode")
}
address = common.HexToAddress(ctx.String(oracleFlag.Name))
} else {
// Interactive mode signing, retrieve the data from the remote node
node = newRPCClient(ctx.GlobalString(nodeURLFlag.Name))
checkpoint := getCheckpoint(ctx, node)
chash, cindex, address = checkpoint.Hash(), checkpoint.SectionIndex, getContractAddr(node)
// Check the validity of checkpoint
reqCtx, cancelFn := context.WithTimeout(context.Background(), 10*time.Second)
defer cancelFn()
head, err := ethclient.NewClient(node).HeaderByNumber(reqCtx, nil)
if err != nil {
return err
}
num := head.Number.Uint64()
if num < ((cindex+1)*params.CheckpointFrequency + params.CheckpointProcessConfirmations) {
utils.Fatalf("Invalid future checkpoint")
}
_, oracle = newContract(node)
latest, _, h, err := oracle.Contract().GetLatestCheckpoint(nil)
if err != nil {
return err
}
if cindex < latest {
utils.Fatalf("Checkpoint is too old")
}
if cindex == latest && (latest != 0 || h.Uint64() != 0) {
utils.Fatalf("Stale checkpoint, latest registered %d, given %d", latest, cindex)
}
}
var (
signature string
signer string
)
// isAdmin checks whether the specified signer is admin.
isAdmin := func(addr common.Address) error {
signers, err := oracle.Contract().GetAllAdmin(nil)
if err != nil {
return err
}
for _, s := range signers {
if s == addr {
return nil
}
}
return fmt.Errorf("signer %v is not the admin", addr.Hex())
}
// Print to the user the data they are about to sign
fmt.Printf("Oracle => %s\n", address.Hex())
fmt.Printf("Index %4d => %s\n", cindex, chash.Hex())
// Sign checkpoint in clef mode.
signer = ctx.String(signerFlag.Name)
if !offline {
if err := isAdmin(common.HexToAddress(signer)); err != nil {
return err
}
}
clef := newRPCClient(ctx.String(clefURLFlag.Name))
p := make(map[string]string)
buf := make([]byte, 8)
binary.BigEndian.PutUint64(buf, cindex)
p["address"] = address.Hex()
p["message"] = hexutil.Encode(append(buf, chash.Bytes()...))
fmt.Println("Sending signing request to Clef...")
if err := clef.Call(&signature, "account_signData", accounts.MimetypeDataWithValidator, signer, p); err != nil {
utils.Fatalf("Failed to sign checkpoint, err %v", err)
}
fmt.Printf("Signer => %s\n", signer)
fmt.Printf("Signature => %s\n", signature)
return nil
}
// sighash calculates the hash of the data to sign for the checkpoint oracle.
func sighash(index uint64, oracle common.Address, hash common.Hash) []byte {
buf := make([]byte, 8)
binary.BigEndian.PutUint64(buf, index)
data := append([]byte{0x19, 0x00}, append(oracle[:], append(buf, hash[:]...)...)...)
return crypto.Keccak256(data)
}
// ecrecover calculates the sender address from a sighash and signature combo.
func ecrecover(sighash []byte, sig []byte) common.Address {
sig[64] -= 27
defer func() { sig[64] += 27 }()
signer, err := crypto.SigToPub(sighash, sig)
if err != nil {
utils.Fatalf("Failed to recover sender from signature %x: %v", sig, err)
}
return crypto.PubkeyToAddress(*signer)
}
// publish registers the specified checkpoint, which is generated by the connected node,
// with an authorised private key.
func publish(ctx *cli.Context) error {
// Print the checkpoint oracle's current status to make sure we're interacting
// with the correct network and contract.
status(ctx)
// Gather the signatures from the CLI
var sigs [][]byte
for _, sig := range strings.Split(ctx.String(signaturesFlag.Name), ",") {
trimmed := strings.TrimPrefix(strings.TrimSpace(sig), "0x")
if len(trimmed) != 130 {
utils.Fatalf("Invalid signature in --signature: '%s'", trimmed)
} else {
sigs = append(sigs, common.Hex2Bytes(trimmed))
}
}
// Retrieve the checkpoint we want to sign to sort the signatures
var (
client = newRPCClient(ctx.GlobalString(nodeURLFlag.Name))
addr, oracle = newContract(client)
checkpoint = getCheckpoint(ctx, client)
sighash = sighash(checkpoint.SectionIndex, addr, checkpoint.Hash())
)
for i := 0; i < len(sigs); i++ {
for j := i + 1; j < len(sigs); j++ {
signerA := ecrecover(sighash, sigs[i])
signerB := ecrecover(sighash, sigs[j])
if bytes.Compare(signerA.Bytes(), signerB.Bytes()) > 0 {
sigs[i], sigs[j] = sigs[j], sigs[i]
}
}
}
// Retrieve recent header info to protect against replay attacks
reqCtx, cancelFn := context.WithTimeout(context.Background(), 10*time.Second)
defer cancelFn()
head, err := ethclient.NewClient(client).HeaderByNumber(reqCtx, nil)
if err != nil {
return err
}
num := head.Number.Uint64()
recent, err := ethclient.NewClient(client).HeaderByNumber(reqCtx, big.NewInt(int64(num-128)))
if err != nil {
return err
}
// Print a summary of the operation that's going to be performed
fmt.Printf("Publishing %d => %s:\n\n", checkpoint.SectionIndex, checkpoint.Hash().Hex())
for i, sig := range sigs {
fmt.Printf("Signer %d => %s\n", i+1, ecrecover(sighash, sig).Hex())
}
fmt.Println()
fmt.Printf("Sentry number => %d\nSentry hash => %s\n", recent.Number, recent.Hash().Hex())
// Publish the checkpoint into the oracle
fmt.Println("Sending publish request to Clef...")
tx, err := oracle.RegisterCheckpoint(newClefSigner(ctx), checkpoint.SectionIndex, checkpoint.Hash().Bytes(), recent.Number, recent.Hash(), sigs)
if err != nil {
utils.Fatalf("Register contract failed %v", err)
}
log.Info("Successfully registered checkpoint", "tx", tx.Hash().Hex())
return nil
}


@ -0,0 +1,117 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// checkpoint-admin is a utility that can be used to query checkpoint information
// and register stable checkpoints into an oracle contract.
package main
import (
"fmt"
"os"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common/fdlimit"
"github.com/ethereum/go-ethereum/log"
"gopkg.in/urfave/cli.v1"
)
const (
commandHelperTemplate = `{{.Name}}{{if .Subcommands}} command{{end}}{{if .Flags}} [command options]{{end}} [arguments...]
{{if .Description}}{{.Description}}
{{end}}{{if .Subcommands}}
SUBCOMMANDS:
{{range .Subcommands}}{{.Name}}{{with .ShortName}}, {{.}}{{end}}{{ "\t" }}{{.Usage}}
{{end}}{{end}}{{if .Flags}}
OPTIONS:
{{range $.Flags}}{{"\t"}}{{.}}
{{end}}
{{end}}`
)
var (
// Git SHA1 commit hash of the release (set via linker flags)
gitCommit = ""
gitDate = ""
)
var app *cli.App
func init() {
app = utils.NewApp(gitCommit, gitDate, "ethereum checkpoint helper tool")
app.Commands = []cli.Command{
commandStatus,
commandDeploy,
commandSign,
commandPublish,
}
app.Flags = []cli.Flag{
oracleFlag,
nodeURLFlag,
}
cli.CommandHelpTemplate = commandHelperTemplate
}
// Commonly used command line flags.
var (
indexFlag = cli.Int64Flag{
Name: "index",
Usage: "Checkpoint index (query latest from remote node if not specified)",
}
hashFlag = cli.StringFlag{
Name: "hash",
Usage: "Checkpoint hash (query latest from remote node if not specified)",
}
oracleFlag = cli.StringFlag{
Name: "oracle",
Usage: "Checkpoint oracle address (query from remote node if not specified)",
}
thresholdFlag = cli.Int64Flag{
Name: "threshold",
Usage: "Minimal number of signatures required to approve a checkpoint",
}
nodeURLFlag = cli.StringFlag{
Name: "rpc",
Value: "http://localhost:8545",
Usage: "The rpc endpoint of a local or remote geth node",
}
clefURLFlag = cli.StringFlag{
Name: "clef",
Value: "http://localhost:8550",
Usage: "The rpc endpoint of clef",
}
signerFlag = cli.StringFlag{
Name: "signer",
Usage: "Signer address for clef signing",
}
signersFlag = cli.StringFlag{
Name: "signers",
Usage: "Comma separated accounts of trusted checkpoint signers",
}
signaturesFlag = cli.StringFlag{
Name: "signatures",
Usage: "Comma separated checkpoint signatures to submit",
}
)
func main() {
log.Root().SetHandler(log.LvlFilterHandler(log.LvlInfo, log.StreamHandler(os.Stderr, log.TerminalFormat(true))))
fdlimit.Raise(2048)
if err := app.Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}


@ -0,0 +1,61 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"gopkg.in/urfave/cli.v1"
)
var commandStatus = cli.Command{
Name: "status",
Usage: "Fetches the signers and checkpoint status of the oracle contract",
Flags: []cli.Flag{
nodeURLFlag,
},
Action: utils.MigrateFlags(status),
}
// status fetches the admin list of specified registrar contract.
func status(ctx *cli.Context) error {
// Create a wrapper around the checkpoint oracle contract
addr, oracle := newContract(newRPCClient(ctx.GlobalString(nodeURLFlag.Name)))
fmt.Printf("Oracle => %s\n", addr.Hex())
fmt.Println()
// Retrieve the list of authorized signers (admins)
admins, err := oracle.Contract().GetAllAdmin(nil)
if err != nil {
return err
}
for i, admin := range admins {
fmt.Printf("Admin %d => %s\n", i+1, admin.Hex())
}
fmt.Println()
// Retrieve the latest checkpoint
index, checkpoint, height, err := oracle.Contract().GetLatestCheckpoint(nil)
if err != nil {
return err
}
fmt.Printf("Checkpoint (published at #%d) %d => %s\n", height, index, common.Hash(checkpoint).Hex())
return nil
}


@ -1,41 +1,38 @@
Clef
----
Clef can be used to sign transactions and data and is meant as a replacement for geth's account management.
This allows DApps not to depend on geth's account management. When a DApp wants to sign data it can send the data to
the signer, the signer will then provide the user with context and ask the user for permission to sign the data. If
the user grants the signing request, the signer will send the signature back to the DApp.
This setup allows a DApp to connect to a remote Ethereum node and send transactions that are locally signed. This can
help in situations when a DApp is connected to a remote node because a local Ethereum node is not available, not
synchronised with the chain or a particular Ethereum node that has no built-in (or limited) account management.
Clef can run as a daemon on the same machine, or off a usb-stick like [usb armory](https://inversepath.com/usbarmory),
or a separate VM in a [QubesOS](https://www.qubes-os.org/) type os setup.
Check out
* the [tutorial](tutorial.md) for some concrete examples on how the signer works.
* the [setup docs](docs/setup.md) for some information on how to configure it to work on QubesOS or USBArmory.
* the [data types](datatypes.md) for detailed information on the json types used in the communication between
clef and an external UI
# Clef
Clef can be used to sign transactions and data and is meant as a(n eventual) replacement for Geth's account management. This allows DApps to not depend on Geth's account management. When a DApp wants to sign data (or a transaction), it can send the content to Clef, which will then provide the user with context and ask for permission to sign the content. If the user grants the signing request, Clef will send the signature back to the DApp.
This setup allows a DApp to connect to a remote Ethereum node and send transactions that are locally signed. This can help in situations when a DApp is connected to an untrusted remote Ethereum node, because a local one is not available, not synchronised with the chain, or is a node that has no built-in (or limited) account management.
Clef can run as a daemon on the same machine, off a USB stick like [USB armory](https://inversepath.com/usbarmory), or even a separate VM in a [QubesOS](https://www.qubes-os.org/) type setup.
Check out the
* [CLI tutorial](tutorial.md) for some concrete examples on how Clef works.
* [Setup docs](docs/setup.md) for info on how to configure Clef on QubesOS or USB Armory.
* [Data types](datatypes.md) for details on the communication messages between Clef and an external UI.
## Command line flags
Clef accepts the following command line options:
```
COMMANDS:
init Initialize the signer, generate secret storage
attest Attest that a js-file is to be used
setpw Store a credential for a keystore file
delpw Remove a credential for a keystore file
gendoc Generate documentation about json-rpc format
help Shows a list of commands or help for one command
GLOBAL OPTIONS:
--loglevel value log level to emit to the screen (default: 4)
--keystore value Directory for the keystore (default: "$HOME/.ethereum/keystore")
--configdir value Directory for Clef configuration (default: "$HOME/.clef")
--chainid value Chain id to use for signing (1=mainnet, 3=ropsten, 4=rinkeby, 5=Goerli) (default: 1)
--chainid value Chain id to use for signing (1=mainnet, 3=Ropsten, 4=Rinkeby, 5=Goerli) (default: 1)
--lightkdf Reduce key-derivation RAM & CPU usage at some expense of KDF strength
--nousb Disables monitoring for and managing USB hardware wallets
--pcscdpath value Path to the smartcard daemon (pcscd) socket file (default: "/run/pcscd/pcscd.comm")
--rpcaddr value HTTP-RPC server listening interface (default: "localhost")
--rpcvhosts value Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. (default: "localhost")
--ipcdisable Disable the IPC-RPC server
@ -43,60 +40,48 @@ GLOBAL OPTIONS:
--rpc Enable the HTTP-RPC server
--rpcport value HTTP-RPC server listening port (default: 8550)
--signersecret value A file containing the (encrypted) master seed to encrypt Clef data, e.g. keystore credentials and ruleset hash
--4bytedb value File containing 4byte-identifiers (default: "./4byte.json")
--4bytedb-custom value File used for writing new 4byte-identifiers submitted via API (default: "./4byte-custom.json")
--auditlog value File used to emit audit logs. Set to "" to disable (default: "audit.log")
--rules value Enable rule-engine (default: "rules.json")
--rules value Path to the rule file to auto-authorize requests with
--stdio-ui Use STDIN/STDOUT as a channel for an external UI. This means that an STDIN/STDOUT is used for RPC-communication with a e.g. a graphical user interface, and can be used when Clef is started by an external process.
--stdio-ui-test Mechanism to test interface between Clef and UI. Requires 'stdio-ui'.
--advanced If enabled, issues warnings instead of rejections for suspicious requests. Default off
--help, -h show help
--version, -v print the version
```
Example:
```
signer -keystore /my/keystore -chainid 4
```
```
$ clef -keystore /my/keystore -chainid 4
```
## Security model
The security model of the signer is as follows:
The security model of Clef is as follows:
* One critical component (the signer binary / daemon) is responsible for handling cryptographic operations: signing, private keys, encryption/decryption of keystore files.
* The signer binary has a well-defined 'external' API.
* One critical component (the Clef binary / daemon) is responsible for handling cryptographic operations: signing, private keys, encryption/decryption of keystore files.
* Clef has a well-defined 'external' API.
* The 'external' API is considered UNTRUSTED.
* The signer binary also communicates with whatever process that invoked the binary, via stdin/stdout.
* Clef also communicates with whatever process that invoked the binary, via stdin/stdout.
* This channel is considered 'trusted'. Over this channel, approvals and passwords are communicated.
The general flow for signing a transaction using e.g. geth is as follows:
The general flow for signing a transaction using e.g. Geth is as follows:
![image](sign_flow.png)
In this case, `geth` would be started with `--externalsigner=http://localhost:8550` and would relay requests to `eth.sendTransaction`.
In this case, `geth` would be started with `--signer http://localhost:8550` and would relay requests to `eth.sendTransaction`.
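For readers wiring this up themselves, here is a minimal Go sketch of the DApp side of that flow. It assumes a local Geth node on `http://localhost:8545` that was started with `--signer http://localhost:8550`; the addresses are placeholders and the gas fields are left for the node to fill in.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Assumes a local geth started with `--signer http://localhost:8550`,
	// so that eth_sendTransaction is relayed to Clef for user approval.
	client, err := rpc.Dial("http://localhost:8545")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Placeholder addresses; `from` must be an account in the keystore Clef manages.
	tx := map[string]interface{}{
		"from":  "0xdeadbeef000000000000000000000000deadbeef",
		"to":    "0x000000000000000000000000000000000000dead",
		"value": "0x1", // 1 wei
	}
	var txHash common.Hash
	// The call blocks until the user approves or rejects the request in Clef.
	if err := client.Call(&txHash, "eth_sendTransaction", tx); err != nil {
		panic(err)
	}
	fmt.Println("transaction submitted:", txHash.Hex())
}
```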
## TODOs
Some snags and todos
* [ ] The signer should take a startup param "--no-change", for UIs that do not contain the capability
to perform changes to things, only approve/deny. Such a UI should be able to start the signer in
a more secure mode by telling it that it only wants approve/deny capabilities.
* [x] It would be nice if the signer could collect new 4byte-id:s/method selectors, and have a
secondary database for those (`4byte_custom.json`). Users could then (optionally) submit their collections for
inclusion upstream.
* It should be possible to configure the signer to check if an account is indeed known to it, before
passing on to the UI. The reason it currently does not, is that it would make it possible to enumerate
accounts if it immediately returned "unknown account".
* [x] It should be possible to configure the signer to auto-allow listing (certain) accounts, instead of asking every time.
* [x] Done Upon startup, the signer should spit out some info to the caller (particularly important when executed in `stdio-ui`-mode),
invoking methods with the following info:
* [ ] Clef should take a startup param "--no-change", for UIs that do not contain the capability to perform changes to things, only approve/deny. Such a UI should be able to start the signer in a more secure mode by telling it that it only wants approve/deny capabilities.
* [x] It would be nice if Clef could collect new 4byte-id:s/method selectors, and have a secondary database for those (`4byte_custom.json`). Users could then (optionally) submit their collections for inclusion upstream.
* [ ] It should be possible to configure Clef to check if an account is indeed known to it, before passing on to the UI. The reason it currently does not, is that it would make it possible to enumerate accounts if it immediately returned "unknown account" (side channel attack).
* [x] It should be possible to configure Clef to auto-allow listing (certain) accounts, instead of asking every time.
* [x] Done Upon startup, Clef should spit out some info to the caller (particularly important when executed in `stdio-ui`-mode), invoking methods with the following info:
* [x] Version info about the signer
* [x] Address of API (http/ipc)
* [x] Address of API (HTTP/IPC)
* [ ] List of known accounts
* [ ] Have a default timeout on signing operations, so that if the user has not answered within e.g. 60 seconds, the request is rejected.
* [ ] `account_signRawTransaction`
@ -109,21 +94,16 @@ invoking methods with the following info:
* the number of unique recipients
* Geth todos
- The signer should pass the `Origin` header as call-info to the UI. As of right now, the way that info about the request is
put together is a bit of a hack into the http server. This could probably be greatly improved
- Relay: Geth should be started in `geth --external_signer localhost:8550`.
- Currently, the Geth APIs use `common.Address` in the arguments to transaction submission (e.g `to` field). This
type is 20 `bytes`, and is incapable of carrying checksum information. The signer uses `common.MixedcaseAddress`, which
retains the original input.
- The Geth api should switch to use the same type, and relay `to`-account verbatim to the external api.
- The signer should pass the `Origin` header as call-info to the UI. As of right now, the way that info about the request is put together is a bit of a hack into the HTTP server. This could probably be greatly improved.
- Relay: Geth should be started in `geth --signer localhost:8550`.
- Currently, the Geth APIs use `common.Address` in the arguments to transaction submission (e.g `to` field). This type is 20 `bytes`, and is incapable of carrying checksum information. The signer uses `common.MixedcaseAddress`, which retains the original input.
- The Geth API should switch to use the same type, and relay `to`-account verbatim to the external API.
* [x] Storage
* [x] An encrypted key-value storage should be implemented
* [x] An encrypted key-value storage should be implemented.
* See [rules.md](rules.md) for more info about this.
* Another potential thing to introduce is pairing.
* To prevent spurious requests which users just accept, implement a way to "pair" the caller with the signer (external API).
* Thus geth/mist/cpp would cryptographically handshake and afterwards the caller would be allowed to make signing requests.
* Thus Geth/cpp would cryptographically handshake and afterwards the caller would be allowed to make signing requests.
* This feature would make the addition of rules less dangerous.
* Wallets / accounts. Add API methods for wallets.
@ -132,37 +112,31 @@ put together is a bit of a hack into the http server. This could probably be gre
### External API
The signer listens to HTTP requests on `rpcaddr`:`rpcport`, with the same JSONRPC standard as Geth. The messages are
expected to be JSON [jsonrpc 2.0 standard](http://www.jsonrpc.org/specification).
Clef listens to HTTP requests on `rpcaddr`:`rpcport` (or to IPC on `ipcpath`), with the same JSON-RPC standard as Geth. The messages are expected to be [JSON-RPC 2.0 standard](https://www.jsonrpc.org/specification).
Some of these calls can require user interaction. Clients must be aware that responses
may be delayed significantly or may never be received if a user decides to ignore the confirmation request.
Some of these calls can require user interaction. Clients must be aware that responses may be delayed significantly or may never be received if a user decides to ignore the confirmation request.
The External API is **untrusted** : it does not accept credentials over this api, nor does it expect
that requests have any authority.
The External API is **untrusted**: it does not accept credentials over this API, nor does it expect that requests have any authority.
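As an illustration of how little a client needs, here is a small Go sketch that queries the account listing method over the external HTTP API, assuming Clef was started with `--rpc` on the default `http://localhost:8550`. The method name `account_list` and the raw-JSON result handling are assumptions based on this README, not an authoritative client.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Endpoint is an assumption: Clef's default external API address.
	client, err := rpc.Dial("http://localhost:8550")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The response shape has changed across versions (see the 3.0.0 changelog
	// entry), so capture the raw JSON instead of assuming a fixed structure.
	// The call may block until the user approves the listing in the Clef UI.
	var accounts json.RawMessage
	if err := client.Call(&accounts, "account_list"); err != nil {
		panic(err)
	}
	fmt.Println(string(accounts))
}
```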
### UI API
### Internal UI API
The signer has one native console-based UI, for operation without any standalone tools.
However, there is also an API to communicate with an external UI. To enable that UI,
the signer needs to be executed with the `--stdio-ui` option, which allocates the
`stdin`/`stdout` for the UI-api.
Clef has one native console-based UI, for operation without any standalone tools. However, there is also an API to communicate with an external UI. To enable that UI, the signer needs to be executed with the `--stdio-ui` option, which allocates `stdin` / `stdout` for the UI API.
An example (insecure) proof-of-concept has been implemented in `pythonsigner.py`.
The model is as follows:
* The user starts the UI app (`pythonsigner.py`).
* The UI app starts the `signer` with `--stdio-ui`, and listens to the
* The UI app starts `clef` with `--stdio-ui`, and listens to the
process output for confirmation-requests.
* The `signer` opens the external http api.
* When the `signer` receives requests, it sends a `jsonrpc` request via `stdout`.
* The UI app prompts the user accordingly, and responds to the `signer`
* The `signer` signs (or not), and responds to the original request.
* `clef` opens the external HTTP API.
* When the `signer` receives requests, it sends a JSON-RPC request via `stdout`.
* The UI app prompts the user accordingly, and responds to `clef`.
* `clef` signs (or not), and responds to the original request.
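A rough Go sketch of the UI side of this model is shown below, assuming a `clef` binary on `$PATH`. It only demonstrates the transport framing (newline-delimited JSON-RPC on `clef`'s stdout); a real UI would also answer each request over stdin.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// uiRequest is the minimal shape of an incoming JSON-RPC message from Clef;
// any fields beyond these are ignored in this sketch.
type uiRequest struct {
	ID     json.RawMessage `json:"id"`
	Method string          `json:"method"`
	Params json.RawMessage `json:"params"`
}

func main() {
	// Assumes a `clef` binary on $PATH; a real UI would also wire up stdin
	// to send approval responses back.
	cmd := exec.Command("clef", "--stdio-ui")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Every request and response is confined to a single line (see the note
	// further down), so a line scanner is enough to frame the messages.
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		var req uiRequest
		if err := json.Unmarshal(scanner.Bytes(), &req); err != nil {
			continue // not a JSON-RPC line, e.g. plain log output
		}
		fmt.Printf("confirmation request: %s\n", req.Method)
	}
	_ = cmd.Wait()
}
```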
## External API
See the [external api changelog](extapi_changelog.md) for information about changes to this API.
See the [external API changelog](extapi_changelog.md) for information about changes to this API.
### Encoding
- number: positive integers that are hex encoded
@ -187,7 +161,7 @@ None
#### Result
- address [string]: account address that is derived from the generated key
- url [string]: location of the keyfile
#### Sample call
```json
{
@ -221,9 +195,9 @@ None
#### Result
- array with account records:
- account.address [string]: account address that is derived from the generated key
- account.type [string]: type of the
- account.type [string]: type of the
- account.url [string]: location of the account
#### Sample call
```json
{
@ -272,7 +246,7 @@ Response
#### Result
- signed transaction in RLP encoded form [data]
#### Sample call
```json
{
@ -372,7 +346,7 @@ Bash example:
#### Result
- calculated signature [data]
#### Sample call
```json
{
@ -407,7 +381,7 @@ Response
#### Result
- calculated signature [data]
#### Sample call
```json
{
@ -505,7 +479,7 @@ Derive the address from the account that was used to sign data with content type
#### Result
- derived account [address]
#### Sample call
```json
{
@ -534,16 +508,16 @@ Response
#### Import account
Import a private key into the keystore. The imported key is expected to be encrypted according to the web3 keystore
format.
#### Arguments
- account [object]: key in [web3 keystore format](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) (retrieved with account_export)
- account [object]: key in [web3 keystore format](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) (retrieved with account_export)
#### Result
- imported key [object]:
- key.address [address]: address of the imported key
- key.type [string]: type of the account
- key.url [string]: key URL
#### Sample call
```json
{
@ -594,14 +568,14 @@ Response
#### Export account from keystore
Export a private key from the keystore. The exported private key is encrypted with the original passphrase. When the
key is imported later this passphrase is required.
#### Arguments
- account [address]: export private key that is associated with this account
#### Result
- exported key, see [web3 keystore format](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) for
more information
#### Sample call
```json
{
@ -643,8 +617,6 @@ Response
}
```
## UI API
These methods needs to be implemented by a UI listener.
@ -655,7 +627,7 @@ See `pythonsigner`, which can be invoked via `python3 pythonsigner.py test` to p
All methods in this API use object-based parameters, so that there can be no mix-ups of parameters: each piece of data is accessed by key.
See the [ui api changelog](intapi_changelog.md) for information about changes to this API.
See the [ui API changelog](intapi_changelog.md) for information about changes to this API.
OBS! A slight deviation from `json` standard is in place: every request and response should be confined to a single line.
Whereas the `json` specification allows for linebreaks, linebreaks __should not__ be used in this communication channel, to make
@ -909,7 +881,7 @@ TLDR; Use this method to keep track of signed transactions, instead of using the
### OnSignerStartup / `ui_onSignerStartup`
This method provides the UI with information about what API version the signer uses (both internal and external) as well as build-info and external api,
This method provides the UI with information about what API version the signer uses (both internal and external) as well as build-info and external API,
in k/v-form.
Example call:
@ -953,9 +925,9 @@ A UI should conform to the following rules.
along with the UI.
### UI Implementations
### UI Implementations
There are a couple of implementations for a UI. We'll try to keep this list up to date.
There are a couple of implementations for a UI. We'll try to keep this list up to date.
| Name | Repo | UI type| No external resources| Blocky support| Verifies permissions | Hash information | No secondary storage | Statically linked| Can modify parameters|
| ---- | ---- | -------| ---- | ---- | ---- |---- | ---- | ---- | ---- |


@ -11,7 +11,7 @@ Example:
"content_type": "text/plain",
"address": "0xDEADbEeF000000000000000000000000DeaDbeEf",
"raw_data": "GUV0aGVyZXVtIFNpZ25lZCBNZXNzYWdlOgoxMWhlbGxvIHdvcmxk",
"message": [
"messages": [
{
"name": "message",
"value": "\u0019Ethereum Signed Message:\n11hello world",
@ -133,7 +133,7 @@ This occurs _after_ successful completion of the entire signing procedure, but r
A ruleset that implements a rate limitation needs to know what transactions are sent out to the external interface. By hooking into this method, the ruleset can keep track of that count.
**OBS:** Note that if an attacker can restore your `clef` data to a previous point in time (e.g through a backup), the attacker can reset such windows, even if he/she is unable to decrypt the content.
**OBS:** Note that if an attacker can restore your `clef` data to a previous point in time (e.g through a backup), the attacker can reset such windows, even if he/she is unable to decrypt the content.
The `OnApproved` method cannot be responded to; it's purely informative.
@ -179,7 +179,7 @@ Example:
```
### ListRequest
Sent when a request has been made to list addresses. The UI is provided with the full `account`s, including local directory names. Note: this information is not passed back to the external caller, who only sees the `address`es.
Sent when a request has been made to list addresses. The UI is provided with the full `account`s, including local directory names. Note: this information is not passed back to the external caller, who only sees the `address`es.
Example:
```json


@ -1,4 +1,15 @@
### Changelog for external API
## Changelog for external API
The API uses [semantic versioning](https://semver.org/).
TL;DR: Given a version number MAJOR.MINOR.PATCH, increment the:
* MAJOR version when you make incompatible API changes,
* MINOR version when you add functionality in a backwards-compatible manner, and
* PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
### 6.0.0
@ -14,15 +25,15 @@ The addition of `contentType` makes it possible to use the method for different
* signing clique headers,
* signing plain personal messages,
* The external method `account_signTypedData` implements [EIP-712](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-712.md) and makes it possible to sign typed data.
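As a hedged illustration of the 6.0.0 `contentType` parameter, the Go sketch below calls `account_signData` with `text/plain` over the external API, mirroring the parameter order used by the checkpoint-admin tool earlier in this changeset (content type, signer address, data). The endpoint and account are placeholders, and the exact data encoding should be treated as an assumption.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Endpoint and account are assumptions; the account must exist in the
	// keystore Clef was pointed at, and the user has to approve the request.
	client, err := rpc.Dial("http://localhost:8550")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	var signature hexutil.Bytes
	err = client.Call(&signature, "account_signData",
		"text/plain", // content type introduced in 6.0.0
		"0xdeadbeef000000000000000000000000deadbeef", // signer account (placeholder)
		hexutil.Encode([]byte("hello world")))        // data to sign, hex encoded
	if err != nil {
		panic(err)
	}
	fmt.Println("signature:", signature.String())
}
```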
#### 4.0.0
* The external `account_Ecrecover`-method was removed.
* The external `account_Ecrecover`-method was removed.
* The external `account_Import`-method was removed.
#### 3.0.0
* The external `account_List`-method was changed to not expose `url`, which contained info about the local filesystem. It now returns only a list of addresses.
* The external `account_List`-method was changed to not expose `url`, which contained info about the local filesystem. It now returns only a list of addresses.
#### 2.0.0
@ -33,15 +44,3 @@ makes the `accounts_signTransaction` identical to the old `eth_signTransaction`.
#### 1.0.0
Initial release.
### Versioning
The API uses [semantic versioning](https://semver.org/).
TLDR; Given a version number MAJOR.MINOR.PATCH, increment the:
* MAJOR version when you make incompatible API changes,
* MINOR version when you add functionality in a backwards-compatible manner, and
* PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.


@ -1,24 +1,39 @@
### Changelog for internal API (ui-api)
## Changelog for internal API (ui-api)
### 6.0.0
Removed `password` from responses to operations which require them. This is for two reasons,
- Consistency between how rulesets operate and how manual processing works. A rule can `Approve` but require the actual password to be stored in the clef storage.
With this change, the same stored password can be used even if rulesets are not enabled, but storage is.
- It also removes the usability-shortcut that a UI might otherwise want to implement; remembering passwords. Since we now will not require the
password on every `Approve`, there's no need for the UI to cache it locally.
- In a future update, we'll likely add `clef_storePassword` to the internal API, so the user can store it via his UI (currently only CLI works).
The API uses [semantic versioning](https://semver.org/).
TL;DR: Given a version number MAJOR.MINOR.PATCH, increment the:
* MAJOR version when you make incompatible API changes,
* MINOR version when you add functionality in a backwards-compatible manner, and
* PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
### 7.0.0
- The `message` field was renamed to `messages` in all data signing request methods to better reflect that it's a list, not a value.
- The `storage.Put` and `storage.Get` methods in the rule execution engine were lower-cased to `storage.put` and `storage.get` to be consistent with JavaScript call conventions.
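To make the `messages` rename concrete, here is a small, purely illustrative Go type modeled on the `datatypes.md` example earlier in this changeset; the real types live in `signer/core`, so the struct names below are hypothetical stand-ins for the wire shape only.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical stand-ins for the real types in signer/core; the JSON keys
// mirror the datatypes.md example shown earlier in this changeset.
type signDataMessage struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

type signDataRequest struct {
	ContentType string            `json:"content_type"`
	Address     string            `json:"address"`
	RawData     string            `json:"raw_data"`
	Messages    []signDataMessage `json:"messages"` // renamed from "message" in 7.0.0
}

func main() {
	req := signDataRequest{
		ContentType: "text/plain",
		Address:     "0xDEADbEeF000000000000000000000000DeaDbeEf",
		RawData:     "GUV0aGVyZXVtIFNpZ25lZCBNZXNzYWdlOgoxMWhlbGxvIHdvcmxk",
		Messages: []signDataMessage{{
			Name:  "message",
			Value: "\u0019Ethereum Signed Message:\n11hello world",
		}},
	}
	out, _ := json.Marshal(req)
	fmt.Println(string(out)) // note the plural "messages" key
}
```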
### 6.0.0
Removed `password` from responses to operations which require them. This is for two reasons,
- Consistency between how rulesets operate and how manual processing works. A rule can `Approve` but require the actual password to be stored in the clef storage.
With this change, the same stored password can be used even if rulesets are not enabled, but storage is.
- It also removes the usability-shortcut that a UI might otherwise want to implement; remembering passwords. Since we now will not require the
password on every `Approve`, there's no need for the UI to cache it locally.
- In a future update, we'll likely add `clef_storePassword` to the internal API, so the user can store it via his UI (currently only CLI works).
Affected datatypes:
- `SignTxResponse`
- `SignDataResponse`
- `NewAccountResponse`
If `clef` requires a password, the `OnInputRequired` will be used to collect it.
If `clef` requires a password, the `OnInputRequired` will be used to collect it.
### 5.0.0
### 5.0.0
Changed the namespace format to adhere to the legacy ethereum format: `name_methodName`. Changes:
@ -38,7 +53,7 @@ Changed the namespace format to adhere to the legacy ethereum format: `name_meth
### 4.0.0
* Bidirectional communication implemented, so the UI can query `clef` via the stdin/stdout RPC channel. Methods implemented are:
- `clef_listWallets`
- `clef_listWallets`
- `clef_listAccounts`
- `clef_listWallets`
- `clef_deriveAccount`
@ -48,10 +63,10 @@ Changed the namespace format to adhere to the legacy ethereum format: `name_meth
- `clef_setChainId`
- `clef_export`
- `clef_import`
* The type `Account` was modified (the json-field `type` was removed), to consist of
```golang
* The type `Account` was modified (the json-field `type` was removed), to consist of
```go
type Account struct {
Address common.Address `json:"address"` // Ethereum account address derived from the key
URL URL `json:"url"` // Optional resource locator within a backend
@ -64,7 +79,7 @@ type Account struct {
* Make `ShowError`, `OnApprovedTx`, `OnSignerStartup` be json-rpc [notifications](https://www.jsonrpc.org/specification#notification):
> A Notification is a Request object without an "id" member. A Request object that is a Notification signifies the Client's lack of interest in the corresponding Response object, and as such no Response object needs to be returned to the client. The Server MUST NOT reply to a Notification, including those that are within a batch request.
>
>
> Notifications are not confirmable by definition, since they do not have a Response object to be returned. As such, the Client would not be aware of any errors (like e.g. "Invalid params","Internal error"
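A short Go sketch of what that means on the wire is given below; the `jsonrpcMessage` helper type and the use of `ui_onSignerStartup` as the method are illustrative only, the point being that omitting the `id` member is what turns a request into a notification.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonrpcMessage is a hypothetical helper type; `omitempty` on the ID pointer
// is what drops the "id" member and turns a request into a notification.
type jsonrpcMessage struct {
	Version string        `json:"jsonrpc"`
	ID      *int          `json:"id,omitempty"`
	Method  string        `json:"method"`
	Params  []interface{} `json:"params"`
}

func main() {
	id := 1
	request := jsonrpcMessage{Version: "2.0", ID: &id, Method: "ui_onSignerStartup", Params: []interface{}{}}
	notification := jsonrpcMessage{Version: "2.0", Method: "ui_onSignerStartup", Params: []interface{}{}}

	req, _ := json.Marshal(request)
	note, _ := json.Marshal(notification)
	fmt.Println(string(req))  // has an "id" member: the server must respond
	fmt.Println(string(note)) // no "id" member: the server must not reply
}
```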
### 3.1.0
@ -79,15 +94,17 @@ type Account struct {
* Add `OnInputRequired(info UserInputRequest)` to internal API. This method is used when Clef needs user input, e.g. passwords.
The following structures are used:
```golang
UserInputRequest struct {
Prompt string `json:"prompt"`
Title string `json:"title"`
IsPassword bool `json:"isPassword"`
}
UserInputResponse struct {
Text string `json:"text"`
}
```
```go
UserInputRequest struct {
Prompt string `json:"prompt"`
Title string `json:"title"`
IsPassword bool `json:"isPassword"`
}
UserInputResponse struct {
Text string `json:"text"`
}
```
### 2.0.0
@ -161,15 +178,3 @@ Example call:
#### 1.0.0
Initial release.
### Versioning
The API uses [semantic versioning](https://semver.org/).
TLDR; Given a version number MAJOR.MINOR.PATCH, increment the:
* MAJOR version when you make incompatible API changes,
* MINOR version when you add functionality in a backwards-compatible manner, and
* PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.


@ -33,6 +33,7 @@ import (
"path/filepath"
"runtime"
"strings"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/keystore"
@ -92,7 +93,7 @@ var (
chainIdFlag = cli.Int64Flag{
Name: "chainid",
Value: params.MainnetChainConfig.ChainID.Int64(),
Usage: "Chain id to use for signing (1=mainnet, 3=ropsten, 4=rinkeby, 5=Goerli)",
Usage: "Chain id to use for signing (1=mainnet, 3=Ropsten, 4=Rinkeby, 5=Goerli)",
}
rpcPortFlag = cli.IntFlag{
Name: "rpcport",
@ -115,8 +116,7 @@ var (
}
ruleFlag = cli.StringFlag{
Name: "rules",
Usage: "Enable rule-engine",
Value: "",
Usage: "Path to the rule file to auto-authorize requests with",
}
stdiouiFlag = cli.BoolFlag{
Name: "stdio-ui",
@ -159,7 +159,6 @@ incoming requests.
Whenever you make an edit to the rule file, you need to use attestation to tell
Clef that the file is 'safe' to execute.`,
}
setCredentialCommand = cli.Command{
Action: utils.MigrateFlags(setCredential),
Name: "setpw",
@ -171,8 +170,20 @@ Clef that the file is 'safe' to execute.`,
signerSecretFlag,
},
Description: `
The setpw command stores a password for a given address (keyfile). If you enter a blank passphrase, it will
remove any stored credential for that address (keyfile)
The setpw command stores a password for a given address (keyfile).
`}
delCredentialCommand = cli.Command{
Action: utils.MigrateFlags(removeCredential),
Name: "delpw",
Usage: "Remove a credential for a keystore file",
ArgsUsage: "<address>",
Flags: []cli.Flag{
logLevelFlag,
configdirFlag,
signerSecretFlag,
},
Description: `
The delpw command removes a password for a given address (keyfile).
`}
gendocCommand = cli.Command{
Action: GenDoc,
@ -193,6 +204,7 @@ func init() {
chainIdFlag,
utils.LightKDFFlag,
utils.NoUSBFlag,
utils.SmartCardDaemonPathFlag,
utils.RPCListenAddrFlag,
utils.RPCVirtualHostsFlag,
utils.IPCDisabledFlag,
@ -208,9 +220,9 @@ func init() {
advancedMode,
}
app.Action = signer
app.Commands = []cli.Command{initCommand, attestCommand, setCredentialCommand, gendocCommand}
app.Commands = []cli.Command{initCommand, attestCommand, setCredentialCommand, delCredentialCommand, gendocCommand}
}
func main() {
if err := app.Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
@ -219,11 +231,20 @@ func main() {
}
func initializeSecrets(c *cli.Context) error {
// Get past the legal message
if err := initialize(c); err != nil {
return err
}
// Ensure the master key does not yet exist, we're not willing to overwrite
configDir := c.GlobalString(configdirFlag.Name)
if err := os.Mkdir(configDir, 0700); err != nil && !os.IsExist(err) {
return err
}
location := filepath.Join(configDir, "masterseed.json")
if _, err := os.Stat(location); err == nil {
return fmt.Errorf("master key %v already exists, will not overwrite", location)
}
// Key file does not exist yet, generate a new one and encrypt it
masterSeed := make([]byte, 256)
num, err := io.ReadFull(rand.Reader, masterSeed)
if err != nil {
@ -232,18 +253,18 @@ func initializeSecrets(c *cli.Context) error {
if num != len(masterSeed) {
return fmt.Errorf("failed to read enough random")
}
n, p := keystore.StandardScryptN, keystore.StandardScryptP
if c.GlobalBool(utils.LightKDFFlag.Name) {
n, p = keystore.LightScryptN, keystore.LightScryptP
}
text := "The master seed of clef is locked with a password. Please give a password. Do not forget this password."
text := "The master seed of clef will be locked with a password.\nPlease specify a password. Do not forget this password!"
var password string
for {
password = getPassPhrase(text, true)
if err := core.ValidatePasswordFormat(password); err != nil {
fmt.Printf("invalid password: %v\n", err)
} else {
fmt.Println()
break
}
}
@ -251,28 +272,27 @@ func initializeSecrets(c *cli.Context) error {
if err != nil {
return fmt.Errorf("failed to encrypt master seed: %v", err)
}
err = os.Mkdir(configDir, 0700)
if err != nil && !os.IsExist(err) {
// Double check the master key path to ensure nothing wrote there in between
if err = os.Mkdir(configDir, 0700); err != nil && !os.IsExist(err) {
return err
}
location := filepath.Join(configDir, "masterseed.json")
if _, err := os.Stat(location); err == nil {
return fmt.Errorf("file %v already exists, will not overwrite", location)
return fmt.Errorf("master key %v already exists, will not overwrite", location)
}
err = ioutil.WriteFile(location, cipherSeed, 0400)
if err != nil {
// Write the file and print the usual warning message
if err = ioutil.WriteFile(location, cipherSeed, 0400); err != nil {
return err
}
fmt.Printf("A master seed has been generated into %s\n", location)
fmt.Printf(`
This is required to be able to store credentials, such as :
This is required to be able to store credentials, such as:
* Passwords for keystores (used by rule engine)
* Storage for javascript rules
* Hash of rule-file
* Storage for JavaScript auto-signing rules
* Hash of JavaScript rule-file
You should treat that file with utmost secrecy, and make a backup of it.
NOTE: This file does not contain your accounts. Those need to be backed up separately!
You should treat 'masterseed.json' with utmost secrecy and make a backup of it!
* The password is necessary but not enough, you need to back up the master seed too!
* The master seed does not contain your accounts, those need to be backed up separately!
`)
return nil
@ -303,14 +323,18 @@ func attestFile(ctx *cli.Context) error {
func setCredential(ctx *cli.Context) error {
if len(ctx.Args()) < 1 {
utils.Fatalf("This command requires an address to be passed as an argument.")
utils.Fatalf("This command requires an address to be passed as an argument")
}
if err := initialize(ctx); err != nil {
return err
}
address := ctx.Args().First()
password := getPassPhrase("Enter a passphrase to store with this address.", true)
addr := ctx.Args().First()
if !common.IsHexAddress(addr) {
utils.Fatalf("Invalid address specified: %s", addr)
}
address := common.HexToAddress(addr)
password := getPassPhrase("Please enter a passphrase to store for this address:", true)
fmt.Println()
stretchedKey, err := readMasterKey(ctx, nil)
if err != nil {
@ -320,10 +344,38 @@ func setCredential(ctx *cli.Context) error {
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), stretchedKey)[:10]))
pwkey := crypto.Keccak256([]byte("credentials"), stretchedKey)
// Initialize the encrypted storages
pwStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "credentials.json"), pwkey)
pwStorage.Put(address, password)
log.Info("Credential store updated", "key", address)
pwStorage.Put(address.Hex(), password)
log.Info("Credential store updated", "set", address)
return nil
}
func removeCredential(ctx *cli.Context) error {
if len(ctx.Args()) < 1 {
utils.Fatalf("This command requires an address to be passed as an argument")
}
if err := initialize(ctx); err != nil {
return err
}
addr := ctx.Args().First()
if !common.IsHexAddress(addr) {
utils.Fatalf("Invalid address specified: %s", addr)
}
address := common.HexToAddress(addr)
stretchedKey, err := readMasterKey(ctx, nil)
if err != nil {
utils.Fatalf(err.Error())
}
configDir := ctx.GlobalString(configdirFlag.Name)
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), stretchedKey)[:10]))
pwkey := crypto.Keccak256([]byte("credentials"), stretchedKey)
pwStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "credentials.json"), pwkey)
pwStorage.Del(address.Hex())
log.Info("Credential store updated", "unset", address)
return nil
}
@ -338,13 +390,17 @@ func initialize(c *cli.Context) error {
if !confirm(legalWarning) {
return fmt.Errorf("aborted by user")
}
fmt.Println()
}
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(c.Int(logLevelFlag.Name)), log.StreamHandler(logOutput, log.TerminalFormat(true))))
return nil
}
func signer(c *cli.Context) error {
// If we have some unrecognized command, bail out
if args := c.Args(); len(args) > 0 {
return fmt.Errorf("invalid command: %q", args[0])
}
if err := initialize(c); err != nil {
return err
}
@ -374,7 +430,7 @@ func signer(c *cli.Context) error {
configDir := c.GlobalString(configdirFlag.Name)
if stretchedKey, err := readMasterKey(c, ui); err != nil {
log.Info("No master seed provided, rules disabled", "error", err)
log.Warn("Failed to open master, rules disabled", "err", err)
} else {
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), stretchedKey)[:10]))
@ -388,17 +444,17 @@ func signer(c *cli.Context) error {
jsStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "jsstorage.json"), jskey)
configStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "config.json"), confkey)
//Do we have a rule-file?
// Do we have a rule-file?
if ruleFile := c.GlobalString(ruleFlag.Name); ruleFile != "" {
ruleJS, err := ioutil.ReadFile(c.GlobalString(ruleFile))
ruleJS, err := ioutil.ReadFile(ruleFile)
if err != nil {
log.Info("Could not load rulefile, rules not enabled", "file", "rulefile")
log.Warn("Could not load rules, disabling", "file", ruleFile, "err", err)
} else {
shasum := sha256.Sum256(ruleJS)
foundShaSum := hex.EncodeToString(shasum[:])
storedShasum := configStorage.Get("ruleset_sha256")
storedShasum, _ := configStorage.Get("ruleset_sha256")
if storedShasum != foundShaSum {
log.Info("Could not validate ruleset hash, rules not enabled", "got", foundShaSum, "expected", storedShasum)
log.Warn("Rule hash not attested, disabling", "hash", foundShaSum, "attested", storedShasum)
} else {
// Initialize rules
ruleEngine, err := rules.NewRuleEvaluator(ui, jsStorage)
@ -418,10 +474,11 @@ func signer(c *cli.Context) error {
lightKdf = c.GlobalBool(utils.LightKDFFlag.Name)
advanced = c.GlobalBool(advancedMode.Name)
nousb = c.GlobalBool(utils.NoUSBFlag.Name)
scpath = c.GlobalString(utils.SmartCardDaemonPathFlag.Name)
)
log.Info("Starting signer", "chainid", chainId, "keystore", ksLoc,
"light-kdf", lightKdf, "advanced", advanced)
am := core.StartClefAccountManager(ksLoc, nousb, lightKdf)
am := core.StartClefAccountManager(ksLoc, nousb, lightKdf, scpath)
apiImpl := core.NewSignerAPI(am, chainId, nousb, ui, db, advanced, pwStorage)
// Establish the bidirectional communication, by creating a new UI backend and registering
@ -449,7 +506,6 @@ func signer(c *cli.Context) error {
Version: "1.0"},
}
if c.GlobalBool(utils.RPCEnabledFlag.Name) {
vhosts := splitAndTrim(c.GlobalString(utils.RPCVirtualHostsFlag.Name))
cors := splitAndTrim(c.GlobalString(utils.RPCCORSDomainFlag.Name))
@ -466,7 +522,6 @@ func signer(c *cli.Context) error {
listener.Close()
log.Info("HTTP endpoint closed", "url", httpEndpoint)
}()
}
if !c.GlobalBool(utils.IPCDisabledFlag.Name) {
if c.IsSet(utils.IPCPathFlag.Name) {
@ -493,8 +548,8 @@ func signer(c *cli.Context) error {
}
ui.OnSignerStartup(core.StartupInfo{
Info: map[string]interface{}{
"extapi_version": core.ExternalAPIVersion,
"intapi_version": core.InternalAPIVersion,
"extapi_version": core.ExternalAPIVersion,
"extapi_http": extapiURL,
"extapi_ipc": ipcapiURL,
},
@ -589,7 +644,6 @@ func readMasterKey(ctx *cli.Context, ui core.UIClientAPI) ([]byte, error) {
if len(masterSeed) < 256 {
return nil, fmt.Errorf("master seed of insufficient length, expected >255 bytes, got %d", len(masterSeed))
}
// Create vault location
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), masterSeed)[:10]))
err = os.Mkdir(vaultLocation, 0700)
@ -617,13 +671,12 @@ func checkFile(filename string) error {
// confirm displays a text and asks for user confirmation
func confirm(text string) bool {
fmt.Printf(text)
fmt.Printf("\nEnter 'ok' to proceed:\n>")
fmt.Printf("\nEnter 'ok' to proceed:\n> ")
text, err := bufio.NewReader(os.Stdin).ReadString('\n')
if err != nil {
log.Crit("Failed to read user input", "err", err)
}
if text := strings.TrimSpace(text); text == "ok" {
return true
}
@ -638,6 +691,10 @@ func testExternalUI(api *core.SignerAPI) {
errs := make([]string, 0)
a := common.HexToAddress("0xdeadbeef000000000000000000000000deadbeef")
addErr := func(errStr string) {
log.Info("Test error", "err", errStr)
errs = append(errs, errStr)
}
queryUser := func(q string) string {
resp, err := api.UI.OnInputRequired(core.UserInputRequest{
@ -645,36 +702,39 @@ func testExternalUI(api *core.SignerAPI) {
Prompt: q,
})
if err != nil {
errs = append(errs, err.Error())
addErr(err.Error())
}
return resp.Text
}
expectResponse := func(testcase, question, expect string) {
if got := queryUser(question); got != expect {
errs = append(errs, fmt.Sprintf("%s: got %v, expected %v", testcase, got, expect))
addErr(fmt.Sprintf("%s: got %v, expected %v", testcase, got, expect))
}
}
expectApprove := func(testcase string, err error) {
if err == nil || err == accounts.ErrUnknownAccount {
return
}
errs = append(errs, fmt.Sprintf("%v: expected no error, got %v", testcase, err.Error()))
addErr(fmt.Sprintf("%v: expected no error, got %v", testcase, err.Error()))
}
expectDeny := func(testcase string, err error) {
if err == nil || err != core.ErrRequestDenied {
errs = append(errs, fmt.Sprintf("%v: expected ErrRequestDenied, got %v", testcase, err))
addErr(fmt.Sprintf("%v: expected ErrRequestDenied, got %v", testcase, err))
}
}
var delay = 1 * time.Second
// Test display of info and error
{
api.UI.ShowInfo("If you see this message, enter 'yes' to next question")
time.Sleep(delay)
expectResponse("showinfo", "Did you see the message? [yes/no]", "yes")
api.UI.ShowError("If you see this message, enter 'yes' to the next question")
time.Sleep(delay)
expectResponse("showerror", "Did you see the message? [yes/no]", "yes")
}
{ // Sign data test - clique header
api.UI.ShowInfo("Please approve the next request for signing a clique header")
time.Sleep(delay)
cliqueHeader := types.Header{
common.HexToHash("0000H45H"),
common.HexToHash("0000H45H"),
@ -700,14 +760,27 @@ func testExternalUI(api *core.SignerAPI) {
_, err = api.SignData(ctx, accounts.MimetypeClique, *addr, hexutil.Encode(cliqueRlp))
expectApprove("signdata - clique header", err)
}
{ // Sign data test - typed data
api.UI.ShowInfo("Please approve the next request for signing EIP-712 typed data")
time.Sleep(delay)
addr, _ := common.NewMixedcaseAddressFromString("0x0011223344556677889900112233445566778899")
data := `{"types":{"EIP712Domain":[{"name":"name","type":"string"},{"name":"version","type":"string"},{"name":"chainId","type":"uint256"},{"name":"verifyingContract","type":"address"}],"Person":[{"name":"name","type":"string"},{"name":"test","type":"uint8"},{"name":"wallet","type":"address"}],"Mail":[{"name":"from","type":"Person"},{"name":"to","type":"Person"},{"name":"contents","type":"string"}]},"primaryType":"Mail","domain":{"name":"Ether Mail","version":"1","chainId":"1","verifyingContract":"0xCCCcccccCCCCcCCCCCCcCcCccCcCCCcCcccccccC"},"message":{"from":{"name":"Cow","test":"3","wallet":"0xcD2a3d9F938E13CD947Ec05AbC7FE734Df8DD826"},"to":{"name":"Bob","wallet":"0xbBbBBBBbbBBBbbbBbbBbbbbBBbBbbbbBbBbbBBbB","test":"2"},"contents":"Hello, Bob!"}}`
//_, err := api.SignData(ctx, accounts.MimetypeTypedData, *addr, hexutil.Encode([]byte(data)))
var typedData core.TypedData
err := json.Unmarshal([]byte(data), &typedData)
_, err = api.SignTypedData(ctx, *addr, typedData)
expectApprove("sign 712 typed data", err)
}
{ // Sign data test - plain text
api.UI.ShowInfo("Please approve the next request for signing text")
time.Sleep(delay)
addr, _ := common.NewMixedcaseAddressFromString("0x0011223344556677889900112233445566778899")
_, err := api.SignData(ctx, accounts.MimetypeTextPlain, *addr, hexutil.Encode([]byte("hello world")))
expectApprove("signdata - text", err)
}
{ // Sign data test - plain text reject
api.UI.ShowInfo("Please deny the next request for signing text")
time.Sleep(delay)
addr, _ := common.NewMixedcaseAddressFromString("0x0011223344556677889900112233445566778899")
_, err := api.SignData(ctx, accounts.MimetypeTextPlain, *addr, hexutil.Encode([]byte("hello world")))
expectDeny("signdata - text", err)
@ -715,6 +788,7 @@ func testExternalUI(api *core.SignerAPI) {
{ // Sign transaction
api.UI.ShowInfo("Please reject next transaction")
time.Sleep(delay)
data := hexutil.Bytes([]byte{})
to := common.NewMixedcaseAddress(a)
tx := core.SendTxArgs{
@ -733,16 +807,19 @@ func testExternalUI(api *core.SignerAPI) {
}
{ // Listing
api.UI.ShowInfo("Please reject listing-request")
time.Sleep(delay)
_, err := api.List(ctx)
expectDeny("list", err)
}
{ // Import
api.UI.ShowInfo("Please reject new account-request")
time.Sleep(delay)
_, err := api.New(ctx)
expectDeny("newaccount", err)
}
{ // Metadata
api.UI.ShowInfo("Please check if you see the Origin in next listing (approve or deny)")
time.Sleep(delay)
api.List(context.WithValue(ctx, "Origin", "origin.com"))
expectResponse("metadata - origin", "Did you see origin (origin.com)? [yes/no] ", "yes")
}
@ -837,14 +914,14 @@ func GenDoc(ctx *cli.Context) {
"of the work in canonicalizing and making sense of the data, and it's up to the UI to present" +
"the user with the contents of the `message`"
sighash, msg := accounts.TextAndHash([]byte("hello world"))
message := []*core.NameValueType{{"message", msg, accounts.MimetypeTextPlain}}
messages := []*core.NameValueType{{"message", msg, accounts.MimetypeTextPlain}}
add("SignDataRequest", desc, &core.SignDataRequest{
Address: common.NewMixedcaseAddress(a),
Meta: meta,
ContentType: accounts.MimetypeTextPlain,
Rawdata: []byte(msg),
Message: message,
Messages: messages,
Hash: sighash})
}
{ // Sign plain text response
@ -955,29 +1032,3 @@ These data types are defined in the channel between clef and the UI`)
fmt.Println(elem)
}
}
/**
//Create Account
curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_new","params":["test"],"id":67}' localhost:8550
// List accounts
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_list","params":[""],"id":67}' http://localhost:8550/
// Make Transaction
// safeSend(0x12)
// 4401a6e40000000000000000000000000000000000000000000000000000000000000012
// supplied abi
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x82A2A876D39022B3019932D30Cd9c97ad5616813","gas":"0x333","gasPrice":"0x123","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x10", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"test"],"id":67}' http://localhost:8550/
// Not supplied
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x82A2A876D39022B3019932D30Cd9c97ad5616813","gas":"0x333","gasPrice":"0x123","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x10", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"}],"id":67}' http://localhost:8550/
// Sign data
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_sign","params":["0x694267f14675d7e1b9494fd8d72fefe1755710fa","bazonk gaz baz"],"id":67}' http://localhost:8550/
**/

View File

@ -42,7 +42,6 @@ class PipeTransport(ServerTransport):
self.output.write("\n")
class StdIOHandler():
def __init__(self):
pass
@ -76,7 +75,7 @@ class StdIOHandler():
:param transaction: transaction info
:param call_info: info about the call, e.g. if ABI info could not be
:param meta: metadata about the request, e.g. where the call comes from
:return:
:return:
"""
transaction = req.get('transaction')
_from = req.get('from')
@ -158,8 +157,7 @@ class StdIOHandler():
return
def main(args):
cmd = ["./clef", "--stdio-ui"]
cmd = ["clef", "--stdio-ui"]
if len(args) > 0 and args[0] == "test":
cmd.extend(["--stdio-ui-test"])
print("cmd: {}".format(" ".join(cmd)))

View File

@ -19,32 +19,30 @@ The section below deals with both of them
A ruleset file is implemented as a `js` file. Under the hood, the ruleset-engine is a `SignerUI`, implementing the same methods as the `json-rpc` methods
defined in the UI protocol. Example:
```javascript
function asBig(str){
if(str.slice(0,2) == "0x"){ return new BigNumber(str.slice(2),16)}
return new BigNumber(str)
```js
function asBig(str) {
if (str.slice(0, 2) == "0x") {
return new BigNumber(str.slice(2), 16)
}
return new BigNumber(str)
}
// Approve transactions to a certain contract if value is below a certain limit
function ApproveTx(req){
var limit = big.Newint("0xb1a2bc2ec50000")
function ApproveTx(req) {
var limit = big.Newint("0xb1a2bc2ec50000")
var value = asBig(req.transaction.value);
if(req.transaction.to.toLowerCase()=="0xae967917c465db8578ca9024c205720b1a3651a9")
&& value.lt(limit) ){
return "Approve"
}
// If we return "Reject", it will be rejected.
// By not returning anything, it will be passed to the next UI, for manual processing
if (req.transaction.to.toLowerCase() == "0xae967917c465db8578ca9024c205720b1a3651a9" && value.lt(limit)) {
return "Approve"
}
// If we return "Reject", it will be rejected.
// By not returning anything, it will be passed to the next UI, for manual processing
}
//Approve listings if request made from IPC
// Approve listings if request made from IPC
function ApproveListing(req){
if (req.metadata.scheme == "ipc"){ return "Approve"}
}
```
Whenever the external API is called (and the ruleset is enabled), the `signer` calls the UI, which is an instance of a ruleset-engine. The ruleset-engine
@ -140,97 +138,97 @@ This is now implemented (with ephemeral non-encrypted storage for now, so not ye
## Example 1: ruleset for a rate-limited window
```javascript
function big(str){
if(str.slice(0,2) == "0x"){ return new BigNumber(str.slice(2),16)}
return new BigNumber(str)
```js
function big(str) {
if (str.slice(0, 2) == "0x") {
return new BigNumber(str.slice(2), 16)
}
return new BigNumber(str)
}
// Time window: 1 week
var window = 1000* 3600*24*7;
// Time window: 1 week
var window = 1000* 3600*24*7;
// Limit : 1 ether
var limit = new BigNumber("1e18");
// Limit : 1 ether
var limit = new BigNumber("1e18");
function isLimitOk(transaction){
var value = big(transaction.value)
// Start of our window function
var windowstart = new Date().getTime() - window;
function isLimitOk(transaction) {
var value = big(transaction.value)
// Start of our window function
var windowstart = new Date().getTime() - window;
var txs = [];
var stored = storage.Get('txs');
if(stored != ""){
txs = JSON.parse(stored)
}
// First, remove all that have passed out of the time-window
var newtxs = txs.filter(function(tx){return tx.tstamp > windowstart});
console.log(txs, newtxs.length);
// Secondly, aggregate the current sum
sum = new BigNumber(0)
sum = newtxs.reduce(function(agg, tx){ return big(tx.value).plus(agg)}, sum);
console.log("ApproveTx > Sum so far", sum);
console.log("ApproveTx > Requested", value.toNumber());
// Would we exceed weekly limit ?
return sum.plus(value).lt(limit)
var txs = [];
var stored = storage.get('txs');
if (stored != "") {
txs = JSON.parse(stored)
}
function ApproveTx(r){
if (isLimitOk(r.transaction)){
return "Approve"
}
return "Nope"
}
// First, remove all that have passed out of the time-window
var newtxs = txs.filter(function(tx){return tx.tstamp > windowstart});
console.log(txs, newtxs.length);
/**
* OnApprovedTx(str) is called when a transaction has been approved and signed. The parameter
* 'response_str' contains the return value that will be sent to the external caller.
* The return value from this method is ignored - the reason for having this callback is to allow the
* ruleset to keep track of approved transactions.
*
* When implementing rate-limited rules, this callback should be used.
* If a rule responds with neither 'Approve' nor 'Reject' - the tx goes to manual processing. If the user
* then accepts the transaction, this method will be called.
*
* TLDR; Use this method to keep track of signed transactions, instead of using the data in ApproveTx.
*/
function OnApprovedTx(resp){
var value = big(resp.tx.value)
var txs = []
// Load stored transactions
var stored = storage.Get('txs');
if(stored != ""){
txs = JSON.parse(stored)
}
// Add this to the storage
txs.push({tstamp: new Date().getTime(), value: value});
storage.Put("txs", JSON.stringify(txs));
}
// Secondly, aggregate the current sum
sum = new BigNumber(0)
sum = newtxs.reduce(function(agg, tx){ return big(tx.value).plus(agg)}, sum);
console.log("ApproveTx > Sum so far", sum);
console.log("ApproveTx > Requested", value.toNumber());
// Would we exceed weekly limit ?
return sum.plus(value).lt(limit)
}
function ApproveTx(r) {
if (isLimitOk(r.transaction)) {
return "Approve"
}
return "Nope"
}
/**
* OnApprovedTx(str) is called when a transaction has been approved and signed. The parameter
* 'response_str' contains the return value that will be sent to the external caller.
* The return value from this method is ignored - the reason for having this callback is to allow the
* ruleset to keep track of approved transactions.
*
* When implementing rate-limited rules, this callback should be used.
* If a rule responds with neither 'Approve' nor 'Reject' - the tx goes to manual processing. If the user
* then accepts the transaction, this method will be called.
*
* TLDR; Use this method to keep track of signed transactions, instead of using the data in ApproveTx.
*/
function OnApprovedTx(resp) {
var value = big(resp.tx.value)
var txs = []
// Load stored transactions
var stored = storage.get('txs');
if (stored != "") {
txs = JSON.parse(stored)
}
// Add this to the storage
txs.push({tstamp: new Date().getTime(), value: value});
storage.put("txs", JSON.stringify(txs));
}
```
## Example 2: allow destination
```javascript
function ApproveTx(r){
if(r.transaction.from.toLowerCase()=="0x0000000000000000000000000000000000001337"){ return "Approve"}
if(r.transaction.from.toLowerCase()=="0x000000000000000000000000000000000000dead"){ return "Reject"}
// Otherwise goes to manual processing
```js
function ApproveTx(r) {
if (r.transaction.from.toLowerCase() == "0x0000000000000000000000000000000000001337") {
return "Approve"
}
if (r.transaction.from.toLowerCase() == "0x000000000000000000000000000000000000dead") {
return "Reject"
}
// Otherwise goes to manual processing
}
```
## Example 3: Allow listing
```javascript
function ApproveListing(){
return "Approve"
}
```
```js
function ApproveListing() {
return "Approve"
}
```

View File

@ -1,200 +1,278 @@
## Initializing the signer
## Initializing Clef
First, initialize the master seed.
First things first, Clef needs to store some data itself. Since that data might be sensitive (passwords, signing rules, accounts), Clef's entire storage is encrypted. To support encrypting data, the first step is to initialize Clef with a random master seed, itself also encrypted with a password of your choosing:
```text
#./signer init
$ clef init
WARNING!
The signer is alpha software, and not yet publically released. This software has _not_ been audited, and there
are no guarantees about the workings of this software. It may contain severe flaws. You should not use this software
unless you agree to take full responsibility for doing so, and know what you are doing.
Clef is an account management tool. It may, like any software, contain bugs.
TLDR; THIS IS NOT PRODUCTION-READY SOFTWARE!
Please take care to
- backup your keystore files,
- verify that the keystore(s) can be opened with your password.
Clef is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
Enter 'ok' to proceed:
>ok
A master seed has been generated into /home/martin/.signer/secrets.dat
> ok
This is required to be able to store credentials, such as :
The master seed of clef will be locked with a password.
Please specify a password. Do not forget this password!
Passphrase:
Repeat passphrase:
A master seed has been generated into /home/martin/.clef/masterseed.json
This is required to be able to store credentials, such as:
* Passwords for keystores (used by rule engine)
* Storage for javascript rules
* Hash of rule-file
* Storage for JavaScript auto-signing rules
* Hash of JavaScript rule-file
You should treat that file with utmost secrecy, and make a backup of it.
NOTE: This file does not contain your accounts. Those need to be backed up separately!
You should treat 'masterseed.json' with utmost secrecy and make a backup of it!
* The password is necessary but not enough, you need to back up the master seed too!
* The master seed does not contain your accounts, those need to be backed up separately!
```
(for readability purposes, we'll remove the WARNING printout in the rest of this document)
*For readability purposes, we'll remove the WARNING printout, user confirmation and the unlocking of the master seed in the rest of this document.*
## Creating rules
## Remote interactions
Now, you can create a rule-file. Note that it is not mandatory to use predefined rules, but it's really handy.
Clef is capable of managing both key-file based accounts as well as hardware wallets. To evaluate clef, we're going to point it to our Rinkeby testnet keystore and specify the Rinkeby chain ID for signing (Clef doesn't have a backing chain, so it doesn't know what network it runs on).
```javascript
function ApproveListing(){
```text
$ clef --keystore ~/.ethereum/rinkeby/keystore --chainid 4
INFO [07-01|11:00:46.385] Starting signer chainid=4 keystore=$HOME/.ethereum/rinkeby/keystore light-kdf=false advanced=false
DEBUG[07-01|11:00:46.389] FS scan times list=3.521941ms set=9.017µs diff=4.112µs
DEBUG[07-01|11:00:46.391] Ledger support enabled
DEBUG[07-01|11:00:46.391] Trezor support enabled via HID
DEBUG[07-01|11:00:46.391] Trezor support enabled via WebUSB
INFO [07-01|11:00:46.391] Audit logs configured file=audit.log
DEBUG[07-01|11:00:46.392] IPC registered namespace=account
INFO [07-01|11:00:46.392] IPC endpoint opened url=$HOME/.clef/clef.ipc
------- Signer info -------
* intapi_version : 7.0.0
* extapi_version : 6.0.0
* extapi_http : n/a
* extapi_ipc : $HOME/.clef/clef.ipc
```
By default, Clef starts up in CLI (Command Line Interface) mode. Arbitrary remote processes may *request* account interactions (e.g. sign a transaction), which the user will need to individually *confirm*.
To test this out, we can *request* Clef to list all accounts via its *External API endpoint*:
```text
echo '{"id": 1, "jsonrpc": "2.0", "method": "account_list"}' | nc -U ~/.clef/clef.ipc
```
This will prompt the user within the Clef CLI to confirm or deny the request:
```text
-------- List Account request--------------
A request has been made to list all accounts.
You can select which accounts the caller can see
[x] 0xD9C9Cd5f6779558b6e0eD4e6Acf6b1947E7fA1F3
URL: keystore://$HOME/.ethereum/rinkeby/keystore/UTC--2017-04-14T15-15-00.327614556Z--d9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3
[x] 0x086278A6C067775F71d6B2BB1856Db6E28c30418
URL: keystore://$HOME/.ethereum/rinkeby/keystore/UTC--2018-02-06T22-53-11.211657239Z--086278a6c067775f71d6b2bb1856db6e28c30418
-------------------------------------------
Request context:
NA -> NA -> NA
Additional HTTP header data, provided by the external caller:
User-Agent:
Origin:
Approve? [y/N]:
>
```
Depending on whether we approve or deny the request, the original NetCat process will get:
```text
{"jsonrpc":"2.0","id":1,"result":["0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3","0x086278a6c067775f71d6b2bb1856db6e28c30418"]}
or
{"jsonrpc":"2.0","id":1,"error":{"code":-32000,"message":"Request denied"}}
```
Apart from listing accounts, you can also *request* creating a new account; signing transactions and data; and recovering signatures. You can find the available methods in the Clef [External API Spec](https://github.com/ethereum/go-ethereum/tree/master/cmd/clef#external-api-1) and the [External API Changelog](https://github.com/ethereum/go-ethereum/blob/master/cmd/clef/extapi_changelog.md).
*Note, the number of things you can do from the External API is deliberately small, since we want to limit the power of remote calls as much as possible! Clef also has an [Internal API](https://github.com/ethereum/go-ethereum/tree/master/cmd/clef#ui-api-1) for the UI (User Interface), which is much richer and can support custom interfaces on top. But that's out of scope here.*
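For illustration only (not part of the official docs), the same `account_list` *request* can also be issued from a tiny Go program speaking JSON-RPC over the IPC socket; the sketch below assumes the default `$HOME/.clef/clef.ipc` path shown earlier:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
	"path/filepath"
)

func main() {
	// Connect to Clef's External API over its unix IPC socket.
	// The path assumes the default location used in this tutorial.
	sock := filepath.Join(os.Getenv("HOME"), ".clef", "clef.ipc")
	conn, err := net.Dial("unix", sock)
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	// Send a plain JSON-RPC request, just like the nc example above.
	fmt.Fprintln(conn, `{"id": 1, "jsonrpc": "2.0", "method": "account_list"}`)

	// Clef replies with a single JSON object terminated by a newline.
	reply, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	fmt.Print(reply)
}
```

Whether the call succeeds still depends on the user (or a rule file, see below) approving it on the Clef side.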
## Automatic rules
For most users, manually confirming every transaction is the way to go. However, there are cases when it makes sense to set up some rules which permit Clef to sign a transaction without prompting the user. One such example would be running a signer on Rinkeby or other PoA networks.
For starters, we can create a rule file that automatically permits anyone to list our available accounts without user confirmation. The rule file is a tiny JavaScript snippet that you can program however you want:
```js
function ApproveListing() {
return "Approve"
}
```
Get the `sha256` hash. If you have openssl, you can do `openssl sha256 rules.js`...
```text
#sha256sum rules.js
6c21d1737429d6d4f2e55146da0797782f3c0a0355227f19d702df377c165d72 rules.js
```
...now `attest` the file...
```text
#./signer attest 6c21d1737429d6d4f2e55146da0797782f3c0a0355227f19d702df377c165d72
Of course, Clef isn't going to just accept and run arbitrary scripts you give it; that would be dangerous if someone changed your rule file! Instead, you need to explicitly *attest* the rule file, which entails injecting its hash into Clef's secure store.
INFO [02-21|12:14:38] Ruleset attestation updated sha256=6c21d1737429d6d4f2e55146da0797782f3c0a0355227f19d702df377c165d72
```text
$ sha256sum rules.js
645b58e4f945e24d0221714ff29f6aa8e860382ced43490529db1695f5fcc71c rules.js
$ clef attest 645b58e4f945e24d0221714ff29f6aa8e860382ced43490529db1695f5fcc71c
Decrypt master seed of clef
Passphrase:
INFO [07-01|13:25:03.290] Ruleset attestation updated sha256=645b58e4f945e24d0221714ff29f6aa8e860382ced43490529db1695f5fcc71c
```
...and (this is required only for non-production versions) load a mock-up `4byte.json` by copying the file from the source to your current working directory:
```text
#cp $GOPATH/src/github.com/ethereum/go-ethereum/cmd/clef/4byte.json $PWD
```
At this point, we can start Clef with the rule file:
At this point, we can start the signer with the rule-file:
```text
#./signer --rules rules.js --rpc
$ clef --keystore ~/.ethereum/rinkeby/keystore --chainid 4 --rules rules.js
INFO [09-25|20:28:11.866] Using CLI as UI-channel
INFO [09-25|20:28:11.876] Loaded 4byte db signatures=5509 file=./4byte.json
INFO [09-25|20:28:11.877] Rule engine configured file=./rules.js
DEBUG[09-25|20:28:11.877] FS scan times list=100.781µs set=13.253µs diff=5.761µs
DEBUG[09-25|20:28:11.884] Ledger support enabled
DEBUG[09-25|20:28:11.888] Trezor support enabled
INFO [09-25|20:28:11.888] Audit logs configured file=audit.log
DEBUG[09-25|20:28:11.888] HTTP registered namespace=account
INFO [09-25|20:28:11.890] HTTP endpoint opened url=http://localhost:8550
DEBUG[09-25|20:28:11.890] IPC registered namespace=account
INFO [09-25|20:28:11.890] IPC endpoint opened url=<nil>
INFO [07-01|13:39:49.726] Rule engine configured file=rules.js
INFO [07-01|13:39:49.726] Starting signer chainid=4 keystore=$HOME/.ethereum/rinkeby/keystore light-kdf=false advanced=false
DEBUG[07-01|13:39:49.726] FS scan times list=35.15µs set=4.251µs diff=2.766µs
DEBUG[07-01|13:39:49.727] Ledger support enabled
DEBUG[07-01|13:39:49.727] Trezor support enabled via HID
DEBUG[07-01|13:39:49.727] Trezor support enabled via WebUSB
INFO [07-01|13:39:49.728] Audit logs configured file=audit.log
DEBUG[07-01|13:39:49.728] IPC registered namespace=account
INFO [07-01|13:39:49.728] IPC endpoint opened url=$HOME/.clef/clef.ipc
------- Signer info -------
* extapi_version : 2.0.0
* intapi_version : 2.0.0
* extapi_http : http://localhost:8550
* extapi_ipc : <nil>
* intapi_version : 7.0.0
* extapi_version : 6.0.0
* extapi_http : n/a
* extapi_ipc : $HOME/.clef/clef.ipc
```
Any list-requests will now be auto-approved by our rule-file.
Any account listing *request* will now be auto-approved by the rule file:
```text
$ echo '{"id": 1, "jsonrpc": "2.0", "method": "account_list"}' | nc -U ~/.clef/clef.ipc
{"jsonrpc":"2.0","id":1,"result":["0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3","0x086278a6c067775f71d6b2bb1856db6e28c30418"]}
```
## Under the hood
While doing the operations above, these files have been created:
```text
#ls -laR ~/.signer/
/home/martin/.signer/:
total 16
drwx------ 3 martin martin 4096 feb 21 12:14 .
drwxr-xr-x 71 martin martin 4096 feb 21 12:12 ..
drwx------ 2 martin martin 4096 feb 21 12:14 43f73718397aa54d1b22
-rwx------ 1 martin martin 256 feb 21 12:12 secrets.dat
$ ls -laR ~/.clef/
/home/martin/.signer/43f73718397aa54d1b22:
$HOME/.clef/:
total 24
drwxr-x--x 3 user user 4096 Jul 1 13:45 .
drwxr-xr-x 102 user user 12288 Jul 1 13:39 ..
drwx------ 2 user user 4096 Jul 1 13:25 02f90c0603f4f2f60188
-r-------- 1 user user 868 Jun 28 13:55 masterseed.json
$HOME/.clef/02f90c0603f4f2f60188:
total 12
drwx------ 2 martin martin 4096 feb 21 12:14 .
drwx------ 3 martin martin 4096 feb 21 12:14 ..
-rw------- 1 martin martin 159 feb 21 12:14 config.json
#cat /home/martin/.signer/43f73718397aa54d1b22/config.json
{"ruleset_sha256":{"iv":"6v4W4tfJxj3zZFbl","c":"6dt5RTDiTq93yh1qDEjpsat/tsKG7cb+vr3sza26IPL2fvsQ6ZoqFx++CPUa8yy6fD9Bbq41L01ehkKHTG3pOAeqTW6zc/+t0wv3AB6xPmU="}}
drwx------ 2 user user 4096 Jul 1 13:25 .
drwxr-x--x 3 user user 4096 Jul 1 13:45 ..
-rw------- 1 user user 159 Jul 1 13:25 config.json
$ cat ~/.clef/02f90c0603f4f2f60188/config.json
{"ruleset_sha256":{"iv":"SWWEtnl+R+I+wfG7","c":"I3fjmwmamxVcfGax7D0MdUOL29/rBWcs73WBILmYK0o1CrX7wSMc3y37KsmtlZUAjp0oItYq01Ow8VGUOzilG91tDHInB5YHNtm/YkufEbo="}}
```
In `~/.signer`, the `secrets.dat` file was created, containing the `master_seed`.
The `master_seed` was then used to derive a few other things:
In `$HOME/.clef`, the `masterseed.json` file was created, containing the master seed. This seed was then used to derive a few other things:
- `vault_location` : in this case `43f73718397aa54d1b22` .
- Thus, if you use a different `master_seed`, another `vault_location` will be used that does not conflict with each other.
- Example: `signer --signersecret /path/to/afile ...`
- `config.json` which is the encrypted key/value storage for configuration data, containing the key `ruleset_sha256`.
- **Vault location**: in this case `02f90c0603f4f2f60188`.
- If you use a different master seed, a different vault location will be used, so separate seeds never conflict with each other (e.g. `clef --signersecret /path/to/file`). This allows you to run multiple instances of Clef, each with its own rules (e.g. mainnet + testnet); the derivation is sketched below.
- **`config.json`**: the encrypted key/value storage for configuration data, currently only containing the key `ruleset_sha256`, the attested hash of the automatic rules to use.
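To make the vault-location derivation concrete, here is a small sketch mirroring the Clef code shown earlier in this diff: the directory name is the hex encoding of the first 10 bytes of `Keccak256("vault" || masterSeed)`. The seed value below is a dummy, purely for illustration:

```go
package main

import (
	"fmt"
	"path/filepath"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// vaultDir mirrors how Clef derives its per-seed vault directory:
// hex(Keccak256("vault" || masterSeed)[:10]) appended to the config dir.
func vaultDir(configDir string, masterSeed []byte) string {
	id := common.Bytes2Hex(crypto.Keccak256([]byte("vault"), masterSeed)[:10])
	return filepath.Join(configDir, id)
}

func main() {
	// Dummy seed for illustration only; the real seed lives encrypted
	// inside masterseed.json and is never written to disk in plain form.
	seed := []byte("not-a-real-master-seed")
	fmt.Println(vaultDir("/home/user/.clef", seed))
}
```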
## Advanced rules
## Adding credentials
In order to make more useful rules like signing transactions, the signer needs access to the passwords needed to unlock keystores.
In order to make more useful rules - like signing transactions - the signer needs access to the passwords needed to unlock keys from the keystore. You can inject an unlock password via `clef setpw`.
```text
#./signer addpw "0x694267f14675d7e1b9494fd8d72fefe1755710fa" "test_password"
$ clef setpw 0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3
INFO [02-21|13:43:21] Credential store updated key=0x694267f14675d7e1b9494fd8d72fefe1755710fa
Please enter a passphrase to store for this address:
Passphrase:
Repeat passphrase:
Decrypt master seed of clef
Passphrase:
INFO [07-01|14:05:56.031] Credential store updated key=0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3
```
## More advanced rules
Now let's update the rules to make use of credentials:
Now let's update the rules to make use of the new credentials:
```javascript
function ApproveListing(){
```js
function ApproveListing() {
return "Approve"
}
function ApproveSignData(r){
if( r.address.toLowerCase() == "0x694267f14675d7e1b9494fd8d72fefe1755710fa")
{
if(r.message.indexOf("bazonk") >= 0){
function ApproveSignData(req) {
if (req.address.toLowerCase() == "0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3") {
if (req.messages[0].value.indexOf("bazonk") >= 0) {
return "Approve"
}
return "Reject"
}
// Otherwise goes to manual processing
}
```
In this example:
* Any requests to sign data with the account `0x694...` will be
* auto-approved if the message contains with `bazonk`
* auto-rejected if it does not.
* Any other signing-requests will be passed along for manual approve/reject.
_Note: make sure that `0x694...` is an account you have access to. You can create it either via the clef or the traditional account cli tool. If the latter was chosen, make sure both clef and geth use the same keystore by specifing `--keystore path/to/your/keystore` when running clef._
- Any requests to sign data with the account `0xd9c9...` will be:
- Auto-approved if the message contains `bazonk`,
- Auto-rejected if the message does not contain `bazonk`,
- Any other requests will be passed along for manual confirmation.
*Note, to make this example work, please use your own accounts. You can create a new account either via Clef or the traditional account CLI tools. If the latter was chosen, make sure both Clef and Geth use the same keystore by specifying `--keystore path/to/your/keystore` when running Clef.*
Attest the new rule file so that Clef will accept loading it:
Attest the new file...
```text
#sha256sum rules.js
2a0cb661dacfc804b6e95d935d813fd17c0997a7170e4092ffbc34ca976acd9f rules.js
$ sha256sum rules.js
f163a1738b649259bb9b369c593fdc4c6b6f86cc87e343c3ba58faee03c2a178 rules.js
#./signer attest 2a0cb661dacfc804b6e95d935d813fd17c0997a7170e4092ffbc34ca976acd9f
INFO [02-21|14:36:30] Ruleset attestation updated sha256=2a0cb661dacfc804b6e95d935d813fd17c0997a7170e4092ffbc34ca976acd9f
$ clef attest f163a1738b649259bb9b369c593fdc4c6b6f86cc87e343c3ba58faee03c2a178
Decrypt master seed of clef
Passphrase:
INFO [07-01|14:11:28.509] Ruleset attestation updated sha256=f163a1738b649259bb9b369c593fdc4c6b6f86cc87e343c3ba58faee03c2a178
```
And start the signer:
Restart Clef with the new rules in place:
```
#./signer --rules rules.js --rpc
$ clef --keystore ~/.ethereum/rinkeby/keystore --chainid 4 --rules rules.js
INFO [09-25|21:02:16.450] Using CLI as UI-channel
INFO [09-25|21:02:16.466] Loaded 4byte db signatures=5509 file=./4byte.json
INFO [09-25|21:02:16.467] Rule engine configured file=./rules.js
DEBUG[09-25|21:02:16.468] FS scan times list=1.45262ms set=21.926µs diff=6.944µs
DEBUG[09-25|21:02:16.473] Ledger support enabled
DEBUG[09-25|21:02:16.475] Trezor support enabled
INFO [09-25|21:02:16.476] Audit logs configured file=audit.log
DEBUG[09-25|21:02:16.476] HTTP registered namespace=account
INFO [09-25|21:02:16.478] HTTP endpoint opened url=http://localhost:8550
DEBUG[09-25|21:02:16.478] IPC registered namespace=account
INFO [09-25|21:02:16.478] IPC endpoint opened url=<nil>
INFO [07-01|14:12:41.636] Rule engine configured file=rules.js
INFO [07-01|14:12:41.636] Starting signer chainid=4 keystore=$HOME/.ethereum/rinkeby/keystore light-kdf=false advanced=false
DEBUG[07-01|14:12:41.636] FS scan times list=46.722µs set=4.47µs diff=2.157µs
DEBUG[07-01|14:12:41.637] Ledger support enabled
DEBUG[07-01|14:12:41.637] Trezor support enabled via HID
DEBUG[07-01|14:12:41.638] Trezor support enabled via WebUSB
INFO [07-01|14:12:41.638] Audit logs configured file=audit.log
DEBUG[07-01|14:12:41.638] IPC registered namespace=account
INFO [07-01|14:12:41.638] IPC endpoint opened url=$HOME/.clef/clef.ipc
------- Signer info -------
* extapi_version : 2.0.0
* intapi_version : 2.0.0
* extapi_http : http://localhost:8550
* extapi_ipc : <nil>
* intapi_version : 7.0.0
* extapi_version : 6.0.0
* extapi_http : n/a
* extapi_ipc : $HOME/.clef/clef.ipc
```
And then test signing, once with `bazonk` and once without:
Then test signing, once with `bazonk` and once without:
```
#curl -H "Content-Type: application/json" -X POST --data "{\"jsonrpc\":\"2.0\",\"method\":\"account_sign\",\"params\":[\"0x694267f14675d7e1b9494fd8d72fefe1755710fa\",\"0x$(xxd -pu <<< ' bazonk baz gaz')\"],\"id\":67}" http://localhost:8550/
{"jsonrpc":"2.0","id":67,"result":"0x93e6161840c3ae1efc26dc68dedab6e8fc233bb3fefa1b4645dbf6609b93dace160572ea4ab33240256bb6d3dadb60dcd9c515d6374d3cf614ee897408d41d541c"}
#curl -H "Content-Type: application/json" -X POST --data "{\"jsonrpc\":\"2.0\",\"method\":\"account_sign\",\"params\":[\"0x694267f14675d7e1b9494fd8d72fefe1755710fa\",\"0x$(xxd -pu <<< ' bonk baz gaz')\"],\"id\":67}" http://localhost:8550/
{"jsonrpc":"2.0","id":67,"error":{"code":-32000,"message":"Request denied"}}
$ echo '{"id": 1, "jsonrpc":"2.0", "method":"account_signData", "params":["data/plain", "0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3", "0x202062617a6f6e6b2062617a2067617a0a"]}' | nc -U ~/.clef/clef.ipc
{"jsonrpc":"2.0","id":1,"result":"0x4f93e3457027f6be99b06b3392d0ebc60615ba448bb7544687ef1248dea4f5317f789002df783979c417d969836b6fda3710f5bffb296b4d51c8aaae6e2ac4831c"}
$ echo '{"id": 1, "jsonrpc":"2.0", "method":"account_signData", "params":["data/plain", "0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3", "0x2020626f6e6b2062617a2067617a0a"]}' | nc -U ~/.clef/clef.ipc
{"jsonrpc":"2.0","id":1,"error":{"code":-32000,"message":"Request denied"}}
```
Meanwhile, in the signer output:
Meanwhile, in the Clef output log you can see:
```text
INFO [02-21|14:42:41] Op approved
INFO [02-21|14:42:56] Op rejected
@ -203,9 +281,73 @@ INFO [02-21|14:42:56] Op rejected
The signer also stores all traffic over the external API in a log file. The last 4 lines show the two requests and their responses:
```text
#tail -n 4 audit.log
t=2018-02-21T14:42:41+0100 lvl=info msg=Sign api=signer type=request metadata="{\"remote\":\"127.0.0.1:49706\",\"local\":\"localhost:8550\",\"scheme\":\"HTTP/1.1\"}" addr="0x694267f14675d7e1b9494fd8d72fefe1755710fa [chksum INVALID]" data=202062617a6f6e6b2062617a2067617a0a
t=2018-02-21T14:42:42+0100 lvl=info msg=Sign api=signer type=response data=93e6161840c3ae1efc26dc68dedab6e8fc233bb3fefa1b4645dbf6609b93dace160572ea4ab33240256bb6d3dadb60dcd9c515d6374d3cf614ee897408d41d541c error=nil
t=2018-02-21T14:42:56+0100 lvl=info msg=Sign api=signer type=request metadata="{\"remote\":\"127.0.0.1:49708\",\"local\":\"localhost:8550\",\"scheme\":\"HTTP/1.1\"}" addr="0x694267f14675d7e1b9494fd8d72fefe1755710fa [chksum INVALID]" data=2020626f6e6b2062617a2067617a0a
t=2018-02-21T14:42:56+0100 lvl=info msg=Sign api=signer type=response data= error="Request denied"
```
$ tail -n 4 audit.log
t=2019-07-01T15:52:14+0300 lvl=info msg=SignData api=signer type=request metadata="{\"remote\":\"NA\",\"local\":\"NA\",\"scheme\":\"NA\",\"User-Agent\":\"\",\"Origin\":\"\"}" addr="0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3 [chksum INVALID]" data=0x202062617a6f6e6b2062617a2067617a0a content-type=data/plain
t=2019-07-01T15:52:14+0300 lvl=info msg=SignData api=signer type=response data=4f93e3457027f6be99b06b3392d0ebc60615ba448bb7544687ef1248dea4f5317f789002df783979c417d969836b6fda3710f5bffb296b4d51c8aaae6e2ac4831c error=nil
t=2019-07-01T15:52:23+0300 lvl=info msg=SignData api=signer type=request metadata="{\"remote\":\"NA\",\"local\":\"NA\",\"scheme\":\"NA\",\"User-Agent\":\"\",\"Origin\":\"\"}" addr="0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3 [chksum INVALID]" data=0x2020626f6e6b2062617a2067617a0a content-type=data/plain
t=2019-07-01T15:52:23+0300 lvl=info msg=SignData api=signer type=response data= error="Request denied"
```
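As a side note, the `data` values in these requests and audit entries are simply the hex-encoded plain text being signed. A minimal sketch to reproduce the "bazonk" payload using go-ethereum's `hexutil` package (the exact text, including leading spaces and trailing newline, is inferred from the bytes above):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	// Hex-encode the plain text to be signed; this reproduces the
	// 0x202062617a6f6e6b2062617a2067617a0a payload from the request above.
	fmt.Println(hexutil.Encode([]byte("  bazonk baz gaz\n")))
}
```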
For more details on writing automatic rules, please see the [rules spec](https://github.com/ethereum/go-ethereum/blob/master/cmd/clef/rules.md).
## Geth integration
Of course, as awesome as Clef is, it's not feasible to interact with it via JSON RPC by hand. Long term, we're hoping to convince the general Ethereum community to support Clef as a general signer (it's only 3-5 methods), thus allowing your favorite DApp, Metamask, MyCrypto, etc to request signatures directly.
Until then however, we're trying to pave the way via Geth. Geth v1.9.0 has built-in support via `--signer <API endpoint>` for using a local or remote Clef instance as an account backend!
We can try this by running Clef with our previous rules on Rinkeby (for now it's a good idea to allow auto-listing accounts, since Geth likes to retrieve them once in a while).
```text
$ clef --keystore ~/.ethereum/rinkeby/keystore --chainid 4 --rules rules.js
```
In a different window we can start Geth, list our accounts, even list our wallets to see where the accounts originate from:
```text
$ geth --rinkeby --signer=~/.clef/clef.ipc console
> eth.accounts
["0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3", "0x086278a6c067775f71d6b2bb1856db6e28c30418"]
> personal.listWallets
[{
accounts: [{
address: "0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3",
url: "extapi://$HOME/.clef/clef.ipc"
}, {
address: "0x086278a6c067775f71d6b2bb1856db6e28c30418",
url: "extapi://$HOME/.clef/clef.ipc"
}],
status: "ok [version=6.0.0]",
url: "extapi://$HOME/.clef/clef.ipc"
}]
> eth.sendTransaction({from: eth.accounts[0], to: eth.accounts[0]})
```
Lastly, when we requested a transaction to be sent, Clef prompted us in the original window to approve it:
```text
--------- Transaction request-------------
to: 0xD9C9Cd5f6779558b6e0eD4e6Acf6b1947E7fA1F3
from: 0xD9C9Cd5f6779558b6e0eD4e6Acf6b1947E7fA1F3 [chksum ok]
value: 0 wei
gas: 0x5208 (21000)
gasprice: 1000000000 wei
nonce: 0x2366 (9062)
Request context:
NA -> NA -> NA
Additional HTTP header data, provided by the external caller:
User-Agent:
Origin:
-------------------------------------------
Approve? [y/N]:
> y
```
:boom:
*Note, if you enable the external signer backend in Geth, all other account management is disabled. This is because long term we want to remove account management from Geth.*

166
cmd/devp2p/discv4cmd.go Normal file
View File

@ -0,0 +1,166 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"net"
"sort"
"strings"
"time"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/params"
"gopkg.in/urfave/cli.v1"
)
var (
discv4Command = cli.Command{
Name: "discv4",
Usage: "Node Discovery v4 tools",
Subcommands: []cli.Command{
discv4PingCommand,
discv4RequestRecordCommand,
discv4ResolveCommand,
},
}
discv4PingCommand = cli.Command{
Name: "ping",
Usage: "Sends ping to a node",
Action: discv4Ping,
}
discv4RequestRecordCommand = cli.Command{
Name: "requestenr",
Usage: "Requests a node record using EIP-868 enrRequest",
Action: discv4RequestRecord,
}
discv4ResolveCommand = cli.Command{
Name: "resolve",
Usage: "Finds a node in the DHT",
Action: discv4Resolve,
Flags: []cli.Flag{bootnodesFlag},
}
)
var bootnodesFlag = cli.StringFlag{
Name: "bootnodes",
Usage: "Comma separated nodes used for bootstrapping",
}
func discv4Ping(ctx *cli.Context) error {
n, disc, err := getNodeArgAndStartV4(ctx)
if err != nil {
return err
}
defer disc.Close()
start := time.Now()
if err := disc.Ping(n); err != nil {
return fmt.Errorf("node didn't respond: %v", err)
}
fmt.Printf("node responded to ping (RTT %v).\n", time.Since(start))
return nil
}
func discv4RequestRecord(ctx *cli.Context) error {
n, disc, err := getNodeArgAndStartV4(ctx)
if err != nil {
return err
}
defer disc.Close()
respN, err := disc.RequestENR(n)
if err != nil {
return fmt.Errorf("can't retrieve record: %v", err)
}
fmt.Println(respN.String())
return nil
}
func discv4Resolve(ctx *cli.Context) error {
n, disc, err := getNodeArgAndStartV4(ctx)
if err != nil {
return err
}
defer disc.Close()
fmt.Println(disc.Resolve(n).String())
return nil
}
func getNodeArgAndStartV4(ctx *cli.Context) (*enode.Node, *discover.UDPv4, error) {
if ctx.NArg() != 1 {
return nil, nil, fmt.Errorf("missing node as command-line argument")
}
n, err := parseNode(ctx.Args()[0])
if err != nil {
return nil, nil, err
}
var bootnodes []*enode.Node
if commandHasFlag(ctx, bootnodesFlag) {
bootnodes, err = parseBootnodes(ctx)
if err != nil {
return nil, nil, err
}
}
disc, err := startV4(bootnodes)
return n, disc, err
}
func parseBootnodes(ctx *cli.Context) ([]*enode.Node, error) {
s := params.RinkebyBootnodes
if ctx.IsSet(bootnodesFlag.Name) {
s = strings.Split(ctx.String(bootnodesFlag.Name), ",")
}
nodes := make([]*enode.Node, len(s))
var err error
for i, record := range s {
nodes[i], err = parseNode(record)
if err != nil {
return nil, fmt.Errorf("invalid bootstrap node: %v", err)
}
}
return nodes, nil
}
// commandHasFlag returns true if the current command supports the given flag.
func commandHasFlag(ctx *cli.Context, flag cli.Flag) bool {
flags := ctx.FlagNames()
sort.Strings(flags)
i := sort.SearchStrings(flags, flag.GetName())
return i != len(flags) && flags[i] == flag.GetName()
}
// startV4 starts an ephemeral discovery V4 node.
func startV4(bootnodes []*enode.Node) (*discover.UDPv4, error) {
var cfg discover.Config
cfg.Bootnodes = bootnodes
cfg.PrivateKey, _ = crypto.GenerateKey()
db, _ := enode.OpenDB("")
ln := enode.NewLocalNode(db, cfg.PrivateKey)
socket, err := net.ListenUDP("udp4", &net.UDPAddr{IP: net.IP{0, 0, 0, 0}})
if err != nil {
return nil, err
}
addr := socket.LocalAddr().(*net.UDPAddr)
ln.SetFallbackIP(net.IP{127, 0, 0, 1})
ln.SetFallbackUDP(addr.Port)
return discover.ListenUDP(socket, ln, cfg)
}

198
cmd/devp2p/enrcmd.go Normal file
View File

@ -0,0 +1,198 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"encoding/base64"
"encoding/hex"
"fmt"
"io/ioutil"
"net"
"os"
"strconv"
"strings"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/ethereum/go-ethereum/rlp"
"gopkg.in/urfave/cli.v1"
)
var enrdumpCommand = cli.Command{
Name: "enrdump",
Usage: "Pretty-prints node records",
Action: enrdump,
Flags: []cli.Flag{
cli.StringFlag{Name: "file"},
},
}
func enrdump(ctx *cli.Context) error {
var source string
if file := ctx.String("file"); file != "" {
if ctx.NArg() != 0 {
return fmt.Errorf("can't dump record from command-line argument in -file mode")
}
var b []byte
var err error
if file == "-" {
b, err = ioutil.ReadAll(os.Stdin)
} else {
b, err = ioutil.ReadFile(file)
}
if err != nil {
return err
}
source = string(b)
} else if ctx.NArg() == 1 {
source = ctx.Args()[0]
} else {
return fmt.Errorf("need record as argument")
}
r, err := parseRecord(source)
if err != nil {
return fmt.Errorf("INVALID: %v", err)
}
fmt.Print(dumpRecord(r))
return nil
}
// dumpRecord creates a human-readable description of the given node record.
func dumpRecord(r *enr.Record) string {
out := new(bytes.Buffer)
if n, err := enode.New(enode.ValidSchemes, r); err != nil {
fmt.Fprintf(out, "INVALID: %v\n", err)
} else {
fmt.Fprintf(out, "Node ID: %v\n", n.ID())
}
kv := r.AppendElements(nil)[1:]
fmt.Fprintf(out, "Record has sequence number %d and %d key/value pairs.\n", r.Seq(), len(kv)/2)
fmt.Fprint(out, dumpRecordKV(kv, 2))
return out.String()
}
func dumpRecordKV(kv []interface{}, indent int) string {
// Determine the longest key name for alignment.
var out string
var longestKey = 0
for i := 0; i < len(kv); i += 2 {
key := kv[i].(string)
if len(key) > longestKey {
longestKey = len(key)
}
}
// Print the keys, invoking formatters for known keys.
for i := 0; i < len(kv); i += 2 {
key := kv[i].(string)
val := kv[i+1].(rlp.RawValue)
pad := longestKey - len(key)
out += strings.Repeat(" ", indent) + strconv.Quote(key) + strings.Repeat(" ", pad+1)
formatter := attrFormatters[key]
if formatter == nil {
formatter = formatAttrRaw
}
fmtval, ok := formatter(val)
if ok {
out += fmtval + "\n"
} else {
out += hex.EncodeToString(val) + " (!)\n"
}
}
return out
}
// parseNode parses a node record and verifies its signature.
func parseNode(source string) (*enode.Node, error) {
if strings.HasPrefix(source, "enode://") {
return enode.ParseV4(source)
}
r, err := parseRecord(source)
if err != nil {
return nil, err
}
return enode.New(enode.ValidSchemes, r)
}
// parseRecord parses a node record from hex, base64, or raw binary input.
func parseRecord(source string) (*enr.Record, error) {
bin := []byte(source)
if d, ok := decodeRecordHex(bytes.TrimSpace(bin)); ok {
bin = d
} else if d, ok := decodeRecordBase64(bytes.TrimSpace(bin)); ok {
bin = d
}
var r enr.Record
err := rlp.DecodeBytes(bin, &r)
return &r, err
}
func decodeRecordHex(b []byte) ([]byte, bool) {
if bytes.HasPrefix(b, []byte("0x")) {
b = b[2:]
}
dec := make([]byte, hex.DecodedLen(len(b)))
_, err := hex.Decode(dec, b)
return dec, err == nil
}
func decodeRecordBase64(b []byte) ([]byte, bool) {
if bytes.HasPrefix(b, []byte("enr:")) {
b = b[4:]
}
dec := make([]byte, base64.RawURLEncoding.DecodedLen(len(b)))
n, err := base64.RawURLEncoding.Decode(dec, b)
return dec[:n], err == nil
}
// attrFormatters contains formatting functions for well-known ENR keys.
var attrFormatters = map[string]func(rlp.RawValue) (string, bool){
"id": formatAttrString,
"ip": formatAttrIP,
"ip6": formatAttrIP,
"tcp": formatAttrUint,
"tcp6": formatAttrUint,
"udp": formatAttrUint,
"udp6": formatAttrUint,
}
func formatAttrRaw(v rlp.RawValue) (string, bool) {
s := hex.EncodeToString(v)
return s, true
}
func formatAttrString(v rlp.RawValue) (string, bool) {
content, _, err := rlp.SplitString(v)
return strconv.Quote(string(content)), err == nil
}
func formatAttrIP(v rlp.RawValue) (string, bool) {
content, _, err := rlp.SplitString(v)
if err != nil || len(content) != 4 && len(content) != 6 {
return "", false
}
return net.IP(content).String(), true
}
func formatAttrUint(v rlp.RawValue) (string, bool) {
var x uint64
if err := rlp.DecodeBytes(v, &x); err != nil {
return "", false
}
return strconv.FormatUint(x, 10), true
}

68
cmd/devp2p/main.go Normal file
View File

@ -0,0 +1,68 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"os"
"path/filepath"
"github.com/ethereum/go-ethereum/internal/debug"
"github.com/ethereum/go-ethereum/params"
"gopkg.in/urfave/cli.v1"
)
var (
// Git information set by linker when building with ci.go.
gitCommit string
gitDate string
app = &cli.App{
Name: filepath.Base(os.Args[0]),
Usage: "go-ethereum devp2p tool",
Version: params.VersionWithCommit(gitCommit, gitDate),
Writer: os.Stdout,
HideVersion: true,
}
)
func init() {
// Set up the CLI app.
app.Flags = append(app.Flags, debug.Flags...)
app.Before = func(ctx *cli.Context) error {
return debug.Setup(ctx, "")
}
app.After = func(ctx *cli.Context) error {
debug.Exit()
return nil
}
app.CommandNotFound = func(ctx *cli.Context, cmd string) {
fmt.Fprintf(os.Stderr, "No such command: %s\n", cmd)
os.Exit(1)
}
// Add subcommands.
app.Commands = []cli.Command{
enrdumpCommand,
discv4Command,
}
}
func main() {
if err := app.Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}

View File

@ -17,7 +17,6 @@
package main
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
@ -121,28 +120,36 @@ func runCmd(ctx *cli.Context) error {
ret []byte
err error
)
codeFileFlag := ctx.GlobalString(CodeFileFlag.Name)
codeFlag := ctx.GlobalString(CodeFlag.Name)
// The '--code' or '--codefile' flag overrides code in state
if ctx.GlobalString(CodeFileFlag.Name) != "" {
if codeFileFlag != "" || codeFlag != "" {
var hexcode []byte
var err error
// If - is specified, it means that code comes from stdin
if ctx.GlobalString(CodeFileFlag.Name) == "-" {
//Try reading from stdin
if hexcode, err = ioutil.ReadAll(os.Stdin); err != nil {
fmt.Printf("Could not load code from stdin: %v\n", err)
os.Exit(1)
if codeFileFlag != "" {
var err error
// If - is specified, it means that code comes from stdin
if codeFileFlag == "-" {
//Try reading from stdin
if hexcode, err = ioutil.ReadAll(os.Stdin); err != nil {
fmt.Printf("Could not load code from stdin: %v\n", err)
os.Exit(1)
}
} else {
// Codefile with hex assembly
if hexcode, err = ioutil.ReadFile(codeFileFlag); err != nil {
fmt.Printf("Could not load code from file: %v\n", err)
os.Exit(1)
}
}
} else {
// Codefile with hex assembly
if hexcode, err = ioutil.ReadFile(ctx.GlobalString(CodeFileFlag.Name)); err != nil {
fmt.Printf("Could not load code from file: %v\n", err)
os.Exit(1)
}
hexcode = []byte(codeFlag)
}
code = common.Hex2Bytes(string(bytes.TrimRight(hexcode, "\n")))
} else if ctx.GlobalString(CodeFlag.Name) != "" {
code = common.Hex2Bytes(ctx.GlobalString(CodeFlag.Name))
if len(hexcode)%2 != 0 {
fmt.Printf("Invalid input length for hex data (%d)\n", len(hexcode))
os.Exit(1)
}
code = common.FromHex(string(hexcode))
} else if fn := ctx.Args().First(); len(fn) > 0 {
// EASM-file to compile
src, err := ioutil.ReadFile(fn)
@ -155,7 +162,6 @@ func runCmd(ctx *cli.Context) error {
}
code = common.Hex2Bytes(bin)
}
initialGas := ctx.GlobalUint64(GasFlag.Name)
if genesisConfig.GasLimit != 0 {
initialGas = genesisConfig.GasLimit
@ -209,7 +215,7 @@ func runCmd(ctx *cli.Context) error {
if ctx.GlobalBool(DumpFlag.Name) {
statedb.Commit(true)
statedb.IntermediateRoot(true)
fmt.Println(string(statedb.Dump()))
fmt.Println(string(statedb.Dump(false, false, true)))
}
if memProfilePath := ctx.GlobalString(MemProfileFlag.Name); memProfilePath != "" {

View File

@ -105,7 +105,7 @@ func stateTestCmd(ctx *cli.Context) error {
// Test failed, mark as so and dump any state to aid debugging
result.Pass, result.Error = false, err.Error()
if ctx.GlobalBool(DumpFlag.Name) && state != nil {
dump := state.RawDump()
dump := state.RawDump(false, false, true)
result.State = &dump
}
}

View File

@ -260,7 +260,7 @@ func newFaucet(genesis *core.Genesis, port int, enodes []*discv5.Node, network u
return nil, err
}
for _, boot := range enodes {
old, err := enode.ParseV4(boot.String())
old, err := enode.Parse(enode.ValidSchemes, boot.String())
if err == nil {
stack.Server().AddPeer(old)
}

View File

@ -162,6 +162,10 @@ Remove blockchain and state databases`,
utils.DataDirFlag,
utils.CacheFlag,
utils.SyncModeFlag,
utils.IterativeOutputFlag,
utils.ExcludeCodeFlag,
utils.ExcludeStorageFlag,
utils.IncludeIncompletesFlag,
},
Category: "BLOCKCHAIN COMMANDS",
Description: `
@ -287,7 +291,7 @@ func importChain(ctx *cli.Context) error {
fmt.Printf("Allocations: %.3f million\n", float64(mem.Mallocs)/1000000)
fmt.Printf("GC pause: %v\n\n", time.Duration(mem.PauseTotalNs))
if ctx.GlobalIsSet(utils.NoCompactionFlag.Name) {
if ctx.GlobalBool(utils.NoCompactionFlag.Name) {
return nil
}
@ -504,6 +508,7 @@ func dump(ctx *cli.Context) error {
defer stack.Close()
chain, chainDb := utils.MakeChain(ctx, stack)
defer chainDb.Close()
for _, arg := range ctx.Args() {
var block *types.Block
if hashish(arg) {
@ -520,10 +525,20 @@ func dump(ctx *cli.Context) error {
if err != nil {
utils.Fatalf("could not create new state: %v", err)
}
fmt.Printf("%s\n", state.Dump())
excludeCode := ctx.Bool(utils.ExcludeCodeFlag.Name)
excludeStorage := ctx.Bool(utils.ExcludeStorageFlag.Name)
includeMissing := ctx.Bool(utils.IncludeIncompletesFlag.Name)
if ctx.Bool(utils.IterativeOutputFlag.Name) {
state.IterativeDump(excludeCode, excludeStorage, !includeMissing, json.NewEncoder(os.Stdout))
} else {
if includeMissing {
fmt.Printf("If you want to include accounts with missing preimages, you need iterative output, since" +
" otherwise the accounts will overwrite each other in the resulting mapping.")
}
fmt.Printf("%v %s\n", includeMissing, state.Dump(excludeCode, excludeStorage, false))
}
}
}
chainDb.Close()
return nil
}

View File

@ -20,7 +20,6 @@ import (
"bufio"
"errors"
"fmt"
"math/big"
"os"
"reflect"
"unicode"
@ -30,7 +29,6 @@ import (
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/dashboard"
"github.com/ethereum/go-ethereum/eth"
"github.com/ethereum/go-ethereum/graphql"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/params"
whisper "github.com/ethereum/go-ethereum/whisper/whisperv6"
@ -125,7 +123,6 @@ func makeConfigNode(ctx *cli.Context) (*node.Node, gethConfig) {
}
// Apply flags.
utils.SetULC(ctx, &cfg.Eth)
utils.SetNodeConfig(ctx, &cfg.Node)
stack, err := node.New(&cfg.Node)
if err != nil {
@ -135,7 +132,6 @@ func makeConfigNode(ctx *cli.Context) (*node.Node, gethConfig) {
if ctx.GlobalIsSet(utils.EthStatsURLFlag.Name) {
cfg.Ethstats.URL = ctx.GlobalString(utils.EthStatsURLFlag.Name)
}
utils.SetShhConfig(ctx, stack, &cfg.Shh)
utils.SetDashboardConfig(ctx, &cfg.Dashboard)
@ -154,9 +150,6 @@ func enableWhisper(ctx *cli.Context) bool {
func makeFullNode(ctx *cli.Context) *node.Node {
stack, cfg := makeConfigNode(ctx)
if ctx.GlobalIsSet(utils.ConstantinopleOverrideFlag.Name) {
cfg.Eth.ConstantinopleOverride = new(big.Int).SetUint64(ctx.GlobalUint64(utils.ConstantinopleOverrideFlag.Name))
}
utils.RegisterEthService(stack, &cfg.Eth)
if ctx.GlobalBool(utils.DashboardEnabledFlag.Name) {
@ -177,14 +170,10 @@ func makeFullNode(ctx *cli.Context) *node.Node {
}
utils.RegisterShhService(stack, &cfg.Shh)
}
// Configure GraphQL if required
// Configure GraphQL if requested
if ctx.GlobalIsSet(utils.GraphQLEnabledFlag.Name) {
if err := graphql.RegisterGraphQLService(stack, cfg.Node.GraphQLEndpoint(), cfg.Node.GraphQLCors, cfg.Node.GraphQLVirtualHosts, cfg.Node.HTTPTimeouts); err != nil {
utils.Fatalf("Failed to register the Ethereum service: %v", err)
}
utils.RegisterGraphQLService(stack, cfg.Node.GraphQLEndpoint(), cfg.Node.GraphQLCors, cfg.Node.GraphQLVirtualHosts, cfg.Node.HTTPTimeouts)
}
// Add the Ethereum Stats daemon if requested.
if cfg.Ethstats.URL != "" {
utils.RegisterEthStatsService(stack, cfg.Ethstats.URL)


@ -21,6 +21,7 @@ import (
"fmt"
"math"
"os"
"runtime"
godebug "runtime/debug"
"sort"
"strconv"
@ -37,6 +38,7 @@ import (
"github.com/ethereum/go-ethereum/eth/downloader"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/ethereum/go-ethereum/internal/debug"
"github.com/ethereum/go-ethereum/les"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/node"
@ -88,18 +90,19 @@ var (
utils.TxPoolAccountQueueFlag,
utils.TxPoolGlobalQueueFlag,
utils.TxPoolLifetimeFlag,
utils.ULCModeConfigFlag,
utils.OnlyAnnounceModeFlag,
utils.ULCTrustedNodesFlag,
utils.ULCMinTrustedFractionFlag,
utils.SyncModeFlag,
utils.ExitWhenSyncedFlag,
utils.GCModeFlag,
utils.LightServFlag,
utils.LightBandwidthInFlag,
utils.LightBandwidthOutFlag,
utils.LightPeersFlag,
utils.LightServeFlag,
utils.LightLegacyServFlag,
utils.LightIngressFlag,
utils.LightEgressFlag,
utils.LightMaxPeersFlag,
utils.LightLegacyPeersFlag,
utils.LightKDFFlag,
utils.UltraLightServersFlag,
utils.UltraLightFractionFlag,
utils.UltraLightOnlyAnnounceFlag,
utils.WhitelistFlag,
utils.CacheFlag,
utils.CacheDatabaseFlag,
@ -137,7 +140,6 @@ var (
utils.GoerliFlag,
utils.VMEnableDebugFlag,
utils.NetworkIdFlag,
utils.ConstantinopleOverrideFlag,
utils.EthStatsURLFlag,
utils.FakePoWFlag,
utils.NoCompactionFlag,
@ -220,6 +222,8 @@ func init() {
licenseCommand,
// See config.go
dumpConfigCommand,
// See retesteth.go
retestethCommand,
}
sort.Sort(cli.CommandsByName(app.Commands))
@ -254,11 +258,15 @@ func init() {
}
// Cap the cache allowance and tune the garbage collector
var mem gosigar.Mem
if err := mem.Get(); err == nil {
allowance := int(mem.Total / 1024 / 1024 / 3)
if cache := ctx.GlobalInt(utils.CacheFlag.Name); cache > allowance {
log.Warn("Sanitizing cache to Go's GC limits", "provided", cache, "updated", allowance)
ctx.GlobalSet(utils.CacheFlag.Name, strconv.Itoa(allowance))
// Workaround until OpenBSD support lands into gosigar
// Check https://github.com/elastic/gosigar#supported-platforms
if runtime.GOOS != "openbsd" {
if err := mem.Get(); err == nil {
allowance := int(mem.Total / 1024 / 1024 / 3)
if cache := ctx.GlobalInt(utils.CacheFlag.Name); cache > allowance {
log.Warn("Sanitizing cache to Go's GC limits", "provided", cache, "updated", allowance)
ctx.GlobalSet(utils.CacheFlag.Name, strconv.Itoa(allowance))
}
}
}
// Ensure Go's GC ignores the database cache for trigger percentage
@ -321,14 +329,33 @@ func startNode(ctx *cli.Context, stack *node.Node) {
events := make(chan accounts.WalletEvent, 16)
stack.AccountManager().Subscribe(events)
go func() {
// Create a chain state reader for self-derivation
rpcClient, err := stack.Attach()
if err != nil {
utils.Fatalf("Failed to attach to self: %v", err)
}
stateReader := ethclient.NewClient(rpcClient)
// Create a client to interact with local geth node.
rpcClient, err := stack.Attach()
if err != nil {
utils.Fatalf("Failed to attach to self: %v", err)
}
ethClient := ethclient.NewClient(rpcClient)
// Set contract backend for ethereum service if local node
// is serving LES requests.
if ctx.GlobalInt(utils.LightLegacyServFlag.Name) > 0 || ctx.GlobalInt(utils.LightServeFlag.Name) > 0 {
var ethService *eth.Ethereum
if err := stack.Service(&ethService); err != nil {
utils.Fatalf("Failed to retrieve ethereum service: %v", err)
}
ethService.SetContractBackend(ethClient)
}
// Set contract backend for les service if local node is
// running as a light client.
if ctx.GlobalString(utils.SyncModeFlag.Name) == "light" {
var lesService *les.LightEthereum
if err := stack.Service(&lesService); err != nil {
utils.Fatalf("Failed to retrieve light ethereum service: %v", err)
}
lesService.SetContractBackend(ethClient)
}
go func() {
// Open any wallets already attached
for _, wallet := range stack.AccountManager().Wallets() {
if err := wallet.Open(""); err != nil {
@ -352,7 +379,7 @@ func startNode(ctx *cli.Context, stack *node.Node) {
}
derivationPaths = append(derivationPaths, accounts.DefaultBaseDerivationPath)
event.Wallet.SelfDerive(derivationPaths, stateReader)
event.Wallet.SelfDerive(derivationPaths, ethClient)
case accounts.WalletDropped:
log.Info("Old wallet dropped", "url", event.Wallet.URL())
@ -381,7 +408,6 @@ func startNode(ctx *cli.Context, stack *node.Node) {
"age", common.PrettyAge(timestamp))
stack.Stop()
}
}
}()
}

cmd/geth/retesteth.go (new file, 888 lines)

@ -0,0 +1,888 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"context"
"fmt"
"math/big"
"os"
"os/signal"
"strings"
"time"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/consensus/misc"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/trie"
cli "gopkg.in/urfave/cli.v1"
)
var (
rpcPortFlag = cli.IntFlag{
Name: "rpcport",
Usage: "HTTP-RPC server listening port",
Value: node.DefaultHTTPPort,
}
retestethCommand = cli.Command{
Action: utils.MigrateFlags(retesteth),
Name: "retesteth",
Usage: "Launches geth in retesteth mode",
ArgsUsage: "",
Flags: []cli.Flag{rpcPortFlag},
Category: "MISCELLANEOUS COMMANDS",
Description: `Launches geth in retesteth mode (no database, no network, only retesteth RPC interface)`,
}
)
type RetestethTestAPI interface {
SetChainParams(ctx context.Context, chainParams ChainParams) (bool, error)
MineBlocks(ctx context.Context, number uint64) (bool, error)
ModifyTimestamp(ctx context.Context, interval uint64) (bool, error)
ImportRawBlock(ctx context.Context, rawBlock hexutil.Bytes) (common.Hash, error)
RewindToBlock(ctx context.Context, number uint64) (bool, error)
GetLogHash(ctx context.Context, txHash common.Hash) (common.Hash, error)
}
type RetestethEthAPI interface {
SendRawTransaction(ctx context.Context, rawTx hexutil.Bytes) (common.Hash, error)
BlockNumber(ctx context.Context) (uint64, error)
GetBlockByNumber(ctx context.Context, blockNr math.HexOrDecimal64, fullTx bool) (map[string]interface{}, error)
GetBalance(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (*math.HexOrDecimal256, error)
GetCode(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (hexutil.Bytes, error)
GetTransactionCount(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (uint64, error)
}
type RetestethDebugAPI interface {
AccountRangeAt(ctx context.Context,
blockHashOrNumber *math.HexOrDecimal256, txIndex uint64,
addressHash *math.HexOrDecimal256, maxResults uint64,
) (AccountRangeResult, error)
StorageRangeAt(ctx context.Context,
blockHashOrNumber *math.HexOrDecimal256, txIndex uint64,
address common.Address,
begin *math.HexOrDecimal256, maxResults uint64,
) (StorageRangeResult, error)
}
type RetestWeb3API interface {
ClientVersion(ctx context.Context) (string, error)
}
type RetestethAPI struct {
ethDb ethdb.Database
db state.Database
chainConfig *params.ChainConfig
author common.Address
extraData []byte
genesisHash common.Hash
engine *NoRewardEngine
blockchain *core.BlockChain
blockNumber uint64
txMap map[common.Address]map[uint64]*types.Transaction // Sender -> Nonce -> Transaction
txSenders map[common.Address]struct{} // Set of transaction senders
blockInterval uint64
}
type ChainParams struct {
SealEngine string `json:"sealEngine"`
Params CParamsParams `json:"params"`
Genesis CParamsGenesis `json:"genesis"`
Accounts map[common.Address]CParamsAccount `json:"accounts"`
}
type CParamsParams struct {
AccountStartNonce math.HexOrDecimal64 `json:"accountStartNonce"`
HomesteadForkBlock *math.HexOrDecimal64 `json:"homesteadForkBlock"`
EIP150ForkBlock *math.HexOrDecimal64 `json:"EIP150ForkBlock"`
EIP158ForkBlock *math.HexOrDecimal64 `json:"EIP158ForkBlock"`
DaoHardforkBlock *math.HexOrDecimal64 `json:"daoHardforkBlock"`
ByzantiumForkBlock *math.HexOrDecimal64 `json:"byzantiumForkBlock"`
ConstantinopleForkBlock *math.HexOrDecimal64 `json:"constantinopleForkBlock"`
ConstantinopleFixForkBlock *math.HexOrDecimal64 `json:"constantinopleFixForkBlock"`
ChainID *math.HexOrDecimal256 `json:"chainID"`
MaximumExtraDataSize math.HexOrDecimal64 `json:"maximumExtraDataSize"`
TieBreakingGas bool `json:"tieBreakingGas"`
MinGasLimit math.HexOrDecimal64 `json:"minGasLimit"`
MaxGasLimit math.HexOrDecimal64 `json:"maxGasLimit"`
GasLimitBoundDivisor math.HexOrDecimal64 `json:"gasLimitBoundDivisor"`
MinimumDifficulty math.HexOrDecimal256 `json:"minimumDifficulty"`
DifficultyBoundDivisor math.HexOrDecimal256 `json:"difficultyBoundDivisor"`
DurationLimit math.HexOrDecimal256 `json:"durationLimit"`
BlockReward math.HexOrDecimal256 `json:"blockReward"`
NetworkID math.HexOrDecimal256 `json:"networkID"`
}
type CParamsGenesis struct {
Nonce math.HexOrDecimal64 `json:"nonce"`
Difficulty *math.HexOrDecimal256 `json:"difficulty"`
MixHash *math.HexOrDecimal256 `json:"mixHash"`
Author common.Address `json:"author"`
Timestamp math.HexOrDecimal64 `json:"timestamp"`
ParentHash common.Hash `json:"parentHash"`
ExtraData hexutil.Bytes `json:"extraData"`
GasLimit math.HexOrDecimal64 `json:"gasLimit"`
}
type CParamsAccount struct {
Balance *math.HexOrDecimal256 `json:"balance"`
Precompiled *CPAccountPrecompiled `json:"precompiled"`
Code hexutil.Bytes `json:"code"`
Storage map[string]string `json:"storage"`
Nonce *math.HexOrDecimal64 `json:"nonce"`
}
type CPAccountPrecompiled struct {
Name string `json:"name"`
StartingBlock math.HexOrDecimal64 `json:"startingBlock"`
Linear *CPAPrecompiledLinear `json:"linear"`
}
type CPAPrecompiledLinear struct {
Base uint64 `json:"base"`
Word uint64 `json:"word"`
}
type AccountRangeResult struct {
AddressMap map[common.Hash]common.Address `json:"addressMap"`
NextKey common.Hash `json:"nextKey"`
}
type StorageRangeResult struct {
Complete bool `json:"complete"`
Storage map[common.Hash]SRItem `json:"storage"`
}
type SRItem struct {
Key string `json:"key"`
Value string `json:"value"`
}
type NoRewardEngine struct {
inner consensus.Engine
rewardsOn bool
}
func (e *NoRewardEngine) Author(header *types.Header) (common.Address, error) {
return e.inner.Author(header)
}
func (e *NoRewardEngine) VerifyHeader(chain consensus.ChainReader, header *types.Header, seal bool) error {
return e.inner.VerifyHeader(chain, header, seal)
}
func (e *NoRewardEngine) VerifyHeaders(chain consensus.ChainReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error) {
return e.inner.VerifyHeaders(chain, headers, seals)
}
func (e *NoRewardEngine) VerifyUncles(chain consensus.ChainReader, block *types.Block) error {
return e.inner.VerifyUncles(chain, block)
}
func (e *NoRewardEngine) VerifySeal(chain consensus.ChainReader, header *types.Header) error {
return e.inner.VerifySeal(chain, header)
}
func (e *NoRewardEngine) Prepare(chain consensus.ChainReader, header *types.Header) error {
return e.inner.Prepare(chain, header)
}
func (e *NoRewardEngine) accumulateRewards(config *params.ChainConfig, state *state.StateDB, header *types.Header, uncles []*types.Header) {
// Simply touch miner and uncle coinbase accounts
reward := big.NewInt(0)
for _, uncle := range uncles {
state.AddBalance(uncle.Coinbase, reward)
}
state.AddBalance(header.Coinbase, reward)
}
func (e *NoRewardEngine) Finalize(chain consensus.ChainReader, header *types.Header, statedb *state.StateDB, txs []*types.Transaction,
uncles []*types.Header) {
if e.rewardsOn {
e.inner.Finalize(chain, header, statedb, txs, uncles)
} else {
e.accumulateRewards(chain.Config(), statedb, header, uncles)
header.Root = statedb.IntermediateRoot(chain.Config().IsEIP158(header.Number))
}
}
func (e *NoRewardEngine) FinalizeAndAssemble(chain consensus.ChainReader, header *types.Header, statedb *state.StateDB, txs []*types.Transaction,
uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error) {
if e.rewardsOn {
return e.inner.FinalizeAndAssemble(chain, header, statedb, txs, uncles, receipts)
} else {
e.accumulateRewards(chain.Config(), statedb, header, uncles)
header.Root = statedb.IntermediateRoot(chain.Config().IsEIP158(header.Number))
// Header seems complete, assemble into a block and return
return types.NewBlock(header, txs, uncles, receipts), nil
}
}
func (e *NoRewardEngine) Seal(chain consensus.ChainReader, block *types.Block, results chan<- *types.Block, stop <-chan struct{}) error {
return e.inner.Seal(chain, block, results, stop)
}
func (e *NoRewardEngine) SealHash(header *types.Header) common.Hash {
return e.inner.SealHash(header)
}
func (e *NoRewardEngine) CalcDifficulty(chain consensus.ChainReader, time uint64, parent *types.Header) *big.Int {
return e.inner.CalcDifficulty(chain, time, parent)
}
func (e *NoRewardEngine) APIs(chain consensus.ChainReader) []rpc.API {
return e.inner.APIs(chain)
}
func (e *NoRewardEngine) Close() error {
return e.inner.Close()
}
func (api *RetestethAPI) SetChainParams(ctx context.Context, chainParams ChainParams) (bool, error) {
// Clean up
if api.blockchain != nil {
api.blockchain.Stop()
}
if api.engine != nil {
api.engine.Close()
}
if api.ethDb != nil {
api.ethDb.Close()
}
ethDb := rawdb.NewMemoryDatabase()
accounts := make(core.GenesisAlloc)
for address, account := range chainParams.Accounts {
balance := big.NewInt(0)
if account.Balance != nil {
balance.Set((*big.Int)(account.Balance))
}
var nonce uint64
if account.Nonce != nil {
nonce = uint64(*account.Nonce)
}
if account.Precompiled == nil || account.Balance != nil {
storage := make(map[common.Hash]common.Hash)
for k, v := range account.Storage {
storage[common.HexToHash(k)] = common.HexToHash(v)
}
accounts[address] = core.GenesisAccount{
Balance: balance,
Code: account.Code,
Nonce: nonce,
Storage: storage,
}
}
}
chainId := big.NewInt(1)
if chainParams.Params.ChainID != nil {
chainId.Set((*big.Int)(chainParams.Params.ChainID))
}
var (
homesteadBlock *big.Int
daoForkBlock *big.Int
eip150Block *big.Int
eip155Block *big.Int
eip158Block *big.Int
byzantiumBlock *big.Int
constantinopleBlock *big.Int
petersburgBlock *big.Int
)
if chainParams.Params.HomesteadForkBlock != nil {
homesteadBlock = big.NewInt(int64(*chainParams.Params.HomesteadForkBlock))
}
if chainParams.Params.DaoHardforkBlock != nil {
daoForkBlock = big.NewInt(int64(*chainParams.Params.DaoHardforkBlock))
}
if chainParams.Params.EIP150ForkBlock != nil {
eip150Block = big.NewInt(int64(*chainParams.Params.EIP150ForkBlock))
}
if chainParams.Params.EIP158ForkBlock != nil {
eip158Block = big.NewInt(int64(*chainParams.Params.EIP158ForkBlock))
eip155Block = eip158Block
}
if chainParams.Params.ByzantiumForkBlock != nil {
byzantiumBlock = big.NewInt(int64(*chainParams.Params.ByzantiumForkBlock))
}
if chainParams.Params.ConstantinopleForkBlock != nil {
constantinopleBlock = big.NewInt(int64(*chainParams.Params.ConstantinopleForkBlock))
}
if chainParams.Params.ConstantinopleFixForkBlock != nil {
petersburgBlock = big.NewInt(int64(*chainParams.Params.ConstantinopleFixForkBlock))
}
if constantinopleBlock != nil && petersburgBlock == nil {
petersburgBlock = big.NewInt(100000000000)
}
genesis := &core.Genesis{
Config: &params.ChainConfig{
ChainID: chainId,
HomesteadBlock: homesteadBlock,
DAOForkBlock: daoForkBlock,
DAOForkSupport: false,
EIP150Block: eip150Block,
EIP155Block: eip155Block,
EIP158Block: eip158Block,
ByzantiumBlock: byzantiumBlock,
ConstantinopleBlock: constantinopleBlock,
PetersburgBlock: petersburgBlock,
},
Nonce: uint64(chainParams.Genesis.Nonce),
Timestamp: uint64(chainParams.Genesis.Timestamp),
ExtraData: chainParams.Genesis.ExtraData,
GasLimit: uint64(chainParams.Genesis.GasLimit),
Difficulty: big.NewInt(0).Set((*big.Int)(chainParams.Genesis.Difficulty)),
Mixhash: common.BigToHash((*big.Int)(chainParams.Genesis.MixHash)),
Coinbase: chainParams.Genesis.Author,
ParentHash: chainParams.Genesis.ParentHash,
Alloc: accounts,
}
chainConfig, genesisHash, err := core.SetupGenesisBlock(ethDb, genesis)
if err != nil {
return false, err
}
fmt.Printf("Chain config: %v\n", chainConfig)
var inner consensus.Engine
switch chainParams.SealEngine {
case "NoProof", "NoReward":
inner = ethash.NewFaker()
case "Ethash":
inner = ethash.New(ethash.Config{
CacheDir: "ethash",
CachesInMem: 2,
CachesOnDisk: 3,
DatasetsInMem: 1,
DatasetsOnDisk: 2,
}, nil, false)
default:
return false, fmt.Errorf("unrecognised seal engine: %s", chainParams.SealEngine)
}
engine := &NoRewardEngine{inner: inner, rewardsOn: chainParams.SealEngine != "NoReward"}
blockchain, err := core.NewBlockChain(ethDb, nil, chainConfig, engine, vm.Config{}, nil)
if err != nil {
return false, err
}
api.chainConfig = chainConfig
api.genesisHash = genesisHash
api.author = chainParams.Genesis.Author
api.extraData = chainParams.Genesis.ExtraData
api.ethDb = ethDb
api.engine = engine
api.blockchain = blockchain
api.db = state.NewDatabase(api.ethDb)
api.blockNumber = 0
api.txMap = make(map[common.Address]map[uint64]*types.Transaction)
api.txSenders = make(map[common.Address]struct{})
api.blockInterval = 0
return true, nil
}
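For orientation (not part of the commit), roughly what a test_setChainParams payload could look like once decoded into the ChainParams structs above; the field names follow the json tags defined earlier and every value is illustrative only:

// Hypothetical request body for test_setChainParams; values are examples, not canon.
const exampleChainParams = `{
  "sealEngine": "NoReward",
  "params": {
    "homesteadForkBlock": "0x00",
    "EIP150ForkBlock": "0x00",
    "EIP158ForkBlock": "0x00",
    "byzantiumForkBlock": "0x00",
    "constantinopleForkBlock": "0x00",
    "chainID": "0x01",
    "minimumDifficulty": "0x20000",
    "blockReward": "0x00",
    "networkID": "0x01"
  },
  "genesis": {
    "nonce": "0x0000000000000042",
    "difficulty": "0x20000",
    "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "author": "0x0000000000000000000000000000000000000000",
    "timestamp": "0x00",
    "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "extraData": "0x",
    "gasLimit": "0x989680"
  },
  "accounts": {
    "0x095e7baea6a6c7c4c2dfeb977efac326af552d87": { "balance": "0x0de0b6b3a7640000" }
  }
}`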
func (api *RetestethAPI) SendRawTransaction(ctx context.Context, rawTx hexutil.Bytes) (common.Hash, error) {
tx := new(types.Transaction)
if err := rlp.DecodeBytes(rawTx, tx); err != nil {
// Returning nil here is deliberate: some tests send transactions whose gasLimit overflows uint64
return common.Hash{}, nil
}
signer := types.MakeSigner(api.chainConfig, big.NewInt(int64(api.blockNumber)))
sender, err := types.Sender(signer, tx)
if err != nil {
return common.Hash{}, err
}
if nonceMap, ok := api.txMap[sender]; ok {
nonceMap[tx.Nonce()] = tx
} else {
nonceMap = make(map[uint64]*types.Transaction)
nonceMap[tx.Nonce()] = tx
api.txMap[sender] = nonceMap
}
api.txSenders[sender] = struct{}{}
return tx.Hash(), nil
}
func (api *RetestethAPI) MineBlocks(ctx context.Context, number uint64) (bool, error) {
for i := 0; i < int(number); i++ {
if err := api.mineBlock(); err != nil {
return false, err
}
}
fmt.Printf("Mined %d blocks\n", number)
return true, nil
}
func (api *RetestethAPI) mineBlock() error {
parentHash := rawdb.ReadCanonicalHash(api.ethDb, api.blockNumber)
parent := rawdb.ReadBlock(api.ethDb, parentHash, api.blockNumber)
var timestamp uint64
if api.blockInterval == 0 {
timestamp = uint64(time.Now().Unix())
} else {
timestamp = parent.Time() + api.blockInterval
}
gasLimit := core.CalcGasLimit(parent, 9223372036854775807, 9223372036854775807)
header := &types.Header{
ParentHash: parent.Hash(),
Number: big.NewInt(int64(api.blockNumber + 1)),
GasLimit: gasLimit,
Extra: api.extraData,
Time: timestamp,
}
header.Coinbase = api.author
if api.engine != nil {
api.engine.Prepare(api.blockchain, header)
}
// If we care about the DAO hard-fork, check whether to override the extra-data or not
if daoBlock := api.chainConfig.DAOForkBlock; daoBlock != nil {
// Check whether the block is among the fork extra-override range
limit := new(big.Int).Add(daoBlock, params.DAOForkExtraRange)
if header.Number.Cmp(daoBlock) >= 0 && header.Number.Cmp(limit) < 0 {
// Depending whether we support or oppose the fork, override differently
if api.chainConfig.DAOForkSupport {
header.Extra = common.CopyBytes(params.DAOForkBlockExtra)
} else if bytes.Equal(header.Extra, params.DAOForkBlockExtra) {
header.Extra = []byte{} // If miner opposes, don't let it use the reserved extra-data
}
}
}
statedb, err := api.blockchain.StateAt(parent.Root())
if err != nil {
return err
}
if api.chainConfig.DAOForkSupport && api.chainConfig.DAOForkBlock != nil && api.chainConfig.DAOForkBlock.Cmp(header.Number) == 0 {
misc.ApplyDAOHardFork(statedb)
}
gasPool := new(core.GasPool).AddGas(header.GasLimit)
txCount := 0
var txs []*types.Transaction
var receipts []*types.Receipt
var coalescedLogs []*types.Log
var blockFull = gasPool.Gas() < params.TxGas
for address := range api.txSenders {
if blockFull {
break
}
m := api.txMap[address]
for nonce := statedb.GetNonce(address); ; nonce++ {
if tx, ok := m[nonce]; ok {
// Try to apply transactions to the state
statedb.Prepare(tx.Hash(), common.Hash{}, txCount)
snap := statedb.Snapshot()
receipt, _, err := core.ApplyTransaction(
api.chainConfig,
api.blockchain,
&api.author,
gasPool,
statedb,
header, tx, &header.GasUsed, *api.blockchain.GetVMConfig(),
)
if err != nil {
statedb.RevertToSnapshot(snap)
break
}
txs = append(txs, tx)
receipts = append(receipts, receipt)
coalescedLogs = append(coalescedLogs, receipt.Logs...)
delete(m, nonce)
if len(m) == 0 {
// Last tx for the sender
delete(api.txMap, address)
delete(api.txSenders, address)
}
txCount++
if gasPool.Gas() < params.TxGas {
blockFull = true
break
}
} else {
break // Gap in the nonces
}
}
}
block, err := api.engine.FinalizeAndAssemble(api.blockchain, header, statedb, txs, []*types.Header{}, receipts)
return api.importBlock(block)
}
func (api *RetestethAPI) importBlock(block *types.Block) error {
if _, err := api.blockchain.InsertChain([]*types.Block{block}); err != nil {
return err
}
api.blockNumber = block.NumberU64()
fmt.Printf("Imported block %d\n", block.NumberU64())
return nil
}
func (api *RetestethAPI) ModifyTimestamp(ctx context.Context, interval uint64) (bool, error) {
api.blockInterval = interval
return true, nil
}
func (api *RetestethAPI) ImportRawBlock(ctx context.Context, rawBlock hexutil.Bytes) (common.Hash, error) {
block := new(types.Block)
if err := rlp.DecodeBytes(rawBlock, block); err != nil {
return common.Hash{}, err
}
fmt.Printf("Importing block %d with parent hash: %x, genesisHash: %x\n", block.NumberU64(), block.ParentHash(), api.genesisHash)
if err := api.importBlock(block); err != nil {
return common.Hash{}, err
}
return block.Hash(), nil
}
func (api *RetestethAPI) RewindToBlock(ctx context.Context, newHead uint64) (bool, error) {
if err := api.blockchain.SetHead(newHead); err != nil {
return false, err
}
api.blockNumber = newHead
return true, nil
}
var emptyListHash common.Hash = common.HexToHash("0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347")
func (api *RetestethAPI) GetLogHash(ctx context.Context, txHash common.Hash) (common.Hash, error) {
receipt, _, _, _ := rawdb.ReadReceipt(api.ethDb, txHash, api.chainConfig)
if receipt == nil {
return emptyListHash, nil
} else {
if logListRlp, err := rlp.EncodeToBytes(receipt.Logs); err != nil {
return common.Hash{}, err
} else {
return common.BytesToHash(crypto.Keccak256(logListRlp)), nil
}
}
}
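As a quick aside (not in the commit), emptyListHash above is simply the Keccak-256 hash of the RLP encoding of an empty list, which is what GetLogHash falls back to when no receipt is found; the check below relies on this file's existing imports:

// keccak256(rlp([])) == 0x1dcc4de8...d49347, i.e. emptyListHash.
enc, _ := rlp.EncodeToBytes([]*types.Log{}) // encodes to the single byte 0xc0
fmt.Printf("%x\n", crypto.Keccak256(enc))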
func (api *RetestethAPI) BlockNumber(ctx context.Context) (uint64, error) {
//fmt.Printf("BlockNumber, response: %d\n", api.blockNumber)
return api.blockNumber, nil
}
func (api *RetestethAPI) GetBlockByNumber(ctx context.Context, blockNr math.HexOrDecimal64, fullTx bool) (map[string]interface{}, error) {
block := api.blockchain.GetBlockByNumber(uint64(blockNr))
if block != nil {
response, err := RPCMarshalBlock(block, true, fullTx)
if err != nil {
return nil, err
}
response["author"] = response["miner"]
response["totalDifficulty"] = (*hexutil.Big)(api.blockchain.GetTd(block.Hash(), uint64(blockNr)))
return response, err
}
return nil, fmt.Errorf("block %d not found", blockNr)
}
func (api *RetestethAPI) AccountRangeAt(ctx context.Context,
blockHashOrNumber *math.HexOrDecimal256, txIndex uint64,
addressHash *math.HexOrDecimal256, maxResults uint64,
) (AccountRangeResult, error) {
var (
header *types.Header
block *types.Block
)
if (*big.Int)(blockHashOrNumber).Cmp(big.NewInt(math.MaxInt64)) > 0 {
blockHash := common.BigToHash((*big.Int)(blockHashOrNumber))
header = api.blockchain.GetHeaderByHash(blockHash)
block = api.blockchain.GetBlockByHash(blockHash)
//fmt.Printf("Account range: %x, txIndex %d, start: %x, maxResults: %d\n", blockHash, txIndex, common.BigToHash((*big.Int)(addressHash)), maxResults)
} else {
blockNumber := (*big.Int)(blockHashOrNumber).Uint64()
header = api.blockchain.GetHeaderByNumber(blockNumber)
block = api.blockchain.GetBlockByNumber(blockNumber)
//fmt.Printf("Account range: %d, txIndex %d, start: %x, maxResults: %d\n", blockNumber, txIndex, common.BigToHash((*big.Int)(addressHash)), maxResults)
}
parentHeader := api.blockchain.GetHeaderByHash(header.ParentHash)
var root common.Hash
var statedb *state.StateDB
var err error
if parentHeader == nil || int(txIndex) >= len(block.Transactions()) {
root = header.Root
statedb, err = api.blockchain.StateAt(root)
if err != nil {
return AccountRangeResult{}, err
}
} else {
root = parentHeader.Root
statedb, err = api.blockchain.StateAt(root)
if err != nil {
return AccountRangeResult{}, err
}
// Recompute transactions up to the target index.
signer := types.MakeSigner(api.blockchain.Config(), block.Number())
for idx, tx := range block.Transactions() {
// Assemble the transaction call message so the transaction can be replayed
msg, _ := tx.AsMessage(signer)
context := core.NewEVMContext(msg, block.Header(), api.blockchain, nil)
// Replay the transaction on top of the current state, advancing it towards the requested index
vmenv := vm.NewEVM(context, statedb, api.blockchain.Config(), vm.Config{})
if _, _, _, err := core.ApplyMessage(vmenv, msg, new(core.GasPool).AddGas(tx.Gas())); err != nil {
return AccountRangeResult{}, fmt.Errorf("transaction %#x failed: %v", tx.Hash(), err)
}
// Ensure any modifications are committed to the state
// Only delete empty objects if EIP158/161 (a.k.a Spurious Dragon) is in effect
root = statedb.IntermediateRoot(vmenv.ChainConfig().IsEIP158(block.Number()))
if idx == int(txIndex) {
// This is to make sure root can be opened by OpenTrie
root, err = statedb.Commit(api.chainConfig.IsEIP158(block.Number()))
if err != nil {
return AccountRangeResult{}, err
}
break
}
}
}
accountTrie, err := statedb.Database().OpenTrie(root)
if err != nil {
return AccountRangeResult{}, err
}
it := trie.NewIterator(accountTrie.NodeIterator(common.BigToHash((*big.Int)(addressHash)).Bytes()))
result := AccountRangeResult{AddressMap: make(map[common.Hash]common.Address)}
for i := 0; /*i < int(maxResults) && */ it.Next(); i++ {
if preimage := accountTrie.GetKey(it.Key); preimage != nil {
result.AddressMap[common.BytesToHash(it.Key)] = common.BytesToAddress(preimage)
//fmt.Printf("%x: %x\n", it.Key, preimage)
} else {
//fmt.Printf("could not find preimage for %x\n", it.Key)
}
}
//fmt.Printf("Number of entries returned: %d\n", len(result.AddressMap))
// Add the 'next key' so clients can continue downloading.
if it.Next() {
next := common.BytesToHash(it.Key)
result.NextKey = next
}
return result, nil
}
func (api *RetestethAPI) GetBalance(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (*math.HexOrDecimal256, error) {
//fmt.Printf("GetBalance %x, block %d\n", address, blockNr)
header := api.blockchain.GetHeaderByNumber(uint64(blockNr))
statedb, err := api.blockchain.StateAt(header.Root)
if err != nil {
return nil, err
}
return (*math.HexOrDecimal256)(statedb.GetBalance(address)), nil
}
func (api *RetestethAPI) GetCode(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (hexutil.Bytes, error) {
header := api.blockchain.GetHeaderByNumber(uint64(blockNr))
statedb, err := api.blockchain.StateAt(header.Root)
if err != nil {
return nil, err
}
return statedb.GetCode(address), nil
}
func (api *RetestethAPI) GetTransactionCount(ctx context.Context, address common.Address, blockNr math.HexOrDecimal64) (uint64, error) {
header := api.blockchain.GetHeaderByNumber(uint64(blockNr))
statedb, err := api.blockchain.StateAt(header.Root)
if err != nil {
return 0, err
}
return statedb.GetNonce(address), nil
}
func (api *RetestethAPI) StorageRangeAt(ctx context.Context,
blockHashOrNumber *math.HexOrDecimal256, txIndex uint64,
address common.Address,
begin *math.HexOrDecimal256, maxResults uint64,
) (StorageRangeResult, error) {
var (
header *types.Header
block *types.Block
)
if (*big.Int)(blockHashOrNumber).Cmp(big.NewInt(math.MaxInt64)) > 0 {
blockHash := common.BigToHash((*big.Int)(blockHashOrNumber))
header = api.blockchain.GetHeaderByHash(blockHash)
block = api.blockchain.GetBlockByHash(blockHash)
//fmt.Printf("Storage range: %x, txIndex %d, addr: %x, start: %x, maxResults: %d\n",
// blockHash, txIndex, address, common.BigToHash((*big.Int)(begin)), maxResults)
} else {
blockNumber := (*big.Int)(blockHashOrNumber).Uint64()
header = api.blockchain.GetHeaderByNumber(blockNumber)
block = api.blockchain.GetBlockByNumber(blockNumber)
//fmt.Printf("Storage range: %d, txIndex %d, addr: %x, start: %x, maxResults: %d\n",
// blockNumber, txIndex, address, common.BigToHash((*big.Int)(begin)), maxResults)
}
parentHeader := api.blockchain.GetHeaderByHash(header.ParentHash)
var root common.Hash
var statedb *state.StateDB
var err error
if parentHeader == nil || int(txIndex) >= len(block.Transactions()) {
root = header.Root
statedb, err = api.blockchain.StateAt(root)
if err != nil {
return StorageRangeResult{}, err
}
} else {
root = parentHeader.Root
statedb, err = api.blockchain.StateAt(root)
if err != nil {
return StorageRangeResult{}, err
}
// Recompute transactions up to the target index.
signer := types.MakeSigner(api.blockchain.Config(), block.Number())
for idx, tx := range block.Transactions() {
// Assemble the transaction call message so the transaction can be replayed
msg, _ := tx.AsMessage(signer)
context := core.NewEVMContext(msg, block.Header(), api.blockchain, nil)
// Replay the transaction on top of the current state, advancing it towards the requested index
vmenv := vm.NewEVM(context, statedb, api.blockchain.Config(), vm.Config{})
if _, _, _, err := core.ApplyMessage(vmenv, msg, new(core.GasPool).AddGas(tx.Gas())); err != nil {
return StorageRangeResult{}, fmt.Errorf("transaction %#x failed: %v", tx.Hash(), err)
}
// Ensure any modifications are committed to the state
// Only delete empty objects if EIP158/161 (a.k.a Spurious Dragon) is in effect
root = statedb.IntermediateRoot(vmenv.ChainConfig().IsEIP158(block.Number()))
if idx == int(txIndex) {
// This is to make sure root can be opened by OpenTrie
root, err = statedb.Commit(vmenv.ChainConfig().IsEIP158(block.Number()))
if err != nil {
return StorageRangeResult{}, err
}
}
}
}
storageTrie := statedb.StorageTrie(address)
it := trie.NewIterator(storageTrie.NodeIterator(common.BigToHash((*big.Int)(begin)).Bytes()))
result := StorageRangeResult{Storage: make(map[common.Hash]SRItem)}
for i := 0; /*i < int(maxResults) && */ it.Next(); i++ {
if preimage := storageTrie.GetKey(it.Key); preimage != nil {
key := (*math.HexOrDecimal256)(big.NewInt(0).SetBytes(preimage))
v, _, err := rlp.SplitString(it.Value)
if err != nil {
return StorageRangeResult{}, err
}
value := (*math.HexOrDecimal256)(big.NewInt(0).SetBytes(v))
ks, _ := key.MarshalText()
vs, _ := value.MarshalText()
if len(ks)%2 != 0 {
ks = append(append(append([]byte{}, ks[:2]...), byte('0')), ks[2:]...)
}
if len(vs)%2 != 0 {
vs = append(append(append([]byte{}, vs[:2]...), byte('0')), vs[2:]...)
}
result.Storage[common.BytesToHash(it.Key)] = SRItem{
Key: string(ks),
Value: string(vs),
}
//fmt.Printf("Key: %s, Value: %s\n", ks, vs)
} else {
//fmt.Printf("Did not find preimage for %x\n", it.Key)
}
}
if it.Next() {
result.Complete = false
} else {
result.Complete = true
}
return result, nil
}
func (api *RetestethAPI) ClientVersion(ctx context.Context) (string, error) {
return "Geth-" + params.VersionWithCommit(gitCommit, gitDate), nil
}
// splitAndTrim splits input separated by a comma
// and trims excessive white space from the substrings.
func splitAndTrim(input string) []string {
result := strings.Split(input, ",")
for i, r := range result {
result[i] = strings.TrimSpace(r)
}
return result
}
func retesteth(ctx *cli.Context) error {
log.Info("Welcome to retesteth!")
// register signer API with server
var (
extapiURL = "n/a"
)
apiImpl := &RetestethAPI{}
var testApi RetestethTestAPI = apiImpl
var ethApi RetestethEthAPI = apiImpl
var debugApi RetestethDebugAPI = apiImpl
var web3Api RetestWeb3API = apiImpl
rpcAPI := []rpc.API{
{
Namespace: "test",
Public: true,
Service: testApi,
Version: "1.0",
},
{
Namespace: "eth",
Public: true,
Service: ethApi,
Version: "1.0",
},
{
Namespace: "debug",
Public: true,
Service: debugApi,
Version: "1.0",
},
{
Namespace: "web3",
Public: true,
Service: web3Api,
Version: "1.0",
},
}
vhosts := splitAndTrim(ctx.GlobalString(utils.RPCVirtualHostsFlag.Name))
cors := splitAndTrim(ctx.GlobalString(utils.RPCCORSDomainFlag.Name))
// start http server
httpEndpoint := fmt.Sprintf("%s:%d", ctx.GlobalString(utils.RPCListenAddrFlag.Name), ctx.Int(rpcPortFlag.Name))
listener, _, err := rpc.StartHTTPEndpoint(httpEndpoint, rpcAPI, []string{"test", "eth", "debug", "web3"}, cors, vhosts, rpc.DefaultHTTPTimeouts)
if err != nil {
utils.Fatalf("Could not start RPC api: %v", err)
}
extapiURL = fmt.Sprintf("http://%s", httpEndpoint)
log.Info("HTTP endpoint opened", "url", extapiURL)
defer func() {
listener.Close()
log.Info("HTTP endpoint closed", "url", httpEndpoint)
}()
abortChan := make(chan os.Signal)
signal.Notify(abortChan, os.Interrupt)
sig := <-abortChan
log.Info("Exiting...", "signal", sig)
return nil
}
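For orientation only (not part of the change), a minimal sketch of driving the endpoint from Go once retesteth has configured a chain via test_setChainParams; the URL and port are assumptions matching the rpcport default:

package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Assumes `geth retesteth --rpcport 8545` is running and a chain has already been set up.
	client, err := rpc.Dial("http://127.0.0.1:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// Mine two empty blocks through the test namespace...
	var mined bool
	if err := client.Call(&mined, "test_mineBlocks", 2); err != nil {
		log.Fatal(err)
	}
	// ...and read the head block number back through the eth namespace.
	var head uint64
	if err := client.Call(&head, "eth_blockNumber"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("mined:", mined, "head:", head)
}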


@ -0,0 +1,148 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
)
// RPCTransaction represents a transaction that will serialize to the RPC representation
type RPCTransaction struct {
BlockHash common.Hash `json:"blockHash"`
BlockNumber *hexutil.Big `json:"blockNumber"`
From common.Address `json:"from"`
Gas hexutil.Uint64 `json:"gas"`
GasPrice *hexutil.Big `json:"gasPrice"`
Hash common.Hash `json:"hash"`
Input hexutil.Bytes `json:"input"`
Nonce hexutil.Uint64 `json:"nonce"`
To *common.Address `json:"to"`
TransactionIndex hexutil.Uint `json:"transactionIndex"`
Value *hexutil.Big `json:"value"`
V *hexutil.Big `json:"v"`
R *hexutil.Big `json:"r"`
S *hexutil.Big `json:"s"`
}
// newRPCTransaction returns a transaction that will serialize to the RPC
// representation, with the given location metadata set (if available).
func newRPCTransaction(tx *types.Transaction, blockHash common.Hash, blockNumber uint64, index uint64) *RPCTransaction {
var signer types.Signer = types.FrontierSigner{}
if tx.Protected() {
signer = types.NewEIP155Signer(tx.ChainId())
}
from, _ := types.Sender(signer, tx)
v, r, s := tx.RawSignatureValues()
result := &RPCTransaction{
From: from,
Gas: hexutil.Uint64(tx.Gas()),
GasPrice: (*hexutil.Big)(tx.GasPrice()),
Hash: tx.Hash(),
Input: hexutil.Bytes(tx.Data()),
Nonce: hexutil.Uint64(tx.Nonce()),
To: tx.To(),
Value: (*hexutil.Big)(tx.Value()),
V: (*hexutil.Big)(v),
R: (*hexutil.Big)(r),
S: (*hexutil.Big)(s),
}
if blockHash != (common.Hash{}) {
result.BlockHash = blockHash
result.BlockNumber = (*hexutil.Big)(new(big.Int).SetUint64(blockNumber))
result.TransactionIndex = hexutil.Uint(index)
}
return result
}
// newRPCTransactionFromBlockIndex returns a transaction that will serialize to the RPC representation.
func newRPCTransactionFromBlockIndex(b *types.Block, index uint64) *RPCTransaction {
txs := b.Transactions()
if index >= uint64(len(txs)) {
return nil
}
return newRPCTransaction(txs[index], b.Hash(), b.NumberU64(), index)
}
// newRPCTransactionFromBlockHash returns a transaction that will serialize to the RPC representation.
func newRPCTransactionFromBlockHash(b *types.Block, hash common.Hash) *RPCTransaction {
for idx, tx := range b.Transactions() {
if tx.Hash() == hash {
return newRPCTransactionFromBlockIndex(b, uint64(idx))
}
}
return nil
}
// RPCMarshalBlock converts the given block to the RPC output which depends on fullTx. If inclTx is true transactions are
// returned. When fullTx is true the returned block contains full transaction details, otherwise it will only contain
// transaction hashes.
func RPCMarshalBlock(b *types.Block, inclTx bool, fullTx bool) (map[string]interface{}, error) {
head := b.Header() // copies the header once
fields := map[string]interface{}{
"number": (*hexutil.Big)(head.Number),
"hash": b.Hash(),
"parentHash": head.ParentHash,
"nonce": head.Nonce,
"mixHash": head.MixDigest,
"sha3Uncles": head.UncleHash,
"logsBloom": head.Bloom,
"stateRoot": head.Root,
"miner": head.Coinbase,
"difficulty": (*hexutil.Big)(head.Difficulty),
"extraData": hexutil.Bytes(head.Extra),
"size": hexutil.Uint64(b.Size()),
"gasLimit": hexutil.Uint64(head.GasLimit),
"gasUsed": hexutil.Uint64(head.GasUsed),
"timestamp": hexutil.Uint64(head.Time),
"transactionsRoot": head.TxHash,
"receiptsRoot": head.ReceiptHash,
}
if inclTx {
formatTx := func(tx *types.Transaction) (interface{}, error) {
return tx.Hash(), nil
}
if fullTx {
formatTx = func(tx *types.Transaction) (interface{}, error) {
return newRPCTransactionFromBlockHash(b, tx.Hash()), nil
}
}
txs := b.Transactions()
transactions := make([]interface{}, len(txs))
var err error
for i, tx := range txs {
if transactions[i], err = formatTx(tx); err != nil {
return nil, err
}
}
fields["transactions"] = transactions
}
uncles := b.Uncles()
uncleHashes := make([]common.Hash, len(uncles))
for i, uncle := range uncles {
uncleHashes[i] = uncle.Hash()
}
fields["uncles"] = uncleHashes
return fields, nil
}
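A rough usage sketch for the helper above (hypothetical and illustrative only, written as if it lived in the same package): marshalling a header-only block into the map shape that GetBlockByNumber serves.

// Hypothetical; relies on this package's imports plus encoding/json and fmt.
func exampleMarshalBlock() {
	header := &types.Header{Number: big.NewInt(1), Difficulty: big.NewInt(131072)}
	block := types.NewBlockWithHeader(header)
	fields, err := RPCMarshalBlock(block, true, false)
	if err != nil {
		panic(err)
	}
	out, _ := json.MarshalIndent(fields, "", "  ")
	fmt.Println(string(out))
}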


@ -82,14 +82,22 @@ var AppHelpFlagGroups = []flagGroup{
utils.GCModeFlag,
utils.EthStatsURLFlag,
utils.IdentityFlag,
utils.LightServFlag,
utils.LightBandwidthInFlag,
utils.LightBandwidthOutFlag,
utils.LightPeersFlag,
utils.LightKDFFlag,
utils.WhitelistFlag,
},
},
{
Name: "LIGHT CLIENT",
Flags: []cli.Flag{
utils.LightServeFlag,
utils.LightIngressFlag,
utils.LightEgressFlag,
utils.LightMaxPeersFlag,
utils.UltraLightServersFlag,
utils.UltraLightFractionFlag,
utils.UltraLightOnlyAnnounceFlag,
},
},
{
Name: "DEVELOPER CHAIN",
Flags: []cli.Flag{
@ -156,20 +164,25 @@ var AppHelpFlagGroups = []flagGroup{
{
Name: "API AND CONSOLE",
Flags: []cli.Flag{
utils.IPCDisabledFlag,
utils.IPCPathFlag,
utils.RPCEnabledFlag,
utils.RPCListenAddrFlag,
utils.RPCPortFlag,
utils.RPCApiFlag,
utils.RPCGlobalGasCap,
utils.RPCCORSDomainFlag,
utils.RPCVirtualHostsFlag,
utils.WSEnabledFlag,
utils.WSListenAddrFlag,
utils.WSPortFlag,
utils.WSApiFlag,
utils.WSAllowedOriginsFlag,
utils.IPCDisabledFlag,
utils.IPCPathFlag,
utils.RPCCORSDomainFlag,
utils.RPCVirtualHostsFlag,
utils.GraphQLEnabledFlag,
utils.GraphQLListenAddrFlag,
utils.GraphQLPortFlag,
utils.GraphQLCORSDomainFlag,
utils.GraphQLVirtualHostsFlag,
utils.JSpathFlag,
utils.ExecFlag,
utils.PreloadJSFlag,
@ -240,6 +253,8 @@ var AppHelpFlagGroups = []flagGroup{
{
Name: "DEPRECATED",
Flags: []cli.Flag{
utils.LightLegacyServFlag,
utils.LightLegacyPeersFlag,
utils.MinerLegacyThreadsFlag,
utils.MinerLegacyGasTargetFlag,
utils.MinerLegacyGasPriceFlag,


@ -30,108 +30,86 @@ import (
// explorerDockerfile is the Dockerfile required to run a block explorer.
var explorerDockerfile = `
FROM puppeth/explorer:latest
ADD ethstats.json /ethstats.json
ADD chain.json /chain.json
FROM puppeth/blockscout:latest
ADD genesis.json /genesis.json
RUN \
echo '(cd ../eth-net-intelligence-api && pm2 start /ethstats.json)' > explorer.sh && \
echo '(cd ../etherchain-light && npm start &)' >> explorer.sh && \
echo 'exec /parity/parity --chain=/chain.json --port={{.NodePort}} --tracing=on --fat-db=on --pruning=archive' >> explorer.sh
echo 'geth --cache 512 init /genesis.json' > explorer.sh && \
echo $'geth --networkid {{.NetworkID}} --syncmode "full" --gcmode "archive" --port {{.EthPort}} --bootnodes {{.Bootnodes}} --ethstats \'{{.Ethstats}}\' --cache=512 --rpc --rpcapi "net,web3,eth,shh,debug" --rpccorsdomain "*" --rpcvhosts "*" --ws --wsorigins "*" --exitwhensynced' >> explorer.sh && \
echo $'exec geth --networkid {{.NetworkID}} --syncmode "full" --gcmode "archive" --port {{.EthPort}} --bootnodes {{.Bootnodes}} --ethstats \'{{.Ethstats}}\' --cache=512 --rpc --rpcapi "net,web3,eth,shh,debug" --rpccorsdomain "*" --rpcvhosts "*" --ws --wsorigins "*" &' >> explorer.sh && \
echo '/usr/local/bin/docker-entrypoint.sh postgres &' >> explorer.sh && \
echo 'sleep 5' >> explorer.sh && \
echo 'mix do ecto.drop --force, ecto.create, ecto.migrate' >> explorer.sh && \
echo 'mix phx.server' >> explorer.sh
ENTRYPOINT ["/bin/sh", "explorer.sh"]
`
// explorerEthstats is the configuration file for the ethstats javascript client.
var explorerEthstats = `[
{
"name" : "node-app",
"script" : "app.js",
"log_date_format" : "YYYY-MM-DD HH:mm Z",
"merge_logs" : false,
"watch" : false,
"max_restarts" : 10,
"exec_interpreter" : "node",
"exec_mode" : "fork_mode",
"env":
{
"NODE_ENV" : "production",
"RPC_HOST" : "localhost",
"RPC_PORT" : "8545",
"LISTENING_PORT" : "{{.Port}}",
"INSTANCE_NAME" : "{{.Name}}",
"CONTACT_DETAILS" : "",
"WS_SERVER" : "{{.Host}}",
"WS_SECRET" : "{{.Secret}}",
"VERBOSITY" : 2
}
}
]`
// explorerComposefile is the docker-compose.yml file required to deploy and
// maintain a block explorer.
var explorerComposefile = `
version: '2'
services:
explorer:
build: .
image: {{.Network}}/explorer
container_name: {{.Network}}_explorer_1
ports:
- "{{.NodePort}}:{{.NodePort}}"
- "{{.NodePort}}:{{.NodePort}}/udp"{{if not .VHost}}
- "{{.WebPort}}:3000"{{end}}
volumes:
- {{.Datadir}}:/root/.local/share/io.parity.ethereum
environment:
- NODE_PORT={{.NodePort}}/tcp
- STATS={{.Ethstats}}{{if .VHost}}
- VIRTUAL_HOST={{.VHost}}
- VIRTUAL_PORT=3000{{end}}
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "10"
restart: always
explorer:
build: .
image: {{.Network}}/explorer
container_name: {{.Network}}_explorer_1
ports:
- "{{.EthPort}}:{{.EthPort}}"
- "{{.EthPort}}:{{.EthPort}}/udp"{{if not .VHost}}
- "{{.WebPort}}:4000"{{end}}
environment:
- ETH_PORT={{.EthPort}}
- ETH_NAME={{.EthName}}
- BLOCK_TRANSFORMER={{.Transformer}}{{if .VHost}}
- VIRTUAL_HOST={{.VHost}}
- VIRTUAL_PORT=4000{{end}}
volumes:
- {{.Datadir}}:/opt/app/.ethereum
- {{.DBDir}}:/var/lib/postgresql/data
logging:
driver: "json-file"
options:
max-size: "1m"
max-file: "10"
restart: always
`
// deployExplorer deploys a new block explorer container to a remote machine via
// SSH, docker and docker-compose. If an instance with the specified network name
// already exists there, it will be overwritten!
func deployExplorer(client *sshClient, network string, chainspec []byte, config *explorerInfos, nocache bool) ([]byte, error) {
func deployExplorer(client *sshClient, network string, bootnodes []string, config *explorerInfos, nocache bool, isClique bool) ([]byte, error) {
// Generate the content to upload to the server
workdir := fmt.Sprintf("%d", rand.Int63())
files := make(map[string][]byte)
dockerfile := new(bytes.Buffer)
template.Must(template.New("").Parse(explorerDockerfile)).Execute(dockerfile, map[string]interface{}{
"NodePort": config.nodePort,
"NetworkID": config.node.network,
"Bootnodes": strings.Join(bootnodes, ","),
"Ethstats": config.node.ethstats,
"EthPort": config.node.port,
})
files[filepath.Join(workdir, "Dockerfile")] = dockerfile.Bytes()
ethstats := new(bytes.Buffer)
template.Must(template.New("").Parse(explorerEthstats)).Execute(ethstats, map[string]interface{}{
"Port": config.nodePort,
"Name": config.ethstats[:strings.Index(config.ethstats, ":")],
"Secret": config.ethstats[strings.Index(config.ethstats, ":")+1 : strings.Index(config.ethstats, "@")],
"Host": config.ethstats[strings.Index(config.ethstats, "@")+1:],
})
files[filepath.Join(workdir, "ethstats.json")] = ethstats.Bytes()
transformer := "base"
if isClique {
transformer = "clique"
}
composefile := new(bytes.Buffer)
template.Must(template.New("").Parse(explorerComposefile)).Execute(composefile, map[string]interface{}{
"Datadir": config.datadir,
"Network": network,
"NodePort": config.nodePort,
"VHost": config.webHost,
"WebPort": config.webPort,
"Ethstats": config.ethstats[:strings.Index(config.ethstats, ":")],
"Network": network,
"VHost": config.host,
"Ethstats": config.node.ethstats,
"Datadir": config.node.datadir,
"DBDir": config.dbdir,
"EthPort": config.node.port,
"EthName": config.node.ethstats[:strings.Index(config.node.ethstats, ":")],
"WebPort": config.port,
"Transformer": transformer,
})
files[filepath.Join(workdir, "docker-compose.yaml")] = composefile.Bytes()
files[filepath.Join(workdir, "chain.json")] = chainspec
files[filepath.Join(workdir, "genesis.json")] = config.node.genesis
// Upload the deployment files to the remote server (and clean up afterwards)
if out, err := client.Upload(files); err != nil {
@ -149,22 +127,20 @@ func deployExplorer(client *sshClient, network string, chainspec []byte, config
// explorerInfos is returned from a block explorer status check to allow reporting
// various configuration parameters.
type explorerInfos struct {
datadir string
ethstats string
nodePort int
webHost string
webPort int
node *nodeInfos
dbdir string
host string
port int
}
// Report converts the typed struct into a plain string->string map, containing
// most - but not all - fields for reporting to the user.
func (info *explorerInfos) Report() map[string]string {
report := map[string]string{
"Data directory": info.datadir,
"Node listener port ": strconv.Itoa(info.nodePort),
"Ethstats username": info.ethstats,
"Website address ": info.webHost,
"Website listener port ": strconv.Itoa(info.webPort),
"Website address ": info.host,
"Website listener port ": strconv.Itoa(info.port),
"Ethereum listener port ": strconv.Itoa(info.node.port),
"Ethstats username": info.node.ethstats,
}
return report
}
@ -172,7 +148,7 @@ func (info *explorerInfos) Report() map[string]string {
// checkExplorer does a health-check against a block explorer server to verify
// whether it's running, and if yes, whether it's responsive.
func checkExplorer(client *sshClient, network string) (*explorerInfos, error) {
// Inspect a possible block explorer container on the host
// Inspect a possible explorer container on the host
infos, err := inspectContainer(client, fmt.Sprintf("%s_explorer_1", network))
if err != nil {
return nil, err
@ -181,13 +157,13 @@ func checkExplorer(client *sshClient, network string) (*explorerInfos, error) {
return nil, ErrServiceOffline
}
// Resolve the port from the host, or the reverse proxy
webPort := infos.portmap["3000/tcp"]
if webPort == 0 {
port := infos.portmap["4000/tcp"]
if port == 0 {
if proxy, _ := checkNginx(client, network); proxy != nil {
webPort = proxy.port
port = proxy.port
}
}
if webPort == 0 {
if port == 0 {
return nil, ErrNotExposed
}
// Resolve the host from the reverse-proxy and the config values
@ -196,17 +172,23 @@ func checkExplorer(client *sshClient, network string) (*explorerInfos, error) {
host = client.server
}
// Run a sanity check to see if the devp2p is reachable
nodePort := infos.portmap[infos.envvars["NODE_PORT"]]
if err = checkPort(client.server, nodePort); err != nil {
log.Warn(fmt.Sprintf("Explorer devp2p port seems unreachable"), "server", client.server, "port", nodePort, "err", err)
p2pPort := infos.portmap[infos.envvars["ETH_PORT"]+"/tcp"]
if err = checkPort(host, p2pPort); err != nil {
log.Warn("Explorer node seems unreachable", "server", host, "port", p2pPort, "err", err)
}
if err = checkPort(host, port); err != nil {
log.Warn("Explorer service seems unreachable", "server", host, "port", port, "err", err)
}
// Assemble and return the useful infos
stats := &explorerInfos{
datadir: infos.volumes["/root/.local/share/io.parity.ethereum"],
nodePort: nodePort,
webHost: host,
webPort: webPort,
ethstats: infos.envvars["STATS"],
node: &nodeInfos{
datadir: infos.volumes["/opt/app/.ethereum"],
port: infos.portmap[infos.envvars["ETH_PORT"]+"/tcp"],
ethstats: infos.envvars["ETH_NAME"],
},
dbdir: infos.volumes["/var/lib/postgresql/data"],
host: host,
port: port,
}
return stats, nil
}


@ -77,7 +77,7 @@ func (w *wizard) deployDashboard() {
}
case "explorer":
if infos, err := checkExplorer(client, w.network); err == nil {
port = infos.webPort
port = infos.port
}
case "wallet":
if infos, err := checkWallet(client, w.network); err == nil {


@ -35,10 +35,6 @@ func (w *wizard) deployExplorer() {
log.Error("No ethstats server configured")
return
}
if w.conf.Genesis.Config.Ethash == nil {
log.Error("Only ethash network supported")
return
}
// Select the server to interact with
server := w.selectServer()
if server == "" {
@ -50,50 +46,57 @@ func (w *wizard) deployExplorer() {
infos, err := checkExplorer(client, w.network)
if err != nil {
infos = &explorerInfos{
nodePort: 30303, webPort: 80, webHost: client.server,
node: &nodeInfos{port: 30303},
port: 80,
host: client.server,
}
}
existed := err == nil
chainspec, err := newParityChainSpec(w.network, w.conf.Genesis, w.conf.bootnodes)
if err != nil {
log.Error("Failed to create chain spec for explorer", "err", err)
return
}
chain, _ := json.MarshalIndent(chainspec, "", " ")
infos.node.genesis, _ = json.MarshalIndent(w.conf.Genesis, "", " ")
infos.node.network = w.conf.Genesis.Config.ChainID.Int64()
// Figure out which port to listen on
fmt.Println()
fmt.Printf("Which port should the explorer listen on? (default = %d)\n", infos.webPort)
infos.webPort = w.readDefaultInt(infos.webPort)
fmt.Printf("Which port should the explorer listen on? (default = %d)\n", infos.port)
infos.port = w.readDefaultInt(infos.port)
// Figure out which virtual host the explorer should be deployed on
if infos.webHost, err = w.ensureVirtualHost(client, infos.webPort, infos.webHost); err != nil {
if infos.host, err = w.ensureVirtualHost(client, infos.port, infos.host); err != nil {
log.Error("Failed to decide on explorer host", "err", err)
return
}
// Figure out where the user wants to store the persistent data
fmt.Println()
if infos.datadir == "" {
fmt.Printf("Where should data be stored on the remote machine?\n")
infos.datadir = w.readString()
if infos.node.datadir == "" {
fmt.Printf("Where should node data be stored on the remote machine?\n")
infos.node.datadir = w.readString()
} else {
fmt.Printf("Where should data be stored on the remote machine? (default = %s)\n", infos.datadir)
infos.datadir = w.readDefaultString(infos.datadir)
fmt.Printf("Where should node data be stored on the remote machine? (default = %s)\n", infos.node.datadir)
infos.node.datadir = w.readDefaultString(infos.node.datadir)
}
// Figure out where the user wants to store the persistent data for the backend database
fmt.Println()
if infos.dbdir == "" {
fmt.Printf("Where should postgres data be stored on the remote machine?\n")
infos.dbdir = w.readString()
} else {
fmt.Printf("Where should postgres data be stored on the remote machine? (default = %s)\n", infos.dbdir)
infos.dbdir = w.readDefaultString(infos.dbdir)
}
// Figure out which port to listen on
fmt.Println()
fmt.Printf("Which TCP/UDP port should the archive node listen on? (default = %d)\n", infos.nodePort)
infos.nodePort = w.readDefaultInt(infos.nodePort)
fmt.Printf("Which TCP/UDP port should the archive node listen on? (default = %d)\n", infos.node.port)
infos.node.port = w.readDefaultInt(infos.node.port)
// Set a proper name to report on the stats page
fmt.Println()
if infos.ethstats == "" {
if infos.node.ethstats == "" {
fmt.Printf("What should the explorer be called on the stats page?\n")
infos.ethstats = w.readString() + ":" + w.conf.ethstats
infos.node.ethstats = w.readString() + ":" + w.conf.ethstats
} else {
fmt.Printf("What should the explorer be called on the stats page? (default = %s)\n", infos.ethstats)
infos.ethstats = w.readDefaultString(infos.ethstats) + ":" + w.conf.ethstats
fmt.Printf("What should the explorer be called on the stats page? (default = %s)\n", infos.node.ethstats)
infos.node.ethstats = w.readDefaultString(infos.node.ethstats) + ":" + w.conf.ethstats
}
// Try to deploy the explorer on the host
nocache := false
@ -102,7 +105,7 @@ func (w *wizard) deployExplorer() {
fmt.Printf("Should the explorer be built from scratch (y/n)? (default = no)\n")
nocache = w.readDefaultYesNo(false)
}
if out, err := deployExplorer(client, w.network, chain, infos, nocache); err != nil {
if out, err := deployExplorer(client, w.network, w.conf.bootnodes, infos, nocache, w.conf.Genesis.Config.Clique != nil); err != nil {
log.Error("Failed to deploy explorer container", "err", err)
if len(out) > 0 {
fmt.Printf("%s\n", out)


@ -44,12 +44,13 @@ func (w *wizard) makeGenesis() {
Difficulty: big.NewInt(524288),
Alloc: make(core.GenesisAlloc),
Config: &params.ChainConfig{
HomesteadBlock: big.NewInt(1),
EIP150Block: big.NewInt(2),
EIP155Block: big.NewInt(3),
EIP158Block: big.NewInt(3),
ByzantiumBlock: big.NewInt(4),
ConstantinopleBlock: big.NewInt(5),
HomesteadBlock: big.NewInt(0),
EIP150Block: big.NewInt(0),
EIP155Block: big.NewInt(0),
EIP158Block: big.NewInt(0),
ByzantiumBlock: big.NewInt(0),
ConstantinopleBlock: big.NewInt(0),
PetersburgBlock: big.NewInt(0),
},
}
// Figure out which consensus engine to choose
@ -191,7 +192,7 @@ func (w *wizard) importGenesis() {
func (w *wizard) manageGenesis() {
// Figure out whether to modify or export the genesis
fmt.Println()
fmt.Println(" 1. Modify existing fork rules")
fmt.Println(" 1. Modify existing configurations")
fmt.Println(" 2. Export genesis configurations")
fmt.Println(" 3. Remove genesis configuration")
@ -226,7 +227,7 @@ func (w *wizard) manageGenesis() {
w.conf.Genesis.Config.PetersburgBlock = w.conf.Genesis.Config.ConstantinopleBlock
}
fmt.Println()
fmt.Printf("Which block should Constantinople-Fix (remove EIP-1283) come into effect? (default = %v)\n", w.conf.Genesis.Config.PetersburgBlock)
fmt.Printf("Which block should Petersburg come into effect? (default = %v)\n", w.conf.Genesis.Config.PetersburgBlock)
w.conf.Genesis.Config.PetersburgBlock = w.readDefaultBigInt(w.conf.Genesis.Config.PetersburgBlock)
out, _ := json.MarshalIndent(w.conf.Genesis.Config, "", " ")
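For reference, a minimal standalone sketch (not part of puppeth) of the chain config the wizard exports when every fork, including Petersburg, activates at block zero; the ChainID value below is purely illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/params"
)

func main() {
	// Mirror the all-zero fork schedule the wizard now writes by default.
	cfg := &params.ChainConfig{
		ChainID:             big.NewInt(1337), // illustrative private network ID
		HomesteadBlock:      big.NewInt(0),
		EIP150Block:         big.NewInt(0),
		EIP155Block:         big.NewInt(0),
		EIP158Block:         big.NewInt(0),
		ByzantiumBlock:      big.NewInt(0),
		ConstantinopleBlock: big.NewInt(0),
		PetersburgBlock:     big.NewInt(0),
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}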

View File

@ -174,7 +174,7 @@ func (w *wizard) deployComponent() {
fmt.Println(" 1. Ethstats - Network monitoring tool")
fmt.Println(" 2. Bootnode - Entry point of the network")
fmt.Println(" 3. Sealer - Full node minting new blocks")
fmt.Println(" 4. Explorer - Chain analysis webservice (ethash only)")
fmt.Println(" 4. Explorer - Chain analysis webservice")
fmt.Println(" 5. Wallet - Browser wallet for quick sends")
fmt.Println(" 6. Faucet - Crypto faucet to give away funds")
fmt.Println(" 7. Dashboard - Website listing above web-services")

View File

@ -1,297 +0,0 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"crypto/rand"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"strings"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/swarm/api"
"github.com/ethereum/go-ethereum/swarm/api/client"
"gopkg.in/urfave/cli.v1"
)
var (
salt = make([]byte, 32)
accessCommand = cli.Command{
CustomHelpTemplate: helpTemplate,
Name: "access",
Usage: "encrypts a reference and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root manifest",
Subcommands: []cli.Command{
{
CustomHelpTemplate: helpTemplate,
Name: "new",
Usage: "encrypts a reference and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
Subcommands: []cli.Command{
{
Action: accessNewPass,
CustomHelpTemplate: helpTemplate,
Flags: []cli.Flag{
utils.PasswordFileFlag,
SwarmDryRunFlag,
},
Name: "pass",
Usage: "encrypts a reference with a password and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
},
{
Action: accessNewPK,
CustomHelpTemplate: helpTemplate,
Flags: []cli.Flag{
utils.PasswordFileFlag,
SwarmDryRunFlag,
SwarmAccessGrantKeyFlag,
},
Name: "pk",
Usage: "encrypts a reference with the node's private key and a given grantee's public key and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
},
{
Action: accessNewACT,
CustomHelpTemplate: helpTemplate,
Flags: []cli.Flag{
SwarmAccessGrantKeysFlag,
SwarmDryRunFlag,
utils.PasswordFileFlag,
},
Name: "act",
Usage: "encrypts a reference with the node's private key and a given grantee's public key and embeds it into a root manifest",
ArgsUsage: "<ref>",
Description: "encrypts a reference and embeds it into a root access manifest and prints the resulting manifest",
},
},
},
},
}
)
func init() {
if _, err := io.ReadFull(rand.Reader, salt); err != nil {
panic("reading from crypto/rand failed: " + err.Error())
}
}
func accessNewPass(ctx *cli.Context) {
args := ctx.Args()
if len(args) != 1 {
utils.Fatalf("Expected 1 argument - the ref")
}
var (
ae *api.AccessEntry
accessKey []byte
err error
ref = args[0]
password = getPassPhrase("", 0, makePasswordList(ctx))
dryRun = ctx.Bool(SwarmDryRunFlag.Name)
)
accessKey, ae, err = api.DoPassword(ctx, password, salt)
if err != nil {
utils.Fatalf("error getting session key: %v", err)
}
m, err := api.GenerateAccessControlManifest(ctx, ref, accessKey, ae)
if err != nil {
utils.Fatalf("had an error generating the manifest: %v", err)
}
if dryRun {
err = printManifests(m, nil)
if err != nil {
utils.Fatalf("had an error printing the manifests: %v", err)
}
} else {
err = uploadManifests(ctx, m, nil)
if err != nil {
utils.Fatalf("had an error uploading the manifests: %v", err)
}
}
}
func accessNewPK(ctx *cli.Context) {
args := ctx.Args()
if len(args) != 1 {
utils.Fatalf("Expected 1 argument - the ref")
}
var (
ae *api.AccessEntry
sessionKey []byte
err error
ref = args[0]
privateKey = getPrivKey(ctx)
granteePublicKey = ctx.String(SwarmAccessGrantKeyFlag.Name)
dryRun = ctx.Bool(SwarmDryRunFlag.Name)
)
sessionKey, ae, err = api.DoPK(ctx, privateKey, granteePublicKey, salt)
if err != nil {
utils.Fatalf("error getting session key: %v", err)
}
m, err := api.GenerateAccessControlManifest(ctx, ref, sessionKey, ae)
if err != nil {
utils.Fatalf("had an error generating the manifest: %v", err)
}
if dryRun {
err = printManifests(m, nil)
if err != nil {
utils.Fatalf("had an error printing the manifests: %v", err)
}
} else {
err = uploadManifests(ctx, m, nil)
if err != nil {
utils.Fatalf("had an error uploading the manifests: %v", err)
}
}
}
func accessNewACT(ctx *cli.Context) {
args := ctx.Args()
if len(args) != 1 {
utils.Fatalf("Expected 1 argument - the ref")
}
var (
ae *api.AccessEntry
actManifest *api.Manifest
accessKey []byte
err error
ref = args[0]
pkGrantees []string
passGrantees []string
pkGranteesFilename = ctx.String(SwarmAccessGrantKeysFlag.Name)
passGranteesFilename = ctx.String(utils.PasswordFileFlag.Name)
privateKey = getPrivKey(ctx)
dryRun = ctx.Bool(SwarmDryRunFlag.Name)
)
if pkGranteesFilename == "" && passGranteesFilename == "" {
utils.Fatalf("you have to provide either a grantee public-keys file or an encryption passwords file (or both)")
}
if pkGranteesFilename != "" {
bytes, err := ioutil.ReadFile(pkGranteesFilename)
if err != nil {
utils.Fatalf("had an error reading the grantee public key list")
}
pkGrantees = strings.Split(strings.Trim(string(bytes), "\n"), "\n")
}
if passGranteesFilename != "" {
bytes, err := ioutil.ReadFile(passGranteesFilename)
if err != nil {
utils.Fatalf("could not read password filename: %v", err)
}
passGrantees = strings.Split(strings.Trim(string(bytes), "\n"), "\n")
}
accessKey, ae, actManifest, err = api.DoACT(ctx, privateKey, salt, pkGrantees, passGrantees)
if err != nil {
utils.Fatalf("error generating ACT manifest: %v", err)
}
m, err := api.GenerateAccessControlManifest(ctx, ref, accessKey, ae)
if err != nil {
utils.Fatalf("error generating root access manifest: %v", err)
}
if dryRun {
err = printManifests(m, actManifest)
if err != nil {
utils.Fatalf("had an error printing the manifests: %v", err)
}
} else {
err = uploadManifests(ctx, m, actManifest)
if err != nil {
utils.Fatalf("had an error uploading the manifests: %v", err)
}
}
}
func printManifests(rootAccessManifest, actManifest *api.Manifest) error {
js, err := json.Marshal(rootAccessManifest)
if err != nil {
return err
}
fmt.Println(string(js))
if actManifest != nil {
js, err := json.Marshal(actManifest)
if err != nil {
return err
}
fmt.Println(string(js))
}
return nil
}
func uploadManifests(ctx *cli.Context, rootAccessManifest, actManifest *api.Manifest) error {
bzzapi := strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
client := client.NewClient(bzzapi)
var (
key string
err error
)
if actManifest != nil {
key, err = client.UploadManifest(actManifest, false)
if err != nil {
return err
}
rootAccessManifest.Entries[0].Access.Act = key
}
key, err = client.UploadManifest(rootAccessManifest, false)
if err != nil {
return err
}
fmt.Println(key)
return nil
}
// makePasswordList reads password lines from the file specified by the global --password flag
// and also by the same --password flag on the subcommand.
// This function is a fork of utils.MakePasswordList that also looks up the cli context of the subcommand:
// with the current version of the cli package, ctx.SetGlobal does not set the global flag value
// that ctx.GlobalString reads.
func makePasswordList(ctx *cli.Context) []string {
path := ctx.GlobalString(utils.PasswordFileFlag.Name)
if path == "" {
path = ctx.String(utils.PasswordFileFlag.Name)
if path == "" {
return nil
}
}
text, err := ioutil.ReadFile(path)
if err != nil {
utils.Fatalf("Failed to read password file: %v", err)
}
lines := strings.Split(string(text), "\n")
// Sanitise DOS line endings.
for i := range lines {
lines[i] = strings.TrimRight(lines[i], "\r")
}
return lines
}
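A minimal sketch (with a hypothetical command name) of the lookup order makePasswordList performs with gopkg.in/urfave/cli.v1: the global --password flag, set before the subcommand name, is tried first, and the subcommand's own --password flag is the fallback.

package main

import (
	"fmt"
	"os"

	cli "gopkg.in/urfave/cli.v1"
)

func main() {
	app := cli.NewApp()
	app.Flags = []cli.Flag{cli.StringFlag{Name: "password"}}
	app.Commands = []cli.Command{{
		Name:  "demo", // hypothetical subcommand
		Flags: []cli.Flag{cli.StringFlag{Name: "password"}},
		Action: func(ctx *cli.Context) error {
			path := ctx.GlobalString("password") // --password given before the subcommand
			if path == "" {
				path = ctx.String("password") // --password given after the subcommand
			}
			fmt.Println("password file:", path)
			return nil
		},
	}}
	app.Run(os.Args)
}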

View File

@ -1,617 +0,0 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"crypto/rand"
"encoding/hex"
"encoding/json"
"io"
"io/ioutil"
gorand "math/rand"
"net/http"
"os"
"runtime"
"strings"
"testing"
"time"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/crypto/ecies"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/swarm/api"
swarmapi "github.com/ethereum/go-ethereum/swarm/api/client"
"github.com/ethereum/go-ethereum/swarm/testutil"
"golang.org/x/crypto/sha3"
)
const (
hashRegexp = `[a-f\d]{128}`
data = "notsorandomdata"
)
var DefaultCurve = crypto.S256()
func TestACT(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
cluster := newTestCluster(t, clusterSize)
defer cluster.Shutdown()
cases := []struct {
name string
f func(t *testing.T, cluster *testCluster)
}{
{"Password", testPassword},
{"PK", testPK},
{"ACTWithoutBogus", testACTWithoutBogus},
{"ACTWithBogus", testACTWithBogus},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
tc.f(t, cluster)
})
}
}
// testPassword tests for the correct creation of an ACT manifest protected by a password.
// The test creates bogus content, uploads it encrypted, then creates the wrapping manifest with the Access entry.
// The participating parties: a node (the publisher) uploads to a second node and then disappears. The uploaded
// content is then fetched through the second node; since the tested code is not key-aware, we can simply
// fetch from the second node using HTTP Basic Auth.
func testPassword(t *testing.T, cluster *testCluster) {
dataFilename := testutil.TempFileWithContent(t, data)
defer os.RemoveAll(dataFilename)
// upload the file with 'swarm up' and expect a hash
up := runSwarm(t,
"--bzzapi",
cluster.Nodes[0].URL,
"up",
"--encrypt",
dataFilename)
_, matches := up.ExpectRegexp(hashRegexp)
up.ExpectExit()
if len(matches) < 1 {
t.Fatal("no matches found")
}
ref := matches[0]
tmp, err := ioutil.TempDir("", "swarm-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmp)
password := "smth"
passwordFilename := testutil.TempFileWithContent(t, "smth")
defer os.RemoveAll(passwordFilename)
up = runSwarm(t,
"access",
"new",
"pass",
"--dry-run",
"--password",
passwordFilename,
ref,
)
_, matches = up.ExpectRegexp(".+")
up.ExpectExit()
if len(matches) == 0 {
t.Fatalf("stdout not matched")
}
var m api.Manifest
err = json.Unmarshal([]byte(matches[0]), &m)
if err != nil {
t.Fatalf("unmarshal manifest: %v", err)
}
if len(m.Entries) != 1 {
t.Fatalf("expected one manifest entry, got %v", len(m.Entries))
}
e := m.Entries[0]
ct := "application/bzz-manifest+json"
if e.ContentType != ct {
t.Errorf("expected %q content type, got %q", ct, e.ContentType)
}
if e.Access == nil {
t.Fatal("manifest access is nil")
}
a := e.Access
if a.Type != "pass" {
t.Errorf(`got access type %q, expected "pass"`, a.Type)
}
if len(a.Salt) < 32 {
t.Errorf(`got salt with length %v, expected no less than 32 bytes`, len(a.Salt))
}
if a.KdfParams == nil {
t.Fatal("manifest access kdf params is nil")
}
if a.Publisher != "" {
t.Fatal("should be empty")
}
client := swarmapi.NewClient(cluster.Nodes[0].URL)
hash, err := client.UploadManifest(&m, false)
if err != nil {
t.Fatal(err)
}
url := cluster.Nodes[0].URL + "/" + "bzz:/" + hash
httpClient := &http.Client{}
response, err := httpClient.Get(url)
if err != nil {
t.Fatal(err)
}
if response.StatusCode != http.StatusUnauthorized {
t.Fatal("should be a 401")
}
authHeader := response.Header.Get("WWW-Authenticate")
if authHeader == "" {
t.Fatal("should be something here")
}
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
t.Fatal(err)
}
req.SetBasicAuth("", password)
response, err = http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
defer response.Body.Close()
if response.StatusCode != http.StatusOK {
t.Errorf("expected status %v, got %v", http.StatusOK, response.StatusCode)
}
d, err := ioutil.ReadAll(response.Body)
if err != nil {
t.Fatal(err)
}
if string(d) != data {
t.Errorf("expected decrypted data %q, got %q", data, string(d))
}
wrongPasswordFilename := testutil.TempFileWithContent(t, "just wr0ng")
defer os.RemoveAll(wrongPasswordFilename)
//download file with 'swarm down' with wrong password
up = runSwarm(t,
"--bzzapi",
cluster.Nodes[0].URL,
"down",
"bzz:/"+hash,
tmp,
"--password",
wrongPasswordFilename)
_, matches = up.ExpectRegexp("unauthorized")
if len(matches) != 1 || matches[0] != "unauthorized" {
t.Fatal(`"unauthorized" not found in output`)
}
up.ExpectExit()
}
// testPK tests for the correct creation of an ACT manifest between two parties (publisher and grantee).
// The test creates bogus content, uploads it encrypted, then creates the wrapping manifest with the Access entry.
// The participating parties: a node (the publisher) uploads to a second node (which is also the grantee) and then disappears.
// The uploaded content is then fetched through the grantee's http proxy. Since the tested code is private-key aware,
// the test will fail if the proxy's given private key is not granted on the ACT.
func testPK(t *testing.T, cluster *testCluster) {
dataFilename := testutil.TempFileWithContent(t, data)
defer os.RemoveAll(dataFilename)
// upload the file with 'swarm up' and expect a hash
up := runSwarm(t,
"--bzzapi",
cluster.Nodes[0].URL,
"up",
"--encrypt",
dataFilename)
_, matches := up.ExpectRegexp(hashRegexp)
up.ExpectExit()
if len(matches) < 1 {
t.Fatal("no matches found")
}
ref := matches[0]
pk := cluster.Nodes[0].PrivateKey
granteePubKey := crypto.CompressPubkey(&pk.PublicKey)
publisherDir, err := ioutil.TempDir("", "swarm-account-dir-temp")
if err != nil {
t.Fatal(err)
}
passwordFilename := testutil.TempFileWithContent(t, testPassphrase)
defer os.RemoveAll(passwordFilename)
_, publisherAccount := getTestAccount(t, publisherDir)
up = runSwarm(t,
"--bzzaccount",
publisherAccount.Address.String(),
"--password",
passwordFilename,
"--datadir",
publisherDir,
"--bzzapi",
cluster.Nodes[0].URL,
"access",
"new",
"pk",
"--dry-run",
"--grant-key",
hex.EncodeToString(granteePubKey),
ref,
)
_, matches = up.ExpectRegexp(".+")
up.ExpectExit()
if len(matches) == 0 {
t.Fatalf("stdout not matched")
}
//get the public key from the publisher directory
publicKeyFromDataDir := runSwarm(t,
"--bzzaccount",
publisherAccount.Address.String(),
"--password",
passwordFilename,
"--datadir",
publisherDir,
"print-keys",
"--compressed",
)
_, publicKeyString := publicKeyFromDataDir.ExpectRegexp(".+")
publicKeyFromDataDir.ExpectExit()
pkComp := strings.Split(publicKeyString[0], "=")[1]
var m api.Manifest
err = json.Unmarshal([]byte(matches[0]), &m)
if err != nil {
t.Fatalf("unmarshal manifest: %v", err)
}
if len(m.Entries) != 1 {
t.Fatalf("expected one manifest entry, got %v", len(m.Entries))
}
e := m.Entries[0]
ct := "application/bzz-manifest+json"
if e.ContentType != ct {
t.Errorf("expected %q content type, got %q", ct, e.ContentType)
}
if e.Access == nil {
t.Fatal("manifest access is nil")
}
a := e.Access
if a.Type != "pk" {
t.Errorf(`got access type %q, expected "pk"`, a.Type)
}
if len(a.Salt) < 32 {
t.Errorf(`got salt with length %v, expected no less than 32 bytes`, len(a.Salt))
}
if a.KdfParams != nil {
t.Fatal("manifest access kdf params should be nil")
}
if a.Publisher != pkComp {
t.Fatal("publisher key did not match")
}
client := swarmapi.NewClient(cluster.Nodes[0].URL)
hash, err := client.UploadManifest(&m, false)
if err != nil {
t.Fatal(err)
}
httpClient := &http.Client{}
url := cluster.Nodes[0].URL + "/" + "bzz:/" + hash
response, err := httpClient.Get(url)
if err != nil {
t.Fatal(err)
}
if response.StatusCode != http.StatusOK {
t.Fatal("should be a 200")
}
d, err := ioutil.ReadAll(response.Body)
if err != nil {
t.Fatal(err)
}
if string(d) != data {
t.Errorf("expected decrypted data %q, got %q", data, string(d))
}
}
// testACTWithoutBogus tests the creation of the ACT manifest end-to-end, without any bogus entries (i.e. default scenario = 3 nodes 1 unauthorized)
func testACTWithoutBogus(t *testing.T, cluster *testCluster) {
testACT(t, cluster, 0)
}
// testACTWithBogus tests the creation of the ACT manifest end-to-end, with 100 bogus entries (i.e. 100 EC keys + default scenario = 3 nodes 1 unauthorized = 103 keys in the ACT manifest)
func testACTWithBogus(t *testing.T, cluster *testCluster) {
testACT(t, cluster, 100)
}
// testACT tests the e2e creation, uploading and downloading of an ACT access control with both EC keys AND password protection.
// The test fires up a 3-node cluster, then randomly picks 2 nodes which will act as grantees to the data
// set, and also protects the ACT with a password. The third node should fail to decode the reference as it is not granted access.
// The third node then tries to download using a correct password (and succeeds), then uses a wrong password and fails.
// The publisher uploads through one of the nodes and then disappears.
func testACT(t *testing.T, cluster *testCluster, bogusEntries int) {
var uploadThroughNode = cluster.Nodes[0]
client := swarmapi.NewClient(uploadThroughNode.URL)
r1 := gorand.New(gorand.NewSource(time.Now().UnixNano()))
nodeToSkip := r1.Intn(clusterSize) // a number between 0 and 2 (node indices in `cluster`)
dataFilename := testutil.TempFileWithContent(t, data)
defer os.RemoveAll(dataFilename)
// upload the file with 'swarm up' and expect a hash
up := runSwarm(t,
"--bzzapi",
cluster.Nodes[0].URL,
"up",
"--encrypt",
dataFilename)
_, matches := up.ExpectRegexp(hashRegexp)
up.ExpectExit()
if len(matches) < 1 {
t.Fatal("no matches found")
}
ref := matches[0]
var grantees []string
for i, v := range cluster.Nodes {
if i == nodeToSkip {
continue
}
pk := v.PrivateKey
granteePubKey := crypto.CompressPubkey(&pk.PublicKey)
grantees = append(grantees, hex.EncodeToString(granteePubKey))
}
if bogusEntries > 0 {
var bogusGrantees []string
for i := 0; i < bogusEntries; i++ {
prv, err := ecies.GenerateKey(rand.Reader, DefaultCurve, nil)
if err != nil {
t.Fatal(err)
}
bogusGrantees = append(bogusGrantees, hex.EncodeToString(crypto.CompressPubkey(&prv.ExportECDSA().PublicKey)))
}
r2 := gorand.New(gorand.NewSource(time.Now().UnixNano()))
for i := 0; i < len(grantees); i++ {
insertAtIdx := r2.Intn(len(bogusGrantees))
bogusGrantees = append(bogusGrantees[:insertAtIdx], append([]string{grantees[i]}, bogusGrantees[insertAtIdx:]...)...)
}
grantees = bogusGrantees
}
granteesPubkeyListFile := testutil.TempFileWithContent(t, strings.Join(grantees, "\n"))
defer os.RemoveAll(granteesPubkeyListFile)
publisherDir, err := ioutil.TempDir("", "swarm-account-dir-temp")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(publisherDir)
passwordFilename := testutil.TempFileWithContent(t, testPassphrase)
defer os.RemoveAll(passwordFilename)
actPasswordFilename := testutil.TempFileWithContent(t, "smth")
defer os.RemoveAll(actPasswordFilename)
_, publisherAccount := getTestAccount(t, publisherDir)
up = runSwarm(t,
"--bzzaccount",
publisherAccount.Address.String(),
"--password",
passwordFilename,
"--datadir",
publisherDir,
"--bzzapi",
cluster.Nodes[0].URL,
"access",
"new",
"act",
"--grant-keys",
granteesPubkeyListFile,
"--password",
actPasswordFilename,
ref,
)
_, matches = up.ExpectRegexp(`[a-f\d]{64}`)
up.ExpectExit()
if len(matches) == 0 {
t.Fatalf("stdout not matched")
}
//get the public key from the publisher directory
publicKeyFromDataDir := runSwarm(t,
"--bzzaccount",
publisherAccount.Address.String(),
"--password",
passwordFilename,
"--datadir",
publisherDir,
"print-keys",
"--compressed",
)
_, publicKeyString := publicKeyFromDataDir.ExpectRegexp(".+")
publicKeyFromDataDir.ExpectExit()
pkComp := strings.Split(publicKeyString[0], "=")[1]
hash := matches[0]
m, _, err := client.DownloadManifest(hash)
if err != nil {
t.Fatalf("unmarshal manifest: %v", err)
}
if len(m.Entries) != 1 {
t.Fatalf("expected one manifest entry, got %v", len(m.Entries))
}
e := m.Entries[0]
ct := "application/bzz-manifest+json"
if e.ContentType != ct {
t.Errorf("expected %q content type, got %q", ct, e.ContentType)
}
if e.Access == nil {
t.Fatal("manifest access is nil")
}
a := e.Access
if a.Type != "act" {
t.Fatalf(`got access type %q, expected "act"`, a.Type)
}
if len(a.Salt) < 32 {
t.Fatalf(`got salt with length %v, expected no less than 32 bytes`, len(a.Salt))
}
if a.Publisher != pkComp {
t.Fatal("publisher key did not match")
}
httpClient := &http.Client{}
// all nodes except the skipped node should be able to decrypt the content
for i, node := range cluster.Nodes {
log.Debug("trying to fetch from node", "node index", i)
url := node.URL + "/" + "bzz:/" + hash
response, err := httpClient.Get(url)
if err != nil {
t.Fatal(err)
}
log.Debug("got response from node", "response code", response.StatusCode)
if i == nodeToSkip {
log.Debug("reached node to skip", "status code", response.StatusCode)
if response.StatusCode != http.StatusUnauthorized {
t.Fatalf("should be a 401")
}
// try downloading using a password instead, using the unauthorized node
passwordUrl := strings.Replace(url, "http://", "http://:smth@", -1)
response, err = httpClient.Get(passwordUrl)
if err != nil {
t.Fatal(err)
}
if response.StatusCode != http.StatusOK {
t.Fatal("should be a 200")
}
// now try with the wrong password, expect 401
passwordUrl = strings.Replace(url, "http://", "http://:smthWrong@", -1)
response, err = httpClient.Get(passwordUrl)
if err != nil {
t.Fatal(err)
}
if response.StatusCode != http.StatusUnauthorized {
t.Fatal("should be a 401")
}
continue
}
if response.StatusCode != http.StatusOK {
t.Fatal("should be a 200")
}
d, err := ioutil.ReadAll(response.Body)
if err != nil {
t.Fatal(err)
}
if string(d) != data {
t.Errorf("expected decrypted data %q, got %q", data, string(d))
}
}
}
// TestKeypairSanity is a sanity test for the crypto scheme for ACT. It asserts the correct shared secret according to
// the specs at https://github.com/ethersphere/swarm-docs/blob/eb857afda906c6e7bb90d37f3f334ccce5eef230/act.md
func TestKeypairSanity(t *testing.T) {
salt := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, salt); err != nil {
t.Fatalf("reading from crypto/rand failed: %v", err.Error())
}
sharedSecret := "a85586744a1ddd56a7ed9f33fa24f40dd745b3a941be296a0d60e329dbdb896d"
for i, v := range []struct {
publisherPriv string
granteePub string
}{
{
publisherPriv: "ec5541555f3bc6376788425e9d1a62f55a82901683fd7062c5eddcc373a73459",
granteePub: "0226f213613e843a413ad35b40f193910d26eb35f00154afcde9ded57479a6224a",
},
{
publisherPriv: "70c7a73011aa56584a0009ab874794ee7e5652fd0c6911cd02f8b6267dd82d2d",
granteePub: "02e6f8d5e28faaa899744972bb847b6eb805a160494690c9ee7197ae9f619181db",
},
} {
b, _ := hex.DecodeString(v.granteePub)
granteePub, _ := crypto.DecompressPubkey(b)
publisherPrivate, _ := crypto.HexToECDSA(v.publisherPriv)
ssKey, err := api.NewSessionKeyPK(publisherPrivate, granteePub, salt)
if err != nil {
t.Fatal(err)
}
hasher := sha3.NewLegacyKeccak256()
hasher.Write(salt)
shared, err := hex.DecodeString(sharedSecret)
if err != nil {
t.Fatal(err)
}
hasher.Write(shared)
sum := hasher.Sum(nil)
if !bytes.Equal(ssKey, sum) {
t.Fatalf("%d: got a session key mismatch", i)
}
}
}
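A minimal standalone sketch of the derivation this sanity test checks: the session key is Keccak256 over the salt followed by the ECDH shared secret between publisher and grantee. The all-zero salt below is illustrative; the secret is the constant the test expects.

package main

import (
	"encoding/hex"
	"fmt"

	"golang.org/x/crypto/sha3"
)

// sessionKey mirrors the hashing step asserted above: Keccak256(salt || sharedSecret).
func sessionKey(salt, sharedSecret []byte) []byte {
	h := sha3.NewLegacyKeccak256()
	h.Write(salt)
	h.Write(sharedSecret)
	return h.Sum(nil)
}

func main() {
	salt := make([]byte, 32) // all-zero for illustration; real code draws it from crypto/rand
	secret, _ := hex.DecodeString("a85586744a1ddd56a7ed9f33fa24f40dd745b3a941be296a0d60e329dbdb896d")
	fmt.Printf("%x\n", sessionKey(salt, secret))
}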

View File

@ -1,24 +0,0 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
var SwarmBootnodes = []string{
// EF Swarm Bootnode - AWS - eu-central-1
"enode://4c113504601930bf2000c29bcd98d1716b6167749f58bad703bae338332fe93cc9d9204f08afb44100dc7bea479205f5d162df579f9a8f76f8b402d339709023@3.122.203.99:30301",
// EF Swarm Bootnode - AWS - us-west-2
"enode://89f2ede3371bff1ad9f2088f2012984e280287a4e2b68007c2a6ad994909c51886b4a8e9e2ecc97f9910aca538398e0a5804b0ee80a187fde1ba4f32626322ba@52.35.212.179:30301",
}

View File

@ -1,451 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"errors"
"fmt"
"io"
"os"
"reflect"
"strconv"
"strings"
"time"
"unicode"
cli "gopkg.in/urfave/cli.v1"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/naoina/toml"
bzzapi "github.com/ethereum/go-ethereum/swarm/api"
)
var (
//flag definition for the dumpconfig command
DumpConfigCommand = cli.Command{
Action: utils.MigrateFlags(dumpConfig),
Name: "dumpconfig",
Usage: "Show configuration values",
ArgsUsage: "",
Flags: app.Flags,
Category: "MISCELLANEOUS COMMANDS",
Description: `The dumpconfig command shows configuration values.`,
}
//flag definition for the config file command
SwarmTomlConfigPathFlag = cli.StringFlag{
Name: "config",
Usage: "TOML configuration file",
}
)
//constants for environment variables
const (
SwarmEnvChequebookAddr = "SWARM_CHEQUEBOOK_ADDR"
SwarmEnvAccount = "SWARM_ACCOUNT"
SwarmEnvListenAddr = "SWARM_LISTEN_ADDR"
SwarmEnvPort = "SWARM_PORT"
SwarmEnvNetworkID = "SWARM_NETWORK_ID"
SwarmEnvSwapEnable = "SWARM_SWAP_ENABLE"
SwarmEnvSwapAPI = "SWARM_SWAP_API"
SwarmEnvSyncDisable = "SWARM_SYNC_DISABLE"
SwarmEnvSyncUpdateDelay = "SWARM_ENV_SYNC_UPDATE_DELAY"
SwarmEnvMaxStreamPeerServers = "SWARM_ENV_MAX_STREAM_PEER_SERVERS"
SwarmEnvLightNodeEnable = "SWARM_LIGHT_NODE_ENABLE"
SwarmEnvDeliverySkipCheck = "SWARM_DELIVERY_SKIP_CHECK"
SwarmEnvENSAPI = "SWARM_ENS_API"
SwarmEnvENSAddr = "SWARM_ENS_ADDR"
SwarmEnvCORS = "SWARM_CORS"
SwarmEnvBootnodes = "SWARM_BOOTNODES"
SwarmEnvPSSEnable = "SWARM_PSS_ENABLE"
SwarmEnvStorePath = "SWARM_STORE_PATH"
SwarmEnvStoreCapacity = "SWARM_STORE_CAPACITY"
SwarmEnvStoreCacheCapacity = "SWARM_STORE_CACHE_CAPACITY"
SwarmEnvBootnodeMode = "SWARM_BOOTNODE_MODE"
SwarmAccessPassword = "SWARM_ACCESS_PASSWORD"
SwarmAutoDefaultPath = "SWARM_AUTO_DEFAULTPATH"
SwarmGlobalstoreAPI = "SWARM_GLOBALSTORE_API"
GethEnvDataDir = "GETH_DATADIR"
)
// These settings ensure that TOML keys use the same names as Go struct fields.
var tomlSettings = toml.Config{
NormFieldName: func(rt reflect.Type, key string) string {
return key
},
FieldToKey: func(rt reflect.Type, field string) string {
return field
},
MissingField: func(rt reflect.Type, field string) error {
link := ""
if unicode.IsUpper(rune(rt.Name()[0])) && rt.PkgPath() != "main" {
link = fmt.Sprintf(", check github.com/ethereum/go-ethereum/swarm/api/config.go for available fields")
}
return fmt.Errorf("field '%s' is not defined in %s%s", field, rt.String(), link)
},
}
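A small standalone sketch (the struct is illustrative, not part of swarm) of what the identity NormFieldName/FieldToKey mappings buy: the TOML keys come out exactly as the Go field names, e.g. NetworkID rather than a snake_cased variant.

package main

import (
	"fmt"
	"reflect"

	"github.com/naoina/toml"
)

var settings = toml.Config{
	NormFieldName: func(rt reflect.Type, key string) string { return key },
	FieldToKey:    func(rt reflect.Type, field string) string { return field },
}

type demoConfig struct {
	NetworkID uint64
	Port      string
}

func main() {
	out, err := settings.Marshal(&demoConfig{NetworkID: 42, Port: "8500"})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // keys are NetworkID and Port, matching the field names
}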
//before booting the swarm node, build the configuration
func buildConfig(ctx *cli.Context) (config *bzzapi.Config, err error) {
//start by creating a default config
config = bzzapi.NewConfig()
//first load settings from config file (if provided)
config, err = configFileOverride(config, ctx)
if err != nil {
return nil, err
}
//override settings provided by environment variables
config = envVarsOverride(config)
//override settings provided by command line
config = cmdLineOverride(config, ctx)
//validate configuration parameters
err = validateConfig(config)
return
}
//finally, after the configuration build phase is finished, initialize
func initSwarmNode(config *bzzapi.Config, stack *node.Node, ctx *cli.Context, nodeconfig *node.Config) error {
//at this point, all vars should be set in the Config
//get the account for the provided swarm account
prvkey := getAccount(config.BzzAccount, ctx, stack)
//set the resolved config path (geth --datadir)
config.Path = expandPath(stack.InstanceDir())
//finally, initialize the configuration
err := config.Init(prvkey, nodeconfig.NodeKey())
if err != nil {
return err
}
//configuration phase completed here
log.Debug("Starting Swarm with the following parameters:")
//after having created the config, print it to screen
log.Debug(printConfig(config))
return nil
}
//configFileOverride overrides the current config with the config file, if a config file has been provided
func configFileOverride(config *bzzapi.Config, ctx *cli.Context) (*bzzapi.Config, error) {
var err error
//only do something if the -config flag has been set
if ctx.GlobalIsSet(SwarmTomlConfigPathFlag.Name) {
var filepath string
if filepath = ctx.GlobalString(SwarmTomlConfigPathFlag.Name); filepath == "" {
utils.Fatalf("Config file flag provided with invalid file path")
}
var f *os.File
f, err = os.Open(filepath)
if err != nil {
return nil, err
}
defer f.Close()
//decode the TOML file into a Config struct
//note that we are decoding into the existing defaultConfig;
//if an entry is not present in the file, the default entry is kept
err = tomlSettings.NewDecoder(f).Decode(&config)
// Add file name to errors that have a line number.
if _, ok := err.(*toml.LineError); ok {
err = errors.New(filepath + ", " + err.Error())
}
}
return config, err
}
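A minimal sketch of the decode-over-defaults behaviour described above: only keys present in the file override the defaults from bzzapi.NewConfig(), everything else is kept. The one-line TOML document is hypothetical, and the identity field mapping is re-declared so the snippet stands alone.

package main

import (
	"fmt"
	"reflect"
	"strings"

	"github.com/naoina/toml"

	bzzapi "github.com/ethereum/go-ethereum/swarm/api"
)

var settings = toml.Config{
	NormFieldName: func(rt reflect.Type, key string) string { return key },
	FieldToKey:    func(rt reflect.Type, field string) string { return field },
}

func main() {
	config := bzzapi.NewConfig() // start from the defaults
	// Hypothetical config file that overrides a single key.
	if err := settings.NewDecoder(strings.NewReader(`Port = "8501"`)).Decode(&config); err != nil {
		panic(err)
	}
	fmt.Println(config.Port, config.NetworkID) // Port comes from the file, NetworkID keeps its default
}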
// cmdLineOverride overrides the current config with whatever is provided through the command line
// most values are not allowed to be zero-valued (empty string), unless otherwise noted
func cmdLineOverride(currentConfig *bzzapi.Config, ctx *cli.Context) *bzzapi.Config {
if keyid := ctx.GlobalString(SwarmAccountFlag.Name); keyid != "" {
currentConfig.BzzAccount = keyid
}
if chbookaddr := ctx.GlobalString(ChequebookAddrFlag.Name); chbookaddr != "" {
currentConfig.Contract = common.HexToAddress(chbookaddr)
}
if networkid := ctx.GlobalString(SwarmNetworkIdFlag.Name); networkid != "" {
id, err := strconv.ParseUint(networkid, 10, 64)
if err != nil {
utils.Fatalf("invalid cli flag %s: %v", SwarmNetworkIdFlag.Name, err)
}
if id != 0 {
currentConfig.NetworkID = id
}
}
if ctx.GlobalIsSet(utils.DataDirFlag.Name) {
if datadir := ctx.GlobalString(utils.DataDirFlag.Name); datadir != "" {
currentConfig.Path = expandPath(datadir)
}
}
bzzport := ctx.GlobalString(SwarmPortFlag.Name)
if len(bzzport) > 0 {
currentConfig.Port = bzzport
}
if bzzaddr := ctx.GlobalString(SwarmListenAddrFlag.Name); bzzaddr != "" {
currentConfig.ListenAddr = bzzaddr
}
if ctx.GlobalIsSet(SwarmSwapEnabledFlag.Name) {
currentConfig.SwapEnabled = true
}
if ctx.GlobalIsSet(SwarmSyncDisabledFlag.Name) {
currentConfig.SyncEnabled = false
}
if d := ctx.GlobalDuration(SwarmSyncUpdateDelay.Name); d > 0 {
currentConfig.SyncUpdateDelay = d
}
// any value including 0 is acceptable
currentConfig.MaxStreamPeerServers = ctx.GlobalInt(SwarmMaxStreamPeerServersFlag.Name)
if ctx.GlobalIsSet(SwarmLightNodeEnabled.Name) {
currentConfig.LightNodeEnabled = true
}
if ctx.GlobalIsSet(SwarmDeliverySkipCheckFlag.Name) {
currentConfig.DeliverySkipCheck = true
}
currentConfig.SwapAPI = ctx.GlobalString(SwarmSwapAPIFlag.Name)
if currentConfig.SwapEnabled && currentConfig.SwapAPI == "" {
utils.Fatalf(SwarmErrSwapSetNoAPI)
}
if ctx.GlobalIsSet(EnsAPIFlag.Name) {
ensAPIs := ctx.GlobalStringSlice(EnsAPIFlag.Name)
// preserve backward compatibility to disable ENS with --ens-api=""
if len(ensAPIs) == 1 && ensAPIs[0] == "" {
ensAPIs = nil
}
for i := range ensAPIs {
ensAPIs[i] = expandPath(ensAPIs[i])
}
currentConfig.EnsAPIs = ensAPIs
}
if cors := ctx.GlobalString(CorsStringFlag.Name); cors != "" {
currentConfig.Cors = cors
}
if storePath := ctx.GlobalString(SwarmStorePath.Name); storePath != "" {
currentConfig.ChunkDbPath = storePath
}
if storeCapacity := ctx.GlobalUint64(SwarmStoreCapacity.Name); storeCapacity != 0 {
currentConfig.DbCapacity = storeCapacity
}
if ctx.GlobalIsSet(SwarmStoreCacheCapacity.Name) {
currentConfig.CacheCapacity = ctx.GlobalUint(SwarmStoreCacheCapacity.Name)
}
if ctx.GlobalIsSet(SwarmBootnodeModeFlag.Name) {
currentConfig.BootnodeMode = ctx.GlobalBool(SwarmBootnodeModeFlag.Name)
}
if ctx.GlobalIsSet(SwarmGlobalStoreAPIFlag.Name) {
currentConfig.GlobalStoreAPI = ctx.GlobalString(SwarmGlobalStoreAPIFlag.Name)
}
return currentConfig
}
// envVarsOverride overrides the current config with whatever is provided in environment variables
// most values are not allowed to be zero-valued (empty string), unless otherwise noted
func envVarsOverride(currentConfig *bzzapi.Config) (config *bzzapi.Config) {
if keyid := os.Getenv(SwarmEnvAccount); keyid != "" {
currentConfig.BzzAccount = keyid
}
if chbookaddr := os.Getenv(SwarmEnvChequebookAddr); chbookaddr != "" {
currentConfig.Contract = common.HexToAddress(chbookaddr)
}
if networkid := os.Getenv(SwarmEnvNetworkID); networkid != "" {
id, err := strconv.ParseUint(networkid, 10, 64)
if err != nil {
utils.Fatalf("invalid environment variable %s: %v", SwarmEnvNetworkID, err)
}
if id != 0 {
currentConfig.NetworkID = id
}
}
if datadir := os.Getenv(GethEnvDataDir); datadir != "" {
currentConfig.Path = expandPath(datadir)
}
bzzport := os.Getenv(SwarmEnvPort)
if len(bzzport) > 0 {
currentConfig.Port = bzzport
}
if bzzaddr := os.Getenv(SwarmEnvListenAddr); bzzaddr != "" {
currentConfig.ListenAddr = bzzaddr
}
if swapenable := os.Getenv(SwarmEnvSwapEnable); swapenable != "" {
swap, err := strconv.ParseBool(swapenable)
if err != nil {
utils.Fatalf("invalid environment variable %s: %v", SwarmEnvSwapEnable, err)
}
currentConfig.SwapEnabled = swap
}
if syncdisable := os.Getenv(SwarmEnvSyncDisable); syncdisable != "" {
sync, err := strconv.ParseBool(syncdisable)
if err != nil {
utils.Fatalf("invalid environment variable %s: %v", SwarmEnvSyncDisable, err)
}
currentConfig.SyncEnabled = !sync
}
if v := os.Getenv(SwarmEnvDeliverySkipCheck); v != "" {
skipCheck, err := strconv.ParseBool(v)
if err == nil {
currentConfig.DeliverySkipCheck = skipCheck
}
}
if v := os.Getenv(SwarmEnvSyncUpdateDelay); v != "" {
d, err := time.ParseDuration(v)
if err != nil {
utils.Fatalf("invalid environment variable %s: %v", SwarmEnvSyncUpdateDelay, err)
}
currentConfig.SyncUpdateDelay = d
}
if max := os.Getenv(SwarmEnvMaxStreamPeerServers); max != "" {
m, err := strconv.Atoi(max)
if err != nil {
utils.Fatalf("invalid environment variable %s: %v", SwarmEnvMaxStreamPeerServers, err)
}
currentConfig.MaxStreamPeerServers = m
}
if lne := os.Getenv(SwarmEnvLightNodeEnable); lne != "" {
lightnode, err := strconv.ParseBool(lne)
if err != nil {
utils.Fatalf("invalid environment variable %s: %v", SwarmEnvLightNodeEnable, err)
}
currentConfig.LightNodeEnabled = lightnode
}
if swapapi := os.Getenv(SwarmEnvSwapAPI); swapapi != "" {
currentConfig.SwapAPI = swapapi
}
if currentConfig.SwapEnabled && currentConfig.SwapAPI == "" {
utils.Fatalf(SwarmErrSwapSetNoAPI)
}
if ensapi := os.Getenv(SwarmEnvENSAPI); ensapi != "" {
currentConfig.EnsAPIs = strings.Split(ensapi, ",")
}
if ensaddr := os.Getenv(SwarmEnvENSAddr); ensaddr != "" {
currentConfig.EnsRoot = common.HexToAddress(ensaddr)
}
if cors := os.Getenv(SwarmEnvCORS); cors != "" {
currentConfig.Cors = cors
}
if bm := os.Getenv(SwarmEnvBootnodeMode); bm != "" {
bootnodeMode, err := strconv.ParseBool(bm)
if err != nil {
utils.Fatalf("invalid environment variable %s: %v", SwarmEnvBootnodeMode, err)
}
currentConfig.BootnodeMode = bootnodeMode
}
if api := os.Getenv(SwarmGlobalstoreAPI); api != "" {
currentConfig.GlobalStoreAPI = api
}
return currentConfig
}
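For reference, a quick standalone check of which values the boolean environment variables above accept: strconv.ParseBool recognises 1/0, t/f and true/false in the usual capitalisations, while anything else (e.g. "yes") is rejected with an error.

package main

import (
	"fmt"
	"strconv"
)

func main() {
	for _, v := range []string{"1", "true", "T", "0", "FALSE", "yes"} {
		b, err := strconv.ParseBool(v)
		fmt.Printf("%-5s -> %v (err: %v)\n", v, b, err)
	}
}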
// dumpConfig is the dumpconfig command.
// It writes the assembled configuration to STDOUT.
func dumpConfig(ctx *cli.Context) error {
cfg, err := buildConfig(ctx)
if err != nil {
utils.Fatalf(fmt.Sprintf("Uh oh - dumpconfig triggered an error %v", err))
}
comment := ""
out, err := tomlSettings.Marshal(&cfg)
if err != nil {
return err
}
io.WriteString(os.Stdout, comment)
os.Stdout.Write(out)
return nil
}
//validate configuration parameters
func validateConfig(cfg *bzzapi.Config) (err error) {
for _, ensAPI := range cfg.EnsAPIs {
if ensAPI != "" {
if err := validateEnsAPIs(ensAPI); err != nil {
return fmt.Errorf("invalid format [tld:][contract-addr@]url for ENS API endpoint configuration %q: %v", ensAPI, err)
}
}
}
return nil
}
//validate EnsAPIs configuration parameter
func validateEnsAPIs(s string) (err error) {
// missing contract address
if strings.HasPrefix(s, "@") {
return errors.New("missing contract address")
}
// missing url
if strings.HasSuffix(s, "@") {
return errors.New("missing url")
}
// missing tld
if strings.HasPrefix(s, ":") {
return errors.New("missing tld")
}
// missing url
if strings.HasSuffix(s, ":") {
return errors.New("missing url")
}
return nil
}
//print a Config as string
func printConfig(config *bzzapi.Config) string {
out, err := tomlSettings.Marshal(&config)
if err != nil {
return fmt.Sprintf("Something is not right with the configuration: %v", err)
}
return string(out)
}

View File

@ -1,575 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"io"
"io/ioutil"
"net"
"os"
"os/exec"
"testing"
"time"
"github.com/docker/docker/pkg/reexec"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/swarm"
"github.com/ethereum/go-ethereum/swarm/api"
)
func TestConfigDump(t *testing.T) {
swarm := runSwarm(t, "dumpconfig")
defaultConf := api.NewConfig()
out, err := tomlSettings.Marshal(&defaultConf)
if err != nil {
t.Fatal(err)
}
swarm.Expect(string(out))
swarm.ExpectExit()
}
func TestConfigFailsSwapEnabledNoSwapApi(t *testing.T) {
flags := []string{
fmt.Sprintf("--%s", SwarmNetworkIdFlag.Name), "42",
fmt.Sprintf("--%s", SwarmPortFlag.Name), "54545",
fmt.Sprintf("--%s", SwarmSwapEnabledFlag.Name),
}
swarm := runSwarm(t, flags...)
swarm.Expect("Fatal: " + SwarmErrSwapSetNoAPI + "\n")
swarm.ExpectExit()
}
func TestConfigFailsNoBzzAccount(t *testing.T) {
flags := []string{
fmt.Sprintf("--%s", SwarmNetworkIdFlag.Name), "42",
fmt.Sprintf("--%s", SwarmPortFlag.Name), "54545",
}
swarm := runSwarm(t, flags...)
swarm.Expect("Fatal: " + SwarmErrNoBZZAccount + "\n")
swarm.ExpectExit()
}
func TestConfigCmdLineOverrides(t *testing.T) {
dir, err := ioutil.TempDir("", "bzztest")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
conf, account := getTestAccount(t, dir)
node := &testNode{Dir: dir}
// assign ports
httpPort, err := assignTCPPort()
if err != nil {
t.Fatal(err)
}
flags := []string{
fmt.Sprintf("--%s", SwarmNetworkIdFlag.Name), "42",
fmt.Sprintf("--%s", SwarmPortFlag.Name), httpPort,
fmt.Sprintf("--%s", SwarmSyncDisabledFlag.Name),
fmt.Sprintf("--%s", CorsStringFlag.Name), "*",
fmt.Sprintf("--%s", SwarmAccountFlag.Name), account.Address.String(),
fmt.Sprintf("--%s", SwarmDeliverySkipCheckFlag.Name),
fmt.Sprintf("--%s", EnsAPIFlag.Name), "",
fmt.Sprintf("--%s", utils.DataDirFlag.Name), dir,
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), conf.IPCPath,
}
node.Cmd = runSwarm(t, flags...)
node.Cmd.InputLine(testPassphrase)
defer func() {
if t.Failed() {
node.Shutdown()
}
}()
// wait for the node to start
for start := time.Now(); time.Since(start) < 10*time.Second; time.Sleep(50 * time.Millisecond) {
node.Client, err = rpc.Dial(conf.IPCEndpoint())
if err == nil {
break
}
}
if node.Client == nil {
t.Fatal(err)
}
// load info
var info swarm.Info
if err := node.Client.Call(&info, "bzz_info"); err != nil {
t.Fatal(err)
}
if info.Port != httpPort {
t.Fatalf("Expected port to be %s, got %s", httpPort, info.Port)
}
if info.NetworkID != 42 {
t.Fatalf("Expected network ID to be %d, got %d", 42, info.NetworkID)
}
if info.SyncEnabled {
t.Fatal("Expected Sync to be disabled, but is true")
}
if !info.DeliverySkipCheck {
t.Fatal("Expected DeliverySkipCheck to be enabled, but it is not")
}
if info.Cors != "*" {
t.Fatalf("Expected Cors flag to be set to %s, got %s", "*", info.Cors)
}
node.Shutdown()
}
func TestConfigFileOverrides(t *testing.T) {
// assign ports
httpPort, err := assignTCPPort()
if err != nil {
t.Fatal(err)
}
//create a config file
//first, create a default conf
defaultConf := api.NewConfig()
//change some values in order to test if they have been loaded
defaultConf.SyncEnabled = false
defaultConf.DeliverySkipCheck = true
defaultConf.NetworkID = 54
defaultConf.Port = httpPort
defaultConf.DbCapacity = 9000000
defaultConf.HiveParams.KeepAliveInterval = 6000000000
defaultConf.Swap.Params.Strategy.AutoCashInterval = 600 * time.Second
//defaultConf.SyncParams.KeyBufferSize = 512
//create a TOML string
out, err := tomlSettings.Marshal(&defaultConf)
if err != nil {
t.Fatalf("Error creating TOML file in TestFileOverride: %v", err)
}
//create file
f, err := ioutil.TempFile("", "testconfig.toml")
if err != nil {
t.Fatalf("Error writing TOML file in TestFileOverride: %v", err)
}
//write file
_, err = f.WriteString(string(out))
if err != nil {
t.Fatalf("Error writing TOML file in TestFileOverride: %v", err)
}
f.Sync()
dir, err := ioutil.TempDir("", "bzztest")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
conf, account := getTestAccount(t, dir)
node := &testNode{Dir: dir}
flags := []string{
fmt.Sprintf("--%s", SwarmTomlConfigPathFlag.Name), f.Name(),
fmt.Sprintf("--%s", SwarmAccountFlag.Name), account.Address.String(),
fmt.Sprintf("--%s", EnsAPIFlag.Name), "",
fmt.Sprintf("--%s", utils.DataDirFlag.Name), dir,
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), conf.IPCPath,
}
node.Cmd = runSwarm(t, flags...)
node.Cmd.InputLine(testPassphrase)
defer func() {
if t.Failed() {
node.Shutdown()
}
}()
// wait for the node to start
for start := time.Now(); time.Since(start) < 10*time.Second; time.Sleep(50 * time.Millisecond) {
node.Client, err = rpc.Dial(conf.IPCEndpoint())
if err == nil {
break
}
}
if node.Client == nil {
t.Fatal(err)
}
// load info
var info swarm.Info
if err := node.Client.Call(&info, "bzz_info"); err != nil {
t.Fatal(err)
}
if info.Port != httpPort {
t.Fatalf("Expected port to be %s, got %s", httpPort, info.Port)
}
if info.NetworkID != 54 {
t.Fatalf("Expected network ID to be %d, got %d", 54, info.NetworkID)
}
if info.SyncEnabled {
t.Fatal("Expected Sync to be disabled, but is true")
}
if !info.DeliverySkipCheck {
t.Fatal("Expected DeliverySkipCheck to be enabled, but it is not")
}
if info.DbCapacity != 9000000 {
t.Fatalf("Expected network ID to be %d, got %d", 54, info.NetworkID)
}
if info.HiveParams.KeepAliveInterval != 6000000000 {
t.Fatalf("Expected HiveParams KeepAliveInterval to be %d, got %d", uint64(6000000000), uint64(info.HiveParams.KeepAliveInterval))
}
if info.Swap.Params.Strategy.AutoCashInterval != 600*time.Second {
t.Fatalf("Expected SwapParams AutoCashInterval to be %ds, got %d", 600, info.Swap.Params.Strategy.AutoCashInterval)
}
// if info.SyncParams.KeyBufferSize != 512 {
// t.Fatalf("Expected info.SyncParams.KeyBufferSize to be %d, got %d", 512, info.SyncParams.KeyBufferSize)
// }
node.Shutdown()
}
func TestConfigEnvVars(t *testing.T) {
// assign ports
httpPort, err := assignTCPPort()
if err != nil {
t.Fatal(err)
}
envVars := os.Environ()
envVars = append(envVars, fmt.Sprintf("%s=%s", SwarmPortFlag.EnvVar, httpPort))
envVars = append(envVars, fmt.Sprintf("%s=%s", SwarmNetworkIdFlag.EnvVar, "999"))
envVars = append(envVars, fmt.Sprintf("%s=%s", CorsStringFlag.EnvVar, "*"))
envVars = append(envVars, fmt.Sprintf("%s=%s", SwarmSyncDisabledFlag.EnvVar, "true"))
envVars = append(envVars, fmt.Sprintf("%s=%s", SwarmDeliverySkipCheckFlag.EnvVar, "true"))
dir, err := ioutil.TempDir("", "bzztest")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
conf, account := getTestAccount(t, dir)
node := &testNode{Dir: dir}
flags := []string{
fmt.Sprintf("--%s", SwarmAccountFlag.Name), account.Address.String(),
"--ens-api", "",
"--datadir", dir,
"--ipcpath", conf.IPCPath,
}
//node.Cmd = runSwarm(t,flags...)
//node.Cmd.cmd.Env = envVars
//the above assignment does not work, so we need a custom Cmd here in order to pass envVars:
cmd := &exec.Cmd{
Path: reexec.Self(),
Args: append([]string{"swarm-test"}, flags...),
Stderr: os.Stderr,
Stdout: os.Stdout,
}
cmd.Env = envVars
//stdout, err := cmd.StdoutPipe()
//if err != nil {
// t.Fatal(err)
//}
//stdout = bufio.NewReader(stdout)
var stdin io.WriteCloser
if stdin, err = cmd.StdinPipe(); err != nil {
t.Fatal(err)
}
if err := cmd.Start(); err != nil {
t.Fatal(err)
}
//cmd.InputLine(testPassphrase)
io.WriteString(stdin, testPassphrase+"\n")
defer func() {
if t.Failed() {
node.Shutdown()
cmd.Process.Kill()
}
}()
// wait for the node to start
for start := time.Now(); time.Since(start) < 10*time.Second; time.Sleep(50 * time.Millisecond) {
node.Client, err = rpc.Dial(conf.IPCEndpoint())
if err == nil {
break
}
}
if node.Client == nil {
t.Fatal(err)
}
// load info
var info swarm.Info
if err := node.Client.Call(&info, "bzz_info"); err != nil {
t.Fatal(err)
}
if info.Port != httpPort {
t.Fatalf("Expected port to be %s, got %s", httpPort, info.Port)
}
if info.NetworkID != 999 {
t.Fatalf("Expected network ID to be %d, got %d", 999, info.NetworkID)
}
if info.Cors != "*" {
t.Fatalf("Expected Cors flag to be set to %s, got %s", "*", info.Cors)
}
if info.SyncEnabled {
t.Fatal("Expected Sync to be disabled, but is true")
}
if !info.DeliverySkipCheck {
t.Fatal("Expected DeliverySkipCheck to be enabled, but it is not")
}
node.Shutdown()
cmd.Process.Kill()
}
func TestConfigCmdLineOverridesFile(t *testing.T) {
// assign ports
httpPort, err := assignTCPPort()
if err != nil {
t.Fatal(err)
}
//create a config file
//first, create a default conf
defaultConf := api.NewConfig()
//change some values in order to test if they have been loaded
defaultConf.SyncEnabled = true
defaultConf.NetworkID = 54
defaultConf.Port = "8588"
defaultConf.DbCapacity = 9000000
defaultConf.HiveParams.KeepAliveInterval = 6000000000
defaultConf.Swap.Params.Strategy.AutoCashInterval = 600 * time.Second
//defaultConf.SyncParams.KeyBufferSize = 512
//create a TOML file
out, err := tomlSettings.Marshal(&defaultConf)
if err != nil {
t.Fatalf("Error creating TOML file in TestFileOverride: %v", err)
}
//write file
fname := "testconfig.toml"
f, err := ioutil.TempFile("", fname)
if err != nil {
t.Fatalf("Error writing TOML file in TestFileOverride: %v", err)
}
defer os.Remove(f.Name())
//write file
_, err = f.WriteString(string(out))
if err != nil {
t.Fatalf("Error writing TOML file in TestFileOverride: %v", err)
}
f.Sync()
dir, err := ioutil.TempDir("", "bzztest")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
conf, account := getTestAccount(t, dir)
node := &testNode{Dir: dir}
expectNetworkId := uint64(77)
flags := []string{
fmt.Sprintf("--%s", SwarmNetworkIdFlag.Name), "77",
fmt.Sprintf("--%s", SwarmPortFlag.Name), httpPort,
fmt.Sprintf("--%s", SwarmSyncDisabledFlag.Name),
fmt.Sprintf("--%s", SwarmTomlConfigPathFlag.Name), f.Name(),
fmt.Sprintf("--%s", SwarmAccountFlag.Name), account.Address.String(),
fmt.Sprintf("--%s", EnsAPIFlag.Name), "",
fmt.Sprintf("--%s", utils.DataDirFlag.Name), dir,
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), conf.IPCPath,
}
node.Cmd = runSwarm(t, flags...)
node.Cmd.InputLine(testPassphrase)
defer func() {
if t.Failed() {
node.Shutdown()
}
}()
// wait for the node to start
for start := time.Now(); time.Since(start) < 10*time.Second; time.Sleep(50 * time.Millisecond) {
node.Client, err = rpc.Dial(conf.IPCEndpoint())
if err == nil {
break
}
}
if node.Client == nil {
t.Fatal(err)
}
// load info
var info swarm.Info
if err := node.Client.Call(&info, "bzz_info"); err != nil {
t.Fatal(err)
}
if info.Port != httpPort {
t.Fatalf("Expected port to be %s, got %s", httpPort, info.Port)
}
if info.NetworkID != expectNetworkId {
t.Fatalf("Expected network ID to be %d, got %d", expectNetworkId, info.NetworkID)
}
if info.SyncEnabled {
t.Fatal("Expected Sync to be disabled, but is true")
}
if info.DbCapacity != 9000000 {
t.Fatalf("Expected Capacity to be %d, got %d", 9000000, info.DbCapacity)
}
if info.HiveParams.KeepAliveInterval != 6000000000 {
t.Fatalf("Expected HiveParams KeepAliveInterval to be %d, got %d", uint64(6000000000), uint64(info.HiveParams.KeepAliveInterval))
}
if info.Swap.Params.Strategy.AutoCashInterval != 600*time.Second {
t.Fatalf("Expected SwapParams AutoCashInterval to be %ds, got %d", 600, info.Swap.Params.Strategy.AutoCashInterval)
}
// if info.SyncParams.KeyBufferSize != 512 {
// t.Fatalf("Expected info.SyncParams.KeyBufferSize to be %d, got %d", 512, info.SyncParams.KeyBufferSize)
// }
node.Shutdown()
}
func TestConfigValidate(t *testing.T) {
for _, c := range []struct {
cfg *api.Config
err string
}{
{
cfg: &api.Config{EnsAPIs: []string{
"/data/testnet/geth.ipc",
}},
},
{
cfg: &api.Config{EnsAPIs: []string{
"http://127.0.0.1:1234",
}},
},
{
cfg: &api.Config{EnsAPIs: []string{
"ws://127.0.0.1:1234",
}},
},
{
cfg: &api.Config{EnsAPIs: []string{
"test:/data/testnet/geth.ipc",
}},
},
{
cfg: &api.Config{EnsAPIs: []string{
"test:ws://127.0.0.1:1234",
}},
},
{
cfg: &api.Config{EnsAPIs: []string{
"314159265dD8dbb310642f98f50C066173C1259b@/data/testnet/geth.ipc",
}},
},
{
cfg: &api.Config{EnsAPIs: []string{
"314159265dD8dbb310642f98f50C066173C1259b@http://127.0.0.1:1234",
}},
},
{
cfg: &api.Config{EnsAPIs: []string{
"314159265dD8dbb310642f98f50C066173C1259b@ws://127.0.0.1:1234",
}},
},
{
cfg: &api.Config{EnsAPIs: []string{
"test:314159265dD8dbb310642f98f50C066173C1259b@/data/testnet/geth.ipc",
}},
},
{
cfg: &api.Config{EnsAPIs: []string{
"eth:314159265dD8dbb310642f98f50C066173C1259b@http://127.0.0.1:1234",
}},
},
{
cfg: &api.Config{EnsAPIs: []string{
"eth:314159265dD8dbb310642f98f50C066173C1259b@ws://127.0.0.1:12344",
}},
},
{
cfg: &api.Config{EnsAPIs: []string{
"eth:",
}},
err: "invalid format [tld:][contract-addr@]url for ENS API endpoint configuration \"eth:\": missing url",
},
{
cfg: &api.Config{EnsAPIs: []string{
"314159265dD8dbb310642f98f50C066173C1259b@",
}},
err: "invalid format [tld:][contract-addr@]url for ENS API endpoint configuration \"314159265dD8dbb310642f98f50C066173C1259b@\": missing url",
},
{
cfg: &api.Config{EnsAPIs: []string{
":314159265dD8dbb310642f98f50C066173C1259",
}},
err: "invalid format [tld:][contract-addr@]url for ENS API endpoint configuration \":314159265dD8dbb310642f98f50C066173C1259\": missing tld",
},
{
cfg: &api.Config{EnsAPIs: []string{
"@/data/testnet/geth.ipc",
}},
err: "invalid format [tld:][contract-addr@]url for ENS API endpoint configuration \"@/data/testnet/geth.ipc\": missing contract address",
},
} {
err := validateConfig(c.cfg)
if c.err != "" && err.Error() != c.err {
t.Errorf("expected error %q, got %q", c.err, err)
}
if c.err == "" && err != nil {
t.Errorf("unexpected error %q", err)
}
}
}
func assignTCPPort() (string, error) {
l, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
return "", err
}
l.Close()
_, port, err := net.SplitHostPort(l.Addr().String())
if err != nil {
return "", err
}
return port, nil
}

View File

@ -1,239 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"archive/tar"
"bytes"
"encoding/binary"
"encoding/hex"
"fmt"
"io"
"os"
"path/filepath"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/swarm/chunk"
"github.com/ethereum/go-ethereum/swarm/storage/localstore"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
"gopkg.in/urfave/cli.v1"
)
var legacyKeyIndex = byte(0)
var keyData = byte(6)
type dpaDBIndex struct {
Idx uint64
Access uint64
}
var dbCommand = cli.Command{
Name: "db",
CustomHelpTemplate: helpTemplate,
Usage: "manage the local chunk database",
ArgsUsage: "db COMMAND",
Description: "Manage the local chunk database",
Subcommands: []cli.Command{
{
Action: dbExport,
CustomHelpTemplate: helpTemplate,
Name: "export",
Usage: "export a local chunk database as a tar archive (use - to send to stdout)",
ArgsUsage: "<chunkdb> <file>",
Description: `
Export a local chunk database as a tar archive (use - to send to stdout).
swarm db export ~/.ethereum/swarm/bzz-KEY/chunks chunks.tar
The export may be quite large, consider piping the output through the Unix
pv(1) tool to get a progress bar:
swarm db export ~/.ethereum/swarm/bzz-KEY/chunks - | pv > chunks.tar
`,
},
{
Action: dbImport,
CustomHelpTemplate: helpTemplate,
Name: "import",
Usage: "import chunks from a tar archive into a local chunk database (use - to read from stdin)",
ArgsUsage: "<chunkdb> <file>",
Description: `Import chunks from a tar archive into a local chunk database (use - to read from stdin).
swarm db import ~/.ethereum/swarm/bzz-KEY/chunks chunks.tar
The import may be quite large, consider piping the input through the Unix
pv(1) tool to get a progress bar:
pv chunks.tar | swarm db import ~/.ethereum/swarm/bzz-KEY/chunks -`,
Flags: []cli.Flag{
SwarmLegacyFlag,
},
},
},
}
func dbExport(ctx *cli.Context) {
args := ctx.Args()
if len(args) != 3 {
utils.Fatalf("invalid arguments, please specify both <chunkdb> (path to a local chunk database), <file> (path to write the tar archive to, - for stdout) and the base key")
}
var out io.Writer
if args[1] == "-" {
out = os.Stdout
} else {
f, err := os.Create(args[1])
if err != nil {
utils.Fatalf("error opening output file: %s", err)
}
defer f.Close()
out = f
}
isLegacy := localstore.IsLegacyDatabase(args[0])
if isLegacy {
count, err := exportLegacy(args[0], common.Hex2Bytes(args[2]), out)
if err != nil {
utils.Fatalf("error exporting legacy local chunk database: %s", err)
}
log.Info(fmt.Sprintf("successfully exported %d chunks from legacy db", count))
return
}
store, err := openLDBStore(args[0], common.Hex2Bytes(args[2]))
if err != nil {
utils.Fatalf("error opening local chunk database: %s", err)
}
defer store.Close()
count, err := store.Export(out)
if err != nil {
utils.Fatalf("error exporting local chunk database: %s", err)
}
log.Info(fmt.Sprintf("successfully exported %d chunks", count))
}
func dbImport(ctx *cli.Context) {
args := ctx.Args()
if len(args) != 3 {
utils.Fatalf("invalid arguments, please specify both <chunkdb> (path to a local chunk database), <file> (path to read the tar archive from, - for stdin) and the base key")
}
legacy := ctx.IsSet(SwarmLegacyFlag.Name)
store, err := openLDBStore(args[0], common.Hex2Bytes(args[2]))
if err != nil {
utils.Fatalf("error opening local chunk database: %s", err)
}
defer store.Close()
var in io.Reader
if args[1] == "-" {
in = os.Stdin
} else {
f, err := os.Open(args[1])
if err != nil {
utils.Fatalf("error opening input file: %s", err)
}
defer f.Close()
in = f
}
count, err := store.Import(in, legacy)
if err != nil {
utils.Fatalf("error importing local chunk database: %s", err)
}
log.Info(fmt.Sprintf("successfully imported %d chunks", count))
}
func openLDBStore(path string, basekey []byte) (*localstore.DB, error) {
if _, err := os.Stat(filepath.Join(path, "CURRENT")); err != nil {
return nil, fmt.Errorf("invalid chunkdb path: %s", err)
}
return localstore.New(path, basekey, nil)
}
func decodeIndex(data []byte, index *dpaDBIndex) error {
dec := rlp.NewStream(bytes.NewReader(data), 0)
return dec.Decode(index)
}
func getDataKey(idx uint64, po uint8) []byte {
key := make([]byte, 10)
key[0] = keyData
key[1] = po
binary.BigEndian.PutUint64(key[2:], idx)
return key
}
func exportLegacy(path string, basekey []byte, out io.Writer) (int64, error) {
tw := tar.NewWriter(out)
defer tw.Close()
db, err := leveldb.OpenFile(path, &opt.Options{OpenFilesCacheCapacity: 128})
if err != nil {
return 0, err
}
defer db.Close()
it := db.NewIterator(nil, nil)
defer it.Release()
var count int64
for ok := it.Seek([]byte{legacyKeyIndex}); ok; ok = it.Next() {
key := it.Key()
if (key == nil) || (key[0] != legacyKeyIndex) {
break
}
var index dpaDBIndex
hash := key[1:]
decodeIndex(it.Value(), &index)
po := uint8(chunk.Proximity(basekey, hash))
datakey := getDataKey(index.Idx, po)
data, err := db.Get(datakey, nil)
if err != nil {
log.Crit(fmt.Sprintf("Chunk %x found but could not be accessed: %v, %x", key, err, datakey))
continue
}
hdr := &tar.Header{
Name: hex.EncodeToString(hash),
Mode: 0644,
Size: int64(len(data)),
}
if err := tw.WriteHeader(hdr); err != nil {
return count, err
}
if _, err := tw.Write(data); err != nil {
return count, err
}
count++
}
return count, nil
}
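
For readers following the export path above, here is a minimal, self-contained sketch of the 10-byte data key layout that getDataKey constructs: byte 0 is the key type, byte 1 the proximity order, and bytes 2-9 the big-endian chunk index. The concrete values below are illustrative only.

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    // Mirror of getDataKey above: [keyData | proximity order | 8-byte big-endian index].
    const keyData = byte(6)
    key := make([]byte, 10)
    key[0] = keyData
    key[1] = 7                              // proximity order (illustrative)
    binary.BigEndian.PutUint64(key[2:], 42) // chunk index (illustrative)
    fmt.Printf("type=%d po=%d idx=%d\n", key[0], key[1], binary.BigEndian.Uint64(key[2:]))
}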


@ -1,112 +0,0 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/swarm/api"
swarm "github.com/ethereum/go-ethereum/swarm/api/client"
"gopkg.in/urfave/cli.v1"
)
var downloadCommand = cli.Command{
Action: download,
Name: "down",
Flags: []cli.Flag{SwarmRecursiveFlag, SwarmAccessPasswordFlag},
Usage: "downloads a swarm manifest or a file inside a manifest",
ArgsUsage: " <uri> [<dir>]",
Description: `Downloads a swarm bzz URI to the given dir. When no dir is provided, the working directory is assumed. The --recursive flag is expected when downloading a manifest with multiple entries.`,
}
func download(ctx *cli.Context) {
log.Debug("downloading content using swarm down")
args := ctx.Args()
dest := "."
switch len(args) {
case 0:
utils.Fatalf("Usage: swarm down [options] <bzz locator> [<destination path>]")
case 1:
log.Trace(fmt.Sprintf("swarm down: no destination path - assuming working dir"))
default:
log.Trace(fmt.Sprintf("destination path arg: %s", args[1]))
if absDest, err := filepath.Abs(args[1]); err == nil {
dest = absDest
} else {
utils.Fatalf("could not get download path: %v", err)
}
}
var (
bzzapi = strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
isRecursive = ctx.Bool(SwarmRecursiveFlag.Name)
client = swarm.NewClient(bzzapi)
)
if fi, err := os.Stat(dest); err == nil {
if isRecursive && !fi.Mode().IsDir() {
utils.Fatalf("destination path is not a directory!")
}
} else {
if !os.IsNotExist(err) {
utils.Fatalf("could not stat path: %v", err)
}
}
uri, err := api.Parse(args[0])
if err != nil {
utils.Fatalf("could not parse uri argument: %v", err)
}
dl := func(credentials string) error {
// assume behaviour according to --recursive switch
if isRecursive {
if err := client.DownloadDirectory(uri.Addr, uri.Path, dest, credentials); err != nil {
if err == swarm.ErrUnauthorized {
return err
}
return fmt.Errorf("directory %s: %v", uri.Path, err)
}
} else {
// we are downloading a file
log.Debug("downloading file/path from a manifest", "uri.Addr", uri.Addr, "uri.Path", uri.Path)
err := client.DownloadFile(uri.Addr, uri.Path, dest, credentials)
if err != nil {
if err == swarm.ErrUnauthorized {
return err
}
return fmt.Errorf("file %s from address: %s: %v", uri.Path, uri.Addr, err)
}
}
return nil
}
if passwords := makePasswordList(ctx); passwords != nil {
password := getPassPhrase(fmt.Sprintf("Downloading %s is restricted", uri), 0, passwords)
err = dl(password)
} else {
err = dl("")
}
if err != nil {
utils.Fatalf("download: %v", err)
}
}
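
For reference, the same client calls that the down command chains together can also be used directly; the sketch below assumes a local Swarm HTTP gateway at http://127.0.0.1:8500 and uses a placeholder bzz locator, relying only on the api.Parse and client.DownloadFile calls seen above.

package main

import (
    "log"

    "github.com/ethereum/go-ethereum/swarm/api"
    swarm "github.com/ethereum/go-ethereum/swarm/api/client"
)

func main() {
    client := swarm.NewClient("http://127.0.0.1:8500") // assumed local gateway
    uri, err := api.Parse("bzz:/<hash>/index.html")    // placeholder locator
    if err != nil {
        log.Fatalf("could not parse uri: %v", err)
    }
    // Empty credentials; pass a password for access-controlled content.
    if err := client.DownloadFile(uri.Addr, uri.Path, ".", ""); err != nil {
        log.Fatalf("download failed: %v", err)
    }
}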


@ -1,60 +0,0 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// The hashes command prints all chunk hashes of a file to stdout.
package main
import (
"context"
"fmt"
"os"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/swarm/chunk"
"github.com/ethereum/go-ethereum/swarm/storage"
"gopkg.in/urfave/cli.v1"
)
var hashesCommand = cli.Command{
Action: hashes,
CustomHelpTemplate: helpTemplate,
Name: "hashes",
Usage: "print all hashes of a file to STDOUT",
ArgsUsage: "<file>",
Description: "Prints all hashes of a file to STDOUT",
}
func hashes(ctx *cli.Context) {
args := ctx.Args()
if len(args) < 1 {
utils.Fatalf("Usage: swarm hashes <file name>")
}
f, err := os.Open(args[0])
if err != nil {
utils.Fatalf("Error opening file " + args[1])
}
defer f.Close()
fileStore := storage.NewFileStore(&storage.FakeChunkStore{}, storage.NewFileStoreParams(), chunk.NewTags())
refs, err := fileStore.GetAllReferences(context.TODO(), f, false)
if err != nil {
utils.Fatalf("%v\n", err)
} else {
for _, r := range refs {
fmt.Println(r.String())
}
}
}


@ -1,287 +0,0 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"archive/tar"
"bytes"
"compress/gzip"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"io"
"io/ioutil"
"net/http"
"os"
"path"
"runtime"
"strings"
"testing"
"github.com/ethereum/go-ethereum/cmd/swarm/testdata"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/swarm"
"github.com/ethereum/go-ethereum/swarm/testutil"
)
const (
DATABASE_FIXTURE_BZZ_ACCOUNT = "0aa159029fa13ffa8fa1c6fff6ebceface99d6a4"
DATABASE_FIXTURE_PASSWORD = "pass"
FIXTURE_DATADIR_PREFIX = "swarm/bzz-0aa159029fa13ffa8fa1c6fff6ebceface99d6a4"
FixtureBaseKey = "a9f22b3d77b4bdf5f3eefce995d6c8e7cecf2636f20956f08a0d1ed95adb52ad"
)
// TestCLISwarmExportImport performs the following test:
// 1. runs swarm node
// 2. uploads a random file
// 3. runs an export of the local datastore
// 4. runs a second swarm node
// 5. imports the exported datastore
// 6. fetches the uploaded random file from the second node
func TestCLISwarmExportImport(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
cluster := newTestCluster(t, 1)
// generate random 1mb file
content := testutil.RandomBytes(1, 1000000)
fileName := testutil.TempFileWithContent(t, string(content))
defer os.Remove(fileName)
// upload the file with 'swarm up' and expect a hash
up := runSwarm(t, "--bzzapi", cluster.Nodes[0].URL, "up", fileName)
_, matches := up.ExpectRegexp(`[a-f\d]{64}`)
up.ExpectExit()
hash := matches[0]
var info swarm.Info
if err := cluster.Nodes[0].Client.Call(&info, "bzz_info"); err != nil {
t.Fatal(err)
}
cluster.Stop()
defer cluster.Cleanup()
// generate an export.tar
exportCmd := runSwarm(t, "db", "export", info.Path+"/chunks", info.Path+"/export.tar", strings.TrimPrefix(info.BzzKey, "0x"))
exportCmd.ExpectExit()
// start second cluster
cluster2 := newTestCluster(t, 1)
var info2 swarm.Info
if err := cluster2.Nodes[0].Client.Call(&info2, "bzz_info"); err != nil {
t.Fatal(err)
}
// stop second cluster, so that we close LevelDB
cluster2.Stop()
defer cluster2.Cleanup()
// import the export.tar
importCmd := runSwarm(t, "db", "import", info2.Path+"/chunks", info.Path+"/export.tar", strings.TrimPrefix(info2.BzzKey, "0x"))
importCmd.ExpectExit()
// spin second cluster back up
cluster2.StartExistingNodes(t, 1, strings.TrimPrefix(info2.BzzAccount, "0x"))
// try to fetch imported file
res, err := http.Get(cluster2.Nodes[0].URL + "/bzz:/" + hash)
if err != nil {
t.Fatal(err)
}
if res.StatusCode != 200 {
t.Fatalf("expected HTTP status %d, got %s", 200, res.Status)
}
// compare downloaded file with the generated random file
mustEqualFiles(t, bytes.NewReader(content), res.Body)
}
// TestExportLegacyToNew checks that an old database gets imported correctly into the new localstore structure
// The test sequence is as follows:
// 1. unpack database fixture to tmp dir
// 2. try to open with new swarm binary that should complain about old database
// 3. export from old database
// 4. remove the chunks folder
// 5. import the dump
// 6. file should be accessible
func TestExportLegacyToNew(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip() // this should be re-enabled once the underlying AppVeyor test issue is fixed
}
/*
fixture bzz account 0aa159029fa13ffa8fa1c6fff6ebceface99d6a4
*/
const UPLOADED_FILE_MD5_HASH = "a001fdae53ba50cae584b8b02b06f821"
const UPLOADED_HASH = "67a86082ee0ea1bc7dd8d955bb1e14d04f61d55ae6a4b37b3d0296a3a95e454a"
tmpdir, err := ioutil.TempDir("", "swarm-test")
log.Trace("running legacy datastore migration test", "temp dir", tmpdir)
defer os.RemoveAll(tmpdir)
if err != nil {
t.Fatal(err)
}
inflateBase64Gzip(t, testdata.DATADIR_MIGRATION_FIXTURE, tmpdir)
tmpPassword := testutil.TempFileWithContent(t, DATABASE_FIXTURE_PASSWORD)
defer os.Remove(tmpPassword)
flags := []string{
"--datadir", tmpdir,
"--bzzaccount", DATABASE_FIXTURE_BZZ_ACCOUNT,
"--password", tmpPassword,
}
newSwarmOldDb := runSwarm(t, flags...)
_, matches := newSwarmOldDb.ExpectRegexp(".+")
newSwarmOldDb.ExpectExit()
if len(matches) == 0 {
t.Fatalf("stdout not matched")
}
if newSwarmOldDb.ExitStatus() == 0 {
t.Fatal("should error")
}
t.Log("exporting legacy database")
actualDataDir := path.Join(tmpdir, FIXTURE_DATADIR_PREFIX)
exportCmd := runSwarm(t, "--verbosity", "5", "db", "export", actualDataDir+"/chunks", tmpdir+"/export.tar", FixtureBaseKey)
exportCmd.ExpectExit()
stat, err := os.Stat(tmpdir + "/export.tar")
if err != nil {
t.Fatal(err)
}
// make some silly size assumption
if stat.Size() < 90000 {
t.Fatal("export size too small")
}
log.Info("removing chunk datadir")
err = os.RemoveAll(path.Join(actualDataDir, "chunks"))
if err != nil {
t.Fatal(err)
}
// start second cluster
cluster2 := newTestCluster(t, 1)
var info2 swarm.Info
if err := cluster2.Nodes[0].Client.Call(&info2, "bzz_info"); err != nil {
t.Fatal(err)
}
// stop second cluster, so that we close LevelDB
cluster2.Stop()
defer cluster2.Cleanup()
// import the export.tar
importCmd := runSwarm(t, "db", "import", "--legacy", info2.Path+"/chunks", tmpdir+"/export.tar", strings.TrimPrefix(info2.BzzKey, "0x"))
importCmd.ExpectExit()
// spin second cluster back up
cluster2.StartExistingNodes(t, 1, strings.TrimPrefix(info2.BzzAccount, "0x"))
t.Log("trying to http get the file")
// try to fetch imported file
res, err := http.Get(cluster2.Nodes[0].URL + "/bzz:/" + UPLOADED_HASH)
if err != nil {
t.Fatal(err)
}
if res.StatusCode != 200 {
t.Fatalf("expected HTTP status %d, got %s", 200, res.Status)
}
h := md5.New()
if _, err := io.Copy(h, res.Body); err != nil {
t.Fatal(err)
}
sum := h.Sum(nil)
b, err := hex.DecodeString(UPLOADED_FILE_MD5_HASH)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(sum, b) {
t.Fatal("should be equal")
}
}
func mustEqualFiles(t *testing.T, up io.Reader, down io.Reader) {
h := md5.New()
upLen, err := io.Copy(h, up)
if err != nil {
t.Fatal(err)
}
upHash := h.Sum(nil)
h.Reset()
downLen, err := io.Copy(h, down)
if err != nil {
t.Fatal(err)
}
downHash := h.Sum(nil)
if !bytes.Equal(upHash, downHash) || upLen != downLen {
t.Fatalf("downloaded imported file md5=%x (length %v) is not the same as the generated one mp5=%x (length %v)", downHash, downLen, upHash, upLen)
}
}
func inflateBase64Gzip(t *testing.T, base64File, directory string) {
t.Helper()
f := base64.NewDecoder(base64.StdEncoding, strings.NewReader(base64File))
gzf, err := gzip.NewReader(f)
if err != nil {
t.Fatal(err)
}
tarReader := tar.NewReader(gzf)
for {
header, err := tarReader.Next()
if err == io.EOF {
break
}
if err != nil {
t.Fatal(err)
}
name := header.Name
switch header.Typeflag {
case tar.TypeDir:
err := os.Mkdir(path.Join(directory, name), os.ModePerm)
if err != nil {
t.Fatal(err)
}
case tar.TypeReg:
file, err := os.Create(path.Join(directory, name))
if err != nil {
t.Fatal(err)
}
if _, err := io.Copy(file, tarReader); err != nil {
t.Fatal(err)
}
file.Close()
default:
t.Fatal("shouldn't happen")
}
}
}


@ -1,238 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// Command feed allows the user to create and update signed Swarm feeds
package main
import (
"fmt"
"strings"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/cmd/utils"
swarm "github.com/ethereum/go-ethereum/swarm/api/client"
"github.com/ethereum/go-ethereum/swarm/storage/feed"
"gopkg.in/urfave/cli.v1"
)
var feedCommand = cli.Command{
CustomHelpTemplate: helpTemplate,
Name: "feed",
Usage: "(Advanced) Create and update Swarm Feeds",
ArgsUsage: "<create|update|info>",
Description: "Works with Swarm Feeds",
Subcommands: []cli.Command{
{
Action: feedCreateManifest,
CustomHelpTemplate: helpTemplate,
Name: "create",
Usage: "creates and publishes a new feed manifest",
Description: `creates and publishes a new feed manifest pointing to a specified user's updates about a particular topic.
The feed topic can be built in the following ways:
* use --topic to set the topic to an arbitrary binary hex string.
* use --name to set the topic to a human-readable name.
For example, --name could be set to "profile-picture", meaning this feed allows retrieval of this user's current profile picture.
* use both --topic and --name to create named subtopics.
For example, --topic could be set to an Ethereum contract address and --name could be set to "comments", meaning
this feed tracks a discussion about that contract.
The --user flag allows this manifest to refer to a user other than yourself. If not specified,
it defaults to your local account (--bzzaccount)`,
Flags: []cli.Flag{SwarmFeedNameFlag, SwarmFeedTopicFlag, SwarmFeedUserFlag},
},
{
Action: feedUpdate,
CustomHelpTemplate: helpTemplate,
Name: "update",
Usage: "updates the content of an existing Swarm Feed",
ArgsUsage: "<0x Hex data>",
Description: `publishes a new update on the specified topic
The feed topic can be built in the following ways:
* use --topic to set the topic to an arbitrary binary hex string.
* use --name to set the topic to a human-readable name.
For example, --name could be set to "profile-picture", meaning this feed allows retrieval of this user's current profile picture.
* use both --topic and --name to create named subtopics.
For example, --topic could be set to an Ethereum contract address and --name could be set to "comments", meaning
this feed tracks a discussion about that contract.
If you have a manifest, you can specify it with --manifest to refer to the feed,
instead of using --topic / --name
`,
Flags: []cli.Flag{SwarmFeedManifestFlag, SwarmFeedNameFlag, SwarmFeedTopicFlag},
},
{
Action: feedInfo,
CustomHelpTemplate: helpTemplate,
Name: "info",
Usage: "obtains information about an existing Swarm feed",
Description: `obtains information about an existing Swarm feed
The topic can be specified directly with the --topic flag as a hex string
If no topic is specified, the default topic (zero) will be used
The --name flag can be used to specify subtopics with a specific name.
The --user flag allows referring to a user other than yourself. If not specified,
it defaults to your local account (--bzzaccount).
If you have a manifest, you can specify it with --manifest instead of --topic / --name / --user
to refer to the feed`,
Flags: []cli.Flag{SwarmFeedManifestFlag, SwarmFeedNameFlag, SwarmFeedTopicFlag, SwarmFeedUserFlag},
},
},
}
func NewGenericSigner(ctx *cli.Context) feed.Signer {
return feed.NewGenericSigner(getPrivKey(ctx))
}
func getTopic(ctx *cli.Context) (topic feed.Topic) {
var name = ctx.String(SwarmFeedNameFlag.Name)
var relatedTopic = ctx.String(SwarmFeedTopicFlag.Name)
var relatedTopicBytes []byte
var err error
if relatedTopic != "" {
relatedTopicBytes, err = hexutil.Decode(relatedTopic)
if err != nil {
utils.Fatalf("Error parsing topic: %s", err)
}
}
topic, err = feed.NewTopic(name, relatedTopicBytes)
if err != nil {
utils.Fatalf("Error parsing topic: %s", err)
}
return topic
}
// swarm feed create [--name <name>] [--topic <0x Hex topic>] [--user <user address>]
// swarm feed update [--manifest <Manifest Address or ENS domain>] [--name <name>] [--topic <0x Hex topic>] <0x Hex data>
// swarm feed info [--manifest <Manifest Address or ENS domain>] [--name <name>] [--topic <0x Hex topic>] [--user <user address>]
func feedCreateManifest(ctx *cli.Context) {
var (
bzzapi = strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
client = swarm.NewClient(bzzapi)
)
newFeedUpdateRequest := feed.NewFirstRequest(getTopic(ctx))
newFeedUpdateRequest.Feed.User = feedGetUser(ctx)
manifestAddress, err := client.CreateFeedWithManifest(newFeedUpdateRequest)
if err != nil {
utils.Fatalf("Error creating feed manifest: %s", err.Error())
return
}
fmt.Println(manifestAddress) // output manifest address to the user in a single line (useful for other commands to pick up)
}
func feedUpdate(ctx *cli.Context) {
args := ctx.Args()
var (
bzzapi = strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
client = swarm.NewClient(bzzapi)
manifestAddressOrDomain = ctx.String(SwarmFeedManifestFlag.Name)
)
if len(args) < 1 {
fmt.Println("Incorrect number of arguments")
cli.ShowCommandHelpAndExit(ctx, "update", 1)
return
}
signer := NewGenericSigner(ctx)
data, err := hexutil.Decode(args[0])
if err != nil {
utils.Fatalf("Error parsing data: %s", err.Error())
return
}
var updateRequest *feed.Request
var query *feed.Query
if manifestAddressOrDomain == "" {
query = new(feed.Query)
query.User = signer.Address()
query.Topic = getTopic(ctx)
}
// Retrieve a feed update request
updateRequest, err = client.GetFeedRequest(query, manifestAddressOrDomain)
if err != nil {
utils.Fatalf("Error retrieving feed status: %s", err.Error())
}
// Check that the provided signer matches the request to sign
if updateRequest.User != signer.Address() {
utils.Fatalf("Signer address does not match the update request")
}
// set the new data
updateRequest.SetData(data)
// sign update
if err = updateRequest.Sign(signer); err != nil {
utils.Fatalf("Error signing feed update: %s", err.Error())
}
// post update
err = client.UpdateFeed(updateRequest)
if err != nil {
utils.Fatalf("Error updating feed: %s", err.Error())
return
}
}
func feedInfo(ctx *cli.Context) {
var (
bzzapi = strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
client = swarm.NewClient(bzzapi)
manifestAddressOrDomain = ctx.String(SwarmFeedManifestFlag.Name)
)
var query *feed.Query
if manifestAddressOrDomain == "" {
query = new(feed.Query)
query.Topic = getTopic(ctx)
query.User = feedGetUser(ctx)
}
metadata, err := client.GetFeedRequest(query, manifestAddressOrDomain)
if err != nil {
utils.Fatalf("Error retrieving feed metadata: %s", err.Error())
return
}
encodedMetadata, err := metadata.MarshalJSON()
if err != nil {
utils.Fatalf("Error encoding metadata to JSON for display:%s", err)
}
fmt.Println(string(encodedMetadata))
}
func feedGetUser(ctx *cli.Context) common.Address {
var user = ctx.String(SwarmFeedUserFlag.Name)
if user != "" {
return common.HexToAddress(user)
}
pk := getPrivKey(ctx)
if pk == nil {
utils.Fatalf("Cannot read private key. Must specify --user or --bzzaccount")
}
return crypto.PubkeyToAddress(pk.PublicKey)
}
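
To make the --topic/--name composition described in the help text above more concrete, the sketch below builds a named subtopic with feed.NewTopic, as getTopic does; the related bytes stand in for a contract address and are illustrative.

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/common/hexutil"
    "github.com/ethereum/go-ethereum/swarm/storage/feed"
)

func main() {
    // A named subtopic: the related bytes could be a contract address and the
    // name a label such as "comments". Values are illustrative.
    related, err := hexutil.Decode("0x00000000000000000000000000000000000000aa")
    if err != nil {
        panic(err)
    }
    topic, err := feed.NewTopic("comments", related)
    if err != nil {
        panic(err)
    }
    fmt.Println("feed topic:", topic.Hex())
}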


@ -1,196 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"encoding/json"
"io/ioutil"
"os"
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/swarm/api"
swarm "github.com/ethereum/go-ethereum/swarm/api/client"
swarmhttp "github.com/ethereum/go-ethereum/swarm/api/http"
"github.com/ethereum/go-ethereum/swarm/storage/feed"
"github.com/ethereum/go-ethereum/swarm/storage/feed/lookup"
"github.com/ethereum/go-ethereum/swarm/testutil"
)
func TestCLIFeedUpdate(t *testing.T) {
srv := swarmhttp.NewTestSwarmServer(t, func(api *api.API) swarmhttp.TestServer {
return swarmhttp.NewServer(api, "")
}, nil)
log.Info("starting a test swarm server")
defer srv.Close()
// create a private key file for signing
privkeyHex := "0000000000000000000000000000000000000000000000000000000000001979"
privKey, _ := crypto.HexToECDSA(privkeyHex)
address := crypto.PubkeyToAddress(privKey.PublicKey)
pkFileName := testutil.TempFileWithContent(t, privkeyHex)
defer os.Remove(pkFileName)
// compose a topic. We'll be doing quotes about Miguel de Cervantes
var topic feed.Topic
subject := []byte("Miguel de Cervantes")
copy(topic[:], subject[:])
name := "quotes"
// prepare some data for the update
data := []byte("En boca cerrada no entran moscas")
hexData := hexutil.Encode(data)
flags := []string{
"--bzzapi", srv.URL,
"--bzzaccount", pkFileName,
"feed", "update",
"--topic", topic.Hex(),
"--name", name,
hexData}
// create an update and expect an exit without errors
log.Info("updating a feed with 'swarm feed update'")
cmd := runSwarm(t, flags...)
cmd.ExpectExit()
// now try to get the update using the client
client := swarm.NewClient(srv.URL)
// build the same topic as before, this time
// we use NewTopic to create a topic automatically.
topic, err := feed.NewTopic(name, subject)
if err != nil {
t.Fatal(err)
}
// Feed configures whose updates we will be looking up.
fd := feed.Feed{
Topic: topic,
User: address,
}
// Build a query to get the latest update
query := feed.NewQueryLatest(&fd, lookup.NoClue)
// retrieve content!
reader, err := client.QueryFeed(query, "")
if err != nil {
t.Fatal(err)
}
retrieved, err := ioutil.ReadAll(reader)
if err != nil {
t.Fatal(err)
}
// check we retrieved the sent information
if !bytes.Equal(data, retrieved) {
t.Fatalf("Received %s, expected %s", retrieved, data)
}
// Now retrieve info for the next update
flags = []string{
"--bzzapi", srv.URL,
"feed", "info",
"--topic", topic.Hex(),
"--user", address.Hex(),
}
log.Info("getting feed info with 'swarm feed info'")
cmd = runSwarm(t, flags...)
_, matches := cmd.ExpectRegexp(`.*`) // regex hack to extract stdout
cmd.ExpectExit()
// verify we can deserialize the result as a valid JSON
var request feed.Request
err = json.Unmarshal([]byte(matches[0]), &request)
if err != nil {
t.Fatal(err)
}
// make sure the retrieved feed is the same
if request.Feed != fd {
t.Fatalf("Expected feed to be: %s, got %s", fd, request.Feed)
}
// test publishing a manifest
flags = []string{
"--bzzapi", srv.URL,
"--bzzaccount", pkFileName,
"feed", "create",
"--topic", topic.Hex(),
}
log.Info("Publishing manifest with 'swarm feed create'")
cmd = runSwarm(t, flags...)
_, matches = cmd.ExpectRegexp(`[a-f\d]{64}`)
cmd.ExpectExit()
manifestAddress := matches[0] // read the received feed manifest
// now attempt to lookup the latest update using a manifest instead
reader, err = client.QueryFeed(nil, manifestAddress)
if err != nil {
t.Fatal(err)
}
retrieved, err = ioutil.ReadAll(reader)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(data, retrieved) {
t.Fatalf("Received %s, expected %s", retrieved, data)
}
// test publishing a manifest for a different user
flags = []string{
"--bzzapi", srv.URL,
"feed", "create",
"--topic", topic.Hex(),
"--user", "0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", // different user
}
log.Info("Publishing manifest with 'swarm feed create' for a different user")
cmd = runSwarm(t, flags...)
_, matches = cmd.ExpectRegexp(`[a-f\d]{64}`)
cmd.ExpectExit()
manifestAddress = matches[0] // read the received feed manifest
// now let's try to update that user's manifest which we don't have the private key for
flags = []string{
"--bzzapi", srv.URL,
"--bzzaccount", pkFileName,
"feed", "update",
"--manifest", manifestAddress,
hexData}
// create an update and expect an error given there is a user mismatch
log.Info("updating a feed with 'swarm feed update'")
cmd = runSwarm(t, flags...)
cmd.ExpectRegexp("Fatal:.*") // best way so far to detect a failure.
cmd.ExpectExit()
if cmd.ExitStatus() == 0 {
t.Fatal("Expected nonzero exit code when updating a manifest with the wrong user. Got 0.")
}
}


@ -1,189 +0,0 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// This file defines the command line flags used by the swarm command
package main
import cli "gopkg.in/urfave/cli.v1"
var (
ChequebookAddrFlag = cli.StringFlag{
Name: "chequebook",
Usage: "chequebook contract address",
EnvVar: SwarmEnvChequebookAddr,
}
SwarmAccountFlag = cli.StringFlag{
Name: "bzzaccount",
Usage: "Swarm account key file",
EnvVar: SwarmEnvAccount,
}
SwarmListenAddrFlag = cli.StringFlag{
Name: "httpaddr",
Usage: "Swarm HTTP API listening interface",
EnvVar: SwarmEnvListenAddr,
}
SwarmPortFlag = cli.StringFlag{
Name: "bzzport",
Usage: "Swarm local http api port",
EnvVar: SwarmEnvPort,
}
SwarmNetworkIdFlag = cli.IntFlag{
Name: "bzznetworkid",
Usage: "Network identifier (integer, default 3=swarm testnet)",
EnvVar: SwarmEnvNetworkID,
}
SwarmSwapEnabledFlag = cli.BoolFlag{
Name: "swap",
Usage: "Swarm SWAP enabled (default false)",
EnvVar: SwarmEnvSwapEnable,
}
SwarmSwapAPIFlag = cli.StringFlag{
Name: "swap-api",
Usage: "URL of the Ethereum API provider to use to settle SWAP payments",
EnvVar: SwarmEnvSwapAPI,
}
SwarmSyncDisabledFlag = cli.BoolTFlag{
Name: "nosync",
Usage: "Disable swarm syncing",
EnvVar: SwarmEnvSyncDisable,
}
SwarmSyncUpdateDelay = cli.DurationFlag{
Name: "sync-update-delay",
Usage: "Duration for sync subscriptions update after no new peers are added (default 15s)",
EnvVar: SwarmEnvSyncUpdateDelay,
}
SwarmMaxStreamPeerServersFlag = cli.IntFlag{
Name: "max-stream-peer-servers",
Usage: "Limit of Stream peer servers, 0 denotes unlimited",
EnvVar: SwarmEnvMaxStreamPeerServers,
Value: 10000, // A very large default value is possible as stream servers have very small memory footprint
}
SwarmLightNodeEnabled = cli.BoolFlag{
Name: "lightnode",
Usage: "Enable Swarm LightNode (default false)",
EnvVar: SwarmEnvLightNodeEnable,
}
SwarmDeliverySkipCheckFlag = cli.BoolFlag{
Name: "delivery-skip-check",
Usage: "Skip chunk delivery check (default false)",
EnvVar: SwarmEnvDeliverySkipCheck,
}
EnsAPIFlag = cli.StringSliceFlag{
Name: "ens-api",
Usage: "ENS API endpoint for a TLD and with contract address, can be repeated, format [tld:][contract-addr@]url",
EnvVar: SwarmEnvENSAPI,
}
SwarmApiFlag = cli.StringFlag{
Name: "bzzapi",
Usage: "Specifies the Swarm HTTP endpoint to connect to",
Value: "http://127.0.0.1:8500",
}
SwarmRecursiveFlag = cli.BoolFlag{
Name: "recursive",
Usage: "Upload directories recursively",
}
SwarmWantManifestFlag = cli.BoolTFlag{
Name: "manifest",
Usage: "Automatic manifest upload (default true)",
}
SwarmUploadDefaultPath = cli.StringFlag{
Name: "defaultpath",
Usage: "path to file served for empty url path (none)",
}
SwarmAccessGrantKeyFlag = cli.StringFlag{
Name: "grant-key",
Usage: "grants a given public key access to an ACT",
}
SwarmAccessGrantKeysFlag = cli.StringFlag{
Name: "grant-keys",
Usage: "grants a given list of public keys in the following file (separated by line breaks) access to an ACT",
}
SwarmUpFromStdinFlag = cli.BoolFlag{
Name: "stdin",
Usage: "reads data to be uploaded from stdin",
}
SwarmUploadMimeType = cli.StringFlag{
Name: "mime",
Usage: "Manually specify MIME type",
}
SwarmEncryptedFlag = cli.BoolFlag{
Name: "encrypt",
Usage: "use encrypted upload",
}
SwarmAccessPasswordFlag = cli.StringFlag{
Name: "password",
Usage: "Password",
EnvVar: SwarmAccessPassword,
}
SwarmDryRunFlag = cli.BoolFlag{
Name: "dry-run",
Usage: "dry-run",
}
CorsStringFlag = cli.StringFlag{
Name: "corsdomain",
Usage: "Domain on which to send Access-Control-Allow-Origin header (multiple domains can be supplied separated by a ',')",
EnvVar: SwarmEnvCORS,
}
SwarmStorePath = cli.StringFlag{
Name: "store.path",
Usage: "Path to leveldb chunk DB (default <$GETH_ENV_DIR>/swarm/bzz-<$BZZ_KEY>/chunks)",
EnvVar: SwarmEnvStorePath,
}
SwarmStoreCapacity = cli.Uint64Flag{
Name: "store.size",
Usage: "Number of chunks (5M is roughly 20-25GB) (default 5000000)",
EnvVar: SwarmEnvStoreCapacity,
}
SwarmStoreCacheCapacity = cli.UintFlag{
Name: "store.cache.size",
Usage: "Number of recent chunks cached in memory",
EnvVar: SwarmEnvStoreCacheCapacity,
Value: 10000,
}
SwarmCompressedFlag = cli.BoolFlag{
Name: "compressed",
Usage: "Prints encryption keys in compressed form",
}
SwarmBootnodeModeFlag = cli.BoolFlag{
Name: "bootnode-mode",
Usage: "Run Swarm in Bootnode mode",
}
SwarmFeedNameFlag = cli.StringFlag{
Name: "name",
Usage: "User-defined name for the new feed, limited to 32 characters. If combined with topic, it will refer to a subtopic with this name",
}
SwarmFeedTopicFlag = cli.StringFlag{
Name: "topic",
Usage: "User-defined topic this feed is tracking, hex encoded. Limited to 64 hexadecimal characters",
}
SwarmFeedManifestFlag = cli.StringFlag{
Name: "manifest",
Usage: "Refers to the feed through a manifest",
}
SwarmFeedUserFlag = cli.StringFlag{
Name: "user",
Usage: "Indicates the user who updates the feed",
}
SwarmGlobalStoreAPIFlag = cli.StringFlag{
Name: "globalstore-api",
Usage: "URL of the Global Store API provider (only for testing)",
EnvVar: SwarmGlobalstoreAPI,
}
SwarmLegacyFlag = cli.BoolFlag{
Name: "legacy",
Usage: "Use this flag when importing a db export from a legacy local store database dump (for schemas older than 'sanctuary')",
}
)
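
As a reminder of how these EnvVar-backed definitions behave in urfave/cli v1, here is a minimal sketch under illustrative flag and environment variable names (not the ones defined above): the environment variable supplies the default value and an explicit command-line flag overrides it.

package main

import (
    "fmt"
    "os"

    cli "gopkg.in/urfave/cli.v1"
)

func main() {
    app := cli.NewApp()
    app.Flags = []cli.Flag{
        cli.StringFlag{
            Name:   "endpoint",
            Usage:  "HTTP endpoint to talk to",
            Value:  "http://127.0.0.1:8500",
            EnvVar: "EXAMPLE_ENDPOINT", // the env var, if set, provides the default
        },
    }
    app.Action = func(ctx *cli.Context) error {
        // --endpoint on the command line wins over EXAMPLE_ENDPOINT.
        fmt.Println("using endpoint:", ctx.String("endpoint"))
        return nil
    }
    if err := app.Run(os.Args); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}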


@ -1,162 +0,0 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"context"
"fmt"
"path/filepath"
"strings"
"time"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/swarm/fuse"
"gopkg.in/urfave/cli.v1"
)
var fsCommand = cli.Command{
Name: "fs",
CustomHelpTemplate: helpTemplate,
Usage: "perform FUSE operations",
ArgsUsage: "fs COMMAND",
Description: "Performs FUSE operations by mounting/unmounting/listing mount points. This assumes you already have a Swarm node running locally. For all operation you must reference the correct path to bzzd.ipc in order to communicate with the node",
Subcommands: []cli.Command{
{
Action: mount,
CustomHelpTemplate: helpTemplate,
Name: "mount",
Usage: "mount a swarm hash to a mount point",
ArgsUsage: "swarm fs mount <manifest hash> <mount point>",
Description: "Mounts a Swarm manifest hash to a given mount point. This assumes you already have a Swarm node running locally. You must reference the correct path to your bzzd.ipc file",
},
{
Action: unmount,
CustomHelpTemplate: helpTemplate,
Name: "unmount",
Usage: "unmount a swarmfs mount",
ArgsUsage: "swarm fs unmount <mount point>",
Description: "Unmounts a swarmfs mount residing at <mount point>. This assumes you already have a Swarm node running locally. You must reference the correct path to your bzzd.ipc file",
},
{
Action: listMounts,
CustomHelpTemplate: helpTemplate,
Name: "list",
Usage: "list swarmfs mounts",
ArgsUsage: "swarm fs list",
Description: "Lists all mounted swarmfs volumes. This assumes you already have a Swarm node running locally. You must reference the correct path to your bzzd.ipc file",
},
},
}
func mount(cliContext *cli.Context) {
args := cliContext.Args()
if len(args) < 2 {
utils.Fatalf("Usage: swarm fs mount <manifestHash> <file name>")
}
client, err := dialRPC(cliContext)
if err != nil {
utils.Fatalf("had an error dailing to RPC endpoint: %v", err)
}
defer client.Close()
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
mf := &fuse.MountInfo{}
mountPoint, err := filepath.Abs(filepath.Clean(args[1]))
if err != nil {
utils.Fatalf("error expanding path for mount point: %v", err)
}
err = client.CallContext(ctx, mf, "swarmfs_mount", args[0], mountPoint)
if err != nil {
utils.Fatalf("had an error calling the RPC endpoint while mounting: %v", err)
}
}
func unmount(cliContext *cli.Context) {
args := cliContext.Args()
if len(args) < 1 {
utils.Fatalf("Usage: swarm fs unmount <mount path>")
}
client, err := dialRPC(cliContext)
if err != nil {
utils.Fatalf("had an error dailing to RPC endpoint: %v", err)
}
defer client.Close()
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
mf := fuse.MountInfo{}
err = client.CallContext(ctx, &mf, "swarmfs_unmount", args[0])
if err != nil {
utils.Fatalf("encountered an error calling the RPC endpoint while unmounting: %v", err)
}
fmt.Printf("%s\n", mf.LatestManifest) //print the latest manifest hash for user reference
}
func listMounts(cliContext *cli.Context) {
client, err := dialRPC(cliContext)
if err != nil {
utils.Fatalf("had an error dailing to RPC endpoint: %v", err)
}
defer client.Close()
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
var mf []fuse.MountInfo
err = client.CallContext(ctx, &mf, "swarmfs_listmounts")
if err != nil {
utils.Fatalf("encountered an error calling the RPC endpoint while listing mounts: %v", err)
}
if len(mf) == 0 {
fmt.Print("Could not found any swarmfs mounts. Please make sure you've specified the correct RPC endpoint\n")
} else {
fmt.Printf("Found %d swarmfs mount(s):\n", len(mf))
for i, mountInfo := range mf {
fmt.Printf("%d:\n", i)
fmt.Printf("\tMount point: %s\n", mountInfo.MountPoint)
fmt.Printf("\tLatest Manifest: %s\n", mountInfo.LatestManifest)
fmt.Printf("\tStart Manifest: %s\n", mountInfo.StartManifest)
}
}
}
func dialRPC(ctx *cli.Context) (*rpc.Client, error) {
endpoint := getIPCEndpoint(ctx)
log.Info("IPC endpoint", "path", endpoint)
return rpc.Dial(endpoint)
}
func getIPCEndpoint(ctx *cli.Context) string {
cfg := defaultNodeConfig
utils.SetNodeConfig(ctx, &cfg)
endpoint := cfg.IPCEndpoint()
if strings.HasPrefix(endpoint, "rpc:") || strings.HasPrefix(endpoint, "ipc:") {
// Backwards compatibility with geth < 1.5 which required
// these prefixes.
endpoint = endpoint[4:]
}
return endpoint
}
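
The list subcommand above boils down to a single RPC call; the sketch below performs the same round trip, assuming a locally running Swarm node and a placeholder bzzd.ipc path, using only rpc.Dial and CallContext as the file does.

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/ethereum/go-ethereum/rpc"
    "github.com/ethereum/go-ethereum/swarm/fuse"
)

func main() {
    client, err := rpc.Dial("/path/to/bzzd.ipc") // placeholder IPC endpoint
    if err != nil {
        log.Fatalf("dialing RPC endpoint: %v", err)
    }
    defer client.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    var mounts []fuse.MountInfo
    if err := client.CallContext(ctx, &mounts, "swarmfs_listmounts"); err != nil {
        log.Fatalf("listing mounts: %v", err)
    }
    for _, m := range mounts {
        fmt.Println(m.MountPoint, "->", m.LatestManifest)
    }
}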


@ -1,260 +0,0 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// +build linux freebsd
package main
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"strings"
"testing"
"time"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/log"
)
type testFile struct {
filePath string
content string
}
// TestCLISwarmFsDefaultIPCPath tests whether the most basic fs command, i.e. list,
// can find and correctly connect to a running Swarm node on the default
// IPCPath.
func TestCLISwarmFsDefaultIPCPath(t *testing.T) {
cluster := newTestCluster(t, 1)
defer cluster.Shutdown()
handlingNode := cluster.Nodes[0]
list := runSwarm(t, []string{
"--datadir", handlingNode.Dir,
"fs",
"list",
}...)
list.WaitExit()
if list.Err != nil {
t.Fatal(list.Err)
}
}
// TestCLISwarmFs is a high-level test of swarmfs
//
// This test fails on Travis for macOS, as this executable exits with code 1
// without writing any log messages:
// /Library/Filesystems/osxfuse.fs/Contents/Resources/load_osxfuse.
// This is why this file is not built on the darwin architecture.
func TestCLISwarmFs(t *testing.T) {
cluster := newTestCluster(t, 3)
defer cluster.Shutdown()
// create a tmp dir
mountPoint, err := ioutil.TempDir("", "swarm-test")
log.Debug("swarmfs cli test", "1st mount", mountPoint)
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(mountPoint)
handlingNode := cluster.Nodes[0]
mhash := doUploadEmptyDir(t, handlingNode)
log.Debug("swarmfs cli test: mounting first run", "ipc path", filepath.Join(handlingNode.Dir, handlingNode.IpcPath))
mount := runSwarm(t, []string{
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
"fs",
"mount",
mhash,
mountPoint,
}...)
mount.ExpectExit()
filesToAssert := []*testFile{}
dirPath, err := createDirInDir(mountPoint, "testSubDir")
if err != nil {
t.Fatal(err)
}
dirPath2, err := createDirInDir(dirPath, "AnotherTestSubDir")
if err != nil {
t.Fatal(err)
}
dummyContent := "somerandomtestcontentthatshouldbeasserted"
dirs := []string{
mountPoint,
dirPath,
dirPath2,
}
files := []string{"f1.tmp", "f2.tmp"}
for _, d := range dirs {
for _, entry := range files {
tFile, err := createTestFileInPath(d, entry, dummyContent)
if err != nil {
t.Fatal(err)
}
filesToAssert = append(filesToAssert, tFile)
}
}
if len(filesToAssert) != len(dirs)*len(files) {
t.Fatalf("should have %d files to assert now, got %d", len(dirs)*len(files), len(filesToAssert))
}
hashRegexp := `[a-f\d]{64}`
log.Debug("swarmfs cli test: unmounting first run...", "ipc path", filepath.Join(handlingNode.Dir, handlingNode.IpcPath))
unmount := runSwarm(t, []string{
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
"fs",
"unmount",
mountPoint,
}...)
_, matches := unmount.ExpectRegexp(hashRegexp)
unmount.ExpectExit()
hash := matches[0]
if hash == mhash {
t.Fatal("this should not be equal")
}
log.Debug("swarmfs cli test: asserting no files in mount point")
//check that there's nothing in the mount folder
filesInDir, err := ioutil.ReadDir(mountPoint)
if err != nil {
t.Fatalf("had an error reading the directory: %v", err)
}
if len(filesInDir) != 0 {
t.Fatal("there shouldn't be anything here")
}
secondMountPoint, err := ioutil.TempDir("", "swarm-test")
log.Debug("swarmfs cli test", "2nd mount point at", secondMountPoint)
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(secondMountPoint)
log.Debug("swarmfs cli test: remounting at second mount point", "ipc path", filepath.Join(handlingNode.Dir, handlingNode.IpcPath))
//remount, check files
newMount := runSwarm(t, []string{
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
"fs",
"mount",
hash, // the latest hash
secondMountPoint,
}...)
newMount.ExpectExit()
time.Sleep(1 * time.Second)
filesInDir, err = ioutil.ReadDir(secondMountPoint)
if err != nil {
t.Fatal(err)
}
if len(filesInDir) == 0 {
t.Fatal("there should be something here")
}
log.Debug("swarmfs cli test: traversing file tree to see it matches previous mount")
for _, file := range filesToAssert {
file.filePath = strings.Replace(file.filePath, mountPoint, secondMountPoint, -1)
fileBytes, err := ioutil.ReadFile(file.filePath)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(fileBytes, bytes.NewBufferString(file.content).Bytes()) {
t.Fatal("this should be equal")
}
}
log.Debug("swarmfs cli test: unmounting second run", "ipc path", filepath.Join(handlingNode.Dir, handlingNode.IpcPath))
unmountSec := runSwarm(t, []string{
fmt.Sprintf("--%s", utils.IPCPathFlag.Name), filepath.Join(handlingNode.Dir, handlingNode.IpcPath),
"fs",
"unmount",
secondMountPoint,
}...)
_, matches = unmountSec.ExpectRegexp(hashRegexp)
unmountSec.ExpectExit()
if matches[0] != hash {
t.Fatal("these should be equal - no changes made")
}
}
func doUploadEmptyDir(t *testing.T, node *testNode) string {
// create a tmp dir
tmpDir, err := ioutil.TempDir("", "swarm-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpDir)
hashRegexp := `[a-f\d]{64}`
flags := []string{
"--bzzapi", node.URL,
"--recursive",
"up",
tmpDir}
log.Info("swarmfs cli test: uploading dir with 'swarm up'")
up := runSwarm(t, flags...)
_, matches := up.ExpectRegexp(hashRegexp)
up.ExpectExit()
hash := matches[0]
log.Info("swarmfs cli test: dir uploaded", "hash", hash)
return hash
}
func createDirInDir(createInDir string, dirToCreate string) (string, error) {
fullpath := filepath.Join(createInDir, dirToCreate)
err := os.MkdirAll(fullpath, 0777)
if err != nil {
return "", err
}
return fullpath, nil
}
func createTestFileInPath(dir, filename, content string) (*testFile, error) {
tFile := &testFile{}
filePath := filepath.Join(dir, filename)
if file, err := os.Create(filePath); err == nil {
tFile.content = content
tFile.filePath = filePath
_, err = io.WriteString(file, content)
if err != nil {
return nil, err
}
file.Close()
}
return tFile, nil
}


@ -1,66 +0,0 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"context"
"fmt"
"net"
"net/http"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/swarm/storage/mock"
"github.com/ethereum/go-ethereum/swarm/storage/mock/explorer"
cli "gopkg.in/urfave/cli.v1"
)
// serveChunkExplorer starts an HTTP server in the background with a chunk explorer
// handler backed by the provided global store. The server is started only if the
// returned shutdown function is not nil.
func serveChunkExplorer(ctx *cli.Context, globalStore mock.GlobalStorer) (shutdown func(), err error) {
if !ctx.IsSet("explorer-address") {
return nil, nil
}
corsOrigins := ctx.StringSlice("explorer-cors-origin")
server := &http.Server{
Handler: explorer.NewHandler(globalStore, corsOrigins),
IdleTimeout: 30 * time.Minute,
ReadTimeout: 2 * time.Minute,
WriteTimeout: 2 * time.Minute,
}
listener, err := net.Listen("tcp", ctx.String("explorer-address"))
if err != nil {
return nil, fmt.Errorf("explorer: %v", err)
}
log.Info("chunk explorer http", "address", listener.Addr().String(), "origins", corsOrigins)
go func() {
if err := server.Serve(listener); err != nil {
log.Error("chunk explorer", "err", err)
}
}()
return func() {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := server.Shutdown(ctx); err != nil {
log.Error("chunk explorer: shutdown", "err", err)
}
}, nil
}
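
As a minimal illustration of what serveChunkExplorer wires up when --explorer-address is set, the sketch below serves explorer.NewHandler over an in-memory global store; the CORS origin, timeouts and listen address are illustrative.

package main

import (
    "log"
    "net"
    "net/http"
    "time"

    "github.com/ethereum/go-ethereum/swarm/storage/mock/explorer"
    "github.com/ethereum/go-ethereum/swarm/storage/mock/mem"
)

func main() {
    store := mem.NewGlobalStore() // in-memory global store
    server := &http.Server{
        Handler:      explorer.NewHandler(store, []string{"http://localhost/"}),
        ReadTimeout:  2 * time.Minute,
        WriteTimeout: 2 * time.Minute,
    }
    listener, err := net.Listen("tcp", "127.0.0.1:0") // any free port
    if err != nil {
        log.Fatal(err)
    }
    log.Println("chunk explorer listening on", listener.Addr())
    log.Fatal(server.Serve(listener))
}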


@ -1,254 +0,0 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"encoding/json"
"fmt"
"net/http"
"sort"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/swarm/storage/mock/explorer"
mockRPC "github.com/ethereum/go-ethereum/swarm/storage/mock/rpc"
)
// TestExplorer validates basic chunk explorer functionality by storing
// a small set of chunks and making HTTP requests on the exposed endpoint.
// Full chunk explorer validation is done in the mock/explorer package.
func TestExplorer(t *testing.T) {
addr := findFreeTCPAddress(t)
explorerAddr := findFreeTCPAddress(t)
testCmd := runGlobalStore(t, "ws", "--addr", addr, "--explorer-address", explorerAddr)
defer testCmd.Kill()
client := websocketClient(t, addr)
store := mockRPC.NewGlobalStore(client)
defer store.Close()
nodeKeys := map[string][]string{
"a1": {"b1", "b2", "b3"},
"a2": {"b3", "b4", "b5"},
}
keyNodes := make(map[string][]string)
for addr, keys := range nodeKeys {
for _, key := range keys {
keyNodes[key] = append(keyNodes[key], addr)
}
}
invalidAddr := "c1"
invalidKey := "d1"
for addr, keys := range nodeKeys {
for _, key := range keys {
err := store.Put(common.HexToAddress(addr), common.Hex2Bytes(key), []byte("data"))
if err != nil {
t.Fatal(err)
}
}
}
endpoint := "http://" + explorerAddr
t.Run("has key", func(t *testing.T) {
for addr, keys := range nodeKeys {
for _, key := range keys {
testStatusResponse(t, endpoint+"/api/has-key/"+addr+"/"+key, http.StatusOK)
testStatusResponse(t, endpoint+"/api/has-key/"+invalidAddr+"/"+key, http.StatusNotFound)
}
testStatusResponse(t, endpoint+"/api/has-key/"+addr+"/"+invalidKey, http.StatusNotFound)
}
testStatusResponse(t, endpoint+"/api/has-key/"+invalidAddr+"/"+invalidKey, http.StatusNotFound)
})
t.Run("keys", func(t *testing.T) {
var keys []string
for key := range keyNodes {
keys = append(keys, key)
}
sort.Strings(keys)
testKeysResponse(t, endpoint+"/api/keys", explorer.KeysResponse{
Keys: keys,
})
})
t.Run("nodes", func(t *testing.T) {
var nodes []string
for addr := range nodeKeys {
nodes = append(nodes, common.HexToAddress(addr).Hex())
}
sort.Strings(nodes)
testNodesResponse(t, endpoint+"/api/nodes", explorer.NodesResponse{
Nodes: nodes,
})
})
t.Run("node keys", func(t *testing.T) {
for addr, keys := range nodeKeys {
testKeysResponse(t, endpoint+"/api/keys?node="+addr, explorer.KeysResponse{
Keys: keys,
})
}
testKeysResponse(t, endpoint+"/api/keys?node="+invalidAddr, explorer.KeysResponse{})
})
t.Run("key nodes", func(t *testing.T) {
for key, addrs := range keyNodes {
var nodes []string
for _, addr := range addrs {
nodes = append(nodes, common.HexToAddress(addr).Hex())
}
sort.Strings(nodes)
testNodesResponse(t, endpoint+"/api/nodes?key="+key, explorer.NodesResponse{
Nodes: nodes,
})
}
testNodesResponse(t, endpoint+"/api/nodes?key="+invalidKey, explorer.NodesResponse{})
})
}
// TestExplorer_CORSOrigin validates that the chunk explorer returns
// the correct CORS origin header in GET and OPTIONS requests.
func TestExplorer_CORSOrigin(t *testing.T) {
origin := "http://localhost/"
addr := findFreeTCPAddress(t)
explorerAddr := findFreeTCPAddress(t)
testCmd := runGlobalStore(t, "ws",
"--addr", addr,
"--explorer-address", explorerAddr,
"--explorer-cors-origin", origin,
)
defer testCmd.Kill()
// wait until the server is started
waitHTTPEndpoint(t, explorerAddr)
url := "http://" + explorerAddr + "/api/keys"
t.Run("get", func(t *testing.T) {
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
t.Fatal(err)
}
req.Header.Set("Origin", origin)
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
header := resp.Header.Get("Access-Control-Allow-Origin")
if header != origin {
t.Errorf("got Access-Control-Allow-Origin header %q, want %q", header, origin)
}
})
t.Run("preflight", func(t *testing.T) {
req, err := http.NewRequest(http.MethodOptions, url, nil)
if err != nil {
t.Fatal(err)
}
req.Header.Set("Origin", origin)
req.Header.Set("Access-Control-Request-Method", "GET")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
header := resp.Header.Get("Access-Control-Allow-Origin")
if header != origin {
t.Errorf("got Access-Control-Allow-Origin header %q, want %q", header, origin)
}
})
}
// testStatusResponse makes an HTTP request to the provided url
// and validates that the response is an explorer.StatusResponse with
// the expected status code.
func testStatusResponse(t *testing.T, url string, code int) {
t.Helper()
resp, err := http.Get(url)
if err != nil {
t.Fatal(err)
}
if resp.StatusCode != code {
t.Errorf("got status code %v, want %v", resp.StatusCode, code)
}
var r explorer.StatusResponse
if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
t.Fatal(err)
}
if r.Code != code {
t.Errorf("got response code %v, want %v", r.Code, code)
}
if r.Message != http.StatusText(code) {
t.Errorf("got response message %q, want %q", r.Message, http.StatusText(code))
}
}
// testKeysResponse makes an HTTP request to the provided url
// and validates that the response matches the expected explorer.KeysResponse.
func testKeysResponse(t *testing.T, url string, want explorer.KeysResponse) {
t.Helper()
resp, err := http.Get(url)
if err != nil {
t.Fatal(err)
}
if resp.StatusCode != http.StatusOK {
t.Errorf("got status code %v, want %v", resp.StatusCode, http.StatusOK)
}
var r explorer.KeysResponse
if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
t.Fatal(err)
}
if fmt.Sprint(r.Keys) != fmt.Sprint(want.Keys) {
t.Errorf("got keys %v, want %v", r.Keys, want.Keys)
}
if r.Next != want.Next {
t.Errorf("got next %s, want %s", r.Next, want.Next)
}
}
// testNodesResponse makes an HTTP request to the provided url
// and validates that the response matches the expected explorer.NodesResponse.
func testNodesResponse(t *testing.T, url string, want explorer.NodesResponse) {
t.Helper()
resp, err := http.Get(url)
if err != nil {
t.Fatal(err)
}
if resp.StatusCode != http.StatusOK {
t.Errorf("got status code %v, want %v", resp.StatusCode, http.StatusOK)
}
var r explorer.NodesResponse
if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
t.Fatal(err)
}
if fmt.Sprint(r.Nodes) != fmt.Sprint(want.Nodes) {
t.Errorf("got nodes %v, want %v", r.Nodes, want.Nodes)
}
if r.Next != want.Next {
t.Errorf("got next %s, want %s", r.Next, want.Next)
}
}


@ -1,120 +0,0 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"io"
"net"
"net/http"
"os"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/swarm/storage/mock"
"github.com/ethereum/go-ethereum/swarm/storage/mock/db"
"github.com/ethereum/go-ethereum/swarm/storage/mock/mem"
cli "gopkg.in/urfave/cli.v1"
)
// startHTTP starts a global store with an HTTP RPC server.
// It is used for the "http" cli command.
func startHTTP(ctx *cli.Context) (err error) {
server, cleanup, err := newServer(ctx)
if err != nil {
return err
}
defer cleanup()
listener, err := net.Listen("tcp", ctx.String("addr"))
if err != nil {
return err
}
log.Info("http", "address", listener.Addr().String())
return http.Serve(listener, server)
}
// startWS starts a global store with a WebSocket RPC server.
// It is used for the "websocket" cli command.
func startWS(ctx *cli.Context) (err error) {
server, cleanup, err := newServer(ctx)
if err != nil {
return err
}
defer cleanup()
listener, err := net.Listen("tcp", ctx.String("addr"))
if err != nil {
return err
}
origins := ctx.StringSlice("origins")
log.Info("websocket", "address", listener.Addr().String(), "origins", origins)
return http.Serve(listener, server.WebsocketHandler(origins))
}
// newServer creates a global store and starts a chunk explorer server if configured.
// Returned cleanup function should be called only if err is nil.
func newServer(ctx *cli.Context) (server *rpc.Server, cleanup func(), err error) {
log.PrintOrigins(true)
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(ctx.Int("verbosity")), log.StreamHandler(os.Stdout, log.TerminalFormat(false))))
cleanup = func() {}
var globalStore mock.GlobalStorer
dir := ctx.String("dir")
if dir != "" {
dbStore, err := db.NewGlobalStore(dir)
if err != nil {
return nil, nil, err
}
cleanup = func() {
if err := dbStore.Close(); err != nil {
log.Error("global store: close", "err", err)
}
}
globalStore = dbStore
log.Info("database global store", "dir", dir)
} else {
globalStore = mem.NewGlobalStore()
log.Info("in-memory global store")
}
server = rpc.NewServer()
if err := server.RegisterName("mockStore", globalStore); err != nil {
cleanup()
return nil, nil, err
}
shutdown, err := serveChunkExplorer(ctx, globalStore)
if err != nil {
cleanup()
return nil, nil, err
}
if shutdown != nil {
cleanup = func() {
shutdown()
if c, ok := globalStore.(io.Closer); ok {
if err := c.Close(); err != nil {
log.Error("global store: close", "err", err)
}
}
}
}
return server, cleanup, nil
}


@ -1,207 +0,0 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"context"
"io/ioutil"
"net"
"net/http"
"os"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/rpc"
mockRPC "github.com/ethereum/go-ethereum/swarm/storage/mock/rpc"
)
// TestHTTP_InMemory tests in-memory global store that exposes
// HTTP server.
func TestHTTP_InMemory(t *testing.T) {
testHTTP(t, true)
}
// TestHTTP_Database tests global store with persisted database
// that exposes HTTP server.
func TestHTTP_Database(t *testing.T) {
dir, err := ioutil.TempDir("", "swarm-global-store-")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
// create a fresh global store
testHTTP(t, true, "--dir", dir)
// check if data saved by the previous global store instance
testHTTP(t, false, "--dir", dir)
}
// testHTTP starts the global store binary with an HTTP server
// and validates that it can store and retrieve data.
// If put is false, no data will be stored, only retrieved,
// giving the possibility to check if data is present in the
// storage directory.
func testHTTP(t *testing.T, put bool, args ...string) {
addr := findFreeTCPAddress(t)
testCmd := runGlobalStore(t, append([]string{"http", "--addr", addr}, args...)...)
defer testCmd.Kill()
client, err := rpc.DialHTTP("http://" + addr)
if err != nil {
t.Fatal(err)
}
// wait until the global store process is started, as
// rpc.DialHTTP does not actually connect
waitHTTPEndpoint(t, addr)
store := mockRPC.NewGlobalStore(client)
defer store.Close()
node := store.NewNodeStore(common.HexToAddress("123abc"))
wantKey := "key"
wantValue := "value"
if put {
err = node.Put([]byte(wantKey), []byte(wantValue))
if err != nil {
t.Fatal(err)
}
}
gotValue, err := node.Get([]byte(wantKey))
if err != nil {
t.Fatal(err)
}
if string(gotValue) != wantValue {
t.Errorf("got value %s for key %s, want %s", string(gotValue), wantKey, wantValue)
}
}
// TestWebsocket_InMemory tests in-memory global store that exposes
// WebSocket server.
func TestWebsocket_InMemory(t *testing.T) {
testWebsocket(t, true)
}
// TestWebsocket_Database tests global store with persisted database
// that exposes WebSocket server.
func TestWebsocket_Database(t *testing.T) {
dir, err := ioutil.TempDir("", "swarm-global-store-")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
// create a fresh global store
testWebsocket(t, true, "--dir", dir)
// check if data saved by the previous global store instance
testWebsocket(t, false, "--dir", dir)
}
// testWebsocket starts global store binary with WebSocket server
// and validates that it can store and retrieve data.
// If put is false, no data will be stored, only retrieved,
// giving the possibility to check if data is present in the
// storage directory.
func testWebsocket(t *testing.T, put bool, args ...string) {
addr := findFreeTCPAddress(t)
testCmd := runGlobalStore(t, append([]string{"ws", "--addr", addr}, args...)...)
defer testCmd.Kill()
client := websocketClient(t, addr)
store := mockRPC.NewGlobalStore(client)
defer store.Close()
node := store.NewNodeStore(common.HexToAddress("123abc"))
wantKey := "key"
wantValue := "value"
if put {
err := node.Put([]byte(wantKey), []byte(wantValue))
if err != nil {
t.Fatal(err)
}
}
gotValue, err := node.Get([]byte(wantKey))
if err != nil {
t.Fatal(err)
}
if string(gotValue) != wantValue {
t.Errorf("got value %s for key %s, want %s", string(gotValue), wantKey, wantValue)
}
}
// findFreeTCPAddress returns a local address (IP:Port) on which
// the global store can listen.
func findFreeTCPAddress(t *testing.T) (addr string) {
t.Helper()
listener, err := net.Listen("tcp", "")
if err != nil {
t.Fatal(err)
}
defer listener.Close()
return listener.Addr().String()
}
// websocketClient waits until global store process is started
// and returns rpc client.
func websocketClient(t *testing.T, addr string) (client *rpc.Client) {
t.Helper()
var err error
for i := 0; i < 1000; i++ {
client, err = rpc.DialWebsocket(context.Background(), "ws://"+addr, "")
if err == nil {
break
}
time.Sleep(10 * time.Millisecond)
}
if err != nil {
t.Fatal(err)
}
return client
}
// waitHTTPEndpoint retries http requests to a provided
// address until the connection is established.
func waitHTTPEndpoint(t *testing.T, addr string) {
t.Helper()
var err error
for i := 0; i < 1000; i++ {
_, err = http.Get("http://" + addr)
if err == nil {
break
}
time.Sleep(10 * time.Millisecond)
}
if err != nil {
t.Fatal(err)
}
}


@ -1,124 +0,0 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"os"
"github.com/ethereum/go-ethereum/log"
cli "gopkg.in/urfave/cli.v1"
)
var (
version = "0.1"
gitCommit string // Git SHA1 commit hash of the release (set via linker flags)
gitDate string
)
func main() {
err := newApp().Run(os.Args)
if err != nil {
log.Error(err.Error())
os.Exit(1)
}
}
// newApp constructs a new instance of Swarm Global Store.
// Method Run is called on it in the main function and in tests.
func newApp() (app *cli.App) {
app = cli.NewApp()
app.Name = "global-store"
app.Version = version
if len(gitCommit) >= 8 {
app.Version += "-" + gitCommit[:8]
}
if gitDate != "" {
app.Version += "-" + gitDate
}
app.Usage = "Swarm Global Store"
// app flags (for all commands)
app.Flags = []cli.Flag{
cli.IntFlag{
Name: "verbosity",
Value: 3,
Usage: "Verbosity level.",
},
cli.StringFlag{
Name: "explorer-address",
Value: "",
Usage: "Chunk explorer HTTP listener address.",
},
cli.StringSliceFlag{
Name: "explorer-cors-origin",
Value: nil,
Usage: "Chunk explorer CORS origin (can be specified multiple times).",
},
}
app.Commands = []cli.Command{
{
Name: "http",
Aliases: []string{"h"},
Usage: "Start swarm global store with HTTP server.",
Action: startHTTP,
// Flags only for "start" command.
// Allow app flags to be specified after the
// command argument.
Flags: append(app.Flags,
cli.StringFlag{
Name: "dir",
Value: "",
Usage: "Data directory.",
},
cli.StringFlag{
Name: "addr",
Value: "0.0.0.0:3033",
Usage: "Address to listen for HTTP connections.",
},
),
},
{
Name: "websocket",
Aliases: []string{"ws"},
Usage: "Start swarm global store with WebSocket server.",
Action: startWS,
// Flags only for "start" command.
// Allow app flags to be specified after the
// command argument.
Flags: append(app.Flags,
cli.StringFlag{
Name: "dir",
Value: "",
Usage: "Data directory.",
},
cli.StringFlag{
Name: "addr",
Value: "0.0.0.0:3033",
Usage: "Address to listen for WebSocket connections.",
},
cli.StringSliceFlag{
Name: "origin",
Value: nil,
Usage: "WebSocket CORS origin (can be specified multiple times).",
},
),
},
}
return app
}


@ -1,116 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// Command bzzhash computes a swarm tree hash.
package main
import (
"context"
"encoding/hex"
"fmt"
"os"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/contracts/ens"
"github.com/ethereum/go-ethereum/swarm/chunk"
"github.com/ethereum/go-ethereum/swarm/storage"
"gopkg.in/urfave/cli.v1"
)
var hashCommand = cli.Command{
Action: hash,
CustomHelpTemplate: helpTemplate,
Name: "hash",
Usage: "print the swarm hash of a file or directory",
ArgsUsage: "<file>",
Description: "Prints the swarm hash of file or directory",
Subcommands: []cli.Command{
{
CustomHelpTemplate: helpTemplate,
Name: "ens",
Usage: "converts a swarm hash to an ens EIP1577 compatible CIDv1 hash",
ArgsUsage: "<ref>",
Description: "",
Subcommands: []cli.Command{
{
Action: encodeEipHash,
CustomHelpTemplate: helpTemplate,
Name: "contenthash",
Usage: "converts a swarm hash to an ens EIP1577 compatible CIDv1 hash",
ArgsUsage: "<ref>",
Description: "",
},
{
Action: ensNodeHash,
CustomHelpTemplate: helpTemplate,
Name: "node",
Usage: "converts an ens name to an ENS node hash",
ArgsUsage: "<ref>",
Description: "",
},
},
},
}}
func hash(ctx *cli.Context) {
args := ctx.Args()
if len(args) < 1 {
utils.Fatalf("Usage: swarm hash <file name>")
}
f, err := os.Open(args[0])
if err != nil {
utils.Fatalf("Error opening file " + args[1])
}
defer f.Close()
stat, err := f.Stat()
if err != nil {
utils.Fatalf("Error stating file %s: %v", args[0], err)
}
fileStore := storage.NewFileStore(&storage.FakeChunkStore{}, storage.NewFileStoreParams(), chunk.NewTags())
addr, _, err := fileStore.Store(context.TODO(), f, stat.Size(), false)
if err != nil {
utils.Fatalf("%v\n", err)
} else {
fmt.Printf("%v\n", addr)
}
}
func ensNodeHash(ctx *cli.Context) {
args := ctx.Args()
if len(args) < 1 {
utils.Fatalf("Usage: swarm hash ens node <ens name>")
}
ensName := args[0]
hash := ens.EnsNode(ensName)
stringHex := hex.EncodeToString(hash[:])
fmt.Println(stringHex)
}
func encodeEipHash(ctx *cli.Context) {
args := ctx.Args()
if len(args) < 1 {
utils.Fatalf("Usage: swarm hash ens <swarm hash>")
}
swarmHash := args[0]
hash := common.HexToHash(swarmHash)
ensHash, err := ens.EncodeSwarmHash(hash)
if err != nil {
utils.Fatalf("error converting swarm hash", err)
}
stringHex := hex.EncodeToString(ensHash)
fmt.Println(stringHex)
}
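The "swarm hash ens node" subcommand above relies on ens.EnsNode, which computes the EIP-137 namehash of an ENS name. For illustration only, a standalone sketch of that hashing scheme (the function below is hypothetical, ignores UTS-46 name normalization, and assumes go-ethereum's common and crypto packages):
package main

import (
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// namehash computes the EIP-137 node hash of an ENS name:
//   namehash("")                 = 0x00...00
//   namehash(label + "." + rest) = keccak256(namehash(rest) || keccak256(label))
func namehash(name string) common.Hash {
	node := common.Hash{} // 32 zero bytes for the empty name
	if name == "" {
		return node
	}
	labels := strings.Split(name, ".")
	for i := len(labels) - 1; i >= 0; i-- {
		labelHash := crypto.Keccak256([]byte(labels[i]))
		node = common.BytesToHash(crypto.Keccak256(node.Bytes(), labelHash))
	}
	return node
}

func main() {
	fmt.Println(namehash("swarm.eth").Hex())
}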


@ -1,70 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"os"
"strings"
"text/tabwriter"
"github.com/ethereum/go-ethereum/cmd/utils"
swarm "github.com/ethereum/go-ethereum/swarm/api/client"
"gopkg.in/urfave/cli.v1"
)
var listCommand = cli.Command{
Action: list,
CustomHelpTemplate: helpTemplate,
Name: "ls",
Usage: "list files and directories contained in a manifest",
ArgsUsage: "<manifest> [<prefix>]",
Description: "Lists files and directories contained in a manifest",
}
func list(ctx *cli.Context) {
args := ctx.Args()
if len(args) < 1 {
utils.Fatalf("Please supply a manifest reference as the first argument")
} else if len(args) > 2 {
utils.Fatalf("Too many arguments - usage 'swarm ls manifest [prefix]'")
}
manifest := args[0]
var prefix string
if len(args) == 2 {
prefix = args[1]
}
bzzapi := strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
client := swarm.NewClient(bzzapi)
list, err := client.List(manifest, prefix, "")
if err != nil {
utils.Fatalf("Failed to generate file and directory list: %s", err)
}
w := tabwriter.NewWriter(os.Stdout, 1, 2, 2, ' ', 0)
defer w.Flush()
fmt.Fprintln(w, "HASH\tCONTENT TYPE\tPATH")
for _, prefix := range list.CommonPrefixes {
fmt.Fprintf(w, "%s\t%s\t%s\n", "", "DIR", prefix)
}
for _, entry := range list.Entries {
fmt.Fprintf(w, "%s\t%s\t%s\n", entry.Hash, entry.ContentType, entry.Path)
}
}


@ -1,475 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"crypto/ecdsa"
"encoding/hex"
"fmt"
"io/ioutil"
"os"
"os/signal"
"runtime"
"sort"
"strconv"
"strings"
"syscall"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/console"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/debug"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/swarm"
bzzapi "github.com/ethereum/go-ethereum/swarm/api"
swarmmetrics "github.com/ethereum/go-ethereum/swarm/metrics"
"github.com/ethereum/go-ethereum/swarm/storage/mock"
mockrpc "github.com/ethereum/go-ethereum/swarm/storage/mock/rpc"
"github.com/ethereum/go-ethereum/swarm/tracing"
sv "github.com/ethereum/go-ethereum/swarm/version"
cli "gopkg.in/urfave/cli.v1"
)
const clientIdentifier = "swarm"
const helpTemplate = `NAME:
{{.HelpName}} - {{.Usage}}
USAGE:
{{if .UsageText}}{{.UsageText}}{{else}}{{.HelpName}}{{if .VisibleFlags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{end}}{{if .Category}}
CATEGORY:
{{.Category}}{{end}}{{if .Description}}
DESCRIPTION:
{{.Description}}{{end}}{{if .VisibleFlags}}
OPTIONS:
{{range .VisibleFlags}}{{.}}
{{end}}{{end}}
`
// Git SHA1 commit hash of the release (set via linker flags)
// this variable will be assigned if corresponding parameter is passed with install, but not with test
// e.g.: go install -ldflags "-X main.gitCommit=ed1312d01b19e04ef578946226e5d8069d5dfd5a" ./cmd/swarm
var gitCommit string
// declare a few constant error messages, useful for later error check comparisons in tests
var (
SwarmErrNoBZZAccount = "bzzaccount option is required but not set; check your config file, command line or environment variables"
SwarmErrSwapSetNoAPI = "SWAP is enabled but --swap-api is not set"
)
// this help command gets added to any subcommand that does not define it explicitly
var defaultSubcommandHelp = cli.Command{
Action: func(ctx *cli.Context) { cli.ShowCommandHelpAndExit(ctx, "", 1) },
CustomHelpTemplate: helpTemplate,
Name: "help",
Usage: "shows this help",
Hidden: true,
}
var defaultNodeConfig = node.DefaultConfig
// This init function sets defaults so cmd/swarm can run alongside geth.
func init() {
sv.GitCommit = gitCommit
defaultNodeConfig.Name = clientIdentifier
defaultNodeConfig.Version = sv.VersionWithCommit(gitCommit)
defaultNodeConfig.P2P.ListenAddr = ":30399"
defaultNodeConfig.IPCPath = "bzzd.ipc"
// Set flag defaults for --help display.
utils.ListenPortFlag.Value = 30399
}
var app = utils.NewApp("", "", "Ethereum Swarm")
// This init function creates the cli.App.
func init() {
app.Action = bzzd
app.Version = sv.ArchiveVersion(gitCommit)
app.Copyright = "Copyright 2013-2016 The go-ethereum Authors"
app.Commands = []cli.Command{
{
Action: version,
CustomHelpTemplate: helpTemplate,
Name: "version",
Usage: "Print version numbers",
Description: "The output of this command is supposed to be machine-readable",
},
{
Action: keys,
CustomHelpTemplate: helpTemplate,
Name: "print-keys",
Flags: []cli.Flag{SwarmCompressedFlag},
Usage: "Print public key information",
Description: "The output of this command is supposed to be machine-readable",
},
// See upload.go
upCommand,
// See access.go
accessCommand,
// See feeds.go
feedCommand,
// See list.go
listCommand,
// See hash.go
hashCommand,
// See download.go
downloadCommand,
// See manifest.go
manifestCommand,
// See fs.go
fsCommand,
// See db.go
dbCommand,
// See config.go
DumpConfigCommand,
// See hashes.go
hashesCommand,
}
// append a hidden help subcommand to all commands that have subcommands
// if a help command was already defined above, that one will take precedence.
addDefaultHelpSubcommands(app.Commands)
sort.Sort(cli.CommandsByName(app.Commands))
app.Flags = []cli.Flag{
utils.IdentityFlag,
utils.DataDirFlag,
utils.BootnodesFlag,
utils.KeyStoreDirFlag,
utils.ListenPortFlag,
utils.DiscoveryV5Flag,
utils.NetrestrictFlag,
utils.NodeKeyFileFlag,
utils.NodeKeyHexFlag,
utils.MaxPeersFlag,
utils.NATFlag,
utils.IPCDisabledFlag,
utils.IPCPathFlag,
utils.PasswordFileFlag,
// bzzd-specific flags
CorsStringFlag,
EnsAPIFlag,
SwarmTomlConfigPathFlag,
SwarmSwapEnabledFlag,
SwarmSwapAPIFlag,
SwarmSyncDisabledFlag,
SwarmSyncUpdateDelay,
SwarmMaxStreamPeerServersFlag,
SwarmLightNodeEnabled,
SwarmDeliverySkipCheckFlag,
SwarmListenAddrFlag,
SwarmPortFlag,
SwarmAccountFlag,
SwarmNetworkIdFlag,
ChequebookAddrFlag,
// upload flags
SwarmApiFlag,
SwarmRecursiveFlag,
SwarmWantManifestFlag,
SwarmUploadDefaultPath,
SwarmUpFromStdinFlag,
SwarmUploadMimeType,
// bootnode mode
SwarmBootnodeModeFlag,
// storage flags
SwarmStorePath,
SwarmStoreCapacity,
SwarmStoreCacheCapacity,
SwarmGlobalStoreAPIFlag,
}
rpcFlags := []cli.Flag{
utils.WSEnabledFlag,
utils.WSListenAddrFlag,
utils.WSPortFlag,
utils.WSApiFlag,
utils.WSAllowedOriginsFlag,
}
app.Flags = append(app.Flags, rpcFlags...)
app.Flags = append(app.Flags, debug.Flags...)
app.Flags = append(app.Flags, swarmmetrics.Flags...)
app.Flags = append(app.Flags, tracing.Flags...)
app.Before = func(ctx *cli.Context) error {
runtime.GOMAXPROCS(runtime.NumCPU())
if err := debug.Setup(ctx, ""); err != nil {
return err
}
swarmmetrics.Setup(ctx)
tracing.Setup(ctx)
return nil
}
app.After = func(ctx *cli.Context) error {
debug.Exit()
return nil
}
}
func main() {
if err := app.Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}
func keys(ctx *cli.Context) error {
privateKey := getPrivKey(ctx)
pubkey := crypto.FromECDSAPub(&privateKey.PublicKey)
pubkeyhex := hex.EncodeToString(pubkey)
pubCompressed := hex.EncodeToString(crypto.CompressPubkey(&privateKey.PublicKey))
bzzkey := crypto.Keccak256Hash(pubkey).Hex()
if !ctx.Bool(SwarmCompressedFlag.Name) {
fmt.Println(fmt.Sprintf("bzzkey=%s", bzzkey[2:]))
fmt.Println(fmt.Sprintf("publicKey=%s", pubkeyhex))
}
fmt.Println(fmt.Sprintf("publicKeyCompressed=%s", pubCompressed))
return nil
}
func version(ctx *cli.Context) error {
fmt.Println(strings.Title(clientIdentifier))
fmt.Println("Version:", sv.VersionWithMeta)
if gitCommit != "" {
fmt.Println("Git Commit:", gitCommit)
}
fmt.Println("Go Version:", runtime.Version())
fmt.Println("OS:", runtime.GOOS)
return nil
}
func bzzd(ctx *cli.Context) error {
//build a valid bzzapi.Config from all available sources:
//default config, file config, command line and env vars
bzzconfig, err := buildConfig(ctx)
if err != nil {
utils.Fatalf("unable to configure swarm: %v", err)
}
cfg := defaultNodeConfig
//pss operates on ws
cfg.WSModules = append(cfg.WSModules, "pss")
//geth only supports --datadir via command line
//in order to be consistent within swarm, if we pass --datadir via environment variable
//or via config file, we get the same directory for geth and swarm
if _, err := os.Stat(bzzconfig.Path); err == nil {
cfg.DataDir = bzzconfig.Path
}
//optionally set the bootnodes before configuring the node
setSwarmBootstrapNodes(ctx, &cfg)
//setup the ethereum node
utils.SetNodeConfig(ctx, &cfg)
//disable dynamic dialing from p2p/discovery
cfg.P2P.NoDial = true
stack, err := node.New(&cfg)
if err != nil {
utils.Fatalf("can't create node: %v", err)
}
defer stack.Close()
//a few steps need to be done after the config phase is completed,
//due to overriding behavior
err = initSwarmNode(bzzconfig, stack, ctx, &cfg)
if err != nil {
return err
}
//register BZZ as node.Service in the ethereum node
registerBzzService(bzzconfig, stack)
//start the node
utils.StartNode(stack)
go func() {
sigc := make(chan os.Signal, 1)
signal.Notify(sigc, syscall.SIGTERM)
defer signal.Stop(sigc)
<-sigc
log.Info("Got sigterm, shutting swarm down...")
stack.Stop()
}()
// add swarm bootnodes, because swarm doesn't use the p2p package's discv5 discovery
go func() {
s := stack.Server()
for _, n := range cfg.P2P.BootstrapNodes {
s.AddPeer(n)
}
}()
stack.Wait()
return nil
}
func registerBzzService(bzzconfig *bzzapi.Config, stack *node.Node) {
//define the swarm service boot function
boot := func(_ *node.ServiceContext) (node.Service, error) {
var nodeStore *mock.NodeStore
if bzzconfig.GlobalStoreAPI != "" {
// connect to global store
client, err := rpc.Dial(bzzconfig.GlobalStoreAPI)
if err != nil {
return nil, fmt.Errorf("global store: %v", err)
}
globalStore := mockrpc.NewGlobalStore(client)
// create a node store for this swarm key on global store
nodeStore = globalStore.NewNodeStore(common.HexToAddress(bzzconfig.BzzKey))
}
return swarm.NewSwarm(bzzconfig, nodeStore)
}
//register within the ethereum node
if err := stack.Register(boot); err != nil {
utils.Fatalf("Failed to register the Swarm service: %v", err)
}
}
func getAccount(bzzaccount string, ctx *cli.Context, stack *node.Node) *ecdsa.PrivateKey {
//an account is mandatory
if bzzaccount == "" {
utils.Fatalf(SwarmErrNoBZZAccount)
}
// Try to load the arg as a hex key file.
if key, err := crypto.LoadECDSA(bzzaccount); err == nil {
log.Info("Swarm account key loaded", "address", crypto.PubkeyToAddress(key.PublicKey))
return key
}
// Otherwise try getting it from the keystore.
am := stack.AccountManager()
ks := am.Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
return decryptStoreAccount(ks, bzzaccount, utils.MakePasswordList(ctx))
}
// getPrivKey returns the private key of the specified bzzaccount
// Used only by client commands, such as `feed`
func getPrivKey(ctx *cli.Context) *ecdsa.PrivateKey {
// booting up the swarm node just as we do in bzzd action
bzzconfig, err := buildConfig(ctx)
if err != nil {
utils.Fatalf("unable to configure swarm: %v", err)
}
cfg := defaultNodeConfig
if _, err := os.Stat(bzzconfig.Path); err == nil {
cfg.DataDir = bzzconfig.Path
}
utils.SetNodeConfig(ctx, &cfg)
stack, err := node.New(&cfg)
if err != nil {
utils.Fatalf("can't create node: %v", err)
}
defer stack.Close()
return getAccount(bzzconfig.BzzAccount, ctx, stack)
}
func decryptStoreAccount(ks *keystore.KeyStore, account string, passwords []string) *ecdsa.PrivateKey {
var a accounts.Account
var err error
if common.IsHexAddress(account) {
a, err = ks.Find(accounts.Account{Address: common.HexToAddress(account)})
} else if ix, ixerr := strconv.Atoi(account); ixerr == nil && ix > 0 {
if accounts := ks.Accounts(); len(accounts) > ix {
a = accounts[ix]
} else {
err = fmt.Errorf("index %d higher than number of accounts %d", ix, len(accounts))
}
} else {
utils.Fatalf("Can't find swarm account key %s", account)
}
if err != nil {
utils.Fatalf("Can't find swarm account key: %v - Is the provided bzzaccount(%s) from the right datadir/Path?", err, account)
}
keyjson, err := ioutil.ReadFile(a.URL.Path)
if err != nil {
utils.Fatalf("Can't load swarm account key: %v", err)
}
for i := 0; i < 3; i++ {
password := getPassPhrase(fmt.Sprintf("Unlocking swarm account %s [%d/3]", a.Address.Hex(), i+1), i, passwords)
key, err := keystore.DecryptKey(keyjson, password)
if err == nil {
return key.PrivateKey
}
}
utils.Fatalf("Can't decrypt swarm account key")
return nil
}
// getPassPhrase retrieves the password associated with the bzz account, either by fetching
// it from a list of pre-loaded passwords, or by requesting it interactively from the user.
func getPassPhrase(prompt string, i int, passwords []string) string {
// non-interactive
if len(passwords) > 0 {
if i < len(passwords) {
return passwords[i]
}
return passwords[len(passwords)-1]
}
// fallback to interactive mode
if prompt != "" {
fmt.Println(prompt)
}
password, err := console.Stdin.PromptPassword("Passphrase: ")
if err != nil {
utils.Fatalf("Failed to read passphrase: %v", err)
}
return password
}
// addDefaultHelpSubcommands scans through defined CLI commands and adds
// a basic help subcommand to each.
// If a help command is already defined, it will take precedence over the default.
func addDefaultHelpSubcommands(commands []cli.Command) {
for i := range commands {
cmd := &commands[i]
if cmd.Subcommands != nil {
cmd.Subcommands = append(cmd.Subcommands, defaultSubcommandHelp)
addDefaultHelpSubcommands(cmd.Subcommands)
}
}
}
func setSwarmBootstrapNodes(ctx *cli.Context, cfg *node.Config) {
if ctx.GlobalIsSet(utils.BootnodesFlag.Name) || ctx.GlobalIsSet(utils.BootnodesV4Flag.Name) {
return
}
cfg.P2P.BootstrapNodes = []*enode.Node{}
for _, url := range SwarmBootnodes {
node, err := enode.ParseV4(url)
if err != nil {
log.Error("Bootstrap URL invalid", "enode", url, "err", err)
}
cfg.P2P.BootstrapNodes = append(cfg.P2P.BootstrapNodes, node)
}
}


@ -1,353 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// Command manifest updates swarm manifests.
package main
import (
"fmt"
"os"
"strings"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/swarm/api"
swarm "github.com/ethereum/go-ethereum/swarm/api/client"
"gopkg.in/urfave/cli.v1"
)
var manifestCommand = cli.Command{
Name: "manifest",
CustomHelpTemplate: helpTemplate,
Usage: "perform operations on swarm manifests",
ArgsUsage: "COMMAND",
Description: "Updates a MANIFEST by adding/removing/updating the hash of a path.\nCOMMAND could be: add, update, remove",
Subcommands: []cli.Command{
{
Action: manifestAdd,
CustomHelpTemplate: helpTemplate,
Name: "add",
Usage: "add a new path to the manifest",
ArgsUsage: "<MANIFEST> <path> <hash>",
Description: "Adds a new path to the manifest",
},
{
Action: manifestUpdate,
CustomHelpTemplate: helpTemplate,
Name: "update",
Usage: "update the hash for an already existing path in the manifest",
ArgsUsage: "<MANIFEST> <path> <newhash>",
Description: "Update the hash for an already existing path in the manifest",
},
{
Action: manifestRemove,
CustomHelpTemplate: helpTemplate,
Name: "remove",
Usage: "removes a path from the manifest",
ArgsUsage: "<MANIFEST> <path>",
Description: "Removes a path from the manifest",
},
},
}
// manifestAdd adds a new entry to the manifest at the given path.
// New entry hash, the last argument, must be the hash of a manifest
// with only one entry, whose metadata will be added to the original manifest.
// On success, this function will print the new (updated) manifest's hash.
func manifestAdd(ctx *cli.Context) {
args := ctx.Args()
if len(args) != 3 {
utils.Fatalf("Need exactly three arguments <MHASH> <path> <HASH>")
}
var (
mhash = args[0]
path = args[1]
hash = args[2]
)
bzzapi := strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
client := swarm.NewClient(bzzapi)
m, _, err := client.DownloadManifest(hash)
if err != nil {
utils.Fatalf("Error downloading manifest to add: %v", err)
}
l := len(m.Entries)
if l == 0 {
utils.Fatalf("No entries in manifest %s", hash)
} else if l > 1 {
utils.Fatalf("Too many entries in manifest %s", hash)
}
newManifest := addEntryToManifest(client, mhash, path, m.Entries[0])
fmt.Println(newManifest)
}
// manifestUpdate replaces an existing entry of the manifest at the given path.
// New entry hash, the last argument, must be the hash of a manifest
// with only one entry, whose metadata will be added to the original manifest.
// On success, this function will print the hash of the updated manifest.
func manifestUpdate(ctx *cli.Context) {
args := ctx.Args()
if len(args) != 3 {
utils.Fatalf("Need exactly three arguments <MHASH> <path> <HASH>")
}
var (
mhash = args[0]
path = args[1]
hash = args[2]
)
bzzapi := strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
client := swarm.NewClient(bzzapi)
m, _, err := client.DownloadManifest(hash)
if err != nil {
utils.Fatalf("Error downloading manifest to update: %v", err)
}
l := len(m.Entries)
if l == 0 {
utils.Fatalf("No entries in manifest %s", hash)
} else if l > 1 {
utils.Fatalf("Too many entries in manifest %s", hash)
}
newManifest, _, defaultEntryUpdated := updateEntryInManifest(client, mhash, path, m.Entries[0], true)
if defaultEntryUpdated {
// Print informational message to stderr
// allowing the user to get the new manifest hash from stdout
// without the need to parse the complete output.
fmt.Fprintln(os.Stderr, "Manifest default entry is updated, too")
}
fmt.Println(newManifest)
}
// manifestRemove removes an existing entry of the manifest at the given path.
// On success, this function will print the hash of the manifest that no longer
// contains the path.
func manifestRemove(ctx *cli.Context) {
args := ctx.Args()
if len(args) != 2 {
utils.Fatalf("Need exactly two arguments <MHASH> <path>")
}
var (
mhash = args[0]
path = args[1]
)
bzzapi := strings.TrimRight(ctx.GlobalString(SwarmApiFlag.Name), "/")
client := swarm.NewClient(bzzapi)
newManifest := removeEntryFromManifest(client, mhash, path)
fmt.Println(newManifest)
}
func addEntryToManifest(client *swarm.Client, mhash, path string, entry api.ManifestEntry) string {
var longestPathEntry = api.ManifestEntry{}
mroot, isEncrypted, err := client.DownloadManifest(mhash)
if err != nil {
utils.Fatalf("Manifest download failed: %v", err)
}
// See if the path is in this manifest or if we have to dig deeper
for _, e := range mroot.Entries {
if path == e.Path {
utils.Fatalf("Path %s already present, not adding anything", path)
} else {
if e.ContentType == api.ManifestType {
prfxlen := strings.HasPrefix(path, e.Path)
if prfxlen && len(path) > len(longestPathEntry.Path) {
longestPathEntry = e
}
}
}
}
if longestPathEntry.Path != "" {
// Load the child manifest and add the entry there
newPath := path[len(longestPathEntry.Path):]
newHash := addEntryToManifest(client, longestPathEntry.Hash, newPath, entry)
// Replace the hash for parent Manifests
newMRoot := &api.Manifest{}
for _, e := range mroot.Entries {
if longestPathEntry.Path == e.Path {
e.Hash = newHash
}
newMRoot.Entries = append(newMRoot.Entries, e)
}
mroot = newMRoot
} else {
// Add the entry in the leaf Manifest
entry.Path = path
mroot.Entries = append(mroot.Entries, entry)
}
newManifestHash, err := client.UploadManifest(mroot, isEncrypted)
if err != nil {
utils.Fatalf("Manifest upload failed: %v", err)
}
return newManifestHash
}
// updateEntryInManifest updates an existing entry at the given path with a new one in the manifest with the provided mhash,
// finding the path recursively through all nested manifests. Argument isRoot is used for default
// entry update detection. If the updated entry has the same hash as the default entry, then the
// default entry in root manifest will be updated too.
// Returned values are the new manifest hash, the hash of the entry that was replaced by the new entry and
// a bool that is true if the default entry was updated.
func updateEntryInManifest(client *swarm.Client, mhash, path string, entry api.ManifestEntry, isRoot bool) (newManifestHash, oldHash string, defaultEntryUpdated bool) {
var (
newEntry = api.ManifestEntry{}
longestPathEntry = api.ManifestEntry{}
)
mroot, isEncrypted, err := client.DownloadManifest(mhash)
if err != nil {
utils.Fatalf("Manifest download failed: %v", err)
}
// See if the path is in this manifest or if we have to dig deeper
for _, e := range mroot.Entries {
if path == e.Path {
newEntry = e
// keep the reference of the hash of the entry that should be replaced
// for default entry detection
oldHash = e.Hash
} else {
if e.ContentType == api.ManifestType {
prfxlen := strings.HasPrefix(path, e.Path)
if prfxlen && len(path) > len(longestPathEntry.Path) {
longestPathEntry = e
}
}
}
}
if longestPathEntry.Path == "" && newEntry.Path == "" {
utils.Fatalf("Path %s not present in the Manifest, not setting anything", path)
}
if longestPathEntry.Path != "" {
// Load the child manifest and add the entry there
newPath := path[len(longestPathEntry.Path):]
var newHash string
newHash, oldHash, _ = updateEntryInManifest(client, longestPathEntry.Hash, newPath, entry, false)
// Replace the hash for parent Manifests
newMRoot := &api.Manifest{}
for _, e := range mroot.Entries {
if longestPathEntry.Path == e.Path {
e.Hash = newHash
}
newMRoot.Entries = append(newMRoot.Entries, e)
}
mroot = newMRoot
}
// update the manifest if the new entry is found and
// check if default entry should be updated
if newEntry.Path != "" || isRoot {
// Replace the hash for leaf Manifest
newMRoot := &api.Manifest{}
for _, e := range mroot.Entries {
if newEntry.Path == e.Path {
entry.Path = e.Path
newMRoot.Entries = append(newMRoot.Entries, entry)
} else if isRoot && e.Path == "" && e.Hash == oldHash {
entry.Path = e.Path
newMRoot.Entries = append(newMRoot.Entries, entry)
defaultEntryUpdated = true
} else {
newMRoot.Entries = append(newMRoot.Entries, e)
}
}
mroot = newMRoot
}
newManifestHash, err = client.UploadManifest(mroot, isEncrypted)
if err != nil {
utils.Fatalf("Manifest upload failed: %v", err)
}
return newManifestHash, oldHash, defaultEntryUpdated
}
func removeEntryFromManifest(client *swarm.Client, mhash, path string) string {
var (
entryToRemove = api.ManifestEntry{}
longestPathEntry = api.ManifestEntry{}
)
mroot, isEncrypted, err := client.DownloadManifest(mhash)
if err != nil {
utils.Fatalf("Manifest download failed: %v", err)
}
// See if the path is in this manifest or if we have to dig deeper
for _, entry := range mroot.Entries {
if path == entry.Path {
entryToRemove = entry
} else {
if entry.ContentType == api.ManifestType {
prfxlen := strings.HasPrefix(path, entry.Path)
if prfxlen && len(path) > len(longestPathEntry.Path) {
longestPathEntry = entry
}
}
}
}
if longestPathEntry.Path == "" && entryToRemove.Path == "" {
utils.Fatalf("Path %s not present in the Manifest, not removing anything", path)
}
if longestPathEntry.Path != "" {
// Load the child manifest and remove the entry there
newPath := path[len(longestPathEntry.Path):]
newHash := removeEntryFromManifest(client, longestPathEntry.Hash, newPath)
// Replace the hash for parent Manifests
newMRoot := &api.Manifest{}
for _, entry := range mroot.Entries {
if longestPathEntry.Path == entry.Path {
entry.Hash = newHash
}
newMRoot.Entries = append(newMRoot.Entries, entry)
}
mroot = newMRoot
}
if entryToRemove.Path != "" {
// remove the entry in this Manifest
newMRoot := &api.Manifest{}
for _, entry := range mroot.Entries {
if entryToRemove.Path != entry.Path {
newMRoot.Entries = append(newMRoot.Entries, entry)
}
}
mroot = newMRoot
}
newManifestHash, err := client.UploadManifest(mroot, isEncrypted)
if err != nil {
utils.Fatalf("Manifest upload failed: %v", err)
}
return newManifestHash
}
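For illustration, a sketch of the nested-manifest layout that addEntryToManifest, updateEntryInManifest and removeEntryFromManifest recurse over; the helper and placeholder hash below are hypothetical, but the api.Manifest and api.ManifestEntry fields are the ones used above.
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/swarm/api"
)

// nestedManifestExample returns an illustrative root/child manifest pair: the
// root's "robots." entry has ContentType api.ManifestType and its Hash points
// to the child manifest, whose entries carry the remaining path segments, so
// "robots.txt" resolves via root["robots."] -> child["txt"].
func nestedManifestExample(childHash string) (root, child *api.Manifest) {
	child = &api.Manifest{Entries: []api.ManifestEntry{
		{Path: "txt", ContentType: "text/plain; charset=utf-8"},
		{Path: "html", ContentType: "text/html; charset=utf-8"},
	}}
	root = &api.Manifest{Entries: []api.ManifestEntry{
		{Path: "index.html", ContentType: "text/html; charset=utf-8"},
		{Path: "robots.", ContentType: api.ManifestType, Hash: childHash},
	}}
	return root, child
}

func main() {
	root, child := nestedManifestExample("<child manifest hash placeholder>")
	fmt.Println(len(root.Entries), len(child.Entries))
}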


@ -1,597 +0,0 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"io/ioutil"
"os"
"path/filepath"
"runtime"
"testing"
"github.com/ethereum/go-ethereum/swarm/api"
swarm "github.com/ethereum/go-ethereum/swarm/api/client"
swarmhttp "github.com/ethereum/go-ethereum/swarm/api/http"
)
// TestManifestChange tests manifest add, update and remove
// cli commands without encryption.
func TestManifestChange(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
testManifestChange(t, false)
}
// TestManifestChangeEncrypted tests manifest add, update and remove
// cli commands with encryption enabled.
func TestManifestChangeEncrypted(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
testManifestChange(t, true)
}
// testManifestChange performs cli commands:
// - manifest add
// - manifest update
// - manifest remove
// on a manifest, testing the functionality of these
// commands on paths that are in the root manifest or a nested one.
// Argument encrypt controls whether to use encryption or not.
func testManifestChange(t *testing.T, encrypt bool) {
t.Parallel()
srv := swarmhttp.NewTestSwarmServer(t, serverFunc, nil)
defer srv.Close()
tmp, err := ioutil.TempDir("", "swarm-manifest-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmp)
origDir := filepath.Join(tmp, "orig")
if err := os.Mkdir(origDir, 0777); err != nil {
t.Fatal(err)
}
indexDataFilename := filepath.Join(origDir, "index.html")
err = ioutil.WriteFile(indexDataFilename, []byte("<h1>Test</h1>"), 0666)
if err != nil {
t.Fatal(err)
}
// The file paths robots.txt and robots.html share the same prefix "robots."
// which will result in a manifest with a nested manifest under the path "robots.".
// This will allow testing manifest changes on both root and nested manifest.
err = ioutil.WriteFile(filepath.Join(origDir, "robots.txt"), []byte("Disallow: /"), 0666)
if err != nil {
t.Fatal(err)
}
err = ioutil.WriteFile(filepath.Join(origDir, "robots.html"), []byte("<strong>No Robots Allowed</strong>"), 0666)
if err != nil {
t.Fatal(err)
}
err = ioutil.WriteFile(filepath.Join(origDir, "mutants.txt"), []byte("Frank\nMarcus"), 0666)
if err != nil {
t.Fatal(err)
}
args := []string{
"--bzzapi",
srv.URL,
"--recursive",
"--defaultpath",
indexDataFilename,
"up",
origDir,
}
if encrypt {
args = append(args, "--encrypt")
}
origManifestHash := runSwarmExpectHash(t, args...)
checkHashLength(t, origManifestHash, encrypt)
client := swarm.NewClient(srv.URL)
// upload a new file and use its manifest to add it to the original manifest.
t.Run("add", func(t *testing.T) {
humansData := []byte("Ann\nBob")
humansDataFilename := filepath.Join(tmp, "humans.txt")
err = ioutil.WriteFile(humansDataFilename, humansData, 0666)
if err != nil {
t.Fatal(err)
}
humansManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"up",
humansDataFilename,
)
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"manifest",
"add",
origManifestHash,
"humans.txt",
humansManifestHash,
)
checkHashLength(t, newManifestHash, encrypt)
newManifest := downloadManifest(t, client, newManifestHash, encrypt)
var found bool
for _, e := range newManifest.Entries {
if e.Path == "humans.txt" {
found = true
if e.Size != int64(len(humansData)) {
t.Errorf("expected humans.txt size %v, got %v", len(humansData), e.Size)
}
if e.ModTime.IsZero() {
t.Errorf("got zero mod time for humans.txt")
}
ct := "text/plain; charset=utf-8"
if e.ContentType != ct {
t.Errorf("expected content type %q, got %q", ct, e.ContentType)
}
break
}
}
if !found {
t.Fatal("no humans.txt in new manifest")
}
checkFile(t, client, newManifestHash, "humans.txt", humansData)
})
// upload a new file and use its manifest to add it to the original manifest,
// but ensure that the file will be in the nested manifest of the original one.
t.Run("add nested", func(t *testing.T) {
robotsData := []byte(`{"disallow": "/"}`)
robotsDataFilename := filepath.Join(tmp, "robots.json")
err = ioutil.WriteFile(robotsDataFilename, robotsData, 0666)
if err != nil {
t.Fatal(err)
}
robotsManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"up",
robotsDataFilename,
)
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"manifest",
"add",
origManifestHash,
"robots.json",
robotsManifestHash,
)
checkHashLength(t, newManifestHash, encrypt)
newManifest := downloadManifest(t, client, newManifestHash, encrypt)
var found bool
loop:
for _, e := range newManifest.Entries {
if e.Path == "robots." {
nestedManifest := downloadManifest(t, client, e.Hash, encrypt)
for _, e := range nestedManifest.Entries {
if e.Path == "json" {
found = true
if e.Size != int64(len(robotsData)) {
t.Errorf("expected robots.json size %v, got %v", len(robotsData), e.Size)
}
if e.ModTime.IsZero() {
t.Errorf("got zero mod time for robots.json")
}
ct := "application/json"
if e.ContentType != ct {
t.Errorf("expected content type %q, got %q", ct, e.ContentType)
}
break loop
}
}
}
}
if !found {
t.Fatal("no robots.json in new manifest")
}
checkFile(t, client, newManifestHash, "robots.json", robotsData)
})
// upload a new file and use its manifest to change the file in the original manifest.
t.Run("update", func(t *testing.T) {
indexData := []byte("<h1>Ethereum Swarm</h1>")
indexDataFilename := filepath.Join(tmp, "index.html")
err = ioutil.WriteFile(indexDataFilename, indexData, 0666)
if err != nil {
t.Fatal(err)
}
indexManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"up",
indexDataFilename,
)
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"manifest",
"update",
origManifestHash,
"index.html",
indexManifestHash,
)
checkHashLength(t, newManifestHash, encrypt)
newManifest := downloadManifest(t, client, newManifestHash, encrypt)
var found bool
for _, e := range newManifest.Entries {
if e.Path == "index.html" {
found = true
if e.Size != int64(len(indexData)) {
t.Errorf("expected index.html size %v, got %v", len(indexData), e.Size)
}
if e.ModTime.IsZero() {
t.Errorf("got zero mod time for index.html")
}
ct := "text/html; charset=utf-8"
if e.ContentType != ct {
t.Errorf("expected content type %q, got %q", ct, e.ContentType)
}
break
}
}
if !found {
t.Fatal("no index.html in new manifest")
}
checkFile(t, client, newManifestHash, "index.html", indexData)
// check default entry change
checkFile(t, client, newManifestHash, "", indexData)
})
// upload a new file and use its manifest to change the file in the original manifest,
// but ensure that the file is in the nested manifest of the original one.
t.Run("update nested", func(t *testing.T) {
robotsData := []byte(`<string>Only humans allowed!!!</strong>`)
robotsDataFilename := filepath.Join(tmp, "robots.html")
err = ioutil.WriteFile(robotsDataFilename, robotsData, 0666)
if err != nil {
t.Fatal(err)
}
robotsManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"up",
robotsDataFilename,
)
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"manifest",
"update",
origManifestHash,
"robots.html",
robotsManifestHash,
)
checkHashLength(t, newManifestHash, encrypt)
newManifest := downloadManifest(t, client, newManifestHash, encrypt)
var found bool
loop:
for _, e := range newManifest.Entries {
if e.Path == "robots." {
nestedManifest := downloadManifest(t, client, e.Hash, encrypt)
for _, e := range nestedManifest.Entries {
if e.Path == "html" {
found = true
if e.Size != int64(len(robotsData)) {
t.Errorf("expected robots.html size %v, got %v", len(robotsData), e.Size)
}
if e.ModTime.IsZero() {
t.Errorf("got zero mod time for robots.html")
}
ct := "text/html; charset=utf-8"
if e.ContentType != ct {
t.Errorf("expected content type %q, got %q", ct, e.ContentType)
}
break loop
}
}
}
}
if !found {
t.Fatal("no robots.html in new manifest")
}
checkFile(t, client, newManifestHash, "robots.html", robotsData)
})
// remove a file from the manifest.
t.Run("remove", func(t *testing.T) {
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"manifest",
"remove",
origManifestHash,
"mutants.txt",
)
checkHashLength(t, newManifestHash, encrypt)
newManifest := downloadManifest(t, client, newManifestHash, encrypt)
var found bool
for _, e := range newManifest.Entries {
if e.Path == "mutants.txt" {
found = true
break
}
}
if found {
t.Fatal("mutants.txt is not removed")
}
})
// remove a file from the manifest, but ensure that the file is in
// the nested manifest of the original one.
t.Run("remove nested", func(t *testing.T) {
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"manifest",
"remove",
origManifestHash,
"robots.html",
)
checkHashLength(t, newManifestHash, encrypt)
newManifest := downloadManifest(t, client, newManifestHash, encrypt)
var found bool
loop:
for _, e := range newManifest.Entries {
if e.Path == "robots." {
nestedManifest := downloadManifest(t, client, e.Hash, encrypt)
for _, e := range nestedManifest.Entries {
if e.Path == "html" {
found = true
break loop
}
}
}
}
if found {
t.Fatal("robots.html in not removed")
}
})
}
// TestNestedDefaultEntryUpdate tests if the default entry is updated
// if the file in the nested manifest used for it is also updated.
func TestNestedDefaultEntryUpdate(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
testNestedDefaultEntryUpdate(t, false)
}
// TestNestedDefaultEntryUpdateEncrypted tests if the default entry
// of an encrypted upload is updated if the file in the nested manifest
// used for it is also updated.
func TestNestedDefaultEntryUpdateEncrypted(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip()
}
testNestedDefaultEntryUpdate(t, true)
}
func testNestedDefaultEntryUpdate(t *testing.T, encrypt bool) {
t.Parallel()
srv := swarmhttp.NewTestSwarmServer(t, serverFunc, nil)
defer srv.Close()
tmp, err := ioutil.TempDir("", "swarm-manifest-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmp)
origDir := filepath.Join(tmp, "orig")
if err := os.Mkdir(origDir, 0777); err != nil {
t.Fatal(err)
}
indexData := []byte("<h1>Test</h1>")
indexDataFilename := filepath.Join(origDir, "index.html")
err = ioutil.WriteFile(indexDataFilename, indexData, 0666)
if err != nil {
t.Fatal(err)
}
// Add another file with common prefix as the default entry to test updates of
// default entry with nested manifests.
err = ioutil.WriteFile(filepath.Join(origDir, "index.txt"), []byte("Test"), 0666)
if err != nil {
t.Fatal(err)
}
args := []string{
"--bzzapi",
srv.URL,
"--recursive",
"--defaultpath",
indexDataFilename,
"up",
origDir,
}
if encrypt {
args = append(args, "--encrypt")
}
origManifestHash := runSwarmExpectHash(t, args...)
checkHashLength(t, origManifestHash, encrypt)
client := swarm.NewClient(srv.URL)
newIndexData := []byte("<h1>Ethereum Swarm</h1>")
newIndexDataFilename := filepath.Join(tmp, "index.html")
err = ioutil.WriteFile(newIndexDataFilename, newIndexData, 0666)
if err != nil {
t.Fatal(err)
}
newIndexManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"up",
newIndexDataFilename,
)
newManifestHash := runSwarmExpectHash(t,
"--bzzapi",
srv.URL,
"manifest",
"update",
origManifestHash,
"index.html",
newIndexManifestHash,
)
checkHashLength(t, newManifestHash, encrypt)
newManifest := downloadManifest(t, client, newManifestHash, encrypt)
var found bool
for _, e := range newManifest.Entries {
if e.Path == "index." {
found = true
newManifest = downloadManifest(t, client, e.Hash, encrypt)
break
}
}
if !found {
t.Fatal("no index. path in new manifest")
}
found = false
for _, e := range newManifest.Entries {
if e.Path == "html" {
found = true
if e.Size != int64(len(newIndexData)) {
t.Errorf("expected index.html size %v, got %v", len(newIndexData), e.Size)
}
if e.ModTime.IsZero() {
t.Errorf("got zero mod time for index.html")
}
ct := "text/html; charset=utf-8"
if e.ContentType != ct {
t.Errorf("expected content type %q, got %q", ct, e.ContentType)
}
break
}
}
if !found {
t.Fatal("no html in new manifest")
}
checkFile(t, client, newManifestHash, "index.html", newIndexData)
// check default entry change
checkFile(t, client, newManifestHash, "", newIndexData)
}
func runSwarmExpectHash(t *testing.T, args ...string) (hash string) {
t.Helper()
hashRegexp := `[a-f\d]{64,128}`
up := runSwarm(t, args...)
_, matches := up.ExpectRegexp(hashRegexp)
up.ExpectExit()
if len(matches) < 1 {
t.Fatal("no matches found")
}
return matches[0]
}
func checkHashLength(t *testing.T, hash string, encrypted bool) {
t.Helper()
l := len(hash)
if encrypted && l != 128 {
t.Errorf("expected hash length 128, got %v", l)
}
if !encrypted && l != 64 {
t.Errorf("expected hash length 64, got %v", l)
}
}
func downloadManifest(t *testing.T, client *swarm.Client, hash string, encrypted bool) (manifest *api.Manifest) {
t.Helper()
m, isEncrypted, err := client.DownloadManifest(hash)
if err != nil {
t.Fatal(err)
}
if encrypted != isEncrypted {
t.Error("new manifest encryption flag is not correct")
}
return m
}
func checkFile(t *testing.T, client *swarm.Client, hash, path string, expected []byte) {
t.Helper()
f, err := client.Download(hash, path)
if err != nil {
t.Fatal(err)
}
got, err := ioutil.ReadAll(f)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(got, expected) {
t.Errorf("expected file content %q, got %q", expected, got)
}
}


@ -1,124 +0,0 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
// Standard "mime" package rely on system-settings, see mime.osInitMime
// Swarm will run on many OS/Platform/Docker and must behave similar
// This command generates code to add common mime types based on mime.types file
//
// mime.types file provided by mailcap, which follow https://www.iana.org/assignments/media-types/media-types.xhtml
//
// Get last version of mime.types file by:
// docker run --rm -v $(pwd):/tmp alpine:edge /bin/sh -c "apk add -U mailcap; mv /etc/mime.types /tmp"
import (
"bufio"
"bytes"
"flag"
"html/template"
"io/ioutil"
"strings"
"log"
)
var (
typesFlag = flag.String("types", "", "Input mime.types file")
packageFlag = flag.String("package", "", "Golang package in output file")
outFlag = flag.String("out", "", "Output file name for the generated mime types")
)
type mime struct {
Name string
Exts []string
}
type templateParams struct {
PackageName string
Mimes []mime
}
func main() {
// Parse and ensure all needed inputs are specified
flag.Parse()
if *typesFlag == "" {
log.Fatalf("--types is required")
}
if *packageFlag == "" {
log.Fatalf("--types is required")
}
if *outFlag == "" {
log.Fatalf("--out is required")
}
params := templateParams{
PackageName: *packageFlag,
}
types, err := ioutil.ReadFile(*typesFlag)
if err != nil {
log.Fatal(err)
}
scanner := bufio.NewScanner(bytes.NewReader(types))
for scanner.Scan() {
txt := scanner.Text()
if strings.HasPrefix(txt, "#") || len(txt) == 0 {
continue
}
parts := strings.Fields(txt)
if len(parts) == 1 {
continue
}
params.Mimes = append(params.Mimes, mime{parts[0], parts[1:]})
}
if err = scanner.Err(); err != nil {
log.Fatal(err)
}
result := bytes.NewBuffer([]byte{})
if err := template.Must(template.New("_").Parse(tpl)).Execute(result, params); err != nil {
log.Fatal(err)
}
if err := ioutil.WriteFile(*outFlag, result.Bytes(), 0600); err != nil {
log.Fatal(err)
}
}
var tpl = `// Code generated by github.com/ethereum/go-ethereum/cmd/swarm/mimegen. DO NOT EDIT.
package {{ .PackageName }}
import "mime"
func init() {
var mimeTypes = map[string]string{
{{- range .Mimes -}}
{{ $name := .Name -}}
{{- range .Exts }}
".{{ . }}": "{{ $name | html }}",
{{- end }}
{{- end }}
}
for ext, name := range mimeTypes {
if err := mime.AddExtensionType(ext, name); err != nil {
panic(err)
}
}
}
`
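For reference, with a hypothetical --package value of "api" and mime.types lines such as "text/html html htm" and "application/json json", the template above would emit roughly the following file:
// Code generated by github.com/ethereum/go-ethereum/cmd/swarm/mimegen. DO NOT EDIT.
package api
import "mime"
func init() {
	var mimeTypes = map[string]string{
		".html": "text/html",
		".htm":  "text/html",
		".json": "application/json",
	}
	for ext, name := range mimeTypes {
		if err := mime.AddExtensionType(ext, name); err != nil {
			panic(err)
		}
	}
}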

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff