Compare commits

...

665 Commits

SHA1 Message Date
66e016c6dc swarm: release v0.4.3 (#1602)
* swarm: 0.4.3 release notes

* swarm: 0.4.3 stable version

* swarm: 0.4.3 release notes - fix
2019-07-25 13:08:29 +02:00
74b12e35bf network/retrieve: add bzz-retrieve protocol (#1589)
* network/retrieve: initial commit

* network: import dependencies from stream package branch

* network/retrieve: address pr comments

* network/retrieve: fix pr comments

* network/retrieve: create logger for peer

* network: address pr comments

* network/retrieve: lint

* network/retrieve: prevent infinite loop

* network/retrieve: address pr comments

* network/retrieve: fix linter

* network/retrieve: pr comments

* network/retrieval: pr comments
2019-07-25 10:53:46 +02:00
388d8ccd9f PoC: Network simulation framework (#1555)
* simv2: wip

* simulation: exec adapter start/stop

* simulation: add node status to exec adapter

* simulation: initial simulation code

* simulation: exec adapter, configure path to executable

* simulation: initial docker adapter

* simulation: wip kubernetes adapter

* simulation: kubernetes adapter proxy

* simulation: implement GetAll/StartAll/StopAll

* simulation: kubernetes adapter - set env vars and resource limits

* simulation: discovery test

* simulation: remove port definitions within docker adapter

* simulation: simplify wait for healthy loop

* simulation: get nat ip addr from interface

* simulation: pull docker images automatically

* simulation: NodeStatus -> NodeInfo

* simulation: move discovery test to example dir

* simulation: example snapshot usage

* simulation: add goclient specific simulation

* simulation: add peer connections to snapshot

* simulation: close rpc client

* simulation: don't export kubernetes proxy server

* simulation: merge simulation code

* simulation: don't export nodemap

* simulation: rename SimulationSnapshot -> Snapshot

* simulation: linting fixes

* simulation: add k8s available helper func

* simulation: vendor

* simulation: fix 'no non-test Go files' when building

* simulation: remove errors from interface methods where none were returned

* simulation: run getHealthInfo check in parallel
2019-07-24 17:00:13 +02:00
9720da34db network: structured output for kademlia table (#1586) 2019-07-19 13:35:11 +02:00
090197227f client: add bzz client, update smoke tests (#1582) 2019-07-18 18:49:20 +02:00
48857ae88e swarm-smoke: fix check max prox hosts for pull/push sync modes (#1578) 2019-07-17 16:21:50 +02:00
ad77f43b9d cmd/swarm: allow using a network interface by name for nat purposes (#1557) 2019-07-16 09:35:04 +02:00
9d96d9bef5 pss: disable TestForwardBasic (#1544) 2019-07-11 18:17:26 +02:00
7101f65a8b api, network: count chunk deliveries per peer (#1534) 2019-07-09 22:50:06 +02:00
af3b5e9ce1 network/newstream: new stream! protocol base implementation (#1500)
network/newstream: merge new stream protocol wire protocol changes, changes to docs, add basic protocol handlers and placeholders
2019-07-08 13:25:56 +03:00
7deac5693c swarm: fix bzz_info.port when using dynamic port allocation (#1537) 2019-07-05 12:15:17 +02:00
d5f6ee4620 cmd/swarm: make bzzaccount flag optional and add bzzkeyhex flag (#1531)
* cmd/swarm: don't require bzzaccount flag

* docker: remove setup script because bzzaccount is not required anymore

* cmd/swarm: manage account creation/selection when bzzflag is not defined

* cmd/swarm: no bzzaccount flag tests

* cmd/swarm: use random network ports on tests

* cmd/swarm: fix typo

* cmd/swarm: add --bzzkeyhex flag

* cmd/swarm: use different ipc paths for tests

* readme: update example on how to run swarm

* cmd/swarm: rename getAccount -> getOrCreateAccount

* cmd/swarm: remove unneeded comment

* cmd/swarm: use shorter ipc path for test
2019-07-04 13:14:10 +02:00
fb73e6cb9b cmd/swarm: remove separate function to parse env vars (#1536) 2019-07-04 13:13:39 +02:00
a6e64aea67 network/bitvector: Multibit set/unset + string rep (#1530)
* network/bitvector: Multibit set/unset + string rep

* network/bitvector: Add code comments

* network/bitvector: Make Unset -> Set with bool false

* network/bitvector: Revert to Set/Unset

* network/stream: Update to new bitvector signature
2019-07-02 18:08:00 +02:00
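
A rough sketch of what multi-bit set/unset plus a string representation on such a bit vector involves - illustrative names only, not the actual swarm/network/bitvector API:

```go
package main

import "fmt"

// BitVector is a toy bit vector backed by a byte slice.
type BitVector struct{ bits []byte }

func New(n int) *BitVector { return &BitVector{bits: make([]byte, (n+7)/8)} }

// set flips bit i on or off, covering both Set and Unset in one helper.
func (v *BitVector) set(i int, on bool) {
	if on {
		v.bits[i/8] |= 1 << uint(i%8)
	} else {
		v.bits[i/8] &^= 1 << uint(i%8)
	}
}

// SetRange applies set to bits [start, end) in one call - the "multibit" part.
func (v *BitVector) SetRange(start, end int, on bool) {
	for i := start; i < end; i++ {
		v.set(i, on)
	}
}

// String renders the vector as 0s and 1s - the "string rep" part.
func (v *BitVector) String() string {
	s := make([]byte, len(v.bits)*8)
	for i := range s {
		if v.bits[i/8]&(1<<uint(i%8)) != 0 {
			s[i] = '1'
		} else {
			s[i] = '0'
		}
	}
	return string(s)
}

func main() {
	v := New(8)
	v.SetRange(2, 5, true) // set bits 2, 3 and 4 at once
	fmt.Println(v)         // 00111000
}
```
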
b3f21ddbba swarm: 0.4.3 unstable (#1526)
* swarm: 0.4.3 unstable

* CHANGELOG: consistent date format
2019-06-28 12:22:07 +02:00
3cdc45ee0f travis: also build on release tags (#1527) 2019-06-28 12:02:47 +02:00
bef511363e swarm: release v0.4.2 (#1496)
* CHANGELOG: reflect latest merged PRs

* changelog: updated changelog for swarm 0.4.2

* version: mark 0.4.2 stable
2019-06-28 11:19:28 +02:00
fb75cdce03 network: bump bzz stream hive (#1522) 2019-06-28 10:21:14 +02:00
13774a8aee docker: update ca-certificates file (#1525) 2019-06-27 18:06:09 +02:00
4d66995fee Add swarm guide to /docs (#1513)
* docs: add swarm-guide

* readthedocs build configuration file
2019-06-26 10:18:32 +02:00
a0a14cc11c network/simulation: Add ExecAdapter capability to swarm simulations (#1503) 2019-06-24 09:48:40 +02:00
8afb316399 docs: add initial stream! protocol specification (#1454)
* docs: add initial pull sync spec
2019-06-20 12:21:07 +02:00
f2a12b38c4 docs: release process setup (#1487) 2019-06-19 11:12:56 +02:00
f828da674d build: enable ubuntu ppa disco (19.04) builds (#1495) 2019-06-18 15:34:30 +02:00
9fa6d29c88 network/stream: remove send.offered.hashes trace (#1497) 2019-06-18 14:53:18 +02:00
d589af14a8 simple fetchers (#1492)
* network: disable shouldNOTRequestAgain
* vendor singleflight
* network: disable shouldNOTRequestAgain
* network, storage: fix tests
* network/storage: remove HopCount and SkipCheck
* network/stream: use localstore when providing chunks a node has offered
* storage: refactor lnetstore
* storage: rename FetcherItem to Fetcher
* storage/feed: no distinction between catastrophic err and chunk not found
* network/stream: remove TestDeliveryFromNodes, as FindPeer is changed, and Swarm connectivity is not a chain
* network/stream: fixed intervals tests
* swarm: fixes for linter
* storage: use LRU cache for fetchers
* network, storage: better godoc
* storage/netstore: explicit errors
* storage/feed/lookup: Clarify ReadFunc expected return error values
* storage: address comments by elad
2019-06-18 08:47:27 +02:00
f57d4f0802 swarm/storage: Support for uploading 100gb files (#1395)
* swarm/storage: Fix goroutine leak while uploading large files
* swarm/storage: Fix pyramid chunker to properly wrap level 3 and above
2019-06-18 08:46:20 +02:00
604960938b Revert "simplification of Fetchers (#1344)" (#1491)
This reverts commit 0b724bd4d5.
2019-06-17 10:30:55 +02:00
0b724bd4d5 simplification of Fetchers (#1344) 2019-06-17 10:01:12 +02:00
21adbe2fac docker: include git commit hash in swarm version (#1488)
* dockerignore: remove .git - needed for git hash on builds

* changelog: include #1488
2019-06-14 15:49:18 +02:00
8b37def961 cmd/swarm-smoke: fix what we wanted to avoid (#1486) 2019-06-14 14:49:21 +02:00
fce3e45f2b build: don't upload buildinfo (#1484) 2019-06-14 14:24:12 +02:00
99db027993 docker: fix run-smoke.sh path to binary (#1485) 2019-06-14 14:22:55 +02:00
8a1f6a6c6f README: add docker section and fix license section (#1483)
* readme: add docker section

* readme: fix license info

* readme: omit sections in toc
2019-06-14 12:18:56 +02:00
9704ced894 cmd/swarm-smoke: add paging to trackChunks (#1464)
* cmd/swarm-smoke: add paging to trackChunks
2019-06-14 10:46:41 +02:00
99a431cf50 version: update to v0.4.2 unstable (#1470) 2019-06-13 17:45:36 +02:00
0b9a3ea407 swarm: release v0.4.1 (#1469) 2019-06-13 16:32:55 +02:00
2f0b94fa1a network: bump proto versions due to change in OfferedHashesMsg (#1465) 2019-06-13 16:11:46 +02:00
815a6d120f changelog: initial setup (#1466) 2019-06-13 16:04:16 +02:00
4744b7c9d7 p2p/protocols: do not vendor swarm related packages (#1462) 2019-06-13 15:11:00 +02:00
3243c7ba07 docker: create new dockerfiles that are context aware (#1463) 2019-06-13 14:30:13 +02:00
76e71cca3b network/stream: disable flaky TestStartNetworkSync test (#1458) 2019-06-12 15:50:44 +02:00
b3f601427c storage: fix alignment panics on 32 bit arch (#1460) 2019-06-12 12:55:36 +02:00
f709083e47 api: temporary fix test failing on windows due to access denied (#1450) 2019-06-12 10:00:46 +02:00
e865445f17 network/stream: remove cast for msg type (#1449) 2019-06-12 08:55:15 +02:00
de220b8323 docs: arch diagram - use source of truth images (#1434) 2019-06-11 10:17:45 +02:00
c1126aaa1b swarm/network/stream: add syncing test (#1399)
* swarm/network/stream: remove the old syncing test and replace it with two more granular tests. The first tests a full sync between two nodes, making sure that when subscriptions are established on all bins, all content is replicated from one node to the other. The second adds a larger number of nodes in a star network topology, effectively giving the pivot node a depth > 0 so that it syncs only certain content to the connected nodes. Because of the star topology (always 1 hop) this test does not check that, for example, specific content is routed across multiple nodes; it only assumes content should end up on the most proximate node.

* api, cmd/swarm-smoke, fuse, network, storage: fix data dir leak when using ephemeral filestore, pull test params to be easily adjustable
2019-06-10 10:52:19 +02:00
ee7ccbdb4d docs: initial swarm arch diagram (#1432)
Important high-level diagrams added
2019-06-07 13:27:20 +02:00
bedd06762d swarm-smoke: add debug flag (#1428) 2019-06-06 15:06:18 +02:00
5663cb2c76 docs: push syncing tags doc (#1429) 2019-06-06 14:51:58 +02:00
a454aa6c8d swarm,cmd: fix migration link, change loglevel severity (#1420) 2019-06-06 14:46:28 +02:00
bd1887189c swarm/network/stream: remove dead code (#1422)
* swarm/network/stream: remove dead code

* swarm/network/stream: remove HandoverProof
2019-06-05 12:59:46 +02:00
6d0902da3c p2p/protocols: import to codebase, remove from vendor (#1416) 2019-06-04 14:32:44 +02:00
70320ceeae Makefile: added support for "make swarm" command (#1412) 2019-06-03 20:59:48 +02:00
3e522f2e7d appveyor: remove unused nsis task (#1410) 2019-06-03 15:33:09 +02:00
1c8c8bac45 build/deb: fix deb.control for ethereum-swarm (#1414)
* build/deb: fix deb.control for ethereum-swarm

* build: env.sh fix
2019-06-03 15:26:37 +02:00
809165fc5e ci/cd: fix azure blobstorage name (#1413) 2019-06-03 15:16:56 +02:00
2c80a4198f Appveyor branches: master only (#1409) 2019-06-03 14:18:41 +02:00
a83a84460e .github: fix contributing.md (#1408) 2019-06-03 14:08:40 +02:00
c5430df218 README.md: modify installation instructions after repo split (#1407) 2019-06-03 13:10:20 +02:00
b046760db1 swarm: codebase split from go-ethereum (#1405) 2019-06-03 12:28:18 +02:00
7a22da98b9 accounts/scwallet: flag to specify path to smartcard daemon (#19439)
* accounts/scwallet: Add a switch to enable smartcard support

* accounts: change the meaning of the switch

* disable card support in windows until tested
* only activate account if pcscd socket file is present
* the switch is now the path to the socket file

* accounts/scwallet: holiman's review feedback

* accounts/scwallet: send the path to go-pcsclite

* accounts/scwallet: add default, per platform path

* accounts/scwallet: fix error log warning

* accounts/scwallet: update pcsc lib to latest

* accounts/scwallet: use default path from pcsclite

* scwallet: forgot to change switch name

* cmd: minor style cleanups (error handling first, then happy path)
2019-05-31 12:30:28 +03:00
30263ad37d swarm/storage: set false, only when we get a chunk back (#19599) 2019-05-31 11:13:34 +02:00
f2612ac948 les: short circuit in the unregister if peer is not registered (#19644) 2019-05-31 10:54:50 +03:00
58497f46bd les, les/flowcontrol: implement LES/3 (#19329)
les, les/flowcontrol: implement LES/3
2019-05-30 20:51:13 +02:00
3d58268bba github: update code owners (#19638)
* Update codeowners

* Add karalabe to clique

* remove codeowner for consensus/clique
2019-05-30 15:34:44 +03:00
008d250e3c swarm/api/http: fix bzz-hash to return ens resolved hash directly (#19594) 2019-05-29 00:16:09 +02:00
cf38a3dc65 swarm/api: update mission statement (#19612) 2019-05-29 00:14:24 +02:00
048df258dc accounts/scwallet: change sc url scheme to keycard (#19632) 2019-05-28 19:47:53 +03:00
2388e425f2 crypto/bn256/cloudflare: fix comments to describe the updated curve parameters (#19577)
* Removed comment section referring to Cloudflare's bn curve parameters

* Added comment to clarify the nature of the parameters

* Changed value of xi to i+9
2019-05-28 09:13:30 +03:00
5429dc75bd cmd/abigen: allow using abigen --pkg flag with standard input (#19207) 2019-05-27 20:28:17 +02:00
c4de228e18 Merge pull request #19630 from karalabe/fix-commit-strings
internal/build: fix Travis and AppVeyor commit string injection
2019-05-27 18:43:07 +03:00
f0bced30bb internal/build: fix Travis and AppVeyor commit string injection 2019-05-27 18:41:47 +03:00
f8a4e37968 Merge pull request #19629 from karalabe/duktape-2.3.0
vendor: update go-duktape to v2.3.0
2019-05-27 17:32:01 +03:00
17cf0e5863 Merge pull request #19524 from gballet/scwallet-puk-count
accounts/scwallet: Display PUK retry count
2019-05-27 17:30:59 +03:00
7bc1cb3677 accounts/scwallet: fix public key confirmation regression 2019-05-27 17:29:02 +03:00
75a860880c accounts/scwallet: display PUK retry count, validate PIN/PUK length 2019-05-27 17:29:01 +03:00
9cd338054f vendor: update go-duktape to v2.3.0 2019-05-27 16:38:35 +03:00
fc85777a21 core: concurrent database reinit from freezer dump
* core: reinit chain from freezer in batches

* core/rawdb: concurrent database reinit from freezer dump

* core/rawdb: reinit from freezer in sequential order
2019-05-27 15:48:30 +03:00
a184ab7a61 accounts/keystore: enable fallback for darwin,!cgo (#19614)
Without this, accounts/keystore fails to build for Darwin with
CGO_ENABLED=0.
2019-05-27 12:28:06 +02:00
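
A minimal sketch of the build-constraint pattern such a fallback relies on, with hypothetical names (not the actual keystore source): this variant compiles only on Darwin when cgo is disabled, where the cgo-backed filesystem notifier cannot build.

```go
// watch_fallback_sketch.go - hypothetical file name.

// +build darwin,!cgo

package keystore

// watcher is a no-op stand-in for the notification-based account watcher
// that is only available when cgo is enabled.
type watcher struct{}

func newWatcher(interface{}) *watcher { return new(watcher) }
func (*watcher) start()               {}
func (*watcher) close()               {}
```
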
db0cc211f7 Merge pull request #19628 from karalabe/nofreeze-genesis
core/rawdb: keep genesis in key-value store for full sync too
2019-05-27 13:02:37 +03:00
7392f59e7c core/rawdb: keep genesis in key-value store for full sync too 2019-05-27 12:25:24 +03:00
611113e967 core: never delete genesis block (#19617) 2019-05-27 12:05:45 +03:00
4e0c1a1a6b eth, les: reject light client connection if server is not synced (#19616)
* eth, les: reject light client connection if server is not synced

* eth, les: rename function and variables

* les: format
2019-05-26 19:15:05 +03:00
922e757f19 accounts/usbwallet: enable the Nano X and upcoming Ledger IDs (#19623) 2019-05-26 06:57:54 +02:00
fec3b56f7f accounts, p2p, rpc: make CGO_ENABLED=0 build again (#19593)
* p2p: remove direct import of cgo-library

* accounts, rpc: more nocgo alternatives

* rpc: move unix path constant into separate file

* accounts/scwallet: address review concerns, remove copy-pasta
2019-05-26 01:07:10 +03:00
9efc1a847e crypto/bn256/cloudflare: checks for nil pointers in Marshal functions (#19609)
* Added checks for nil pointers in Marshal functions

* Set nil pointer to identity in GT before marshaling
2019-05-26 00:57:07 +03:00
4b622b277e core/state: unified function receiver names (#19615) 2019-05-26 00:52:10 +03:00
2cd6059e51 tests: make transaction tests run again, fix #19033 (#19529)
* tests: make transaction tests run again, fix #19033

* tests: refactor transaction tests
2019-05-21 14:58:06 +03:00
a54142987c log: do not pad values longer than 40 characters (#19592)
* log: Do not pad too long values

* log: gofmt
2019-05-20 16:26:29 +03:00
4cf8505d22 build: fix Launchpad typo (#19597) 2019-05-20 16:11:23 +03:00
97d3615612 les: avoid fetcher deadlock on requestChn (#19571)
* les: avoid fetcher deadlock on requestChn
2019-05-17 20:39:39 +02:00
e687d063c3 accounts/abi: fix TestUnpackMethodIntoMap (#19484) 2019-05-17 15:24:04 +02:00
509facd631 swarm/version: v0.4.1 unstable (#19587) 2019-05-17 10:14:39 +02:00
c3b317a4fc swarm/version: v0.4.0 stable (#19586) 2019-05-17 10:12:03 +02:00
2f855bfa28 Merge pull request #19591 from karalabe/64bit-align
core/rawdb, eth/downloader: align 64bit atomic fields
2019-05-17 02:13:50 +03:00
8cce620311 build: disable swarm packages (#19585)
* build: disable swarm packages

* build: remove allCrossCompiledArchiveFiles; inline allToolsArchiveFiles

* build: get rid of some superfluous comments
2019-05-17 02:06:20 +03:00
f35975ea21 core/rawdb, eth/downloader: align 64bit atomic fields 2019-05-17 01:45:05 +03:00
f5d89cdb72 Merge pull request #19244 from karalabe/freezer-2
cmd, core, eth, les, node: chain freezer on top of db rework
2019-05-16 19:10:58 +03:00
60386b3545 swarm/network: bump network id for 0.4 release (#19580)
* swarm/network: bump network id for 0.4 release

* swarm/network: bump bzz protocol version

* swarm/docs: migration document v0.3 to v0.4

* swarm/storage/feed: gofmt lookup_test.go
2019-05-16 18:29:12 +03:00
9eba3a9fff cmd/geth, core/rawdb: seamless freezer consistency, friendly removedb 2019-05-16 17:01:56 +03:00
1e067202a2 swarm/feeds: Parallel feed lookups (#19414) 2019-05-16 15:47:11 +02:00
536b3b416c consensus, core, eth, params, trie: fixes + clique history cap 2019-05-16 10:39:35 +03:00
37d280da41 core, cmd, vendor: fixes and database inspection tool (#15)
* core, eth: some fixes for freezer

* vendor, core/rawdb, cmd/geth: add db inspector

* core, cmd/utils: check ancient store path forcibly

* cmd/geth, common, core/rawdb: a few fixes

* cmd/geth: support windows file rename and fix rename error

* core: support ancient plugin

* core, cmd: streaming file copy

* cmd, consensus, core, tests: keep genesis in leveldb

* core: write txlookup during ancient init

* core: bump database version
2019-05-16 10:39:34 +03:00
42c746d6f4 freezer: disable compression on hashes and difficulties (#14)
* freezer: disable compression on hashes and difficulties

* core/rawdb: address review concerns

* core/rawdb: address review concerns
2019-05-16 10:39:33 +03:00
331de17e4d core/rawdb: support starting offset for future deletion 2019-05-16 10:39:33 +03:00
80469bea0c all: integrate the freezer with fast sync
* all: freezer style syncing

core, eth, les, light: clean up freezer relative APIs

core, eth, les, trie, ethdb, light: clean a bit

core, eth, les, light: add unit tests

core, light: rewrite setHead function

core, eth: fix downloader unit tests

core: add receipt chain insertion test

core: use constant instead of hardcoding table name

core: fix rollback

core: fix setHead

core/rawdb: remove canonical block first and then iterate side chain

core/rawdb, ethdb: add hasAncient interface

eth/downloader: calculate ancient limit via cht first

core, eth, ethdb: lots of fixes

* eth/downloader: print ancient disable log only for fast sync
2019-05-16 10:39:32 +03:00
b6cac42e9f core/rawdb: add file lock for freezer 2019-05-16 10:39:31 +03:00
b69bdc2a4f freezer: implement split files for data
* freezer: implement split files for data

* freezer: add tests

* freezer: close old head-file when opening next

* freezer: fix truncation

* freezer: more testing around close/open

* rawdb/freezer: address review concerns

* freezer: fix minor review concerns

* freezer: fix remaining concerns + testcases around truncation

* freezer: docs

* freezer: implement multithreading

* core/rawdb: fix freezer nitpicks + change offsets to uint32

* freezer: preopen files, simplify lock constructs

* freezer: delete files during truncation
2019-05-16 10:39:30 +03:00
006c21efc7 cmd, core, eth, les, node: chain freezer on top of db rework 2019-05-16 10:39:29 +03:00
0c5f8c078a accounts,signer: better support for EIP-191 intended validator (#19523) 2019-05-15 21:26:07 +02:00
b548b5aeb0 p2p/discover: fix crash in Resolve (#19579) 2019-05-15 11:11:17 -04:00
4b9c3bd39a swarm/storage: disable open tracing on indices (#19578) 2019-05-15 15:06:57 +02:00
9b0d1b9ab2 swarm/metrics: collect metrics on datadir disk usage (#19576) 2019-05-15 12:22:32 +02:00
350a87dd3c p2p/discover: add support for EIP-868 (v4 ENR extension) (#19540)
This change implements EIP-868. The UDPv4 transport announces support
for the extension in ping/pong and handles enrRequest messages.

There are two uses of the extension: If a remote node announces support
for EIP-868 in their pong, node revalidation pulls the node's record.
The Resolve method requests the record unconditionally.
2019-05-15 06:47:45 +02:00
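
A schematic sketch of those two uses, with simplified stand-in types rather than the real p2p/discover API:

```go
package main

import "fmt"

// pong carries the EIP-868 extension field: the sequence number of the
// sender's node record.
type pong struct{ enrSeq uint64 }

type remoteNode struct{ recordSeq uint64 }

// onPong sketches the revalidation path: if the peer advertises a newer
// record than the one we hold, pull it with an enrRequest.
func onPong(n *remoteNode, p pong, enrRequest func() uint64) {
	if p.enrSeq > n.recordSeq {
		n.recordSeq = enrRequest()
	}
}

func main() {
	n := &remoteNode{recordSeq: 1}
	onPong(n, pong{enrSeq: 3}, func() uint64 { return 3 })
	fmt.Println("record seq now:", n.recordSeq) // record seq now: 3
	// Resolve, by contrast, would call enrRequest unconditionally.
}
```
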
8deec2e45a rlp: fixes for two corner cases and documentation (#19527)
These changes fix two corner cases related to the internal handling of types
in package rlp. First, the "tail" struct tag can only be applied to the last
field; the check for this was wrong and didn't allow for private fields after
the tagged field. Second, unsupported types (e.g. structs containing int)
which implement either the Encoder or the Decoder interface, but not both,
couldn't be encoded or decoded.

Also fixes #19367
2019-05-14 15:09:56 +02:00
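
The "tail" case can be illustrated against the rlp package directly; assuming the post-fix behaviour, the unexported field after the tagged one is exactly the shape the old check wrongly rejected (rlp ignores unexported fields, so it is legal):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

type record struct {
	Name string
	Rest []string `rlp:"tail"` // must be the last exported field
	note bool     // unexported: ignored by rlp, allowed after the tag
}

func main() {
	// Rest's elements are appended to the enclosing list rather than
	// encoded as a nested list.
	b, err := rlp.EncodeToBytes(&record{Name: "x", Rest: []string{"a", "b"}})
	fmt.Printf("%x %v\n", b, err)
}
```
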
184af72e4e accounts/abi: fix documentation (#19568) 2019-05-14 12:38:34 +03:00
07d2d83c31 Merge pull request #19563 from karalabe/faucet-remove-g+-mention
cmd/faucet: remove Google+ mention from web assets too
2019-05-13 15:50:17 +03:00
9effd64290 core, eth, trie: bloom filter for trie node dedup during fast sync (#19489)
* core, eth, trie: bloom filter for trie node dedup during fast sync

* eth/downloader, trie: address review comments

* core, ethdb, trie: restart fast-sync bloom construction now and again

* eth/downloader: initialize fast sync bloom on startup

* eth: reenable eth/62 until we properly remove it
2019-05-13 15:28:01 +03:00
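
The dedup idea in compressed form: a definite bloom miss proves a trie node was never written, so the disk lookup can be skipped; only maybe-present hashes pay for a database read. A toy filter for illustration, not the actual trie implementation:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// bloom is a toy two-probe filter keyed by 32-byte node hashes.
type bloom struct{ bits []uint64 }

func newBloom(words int) *bloom { return &bloom{bits: make([]uint64, words)} }

func (b *bloom) pos(h [32]byte, i int) (int, uint) {
	x := binary.BigEndian.Uint64(h[i*8:]) % uint64(len(b.bits)*64)
	return int(x / 64), uint(x % 64)
}

func (b *bloom) add(h [32]byte) {
	for i := 0; i < 2; i++ {
		w, o := b.pos(h, i)
		b.bits[w] |= 1 << o
	}
}

// contains returning false guarantees the hash was never added, so the
// expensive database lookup can be skipped.
func (b *bloom) contains(h [32]byte) bool {
	for i := 0; i < 2; i++ {
		w, o := b.pos(h, i)
		if b.bits[w]&(1<<o) == 0 {
			return false
		}
	}
	return true
}

func main() {
	f := newBloom(1024)
	var h [32]byte
	h[0] = 0xAB
	fmt.Println(f.contains(h)) // false: definitely new, skip disk
	f.add(h)
	fmt.Println(f.contains(h)) // true: maybe present, do a real disk check
}
```
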
f22c00b161 cmd/faucet: remove Google+ mention from web assets too 2019-05-13 14:52:05 +03:00
40cdcf8c47 les, light: implement ODR transaction lookup by hash (#19069)
* les, light: implement ODR transaction lookup by hash

* les: delete useless file

* internal/ethapi: always use backend to find transaction

* les, eth, internal/ethapi: renamed GetCanonicalTransaction to GetTransaction

* light: add canonical header verification to GetTransaction
2019-05-13 14:41:10 +03:00
f4fb1a1801 les: fixed cost table update (#19546) 2019-05-13 14:26:47 +03:00
751effa35e core: fix formatting error (trailing whitespace) 2019-05-13 14:07:55 +03:00
a0b81097ad Merge pull request #19562 from holiman/fix_tabcrash
p2p/discover: fix nil-dereference due to race
2019-05-13 13:31:45 +03:00
ec2131c8d3 core: move error variable to error.go (#19560)
* move error variable to error.go

* Update error.go

Edit "Genesis" to "genesis"
2019-05-13 13:23:32 +03:00
95263914fc p2p/discover: fix a race where table loop would self-lookup before returning from constructor 2019-05-13 11:30:31 +02:00
db83ba4067 cmd/swarm: skip export test on windows builds (#19555) 2019-05-13 10:27:27 +02:00
c8a77d8604 swarm/metrics: track runtime metrics (#19557) 2019-05-13 09:55:11 +02:00
7744241886 swarm/network/stream: add pure retrieval test (#19552) 2019-05-11 12:55:06 +02:00
6ec6b29051 core: fix import errors on clique crashes + empty blocks (#19544)
* core: fix import errors on clique crashes + empty blocks

* consensus/clique, core: add test for the mirrored state issue

* core: address todo question wrt log count

* core: raise a louder warning for non-clique known blocks
2019-05-10 17:04:10 +03:00
494f5d448a Merge pull request #19550 from ethersphere/swarm-rather-stable
swarm v0.4-rc1
2019-05-10 14:09:01 +03:00
9b1543c282 swarm/network: update syncer metrics 2019-05-10 12:29:27 +02:00
297fa1855c swarm/network: measure addPeer and deletePeer to know if Kad rearranged
swarm/storage: remove traces for put/get/set (#1389)

* swarm/storage: remove traces for put/get/set

* swarm/storage: remove Has traces
2019-05-10 12:29:27 +02:00
84dfaea246 swarm: instrument setNextBatch
swarm/storage/localstore: add gc metrics, disable flaky test
2019-05-10 12:29:22 +02:00
3e9ba57669 swarm/storage: improve instrumentation
swarm/storage/localstore: fix broken metric (#1373)

p2p/protocols: count different messages (#1374)

cmd/swarm: disable snapshot create test due to constant flakes (#1376)

swarm/network: remove redundant goroutine (#1377)
2019-05-10 12:27:04 +02:00
7f753461ca swarm/pss: disable failing handshake test 2019-05-10 12:26:59 +02:00
8802b9ce7f swarm-smoke: add syncDelay flag
swarm/network: add want delay timer to syncing (#1367)

swarm/network: synchronise peer.close() (#1369)
2019-05-10 12:26:55 +02:00
ad6c39012f swarm: push tags integration - request flow
swarm/api: integrate tags to count chunks being split and stored
swarm/api/http: integrate tags in middleware for HTTP `POST` calls and assert chunks being calculated and counted correctly
swarm: remove deprecated and unused code, add swarm hash to DoneSplit signature, remove calls to the api client from the http package
2019-05-10 12:26:52 +02:00
3030893a21 swarm/network: update syncing 2019-05-10 12:26:49 +02:00
12240baf61 swarm/chunk: add tags data type
* swarm/chunk: add tags backend to chunk package
2019-05-10 12:26:45 +02:00
f8eb8fe64c cmd/swarm-smoke: check if chunks are at the most proximate host
swarm/network: measure how many chunks a node delivers (#1358)
2019-05-10 12:26:41 +02:00
a1cd7e6e92 p2p/protocols, swarm/network: fix resource leak with p2p teardown 2019-05-10 12:26:37 +02:00
c1213bd00c swarm: LocalStore metrics
* swarm/shed: remove metrics fields from DB struct

* swarm/chunk: add String methods to modes

* swarm/storage/localstore: add metrics and traces

* swarm/chunk: unknown modes without spaces in String methods

* swarm/storage/localstore: remove bin number from pull subscription metrics

* swarm/storage/localstore: add resetting time metrics and code improvements
2019-05-10 12:26:33 +02:00
993b145f25 swarm/storage/localstore: fix export db.Put signature
cmd/swarm/swarm-smoke: improve smoke tests (#1337)

swarm/network: remove dead code (#1339)

swarm/network: remove FetchStore and SyncChunkStore in favor of NetStore (#1342)
2019-05-10 12:26:30 +02:00
996755c4a8 cmd/swarm, swarm: LocalStore storage integration 2019-05-10 12:26:26 +02:00
c94d582aa7 Merge pull request #19539 from karalabe/faucet-updates
cmd/faucet: embed git commit hash/date, disable Google+
2019-05-09 00:49:14 +03:00
ab10c15578 cmd/faucet: sunset Google+ authentication 2019-05-08 17:14:54 +03:00
9454508b23 cmd/faucet: embed git commit hash and date into the version 2019-05-08 17:11:33 +03:00
be4d74f8d2 cmd, internal/build, docker: advertise commit date in unstable build versions (#19522)
* add-date-to unstable

* fields-insteadof-split

* internal/build: support building with missing git

* docker: add git history back to support commit date in version

* internal/build: use PR commits hashes for PR builds
2019-05-08 16:44:28 +03:00
c113723fdb core: handle importing known blocks more gracefully (#19417)
* core: import known blocks if they can be inserted as canonical blocks

* core: insert known blocks

* core: remove useless

* core: don't process the head block in the reorg function
2019-05-08 14:30:36 +03:00
78477e4118 Merge pull request #19534 from karalabe/downloader-delay-fix
eth/downloader: fix header delays during chain dedup
2019-05-08 12:15:17 +03:00
e770f62445 appveyor: Upgrade Go to v1.12.5 (#19536) 2019-05-08 12:11:08 +03:00
9b831d74fb accounts/usbwallet: fix a comment typo in trezor driver (#19535) 2019-05-07 19:22:24 +03:00
85726fdb02 eth/downloader: fix header delays during chain dedup 2019-05-07 16:09:11 +03:00
107c67d74e accounts, cmd, internal, signer: add note about backing up the keystore (#19432)
* accounts: add note about backing up the keystore

* cmd, accounts: move the printout to accountCreate

* internal, signer: add info when new account is created via rpc

* cmd, internal, signer: split logs

* cmd/geth: make account new output a bit more verbose
2019-05-07 15:49:51 +03:00
c8cf360f29 core: fix canonicality confusion (#19514)
* core: add tests for canonicality confusion

* core: delete stale future canon number mappings during reorg to shorter+heavier chain
2019-05-07 14:26:00 +02:00
14868a37fb trie: clarify why verifyProof doesn't check hashes (#19530)
* trie: fix merkle proof

* trie: use hasher instead of allocate keccack256 every time

* trie: add comments
2019-05-07 15:06:07 +03:00
abeba0a1de core/rawdb: fix typo (#19526) 2019-05-04 12:17:47 +02:00
b6c0234e0b Merge pull request #19513 from fjl/p2p-discover-split-v4
p2p/discover: split out discv4 code
2019-05-02 17:30:49 +03:00
5036992b06 eth, les: add error when accessing missing block state (#18346)
This change makes getBalance, getCode, getStorageAt, getProof,
call, getTransactionCount return an error if the block number in
the request doesn't exist. getHeaderByNumber still returns null
for missing headers.
2019-05-02 14:50:23 +02:00
4c90efdf57 consensus,core,miner: avoid overhead of creating a new block (#19301)
* consensus,core,miner: avoid overhead of creating a new block

* consensus: nitpick dot

* consensus: fix some comment formatting nits
2019-04-30 16:42:36 +03:00
dba1750eda p2p/discover: split out discv4 code
This change restructures the internals of p2p/discover to make room for
the discv5 code which will soon be added to this package.

- packet type names now have a "V4" suffix.
- ListenUDP returns *UDPv4 instead of *Table. This technically breaks
  the API but the only caller in go-ethereum is package p2p, which uses
  a compatible interface and doesn't need changes.
- The internal transport interface is changed to make Table reusable for v5.
- The 'lookup' code moves from table to transport. This required
  updating the lookup unit test to use udpTest instead of a custom transport.
2019-04-30 13:13:22 +02:00
a43ec8bba5 internal/testlog: add logger for unit tests 2019-04-30 13:12:11 +02:00
befca7e8b0 Merge pull request #19500 from karalabe/cht-txpool-open-limit
eth: enforce chain above CHT before accepting txs into the pool
2019-04-30 11:03:57 +03:00
af96b6644e internal/ethapi: estimate gas usage automatically (#19508) 2019-04-30 11:02:34 +03:00
ed47f29bc8 eth: enforce chain above CHT before accepting txs into the pool 2019-04-26 12:45:12 +03:00
504f88b65b core/rawdb: typo fix storea => stores (#19498)
* typo fix

* change to stores
2019-04-26 12:22:21 +03:00
3873a7314d swarm/network: fix data races in TestInitialPeersMsg test (#19490)
* swarm/network: fix data races in TestInitialPeersMsg test

* swarm/network: add Kademlia.Saturation method with lock

* swarm/network: add Hive.Peer method to safely retrieve a bzz peer

* swarm/network: remove duplicate comment

* p2p/testing: prevent goroutine leak in ProtocolTester

* swarm/network: fix data race in newBzzBaseTesterWithAddrs

* swarm/network: fix goroutine leaks in testInitialPeersMsg

* swarm/network: raise number of peer check attempts in testInitialPeersMsg

* swarm/network: use Hive.Peer in Hive.PeerInfo function

* swarm/network: reduce the scope of mutex lock in newBzzBaseTesterWithAddrs

* swarm/storage: disable TestCleanIndex with race detector
2019-04-25 21:33:18 +02:00
92a849a509 Merge pull request #19497 from karalabe/peers-50
cmd/utils, node: switch over default peer count to 50
2019-04-25 17:25:09 +03:00
937417527c core: lookup txs by block number instead of block hash (#19431)
* core: lookup txs by block number instead of block hash

Transaction hashes now store a reference to their corresponding
block number as opposed to their hash. In benchmarks this was
shown to reduce storage by over 12 GB.

The main limitation of this approach is that transactions on
non-canonical blocks could never be looked up, however that is
currently not supported.

The database version has been upgraded to version 5 and the
transaction lookup process is backwards-compatible with the
prior two transaction lookup formats preexisting in the
database instance. Tests have been added to ensure this.

* core/rawdb: tiny review nit fixes
2019-04-25 17:24:55 +03:00
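
In sketch form, the new lookup entry stores a block number, and resolution goes number -> canonical hash -> body. Types here are illustrative, not the rawdb schema:

```go
package main

import "fmt"

// txLookup maps a transaction hash to the *number* of its canonical block,
// replacing the old hash-to-hash entry that cost ~32 bytes per tx.
type txLookup map[[32]byte]uint64

// resolve turns a tx hash into its block hash via the canonical-number
// index. Only canonical blocks are reachable this way, which is the
// stated limitation for transactions on non-canonical blocks.
func resolve(l txLookup, canonicalHash func(uint64) [32]byte, tx [32]byte) ([32]byte, bool) {
	num, ok := l[tx]
	if !ok {
		return [32]byte{}, false
	}
	return canonicalHash(num), true
}

func main() {
	l := txLookup{{0x01}: 42}
	hash, ok := resolve(l, func(n uint64) [32]byte { return [32]byte{byte(n)} }, [32]byte{0x01})
	fmt.Printf("%v %x\n", ok, hash[:1])
}
```
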
7c91038bff Merge pull request #19438 from karalabe/ledger-new-derivation-path
accounts: switch Ledger derivation path to canonical one
2019-04-25 13:33:17 +03:00
0758d7fe5c cmd/utils, node: switch over default peer count to 50 2019-04-25 12:54:33 +03:00
749ccab9a4 eth/downloader: enable unsync-protection for light client (#19496)
* eth/downloader: enable unsync-protection for light client

* eth/downloader: fix tests
2019-04-25 11:31:23 +03:00
6269e5574c miner: polish miner configuration (#19480)
* cmd, eth, miner: disable advance sealing if the user requires it

* cmd, console, miner, les, eth: wrap the miner config

* eth: remove todo

* cmd, miner: revert noadvance flag

The reason for this: if a transaction's execution takes even longer
than the block time, then that kind of transaction is a DoS attack.
2019-04-23 10:08:51 +03:00
d9403690ec swarm/pss: Fix flaky TestProxNetwork (#19471) 2019-04-19 11:15:17 +02:00
d8dc37c85b Merge pull request #18168 from karalabe/trie-better-cache-size-estimation
trie: approximate the wasted cache metaspace closer
2019-04-18 12:46:27 +03:00
29bba5d0b2 p2p: fix typo in dialstate comment (#19476) 2019-04-18 10:02:11 +03:00
78ec90717a swarm/version: bump version due to Geth hotfix release 2019-04-17 15:50:41 +03:00
f496927a93 Merge pull request #19468 from karalabe/enforce-fastsync-checkpoints
eth, les, light: enforce CHT checkpoints on fast-sync too
2019-04-17 14:50:51 +03:00
38f6b85638 eth, les, light: enforce CHT checkpoints on fast-sync too 2019-04-17 13:16:15 +03:00
921b3160db les: fix p2p.Protocol.PeerInfo (#19472) 2019-04-17 10:57:53 +03:00
c9d9a2d48a Merge pull request #19470 from SamuelMarks/go1.12.4
appveyor.yml: Upgraded to Go 1.12.4
2019-04-16 17:44:11 +03:00
04a75a1863 appveyor.yml: Upgraded to Go 1.12.4 2019-04-17 00:14:06 +10:00
85b6823d16 les: check required message types in cost table (#19454) 2019-04-16 14:30:47 +03:00
78d90c47f7 Merge pull request #19345 from Matthalp/optimize-receipt-storage
core, eth, les, light: avoid storing computable receipt metadata
2019-04-16 09:51:46 +03:00
ce9a289fa4 core/types: fix cummulative gas bug and legacy decoding tests 2019-04-16 09:50:11 +03:00
73fc65bb5b swarm/storage/feed: add context handling/cancellation to Swarm Feeds lookup, fix bad hint lookup bug (#19353)
* swarm/storage/feed/lookup: Add context handling/forwarding

* swarm/storage/feed/lookup: Add test to catch bad hint

* swarm/storage/feed/lookup: Added context cancellation test
2019-04-16 07:13:02 +02:00
7221cb1434 core, eth, les, light: scope receipt functionality a bit cleaner 2019-04-15 13:42:26 +03:00
6b0ddd141e core, eth, les, light: store transaction receipts without txHash and gasCost 2019-04-15 13:15:39 +03:00
d9d60a5a7f cmd: special case default cache allowance (4GB mainnet, 128MB light) 2019-04-12 12:06:43 +03:00
4a4abc41d4 trie: approximate the wasted cache metaspace closer 2019-04-12 11:43:16 +03:00
1528b791ac node: do not continue if 'signer' is used but connection fails (#19441)
This makes geth fail instead of falling back to the local keystore if the command line flag `--signer` is used
2019-04-12 10:18:03 +02:00
d5af3a584c cmd/clef, signer: make fourbyte its own package, break dep cycle (#19450)
* cmd/clef, signer: make fourbytes its own package, break dep cycle

* signer/fourbyte: pull in a sanitized 4byte database
2019-04-11 20:01:11 +03:00
26b50e3ebe cmd/swarm: fix resource leaks in tests (#19443)
* swarm/api: fix file descriptor leak in NewTestSwarmServer

Swarm storage (localstore) was not closed. That resulted in a
"too many open files" error if `TestClientUploadDownloadRawEncrypted`
was run with `-count 1000`.

* cmd/swarm: speed up StartNewNodes() by parallelization

Reduce cluster startup time from 13s to 7s.

* swarm/api: disable flaky TestClientUploadDownloadRawEncrypted with -race

* swarm/storage: disable flaky TestLDBStoreCollectGarbage (-race)

With race detection turned on the disabled cases often fail with:
"ldbstore_test.go:535: expected surplus chunk 150 to be missing, but got no error"

* cmd/swarm: fix process leak in TestACT and TestSwarmUp

Each test run we start 3 nodes, but we did not terminate them. So
those 3 nodes continued eating up 1.2GB (3.4GB with -race) after test
completion.

6b6c4d1c27 changed how we start clusters
to speed up tests. The changeset merged together test cases
and introduced a global cluster. But "forgot" about termination.

Let's get rid of "global cluster" so we have a clear owner of
termination (some time sacrifice), while leaving subtests to use the
same cluster.
2019-04-11 12:44:15 +02:00
54dfce8af7 cmd/clef: bundle 4byte db into clef, (#19112)
* clef: bundle 4byte db into clef, fix #19048

* clef: add go-generate directive, remove internal abidb parser tool

* cmd/clef: extend go generate to format asset file
2019-04-11 13:22:48 +03:00
31bc2a2434 metrics/prometheus: expose metrics in prometheus format too (#17077)
* metrics/prometheus: added prometheus http server and metrics collector

* metrics/prometheus: minor cleanups

* metrics/prometheus: named keys instead of name in tag

* metrics/prometheus: minor typo cleanups, sorted report
2019-04-11 12:56:19 +03:00
74acde4b08 clef: update warning-text (#19442)
* clef: update warning-text

* Update cmd/clef/main.go
2019-04-10 18:38:12 +03:00
b7dd225179 swarm/version: bump Swarm due to Geth hotfix release 2019-04-10 16:13:35 +03:00
a1c5017bc5 accounts/scwallet: fix card pairing instruction message (#19436) 2019-04-10 13:46:35 +03:00
ae7344d799 accounts: switch Ledger derivation path to canonical one 2019-04-10 13:09:08 +03:00
8cf764da89 Revert "Can now specify the number of empty accounts to derive"
This reverts commit 5b30aa59d6.
2019-04-10 12:51:22 +03:00
f0b878d56d accounts/scwallet: Update README for v2.2.1 support (#19425)
Update the app download link to the latest version, as requested in #19418
2019-04-10 10:51:45 +02:00
3fa76298e4 p2p: remove useless parameter (#19433) 2019-04-10 11:49:02 +03:00
e4cb7b80d5 rpc: cancel root context after all requests are served (#19430) 2019-04-10 11:47:09 +03:00
22e1d2ce03 Merge pull request #19426 from karalabe/vendor-fix-freegeoip
vendor: fix some vendor config leftover
2019-04-10 00:12:57 +03:00
da99c0691c vendor: fix some vendor config leftover 2019-04-10 00:11:59 +03:00
cf1a6d7c56 vendor: upgrade go-libpcsclite (#19420)
* vendor: remove leftover trace

* Upgrade go-libpcsclite to the latest version
2019-04-09 23:40:53 +03:00
33d28f37c7 Merge pull request #19423 from SamuelMarks/go1.12.3
appveyor.yml: Upgraded to Go 1.12.3
2019-04-09 18:30:32 +03:00
60ab5faf54 appveyor.yml: Upgraded to Go 1.12.3 2019-04-10 00:55:56 +10:00
1fc3e44ffe accounts: smartcard wallet without the dependency on libpcsclite (#19273)
* accounts, core, internal, node: Add support for smartcard wallets

* accounts, internal: Changes in response to review

* vendor: pull in missing go-ecdh library

* accounts/scwallet, console: user friendly card opening

* accounts/scwallet: ordered wallets, tighter events, derivation logs

* accounts, console: friendly card errors, support pin unblock

* accounts/scwallet: fix crypto API change

* accounts/scwallet: rebase and update

* Fix some linter issues

* Remove the direct dependency on libpcsclite

Instead, use a go library that communicates with pcscd over a socket.

Also update the changes introduced by @gravityblast since this PR's
inception

* Temporary fix to the APDU status call

* fix wallet status update

This is a temporary fix, better checks need to
be performed once the whole process has been
validated.

* Fix key derivation

* Add some documentation

* Update a comment to reflect the workings of the updated system

* Vendor keycard-go/derivationpath

* Formatting fixes

* Add instructions on how to install the card

* Achieve full transaction signature+sending

* PK derivation has to be supported by the card

* Fix linter issues

* Upgrade to keycard app v2.1.1

* Set gballet as codeowner of the smartcard wallet dir

* fix unnecessary condition linter warning

* refuse to overwrite the master key of a previously initialized card

* refresh the account list when initializing the card

* Update the card preparation instructions based on review feedback

* 'sanitize' JSON input

Co-Authored-By: gballet <gballet@gmail.com>

* Apply suggestions from code review

Co-Authored-By: gballet <gballet@gmail.com>

* fix a serialization error

* more review feedback

* More review feedback

* Can now specify the number of empty accounts to derive

* Fix rebase error: include norm package

* Update bip-39 ref and remove ebfe/scard from vendor

* Add missing dependency
2019-04-09 11:53:58 +02:00
5fc5971438 swarm/version: bump version due to Geth-only hotfix release 2019-04-09 12:19:24 +03:00
f538d15241 clef: fix chainId key being present in domain map (#19303)
This PR fixes this, moving domain.ChainId from the map's initializer down to a separate if statement which checks the existence of ChainId's value, similar to the rest of the fields, before adding it. I've also included a new test to demonstrate the issue.
2019-04-09 10:17:09 +02:00
7c28ecbcc3 Add missing dependency 2019-04-09 08:52:25 +02:00
86806d8b24 Update bip-39 ref and remove ebfe/scard from vendor 2019-04-08 19:16:27 +02:00
5b947c5004 swarm/version: bump version due to Geth maintenance release 2019-04-08 16:08:22 +03:00
8ee5bb2289 Fix rebase error: include norm package 2019-04-08 14:17:29 +02:00
7c08e48141 Merge pull request #19403 from zsfelfoldi/remove-les1
les: remove support for LES/1
2019-04-08 14:04:14 +02:00
e2f3465e83 eth, les, geth: implement cli-configurable global gas cap for RPC calls (#19401)
* eth, les, geth: implement cli-configurable global gas cap for RPC calls

* graphql, ethapi: place gas cap in DoCall

* ethapi: reformat log message
2019-04-08 14:49:52 +03:00
ed97517ff4 p2p/discover: bump failure counter only if no nodes were provided (#19362)
This resolves a minor issue where neighbors responses containing fewer
than 16 nodes would bump the failure counter, removing the node. One
situation where this can happen is a private deployment where the total
number of extant nodes is less than 16.

Issue found by @jsying.
2019-04-08 14:35:11 +03:00
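
The adjusted rule, sketched with hypothetical names rather than the actual discover code: any non-empty reply resets the counter, so only fully empty replies count toward eviction.

```go
package main

import "fmt"

type peer struct{ fails int }

const maxFails = 5

// onFindnodeReply bumps the failure counter only when no nodes were
// provided, so small networks (fewer than 16 extant nodes) no longer
// evict healthy peers over short replies.
func onFindnodeReply(p *peer, nodesReturned int) (drop bool) {
	if nodesReturned == 0 {
		p.fails++
	} else {
		p.fails = 0
	}
	return p.fails >= maxFails
}

func main() {
	p := &peer{fails: 4}
	fmt.Println(onFindnodeReply(p, 7)) // false: a partial reply resets the count
}
```
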
5b30aa59d6 Can now specify the number of empty accounts to derive 2019-04-08 13:21:22 +02:00
6f21520a82 More review feedback 2019-04-08 13:21:22 +02:00
fc3000d649 more review feedback 2019-04-08 13:21:22 +02:00
d2daff4258 fix a serialization error 2019-04-08 13:21:22 +02:00
aae61ab16e Apply suggestions from code review
Co-Authored-By: gballet <gballet@gmail.com>
2019-04-08 13:21:22 +02:00
df5409c952 'sanitize' JSON input
Co-Authored-By: gballet <gballet@gmail.com>
2019-04-08 13:21:22 +02:00
3b3e1bc07e Update the card preparation instructions based on review feedback 2019-04-08 13:21:22 +02:00
8c786a1f99 refresh the account list when initializing the card 2019-04-08 13:21:22 +02:00
79f4cfac2e refuse to overwrite the master key of a previously initialized card 2019-04-08 13:21:22 +02:00
1d1bee528e fix unnecessary condition linter warning 2019-04-08 13:21:22 +02:00
28c8f41b90 Set gballet as codeowner of the smartcard wallet dir 2019-04-08 13:21:22 +02:00
714675cd2a Upgrade to keycard app v2.1.1 2019-04-08 13:21:22 +02:00
35b80f1865 Fix linter issues 2019-04-08 13:21:22 +02:00
bcf3c52ac9 PK derivation has to be supported by the card 2019-04-08 13:21:22 +02:00
2625057fe3 Achieve full transaction signature+sending 2019-04-08 13:21:22 +02:00
189a032987 Add instructions on how to install the card 2019-04-08 13:21:22 +02:00
ec4fba83d4 Formatting fixes 2019-04-08 13:21:22 +02:00
0a0b93708d Vendor keycard-go/derivationpath 2019-04-08 13:21:22 +02:00
21b01f590d Update a comment to reflect the workings of the updated system 2019-04-08 13:21:22 +02:00
9b66a8520a Add some documentation 2019-04-08 13:21:22 +02:00
e273031dce Fix key derivation 2019-04-08 13:21:22 +02:00
7ec6fa03d3 fix wallet status update
This is a temporary fix, better checks need to
be performed once the whole process has been
validated.
2019-04-08 13:21:22 +02:00
42c76a2ba1 Temporary fix to the APDU status call 2019-04-08 13:21:22 +02:00
5617dca1c9 Remove the direct dependency on libpcsclite
Instead, use a go library that communicates with pcscd over a socket.

Also update the changes introduced by @gravityblast since this PR's
inception
2019-04-08 13:21:22 +02:00
ae82c58631 Fix some linter issues 2019-04-08 13:19:37 +02:00
7b230b7ef1 accounts/scwallet: rebase and update 2019-04-08 13:19:37 +02:00
a900e80a89 accounts/scwallet: fix crypto API change 2019-04-08 13:19:37 +02:00
7d5886dcf4 accounts, console: friendly card errors, support pin unblock 2019-04-08 13:19:37 +02:00
386943943f accounts/scwallet: ordered wallets, tighter events, derivation logs 2019-04-08 13:19:37 +02:00
114de0fe2a accounts/scwallet, console: user friendly card opening 2019-04-08 13:19:37 +02:00
475e8719ba vendor: pull in missing go-ecdh library 2019-04-08 13:19:37 +02:00
78375608a4 accounts, internal: Changes in response to review 2019-04-08 13:19:37 +02:00
f7027dd68c accounts, core, internal, node: Add support for smartcard wallets 2019-04-08 13:19:37 +02:00
64f9c1ea09 les, light: remove support for les/1 4096 block CHT sections 2019-04-08 13:17:24 +02:00
5515f364ae les: removed les/1 protocol messages 2019-04-08 13:17:24 +02:00
3996bc1ad9 Merge pull request #19411 from holiman/uncle_abort_early
consensus,core: shortcut uncle validation
2019-04-08 13:02:33 +03:00
2a8a07c2b3 Merge pull request #19412 from karalabe/rinkeby-petersburg
params: set Rinkeby Petersburg fork block (4th May, 2019)
2019-04-08 12:22:51 +03:00
dd0cfe5e11 params: set Rinkeby Petersburg fork block (4th May, 2019) 2019-04-08 12:15:47 +03:00
d763b49ae3 consensus,core: shortcut uncle validation 2019-04-08 10:57:15 +02:00
78644f0377 Merge pull request #19405 from SamuelMarks/go1.12.2
appveyor: Upgrade to go1.12.2
2019-04-08 11:07:25 +03:00
de195bf152 travis: update builders to xenial to shadow Go releases 2019-04-08 10:43:01 +03:00
212b25869d appveyor.yml: Upgrade to go1.12.2 2019-04-06 13:02:21 +11:00
bca140b73d Merge pull request #19400 from karalabe/nuke-bug
cmd: nuke geth bug, nobody is using it anyway
2019-04-05 13:56:30 +03:00
8b427296c9 Merge pull request #19402 from karalabe/trie-disallow-metaroot-retrieval
trie: there's no point in retrieving the metaroot
2019-04-05 13:56:14 +03:00
4bf0d11e7c trie: there's no point in retrieving the metaroot 2019-04-05 13:09:28 +03:00
da19f302b8 cmd: nuke geth bug, nobody is using it anyway 2019-04-05 12:44:45 +03:00
ee376f6766 Merge pull request #19399 from karalabe/nuke-monitor
cmd/geth, internal, node, vendor: nuke geth monitor
2019-04-05 12:42:49 +03:00
29bc982d75 cmd/geth, internal, node, vendor: nuke geth monitor 2019-04-05 12:13:56 +03:00
36f81118f6 core/state: fix state iterator (#19127)
* core/state: fix state iterator

* core: fix state iterator more elegantly
2019-04-05 09:44:02 +03:00
7dd3194710 Merge pull request #18322 from rjl493456442/reomit-log-events
core: re-emit new log event when logs rebirth
2019-04-04 17:03:32 +03:00
a8dd1f93c6 node: switching prometheus flock location to tsdb (#19376)
* node: switching prometheus flock location to tsdb

* rookie mistake
2019-04-04 16:59:18 +03:00
43631aa1d6 core: minor code polishes + rebase fixes 2019-04-04 16:29:25 +03:00
690bd8a417 core: re-emit new log event when logs rebirth 2019-04-04 14:17:43 +03:00
d5cae48bae accounts, cmd, internal: disable unlock account on open HTTP (#17037)
* cmd, accounts, internal, node, rpc, signer: insecure unlock protect

* all: strict unlock API by rpc

* cmd/geth: check before printing warning log

* accounts, cmd/geth, internal: tiny polishes
2019-04-04 14:03:10 +03:00
9b3601cfce core/vm: fix typos in comments (#19391) 2019-04-04 12:30:10 +02:00
36b78abe61 core/vm: instruction tests (#16327)
This PR makes it easy to generate and execute testcases for VM arithmetic operations. By enabling and running the testcase TestWriteExpectedValues, a set of JSON files is created which contains input and output for each arith operation.
The test TestJsonTestcases executes all of those tests.

While meaningless as is, this PR makes it less risky to make changes (optimizations) to the vm operations, since there will be a larger body of testcases.
2019-04-04 11:19:38 +02:00
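
A sketch of what consuming such generated testcases could look like, with assumed JSON field names and a 256-bit wrapping ADD standing in for the real opcode dispatch:

```go
package main

import (
	"encoding/json"
	"fmt"
	"math/big"
)

// arithCase pairs two operands with the expected result for one opcode;
// the field names are assumptions for illustration, not the committed format.
type arithCase struct {
	X        string `json:"X"`
	Y        string `json:"Y"`
	Expected string `json:"Expected"`
}

func main() {
	data := []byte(`[{"X":"8000000000000000000000000000000000000000000000000000000000000000","Y":"1","Expected":"8000000000000000000000000000000000000000000000000000000000000001"}]`)
	var cases []arithCase
	if err := json.Unmarshal(data, &cases); err != nil {
		panic(err)
	}
	mod := new(big.Int).Lsh(big.NewInt(1), 256) // EVM words wrap at 2^256
	for _, c := range cases {
		x, _ := new(big.Int).SetString(c.X, 16)
		y, _ := new(big.Int).SetString(c.Y, 16)
		got := new(big.Int).Mod(new(big.Int).Add(x, y), mod)
		want, _ := new(big.Int).SetString(c.Expected, 16)
		fmt.Println("ADD ok:", got.Cmp(want) == 0)
	}
}
```
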
5164274872 les: extend error message for coinbase API calls (#19380) 2019-04-03 10:15:15 +03:00
0b4fe8d192 all: simplify timestamps to uint64 (#19372)
* all: simplify timestamps to uint64

* tests: update definitions

* clef, faucet, mobile: leftover uint64 fixups

* ethash: fix tests

* graphql: update schema for timestamp

* ethash: remove unused variable
2019-04-02 23:28:48 +03:00
e14f8a408c Merge pull request #19328 from karalabe/preload
core: prefetch next block state concurrently
2019-04-02 17:03:12 +03:00
88d7119ebb Merge pull request #19374 from karalabe/console-fix-coinbase-printout
console: handle eth.coinbase throws
2019-04-02 16:55:39 +03:00
3baed8dd9a console: handle eth.coinbase throws 2019-04-02 15:18:05 +03:00
c4109d790f core: fix typo in insertChain method doc (#19371) 2019-04-02 13:01:02 +03:00
6caf35684d Merge pull request #19369 from karalabe/les-update-chts
light, params: update CHTs, integrate CHT for Goerli too
2019-04-02 12:06:55 +03:00
ccffad5553 light, params: update CHTs, integrate CHT for Goerli too 2019-04-02 11:47:01 +03:00
72c98dc41f cmd/flags: fix typo in --exitwhensynced flag (#19364)
Corrected error for ExitWhenSyncedFlag, clarifying that the program exits after syncing completes.
2019-04-02 10:40:30 +03:00
0529015091 swarm/network: hive bug: needed shallow peers are not sent to nodes beyond connection's proximity order (#19326)
* swarm/network: fix hive bug not sending shallow peers

-  hive bug: needed shallow peers were not sent to nodes beyond connection's proximity order
- add extensive protocol exchange tests for initial subPeersMsg-peersMsg exchange
- modify bzzProtocolTester to allow pregenerated overlay addresses

* swarm/network: attempt to fix hive persistence test

* swarm/network: fix TestHiveStatePersistance (#1320)

* swarm/network: remove trace lines from the hive persistence test

* address PR review comments

* swarm/network: address PR comments on TestInitialPeersMsg

 * eliminate *testing.T argument from bzz/hive protocoltesters
 * add sorting (only runs in test code) on peersMsg payload
 * add random (0 to MaxPeersPerPO) peers for each po
 * add extra peers closer to pivot than control
2019-04-02 09:15:16 +02:00
92faf1bf7a Merge pull request #19348 from LiangMa/overflowPR
core/vm: Correct the Memory Gas Overflow condition
2019-04-01 17:12:13 +03:00
9294b8f10f core/vm: polish gas PR, fix tests, make table driven 2019-04-01 17:10:42 +03:00
cd79bc61a9 accounts/abi: generic unpacking of event logs into map[string]interface{} (#18440)
Add methods that allow for the unpacking of event logs into maps (allows for agnostic unpacking of logs)
2019-04-01 15:42:59 +02:00
ed34a5e08a cmd, core, eth: support disabling the concurrent state prefetcher 2019-04-01 11:52:11 +03:00
bb9631c399 core: prefetch next block state concurrently 2019-04-01 11:06:15 +03:00
86e77900c5 Merge pull request #19351 from karalabe/txpool-precache-signatures
core: cache tx signature before obtaining lock
2019-03-29 12:34:09 +02:00
fbe7caf136 core: cache tx signature before obtaining lock 2019-03-29 12:01:29 +02:00
157f09e5b6 core/vm: Correct the Memory Gas Overflow condition
The previous overflow condition was too large to be useful:
squaring 0x7FFFFFFFF already overflows uint64.

0x7FFFFFFFF^2 = 0x3FFFFFFFF000000001
2019-03-28 21:04:31 +00:00
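
The arithmetic is easy to verify: 0x7FFFFFFFF is a 35-bit value, so its square needs roughly 70 bits and cannot fit in a uint64. math/bits.Mul64 exposes the full 128-bit product:

```go
package main

import (
	"fmt"
	"math/bits"
)

func main() {
	const x = uint64(0x7FFFFFFFF) // 2^35 - 1
	hi, lo := bits.Mul64(x, x)    // full 128-bit product of x*x
	fmt.Printf("x*x = 0x%X%016X (hi = 0x%X)\n", hi, lo, hi)
	// hi != 0 means x*x does not fit in uint64: the plain x*x expression
	// would silently wrap, which is why the bound had to be lowered.
	fmt.Println("overflows uint64:", hi != 0)
}
```
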
5b0d3fa393 accounts/abi: Add the original name as json-structtag for tuples. 2019-03-28 14:32:09 +01:00
67fc0377e1 contracts/ens: revert bmt to keccak256 (#19323)
* contracts/ens: revert bmt to keccak256

* contracts/ens: fix keccak256 hash code comment
2019-03-27 14:07:03 +01:00
7fb89697fd core/types: add block location fields to receipt (#17662)
Solves #15210 without changing consensus, in a backwards compatible way,
by adding tx inclusion information to the Receipt struct.
2019-03-27 13:39:25 +01:00
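
The added inclusion metadata amounts to fields along these lines - a trimmed sketch with consensus fields elided and a local Hash type standing in for common.Hash; the values are derived when receipts are read back, not stored as part of the consensus encoding:

```go
package main

import (
	"fmt"
	"math/big"
)

// Hash stands in for common.Hash to keep the sketch self-contained.
type Hash [32]byte

// Receipt shows only the inclusion fields the PR adds; the consensus
// fields (status, gas used, logs, bloom) are elided.
type Receipt struct {
	BlockHash        Hash
	BlockNumber      *big.Int
	TransactionIndex uint
}

func main() {
	r := Receipt{BlockNumber: big.NewInt(4000000), TransactionIndex: 7}
	fmt.Printf("included in block %v at index %d\n", r.BlockNumber, r.TransactionIndex)
}
```
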
42e2c586fd Merge pull request #19343 from karalabe/trie-metrics-split
core: 3rd try on splitting the trie metrics correctly
2019-03-27 14:23:51 +02:00
b17e4a8713 Merge pull request #19344 from karalabe/eth-remove-redundant-chainconfig
eth: remove redundant chain config fields
2019-03-27 14:21:31 +02:00
ac3e7c9b3d eth: remove redundant chain config fields 2019-03-27 13:23:08 +02:00
dba336e612 eth: fix EIP158 account cleanup on chain tracing (#19341)
Fixes #19337
2019-03-27 13:16:28 +02:00
a732c93309 core: 3rd try on splitting the trie metrics correctly 2019-03-27 13:02:04 +02:00
59e1953246 core, ethdb, trie: mode dirty data to clean cache on flush (#19307)
This PR is a more advanced form of the dirty-to-clean cacher (#18995),
where we reuse previous database write batches as datasets to uncache,
saving a dirty-trie-iteration and a dirty-trie-rlp-reencoding per block.
2019-03-26 15:48:31 +01:00
df717abc99 whisper/whisperv6: fix PoW calculations to match the spec (#19330)
This PR fixes two issues in the PoW calculation of a Whisper envelope,
compared to the spec (see PoW Requirements):

- The PoW is supposed to count the number of leading zeroes (i.e. the most
  significant zeroes), but it counted the number of trailing zeroes (i.e.
  the least significant zeroes) instead. It has been fixed to match what
  the spec and Parity do.
- The spec expects to use the size of the RLP encoded envelope, and it took
  something else, as described in #18070.
2019-03-26 10:23:59 +01:00
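
The leading- vs trailing-zero distinction in one snippet, with a 64-bit word standing in for the first word of the envelope hash:

```go
package main

import (
	"fmt"
	"math/bits"
)

func main() {
	h := uint64(0x0000FF0000000000)
	fmt.Println("leading zeros (per spec):", bits.LeadingZeros64(h))   // 16
	fmt.Println("trailing zeros (old code):", bits.TrailingZeros64(h)) // 40
}
```
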
b8b4fb004c Merge pull request #19308 from holiman/fix_reset_txpool
core: make txpool handle reorg due to setHead
2019-03-26 11:00:35 +02:00
f03402232c Merge pull request #19331 from karalabe/fix-trie-metrics
core: split trie op metrics from the correct chain metrics
2019-03-25 16:33:56 +02:00
435020f9b3 core: split trie op metrics from the correct chain metrics 2019-03-25 16:27:46 +02:00
acbb8a1439 Merge pull request #19327 from karalabe/fix-expensive-metrics
metrics: fix expensive metrics flag processing
2019-03-25 10:47:29 +02:00
88c756c83d metrics: fix expensive metrics flag processing 2019-03-25 10:40:46 +02:00
71cb816a74 appveyor: bump Windows Go builders to 1.12.1 (#19294) 2019-03-25 10:26:48 +02:00
86989e3fcd core: split out detailed trie access metrics from insertion time (#19316)
* core: split out detailed trie access metrics from insertion time

* cmd, core, metrics: support expensive optional metrics
2019-03-25 10:01:18 +02:00
e852505ace les: fix block announcements (#19322) 2019-03-25 09:17:55 +02:00
2f5b6cb442 swarm/network: Use different privatekey for bzz overlay in sim (#19313)
* cmd/swarm, p2p, swarm: Enable ENR in binary/execadapter

* cmd/p2p/swarm: Remove comments + config.Enode nomarshal

* p2p/simulations: Remove superfluous error check

* p2p/simulation: Move init enode comment

* swarm, p2p/simulations, cmd/swarm: Use nodekey in binary record sign

* swarm/network, swarm/pss: Derive bzzkey

* swarm/pss: Remove unused function

* swarm/network: Store swarm private key in simulation bucket

* swarm/pss: Shorten TestProxNetwork shortrunning test timeout

* swarm/pss: Increase prox test timeout

* swarm/pss: Increase timeout slightly on shortrunning proxtest

* swarm/network: Simplify bucket instantiation in servicectx func

* p2p/simulations: Tcpport -> udpport

* swarm/network, swarm/pss: Simplify + correct lock in servicefunc sim

* swarm/network: Cleanup after rebase on extract swarm enode new

* p2p/simulations, swarm/network: Make exec disc test pass

* swarm/network: Prune ye olde comment

* swarm/pss: Correct revised bzzkey method call

* swarm/network: Clarify comment about privatekey generation data

* swarm/pss: Fix syntax errors after rebase

* swarm/network: Rename misleadingly named method

(amend commit to trigger ci - attempt 5)
2019-03-22 21:37:25 +01:00
876f357364 trie: disable fnv64a hashing of hashes for bigcache (#19314)
* trie: disable fnv64a hashing of hashes for bigcache

* trie/database: add very important period
2019-03-22 17:13:28 +02:00
8d04154691 p2p/simulations: wait until all connections are recreated when uploading snapshot (#19312)
* swarm/network/simulation: test cases refactored

* swarm/pss: minor refactoring

* swarm/simulation: UploadSnapshot updated

* swarm/network: style fix

* swarm/pss: bugfix
2019-03-22 11:20:17 +01:00
09924cbcaa cmd/swarm, p2p, swarm: Enable ENR in binary/execadapter (#19309)
* cmd/swarm, p2p, swarm: Enable ENR in binary/execadapter

* cmd/p2p/swarm: Remove comments + config.Enode nomarshal

* p2p/simulations: Remove superfluous error check

* p2p/simulation: Move init enode comment

* swarm/api: Check error in config test

* swarm, p2p/simulations, cmd/swarm: Use nodekey in binary record sign

* cmd/swarm: Make nodekey available for swarm api config
2019-03-22 05:55:47 +01:00
3585351888 travis: extend race detection for swarm p2p packages (#19287)
* travis: remove verbose from Swarm race tests

By removing -v our output will be cleaner, but the Travis job still
won't be terminated - due to 'no output for 10 minutes' - as
keepalive.sh produces a log line every 5 minutes.

* travis: extend Swarm race detection to p2p subpackages

As p2p/protocols, p2p/simulations and p2p/testing packages mostly
belong to the Swarm team.
2019-03-21 12:06:11 +01:00
650ad19c2d core: make txpool handle reorg due to setHead 2019-03-21 11:42:56 +01:00
baded64d88 swarm/network: measure time of messages in priority queue (#19250) 2019-03-20 21:30:34 +01:00
c53c5e616f les: fix peer id and reply error handling (#19289)
* les: fixed peer id format

* les: fixed peer reply error handling
2019-03-20 10:35:05 +02:00
e7d1867964 contracts, swarm: implement EIP-1577 (#19285)
* contracts/ens: update public resolver solidity code

* contracts/ens: update public resolver, update go bindings

* update build

* fix ens.sol

* contracts/ens: change contract interface

* contracts/ens: implement public resolver changes

* contracts/ens: added ENSRegistry contract

* contracts/ens: reinstate old contract code

* contracts/ens: update README.md

* contracts/ens: added test coverage for fallback contract

* contracts/ens: added support for fallback contract

* contracts/ens: removed unused contract code

* contracts/ens: add todo and decode multicodec stub

* add encode

* vendor: add ipfs cid libraries

* contracts/ens: cid sanity tests

* contracts/ens: more cid sanity checks

* contracts/ens: wip integration

* wip

* Revert "vendor: add ipfs cid libraries"

This reverts commit 29d9b6b294.

* contracts/ens: removed multiformats dependencies

* contracts/ens: added decode tests

* contracts/ens: added eip spec test, minor changes to exiting tests

* contracts/ens: moved cid decoding to own file

* contracts/ens: added unit test to encode hash to content hash

* contracts/ens: removed unused code

* contracts/ens: fix ens tests to use cid decode and encode

* contracts/ens: adjust swarm multicodecs after pr merge

* contracts/ens: fix linter error

* constracts/ens: address PR comments

* cmd, contracts: make people's lives easier

* contracts/ens: fix linter error

* contracts/ens: address PR comments
2019-03-20 09:33:24 +01:00
fb458280d1 Modified Abigen to Support Vyper (#19120) 2019-03-18 13:29:26 +01:00
47c03c0f8c Merge pull request #19288 from karalabe/les-verbose-errors
les, light: verbose errors on state retrieval issues
2019-03-18 13:51:28 +02:00
211ec46284 les, light: verbose errors on state retrieval issues 2019-03-18 13:19:40 +02:00
54cd3e89a4 dashboard: fix deprecated or problematic dependencies (#19271)
* dashboard: fix github alerts

* dashboard: run go generate
2019-03-18 10:30:36 +02:00
def1b0d7e1 vendor: update leveldb upstream (#19284) 2019-03-18 10:28:47 +02:00
acebccc3bf graphql: Updates to graphql support to match EIP1767 (#19238)
Updates to match EIP1767
2019-03-18 07:24:43 +01:00
6e401792ce swarm/pss: neighbourhood addressing simulation tests (#19278)
* swarm/pss: fixed bug in pss.process, test added

* swarm/pss: test case updated

* swarm/pss: WaitTillSnapshotRecreated() func added

* swarm/pss: snapshot test updated

* swarm/pss: WaitTillSnapshotLoaded() fixed

* swarm/pss: gofmt applied

* swarm/pss: refactoring, file renamed

* swarm/pss: input data fixed

* swarm/pss: race condition fixed

* swarm/pss: test timeout increased

* swarm/pss: eliminated the global variables

* swarm/pss: tests added

* swarm/pss: comments added

* swarm/pss: comment fixed

* swarm/pss: refactored according to review

* swarm/pss: style fix

* swarm/pss: increased timeout
2019-03-16 08:39:38 +01:00
3d067b0cea cmd/swarm/swarm-smoke: do not fail if a node does not respond to rpc (#19280) 2019-03-15 17:36:39 +01:00
f180981273 swarm/shed: add vector uint64 field (#19279) 2019-03-15 15:12:45 +01:00
4b4f03ca37 swarm, p2p: Prerequisites for ENR replacing handshake (#19275)
* swarm/api, swarm/network, p2p/simulations: Prerequisites for handshake remove

* swarm, p2p: Add full sim node configs for protocoltester

* swarm/network: Make stream package pass tests

* swarm/network: Extract peer and addr types out of protocol file

* p2p, swarm: Make p2p/protocols tests pass + rename types.go

* swarm/network: Deactivate ExecAdapter test until binary ENR prep

* swarm/api: Remove comments

* swarm/network: Uncomment bootnode record load
2019-03-15 11:27:17 +01:00
df488975bd cmd/swarm: dont connect to bootnodes in tests (#19270)
* cmd/swarm: dont connect to bootnodes in tests

* cmd/utils: check for empty string when parsing enode
2019-03-15 06:20:21 +01:00
91eec1251c cmd, core, eth, trie: get rid of trie cache generations (#19262)
* cmd, core, eth, trie: get rid of trie cache generations

* core, trie: get rid of remainder of cache gen boilerplate
2019-03-14 15:25:12 +02:00
e270a753be Merge pull request #19269 from holiman/alt_18957
bind: Static byte arrays should be right-padded
2019-03-14 13:01:33 +02:00
5ce192ce44 accounts/abi/bind: simulated test case for fixed bytes logs 2019-03-14 12:58:18 +02:00
7640c9c933 bind: Static byte arrays should be right-padded
Per https://solidity.readthedocs.io/en/v0.5.3/abi-spec.html:

"bytes<M>: enc(X) is the sequence of bytes in X padded with trailing zero-bytes to a length of 32 bytes"
2019-03-14 12:51:10 +02:00
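For illustration, the padding rule quoted above amounts to the following (a minimal sketch; rightPad32 is a hypothetical helper name, not the actual abigen code):

    package main

    import "fmt"

    // rightPad32 encodes a static bytes<M> value the way the ABI spec
    // prescribes: the data sits at the front of a 32-byte word and
    // trailing zero-bytes fill the rest.
    func rightPad32(value []byte) []byte {
        padded := make([]byte, 32)
        copy(padded, value)
        return padded
    }

    func main() {
        // prints two data bytes followed by thirty zero bytes
        fmt.Printf("%x\n", rightPad32([]byte{0xde, 0xad}))
    }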
6c312a24b6 eth/downloader: fix ancestor searching for light syncing (#19136) 2019-03-14 12:19:03 +02:00
95b2ec7169 changed file name grahpql.go to graphql.go (#19267)
graphql: fix typo in file name
2019-03-14 11:38:54 +02:00
768b4c2e6b asm: remove unused parameter for function Lex (#18058) 2019-03-14 10:35:55 +01:00
1a29bf0ee2 dashboard, p2p, vendor: visualize peers (#19247)
* dashboard, p2p: visualize peers

* dashboard: change scale to green to red
2019-03-13 14:53:52 +02:00
1591b63306 Merge pull request #19261 from karalabe/less-blocks
core: use headers only where blocks are unnecessary
2019-03-13 13:12:37 +02:00
aa69ec64d0 Merge pull request #19259 from ligi/address_19246
README: Mention go 1.10 as minimum - context #19246
2019-03-13 12:55:06 +02:00
4f457859a2 core: use headers only where blocks are unnecessary 2019-03-13 12:32:47 +02:00
8a03bf2155 README: Mention go 1.10 as minimum - context #19246 2019-03-12 16:52:26 +01:00
b87a68407b Merge pull request #19252 from karalabe/badgerdb
ethdb: tiny API tidy-up from the database rework pr
2019-03-12 12:57:23 +02:00
8111b9dda5 ethdb, trie: tiny API tidy-up from the database rework pr 2019-03-12 12:32:02 +02:00
7504dbd6eb core/vm: 64 bit memory and gas calculations (#19210)
* core/vm: remove function call for stack validation from evm runloop

* core/vm: separate gas  calc into static + dynamic

* core/vm: optimize push1

* core/vm: reuse pooled bigints for ADDRESS, ORIGIN and CALLER

* core/vm: use generic error message for jump/jumpi, to avoid string interpolation

* testdata: fix tests for new error message

* core/vm: use 64-bit memory calculations

* core/vm: fix error in memory calculation

* core/vm: address review concerns

* core/vm: avoid unnecessary use of big.Int:BitLen()
2019-03-12 11:40:05 +02:00
da5de012c3 state: fix emptyStatet to emptyRoot (#19254) 2019-03-12 11:14:24 +02:00
04a4a23c2d graphql: make gballet codeowner for review notifications (#19251) 2019-03-12 08:42:01 +01:00
1a3e25e4c1 swarm: tracing improvements (#19249) 2019-03-11 11:45:34 +01:00
9a58a9b91a swarm/storage/localstore: global batch write lock (#19245)
* swarm/storage/localstore: most basic database

* swarm/storage/localstore: fix typos and comments

* swarm/shed: add uint64 field Dec and DecInBatch methods

* swarm/storage/localstore: decrement size counter on ModeRemoval update

* swarm/storage/localstore: unexport modeAccess and modeRemoval

* swarm/storage/localstore: add WithRetrievalCompositeIndex

* swarm/storage/localstore: add TestModeSyncing

* swarm/storage/localstore: fix test name

* swarm/storage/localstore: add TestModeUpload

* swarm/storage/localstore: add TestModeRequest

* swarm/storage/localstore: add TestModeSynced

* swarm/storage/localstore: add TestModeAccess

* swarm/storage/localstore: add TestModeRemoval

* swarm/storage/localstore: add mock store option for chunk data

* swarm/storage/localstore: add TestDB_pullIndex

* swarm/storage/localstore: add TestDB_gcIndex

* swarm/storage/localstore: change how batches are written

* swarm/storage/localstore: add updateOnAccess function

* swarm/storage/localstore: add DB.gcSize

* swarm/storage/localstore: update comments

* swarm/storage/localstore: add BenchmarkNew

* swarm/storage/localstore: add retrieval tests benchmarks

* swarm/storage/localstore: accessors redesign

* swarm/storage/localstore: add semaphore for updateGC goroutine

* swarm/storage/localstore: implement basic garbage collection

* swarm/storage/localstore: optimize collectGarbage

* swarm/storage/localstore: add more garbage collection test cases

* swarm/shed, swarm/storage/localstore: rename IndexItem to Item

* swarm/shed: add Index.CountFrom

* swarm/storage/localstore: persist gcSize

* swarm/storage/localstore: remove composite retrieval index

* swarm/shed: IterateWithPrefix and IterateWithPrefixFrom Index functions

* swarm/storage/localstore: writeGCSize function with leveldb batch

* swarm/storage/localstore: unexport modeSetRemove

* swarm/storage/localstore: update writeGCSizeWorker comment

* swarm/storage/localstore: add triggerGarbageCollection function

* swarm/storage/localstore: call writeGCSize on DB Close

* swarm/storage/localstore: additional comment in writeGCSizeWorker

* swarm/storage/localstore: add MetricsPrefix option

* swarm/storage/localstore: fix a typo

* swarm/shed: only one Index Iterate function

* swarm/storage/localstore: use shed Iterate function

* swarm/shed: pass a new byte slice copy to index decode functions

* swarm/storage/localstore: implement feed subscriptions

* swarm/storage/localstore: add more subscriptions tests

* swarm/storage/localstore: add parallel upload test

* swarm/storage/localstore: use storage.MaxPO in subscription tests

* swarm/storage/localstore: subscription of addresses instead of chunks

* swarm/storage/localstore: lock item address in collectGarbage iterator

* swarm/storage/localstore: fix TestSubscribePull to include MaxPO

* swarm/storage/localstore: improve subscriptions

* swarm/storage/localstore: add TestDB_SubscribePull_sinceAndUntil test

* swarm/storage/localstore: adjust pull sync tests

* swarm/storage/localstore: remove writeGCSizeDelay and use literal

* swarm/storage/localstore: adjust subscriptions tests delays and comments

* swarm/storage/localstore: add godoc package overview

* swarm/storage/localstore: fix a typo

* swarm/storage/localstore: update package overview

* swarm/storage/localstore: remove repeated index change

* swarm/storage/localstore: rename ChunkInfo to ChunkDescriptor

* swarm/storage/localstore: add comment in collectGarbageWorker

* swarm/storage/localstore: replace atomics with mutexes for gcSize and tests

* swarm/storage/localstore: protect addrs map in pull subs tests

* swarm/storage/localstore: protect slices in push subs test

* swarm/storage/localstore: protect chunks in TestModePutUpload_parallel

* swarm/storage/localstore: fix a race in TestDB_updateGCSem defers

* swarm/storage/localstore: remove parallel flag from tests

* swarm/storage/localstore: fix a race in testDB_collectGarbageWorker

* swarm/storage/localstore: remove unused code

* swarm/storage/localstore: add more context to pull sub log messages

* swarm/storage/localstore: BenchmarkPutUpload and global lock option

* swarm/storage/localstore: pre-generate chunks in BenchmarkPutUpload

* swarm/storage/localstore: correct useGlobalLock in collectGarbage

* swarm/storage/localstore: fix typos and update comments

* swarm/storage/localstore: update writeGCSize comment

* swarm/storage/localstore: global batch write lock

* swarm/storage/localstore: remove global lock option

* swarm/storage/localstore: simplify DB.Close
2019-03-09 00:06:39 +01:00
f82185a4a1 p2p/protocols: fix data race in TestProtocolHook (#19242)
dummyHook's fields were concurrently written by nodes and read by
the test. The simplest solution is to protect all fields with a mutex.

Enable TestMultiplePeersDropSelf and TestMultiplePeersDropOther, as
they seemingly stayed disabled by accident during a refactor/rewrite
since 1836366ac1.

resolves ethersphere/go-ethereum#1286
2019-03-08 17:30:16 +01:00
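The fix pattern described above, in minimal form (type and field names invented for illustration): every field the protocol goroutines write and the test reads is only touched while holding one mutex.

    package p2ptest

    import "sync"

    // hookRecorder plays the role of dummyHook: mu guards all fields,
    // so the protocol goroutine (Send) and the test (Sent) cannot race.
    type hookRecorder struct {
        mu   sync.Mutex
        msg  interface{}
        sent bool
    }

    func (h *hookRecorder) Send(msg interface{}) {
        h.mu.Lock()
        defer h.mu.Unlock()
        h.msg, h.sent = msg, true
    }

    func (h *hookRecorder) Sent() (interface{}, bool) {
        h.mu.Lock()
        defer h.mu.Unlock()
        return h.msg, h.sent
    }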
bb55b0fb53 swarm/storage: add comparison towards leveldb.ErrNotFound (#19243)
* swarm/storage: add comparison towards leveldb.ErrNotFound

* swarm/storage: wrap leveldb ErrNotFound
2019-03-08 17:28:57 +01:00
2cfe0bed9f swarm: fix relationship between spans in open tracing (#19236)
* swarm/network: propagate span with ctx

* swarm/network: try to stop stream.send.request spans on time

* swarm/storage: add chunk ref as a log to netstore.fetcher span
2019-03-08 08:52:25 +01:00
ceeb047e69 cmd/swarm/swarm-smoke: better logs when debug mode triggers (#19237)
* cmd/swarm/swarm-smoke: better logs for debug functionality;

* cmd/swarm/swarm-smoke: fixup
2019-03-08 08:52:05 +01:00
a6e5c6a2cc swarm/storage/localstore: fix synchronization in TestDB_gcSize (#19235) 2019-03-08 05:59:59 +01:00
4687391213 cmd/swarm: do not ignore cache size=0 (#19231) 2019-03-07 12:44:00 +01:00
316b63daf0 clef: fix erroneous api version (#19234) 2019-03-07 12:09:25 +01:00
2eed46dadf Merge pull request #19232 from karalabe/dl-fix-local-drop-sync
eth/downloader: fix nil droppeer in state sync
2019-03-07 13:06:36 +02:00
1612267a4b eth/downloader: fix nil droppeer in state sync 2019-03-07 12:37:03 +02:00
2fa9e99fc1 usbwallet: check error returned by driver close (#18057)
Although the current two implementations of the driver.Close interface (ledgerDriver, trezorDriver) do not actually return any error and instead only return nil, the declaration of Close does return an error. It is therefore better to check the returned error, in case some future implementation of Close can fail and we forget to update the invoking code at that time.
2019-03-07 12:13:06 +02:00
5f94f8c7e7 signer: change the stdio jsonrpc to use legacy namespace conventions (#19047)
This PR will break existing UIs, since it changes all calls like ApproveSignTransaction to the form ui_approveSignTransaction.

This is to make it possible for the UI to reuse the json-rpc library from go-ethereum, which uses this convention.

Also, this PR removes some unused structs, after import/export were removed from the external api (so it no longer needs internal methods for approval)

One more breaking change is introduced, removing passwords from the ApproveSignTxResponse and the like. This makes the manual interface more like the rule-based interface, and integrates nicely with the credential storage. The way it worked before, it would be tempting for the UI to implement 'remember password' functionality; the way it is now, it is easy instead to tell clef to store passwords and use them.

If a pw is not found in the credential store, the user is prompted to provide the password.
2019-03-07 11:56:08 +02:00
eb199f1fc2 swarm: localstore hasser (#19230) 2019-03-07 10:07:54 +01:00
d45f8d1880 swarm/network: remove *WithServer tests from stream package (#19223)
These tests never ran, as the build tag excluded them from the CI
execution. As a result the (dead) code got out of sync with other
parts of Swarm and now they would not even compile. => Removed.

resolves ethersphere/go-ethereum#1238
2019-03-07 09:27:56 +01:00
a87776a5fe swarm/network/stream: Fix flaky tests in GetSubscriptionsRPC test (#19227)
* swarm/network/stream: fixed timing issues

* swarm/network/stream: only count first iteration of subscriptions

* swarm/network/stream/: fix linter errors
2019-03-07 09:24:28 +01:00
72b21db2d3 Merge pull request #19021 from karalabe/database-cleanup
all: clean up and properly abstract database accesses
2019-03-07 10:21:40 +02:00
f2d6310354 p2p/protocols: fix race condition in TestAccountingSimulation (#19228)
p2p/protocols: Fix race condition in TestAccountingSimulation
2019-03-07 08:13:11 +01:00
ce3ea8c9b1 Merge pull request #19222 from karalabe/clique-fix-test
consensus/clique: fix test copy paste error, test what's documented
2019-03-06 13:41:00 +02:00
054412e335 all: clean up and properly abstract database access 2019-03-06 13:35:03 +02:00
2f24e254a9 consensus/clique: fix test copy paste error, test what's documented 2019-03-06 12:42:08 +02:00
15eee47ebf accounts: prefer nil slices over zero-length slices (#19079) 2019-03-06 12:30:39 +02:00
34c85def3e cmd/swarm/swarm-smoke: sliding window test should not time out (#19152) 2019-03-05 17:43:05 +01:00
81ed700157 Enable longrunning tests to run (#19208)
* p2p/simulations: increased snapshot load timeout for debugging

* swarm/network/stream: less nodes for snapshot longrunning tests

* swarm/network: fixed longrunning tests

* swarm/network/stream: store kademlia in bucket

* swarm/network/stream: disabled healthy check in delivery tests

* swarm/network/stream: longer SyncUpdateDelay for longrunning tests

* swarm/network/stream: more debug output

* swarm/network/stream: reduced longrunning snapshot tests to 64 nodes

* swarm/network/stream: don't WaitTillHealthy in SyncerSimulation

* swarm/network/stream: cleanup for PR
2019-03-05 12:54:46 +01:00
216bd2ceba swarm/storage/localstore: fix testDB_collectGarbageWorker data race (#19206) 2019-03-04 22:19:57 +01:00
a1099bb7e9 Merge pull request #19205 from holiman/alltools_clef
build: add clef to alltools and deb
2019-03-04 15:46:20 +02:00
603a85218b vendor: update leveldb (#19201) 2019-03-04 15:43:45 +02:00
e2d322b25a build: add clef to alltools and deb 2019-03-04 12:02:58 +01:00
f9aa1cd21f Revert "swarm/network: Use actual remote peer ip in underlay (#19137)" (#19193)
This reverts commit 460d206f30.
2019-03-02 08:45:07 +01:00
b797dd07d2 swarm/shed, swarm/storage/localstore: add LastPullSubscriptionChunk (#19190)
* swarm/shed, swarm/storage/localstore: add LastPullSubscriptionChunk

* swarm/shed: fix comments

* swarm/shed: fix TestIncByteSlice test

* swarm/storage/localstore: fix TestDB_LastPullSubscriptionChunk
2019-03-02 08:44:22 +01:00
729bf365b5 whisper: Remove v5 (#18432) 2019-03-01 12:36:41 +01:00
4e9230ea7a swarm: enable p2p/discovery and disable dynamic dialling (#19189) 2019-03-01 12:20:37 +01:00
94eca08ad8 build: enable Ubuntu Disco Dingo PPA builds 2019-03-01 11:31:10 +02:00
0594deb652 Merge pull request #19187 from karalabe/fix-ppa-go1.11-cache-2
build/deb: fix PPA env var setting
2019-03-01 10:51:36 +02:00
696a65b016 build/deb: fix PPA env var setting 2019-03-01 10:50:21 +02:00
5c03baaf6f Merge pull request #19184 from karalabe/fix-ppa-go1.11-cache
build/deb: use custom cache for PPA builder
2019-03-01 10:43:11 +02:00
994326ba00 swarm: new snapshot files (#19185) 2019-02-28 22:30:36 +01:00
509ea3ef9b build/deb: use custom cache for PPA builder 2019-02-28 16:34:37 +02:00
313576530f Merge pull request #19183 from karalabe/bn256-arm64-go1.12-fix
crypto/bn256/cloudflare: pull in upstream fix for Go 1.12 R18
2019-02-28 14:59:41 +02:00
0f41356b95 Merge pull request #19182 from karalabe/fix-legacy-receipt-decoding
core/types: fix receipt legacy decoding
2019-02-28 14:59:26 +02:00
39bd2609ca crypto/bn256/cloudflare: pull in upstream fix for Go 1.12 R18 2019-02-28 14:53:44 +02:00
1bc7f3f906 core/types: fix receipt legacy decoding 2019-02-28 14:16:36 +02:00
dac7cbcf21 p2p/enode: use localItemKey for local sequence number (#19131)
* p2p/discover: remove unused function

* p2p/enode: use localItemKey for local sequence number

I added localItemKey for this purpose in #18963, but then
forgot to actually use it. This changes the database layout
yet again and requires bumping the version number.
2019-02-28 13:14:45 +02:00
dd28ba378a node: require LocalAppData variable (#19132)
* node: require LocalAppData variable

This avoids path inconsistencies on Windows XP.
Hat tip to @MicahZoltu for catching this so quickly.

* node: fix typo
2019-02-28 13:11:52 +02:00
62d9d63858 swarm/network: WIP consider all nodes for healthy iteration (#19155)
* swarm/network: WIP consider all nodes for healthy iteration

* swarm/network/simulation: extend TestWaitTillHealthy to really check kads are healthy

* cmd/swarm/swarm-snapshot: fixed bugs in snapshot creation binary

* swarm/network/simulation: addressed PR comments

* swarm/network/simulation: defer sim.Close()

* swarm/network/simulation: fixed wrong sim.Close()

* swarm/network/simulation: addressed PR comments

* cmd/swarm/swarm-snapshot: reducing default to 8 nodes, more to 4

* cmd/swarm/swarm-snapshot: extended timeout to 3 mins, otherwise the 256-node snapshot times out

* swarm/network/simulation: More PR comments
2019-02-28 08:12:50 +01:00
505a49e689 Merge pull request #19166 from SamuelMarks/go-1.12
Upgrade to Go 1.12
2019-02-27 15:09:59 +02:00
2b6158a51e common/fdlimit: fix macos file descriptors for Go 1.12 2019-02-27 14:21:02 +02:00
e43bc36226 travis, appveyor, Dockerfile: upgrade to Go 1.12 2019-02-27 14:21:02 +02:00
7ebd2fa5db vendor: update leveldb upstream which include a compaction fix (#19163) 2019-02-27 14:10:10 +02:00
f0233948d2 swarm/chunk: move chunk related declarations to chunk package (#19170) 2019-02-26 16:09:32 +01:00
b7e0dec6bd travis, build: switch to NDK 19b, fix gomobile builds (#19171)
* travis, build: switch to NDK 19b, fix gomobile builds

* travis, build: move NDK into its final bundle location

* travis: disable Android build on PRs once again
2019-02-26 16:56:12 +02:00
6d652ce1b7 Merge pull request #19164 from karalabe/nuke-old-containers
containers/docker: nuke per the 1.8.0 deprecation note
2019-02-26 13:51:09 +02:00
c2003ed63b les, les/flowcontrol: improved request serving and flow control (#18230)
This change

- implements concurrent LES request serving even for a single peer.
- replaces the request cost estimation method with a cost table based on
  benchmarks which gives much more consistent results. Until now the
  allowed number of light peers was just a guess which probably contributed
  a lot to the fluctuating quality of available service. Everything related
  to request cost is implemented in a single object, the 'cost tracker'. It
  uses a fixed cost table with a global 'correction factor'. Benchmark code
  is included and can be run at any time to adapt costs to low-level
  implementation changes.
- reimplements flowcontrol.ClientManager in a cleaner and more efficient
  way, with added capabilities: There is now control over bandwidth, which
  allows using the flow control parameters for client prioritization.
  Target utilization over 100 percent is now supported to model concurrent
  request processing. Total serving bandwidth is reduced during block
  processing to prevent database contention.
- implements an RPC API for the LES servers allowing server operators to
  assign priority bandwidth to certain clients and change prioritized
  status even while the client is connected. The new API is meant for
  cases where server operators charge for LES using an off-protocol mechanism.
- adds a unit test for the new client manager.
- adds an end-to-end test using the network simulator that tests bandwidth
  control functions through the new API.
2019-02-26 12:32:48 +01:00
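The 'cost tracker' idea described above reduces to something like the following sketch (all names hypothetical; the real implementation is considerably more involved):

    package les

    // costTracker holds a fixed, benchmark-derived cost table plus a
    // global correction factor that adapts it to the serving host.
    type costTracker struct {
        baseCost   map[uint64]uint64 // request code -> benchmarked unit cost
        correction float64           // global correction factor
    }

    // cost charges a request of the given type covering `amount` items.
    func (ct *costTracker) cost(reqCode, amount uint64) uint64 {
        return uint64(float64(ct.baseCost[reqCode]*amount) * ct.correction)
    }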
c2b33a117f graphql: fix typos in comments (#19041) 2019-02-26 12:19:43 +01:00
a42afa766c Merge pull request #19169 from karalabe/ppa-bump
build: bump PPA builders to Go 1.11
2019-02-26 12:30:55 +02:00
647692fbfe build: bump PPA builders to Go 1.11 2019-02-26 11:30:56 +02:00
c83ba9e794 swarm/storage/localstore: fix tests for windows os (#19161) 2019-02-26 10:02:05 +01:00
340a53a98b swarm/pss: fix data race on HandshakeController.symKeyIndex (#19162)
* swarm/pss: fix data race on HandshakeController.symKeyIndex

The HandshakeController.symKeyIndex map was accessed concurrently.
Due to insufficient test coverage the race is not detected every time.
However, running TestClientHandshake 100 times seems to be enough to
reproduce the race.

Note: I've chosen HandshakeController.lock to protect
HandshakeController.symKeyIndex as that was already protected in a few
functions by that lock.

Additionally:
- removed unused testStore
- enabled tests in handshake_test.go as they pass
- removed code duplication by adding getSymKey()

* swarm/pss: fix a data race on HandshakeController.keyC

* swarm/pss: fix data races with on Pss.symKeyPool
2019-02-26 08:17:20 +01:00
badaf43019 les: remove redundant type specifiers (#19091) 2019-02-25 12:49:49 +02:00
f5f742d1ac containers/docker: nuke per the 1.8.0 deprecation note 2019-02-25 12:40:33 +02:00
7d881e45bd rlp: added pooling of streams using sync (#19044)
Prevents reallocation, improves performance
2019-02-25 12:34:08 +02:00
872370e3bc swarm/network/simulation: do not copy node mutex in UploadSnapshot (#19160) 2019-02-25 10:03:31 +01:00
81babe1509 swarm/*: remove redundant type specifiers (#19089) 2019-02-25 08:58:18 +01:00
4e87b444ca swarm/pss: remove unused function (#19100) 2019-02-24 12:41:22 +01:00
90b6cdaadf cmd,swarm: enforce camel case variable names (#19060) 2019-02-24 12:39:23 +01:00
64d10c0872 swarm: mock store listings (#19157)
* swarm/storage/mock: implement listings methods for mem and rpc stores

* swarm/storage/mock/rpc: add comments and newTestStore helper function

* swarm/storage/mock/mem: add missing comments

* swarm/storage/mock: add comments to new types and constants

* swarm/storage/mock/db: implement listings for mock/db global store

* swarm/storage/mock/test: add comments for MockStoreListings

* swarm/storage/mock/explorer: initial implementation

* cmd/swarm/global-store: add chunk explorer

* cmd/swarm/global-store: add chunk explorer tests

* swarm/storage/mock/explorer: add tests

* swarm/storage/mock/explorer: add swagger api definition

* swarm/storage/mock/explorer: not-zero test values for invalid addr and key

* swarm/storage/mock/explorer: test wildcard cors origin

* swarm/storage/mock/db: renames based on Fabio's suggestions

* swarm/storage/mock/explorer: add more comments to testHandler function

* cmd/swarm/global-store: terminate subprocess with Kill in tests
2019-02-23 10:47:33 +01:00
02c28046a0 swarm: Fix localstore test deadlock with race detector (#19153)
* swarm/storage/localstore: close localstore in two tests

* swarm/storage/localstore: fix a possible deadlock in tests

* swarm/storage/localstore: re-enable pull subs tests for travis race

* swarm/storage/localstore: stop sending to errChan on context done in tests

* swarm/storage/localstore: better want check in readPullSubscriptionBin

* swarm/storage/localstore: protect chunk put with addr lock in tests

* swarm/storage/localstore: wait for gc and writeGCSize workers on Close

* swarm/storage/localstore: more correct testDB_collectGarbageWorker

* swarm/storage/localstore: set DB Close timeout to 5s
2019-02-22 23:19:09 +01:00
d9adcd3a27 travis.yml: add race detector job for Swarm (#19148) 2019-02-22 14:20:21 +01:00
1993227311 .travis.yml: make build independent of repo name (#17607) 2019-02-22 12:07:04 +01:00
c8da76e63d swarm/shed: fix a deadlock in meter function (#19149) 2019-02-21 20:42:53 +01:00
836c846812 swarm/network/master: protect SetNextBatch iterator after close (#19147) 2019-02-21 18:33:49 +01:00
b9808e392f swarm/version: bump to v0.3.12 unstable 2019-02-21 15:25:42 +02:00
7fd0ccaa68 core: remove unnecessary fields in logs, receipts and tx lookups (#17106)
* core: remove unnecessary fields in log

* core: bump blockchain database version

* core, les: remove unnecessary fields in txlookup

* eth: print db version explicitly

* core/rawdb: drop txlookup entry struct wrapper
2019-02-21 15:14:35 +02:00
8577b5b020 core: more tests for sidechain import, fixes #19105 (#19113)
* blockchain: more tests for sidechain import, fixes #19105

* core/blockchain: rework import of pruned canon blocks and canon-prepended sidechains

* core/blockchain: minor clarity change

* core/blockchain: remove unused method
2019-02-21 12:36:49 +02:00
628a0bde3f common/fdlimit: Fix compilation error in freebsd, Raise() returns uint64 (#19141)
cast limit.Cur (int64) to uint64, fix test and compilation error
2019-02-21 11:16:51 +02:00
fbedf62f3d swarm/storage: fix loop bound for database cleanup (#19085)
The loop continuation condition was always true, as a uint8 was being
checked against 255 (its maximum value). Since the loop starts at the
value 1, termination was only guaranteed once the value overflowed to 0.
2019-02-21 06:37:32 +01:00
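The pitfall in its minimal form (illustrative only, not the actual localstore code): for a uint8 counter, a bound like po <= 255 can never become false, so the loop depends on wrap-around.

    package main

    func main() {
        // Buggy shape: `po <= 255` is always true for a uint8, so the
        // condition never terminates the loop:
        //
        //     for po := uint8(1); po <= 255; po++ { ... }

        // Safer shape: iterate with a wider type and convert.
        for po := 1; po <= 255; po++ {
            _ = uint8(po) // use the byte value here
        }
    }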
9d5e10f5bb cmd/swarm/global-store: use kill instead interrupt in tests (#19142) 2019-02-20 22:58:25 +01:00
e38b227ce6 Ci race detector handle failing tests (#19143)
* swarm/storage: increase mget timeout in common_test.go

 TestDbStoreCorrect_1k sometimes timed out with -race on Travis.

--- FAIL: TestDbStoreCorrect_1k (24.63s)
    common_test.go:194: testStore failed: timed out after 10s

* swarm: remove unused vars from TestSnapshotSyncWithServer

nodeCount and chunkCount are returned from setupSim and those are the
values we use.

* swarm: move race/norace helpers from stream to testutil

As we will need to use the flag in other packages, too.

* swarm: refactor TestSwarmNetwork case

Extract long running test cases for better visibility.

* swarm/network: skip TestSyncingViaGlobalSync with -race

As it panics on Travis.

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x7e351b]

* swarm: run TestSwarmNetwork with fewer nodes with -race

As otherwise we always get test failure with `network_test.go:374:
context deadline exceeded` even with raised `Timeout`.

* swarm/network: run TestDeliveryFromNodes with fewer nodes with -race

Test on Travis times out with 8 or more nodes if -race flag is present.

* swarm/network: smaller node count for discovery tests with -race

TestDiscoveryPersistenceSimulationSimAdapters failed on Travis with
`-race` flag present. The failure was due to extensive memory usage,
coming from the CGO runtime. Using a smaller node count resolves the
issue.

=== RUN   TestDiscoveryPersistenceSimulationSimAdapter
==7227==ERROR: ThreadSanitizer failed to allocate 0x80000 (524288) bytes of clock allocator (error code: 12)
FATAL: ThreadSanitizer CHECK failed: ./gotsan.cc:6976 "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
FAIL    github.com/ethereum/go-ethereum/swarm/network/simulations/discovery     804.826s

* swarm/network: run TestFileRetrieval with fewer nodes with -race

Otherwise we get a failure due to extensive memory usage, as the CGO
runtime cannot allocate more bytes.

=== RUN   TestFileRetrieval
==7366==ERROR: ThreadSanitizer failed to allocate 0x80000 (524288) bytes of clock allocator (error code: 12)
FATAL: ThreadSanitizer CHECK failed: ./gotsan.cc:6976 "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
FAIL	github.com/ethereum/go-ethereum/swarm/network/stream	155.165s

* swarm/network: run TestRetrieval with fewer nodes with -race

Otherwise we get a failure due to extensive memory usage, as the CGO
runtime cannot allocate more bytes ("ThreadSanitizer failed to
allocate").

* swarm/network: skip flaky TestGetSubscriptionsRPC on Travis w/ -race

Test fails a lot with something like:
 streamer_test.go:1332: Real subscriptions and expected amount don't match; real: 0, expected: 20

* swarm/storage: skip TestDB_SubscribePull* tests on Travis w/ -race

Travis just hangs...

ok  	github.com/ethereum/go-ethereum/swarm/storage/feed/lookup	1.307s
keepalive
keepalive
keepalive

or panics after a while.

Without these tests the race detector job is now stable. Let's
investigate these tests in a separate issue:
https://github.com/ethersphere/go-ethereum/issues/1245
2019-02-20 22:57:42 +01:00
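The race/norace helpers mentioned above are typically a pair of files selected by build tags; a minimal sketch (file and constant names assumed):

    // testutil/race.go — compiled only when the race detector is on.

    // +build race

    package testutil

    // RaceEnabled lets tests scale down node and chunk counts under -race.
    const RaceEnabled = true

A mirror file testutil/norace.go carries `// +build !race` and `const RaceEnabled = false`, so a test can simply branch on testutil.RaceEnabled to pick a smaller node count.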
d36e974ba3 swarm/network: Keep span across roundtrip (#19140)
* swarm/network: WIP Span request span until delivery and put

* swarm/storage: Introduce new trace across single fetcher lifespan

* swarm/network: Put span ids for sendpriority in context value

* swarm: Add global span store in tracing

* swarm/tracing: Add context key constants

* swarm/tracing: Add comments

* swarm/storage: Remove redundant fix for filestore

* swarm/tracing: Elaborate constants comments

* swarm/network, swarm/storage, swarm:tracing: Minor cleanup
2019-02-20 14:50:37 +01:00
460d206f30 swarm/network: Use actual remote peer ip in underlay (#19137)
* swarm/network: Logline to see handshake addr

* swarm/network: Replace remote ip in handshake uaddr

* swarm/network: Add test for enode uaddr rewrite method

* swarm/network: Remove redundant pointer return from sanitize

* swarm/network: Obeying the linting machine

* swarm/network: Add panic comment

(travis trigger take 1)
2019-02-20 14:46:00 +01:00
ba2dfa5ce4 swarm/network/stream: fix a goroutine leak in Registry (#19139)
* swarm/network/stream: fix a goroutine leak in Registry

* swarm/network, swarm/network/stream: Kademlia close addr count and depth change chans

* swarm/network/stream: rename close channel to quit

* swarm/network/stream: fix sync between NewRegistry goroutine and Close method
2019-02-20 14:45:25 +01:00
f49f95e2b0 accounts/abi: mutex lock in TransactionByHash and code cleanup (#19133) 2019-02-20 08:08:54 +01:00
d3ccedc767 p2p/simulations: enforce camel case variable names (#19053) 2019-02-19 19:50:59 +02:00
2e8a5e5659 core/vm: remove unused constants (#19095) 2019-02-19 19:49:45 +02:00
8af6c9e6a2 eth: extract check for tracing transaction in block file (#19107)
Simplifies the transaction presence check by using a function to
determine whether the transaction is present in the block provided
to trace; the original code had a redundant parenthesis and used
an `exist` flag to dictate control flow.
2019-02-19 19:49:24 +02:00
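A sketch of such a presence-check helper (the exact shape is assumed, built on go-ethereum's core/types):

    package eth

    import (
        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/core/types"
    )

    // containsTx reports whether the block holds a transaction with the
    // given hash, replacing the `exist` flag based control flow.
    func containsTx(block *types.Block, hash common.Hash) bool {
        for _, tx := range block.Transactions() {
            if tx.Hash() == hash {
                return true
            }
        }
        return false
    }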
514a9472ad trie: prefer nil slices over zero-length slices (#19084) 2019-02-19 14:50:11 +01:00
57f959af41 p2p/enode: use localItemKey for local sequence number
I added localItemKey for this purpose in #18963, but then
forgot to actually use it. This changes the database layout
yet again and requires bumping the version number.
2019-02-19 13:29:41 +01:00
cf147c71d5 p2p/discover: remove unused function 2019-02-19 13:29:19 +01:00
f1537b774c p2p/discover: make maximum packet size a constant (#19061) 2019-02-19 12:27:29 +01:00
bf42535d31 core: remove redundant parentheses (#19106) 2019-02-19 13:25:42 +02:00
b5e5b3567c crypto: fix build when CGO_ENABLED=0 (#19121)
Package crypto works with or without cgo, which is great. However, making it
work without cgo required setting the build tag `nocgo`. It's common to disable
cgo by instead just setting the environment variable `CGO_ENABLED=0`. Setting
this environment variable does _not_ implicitly set the build tag `nocgo`. So
projects that try to build the crypto package with `CGO_ENABLED=0` will fail. I
have done this myself several times. Until today, I had just assumed that this
meant that this package requires cgo.

But a small build tag change will make this case work. Instead of using `nocgo`
and `!nocgo`, we can use `!cgo` and `cgo`, respectively. The `cgo` build tag is
automatically set if cgo is enabled, and unset if it is disabled.
2019-02-19 12:18:37 +01:00
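In build-tag terms the change is small; a sketch of the file headers (filenames illustrative):

    // Before: the pure-Go fallback required a custom tag that
    // CGO_ENABLED=0 does not set.

    // +build nocgo

    package crypto

versus:

    // After: the standard `cgo` tag is set automatically when cgo is
    // enabled, so its negation picks up CGO_ENABLED=0 with no extra flags.

    // +build !cgo

    package crypto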
f7f6a46029 eth, node: use APPDATA env to support cygwin/msys correctly (#17786)
This changes the default location of the data directory to use the LOCALAPPDATA
environment variable, resolving issues with remote home directories and improving
compatibility with Cygwin.

Fixes #2239 
Fixes #2237 
Fixes #16437
2019-02-19 12:15:15 +01:00
d2256244c4 rpc: fixup change to not verify websocket origin (#19128)
This corrects the previous change which broke the build and
was pushed by accident.
2019-02-19 12:58:32 +02:00
26d3a8ca80 rpc: skip websocket origin check if there is no origin header 2019-02-19 11:49:43 +01:00
c283d9b5e8 signer/core: handle JSON unmarshal error (#19123) 2019-02-19 10:48:19 +02:00
4a090a1bab accounts/abi: fix error message format (#19122) 2019-02-19 10:34:55 +02:00
9c7e65c435 accounts: fix typos from the SignData merge (#19119) 2019-02-19 10:33:42 +02:00
c5eccaefb8 core/vm: remove unused constants 2019-02-18 07:48:19 -08:00
d88c6ce6b0 swarm: Reinstate Pss Protocol add call through swarm service (#19117)
* swarm: Reinstate Pss Protocol add call through swarm service

* swarm: Even less self
2019-02-18 16:44:50 +01:00
3c62f965e7 eth: remove redundant parentheses (#19108) 2019-02-18 17:42:22 +02:00
2a0e1bb32b crypto/ecies: remove unused function (#19096) 2019-02-18 14:09:07 +02:00
f57c80d22e metrics: remove redundant type specifiers (#19090) 2019-02-18 13:37:31 +02:00
7b3e3d549e node: prefer nil slices over zero-length slices (#19083) 2019-02-18 13:31:22 +02:00
3930f3795b core: remove unused function (#19097) 2019-02-18 13:27:31 +02:00
a1f366ecf6 core/vm: update annotation (#19050) 2019-02-18 12:14:49 +01:00
df7c4618cd signer/core: remove unused function (#19099) 2019-02-18 12:10:28 +01:00
62d7688d0a cmd/swarm/swarm-smoke: Trigger chunk debug on timeout (#19101)
* cmd/swarm/swarm-smoke: first version trigger has-chunks on timeout

* cmd/swarm/swarm-smoke: finalize trigger to chunk debug

* cmd/swarm/swarm-smoke: fixed httpEndpoint for trigger

* cmd/swarm/swarm-smoke: port

* cmd/swarm/swarm-smoke: ws not rpc

* cmd/swarm/swarm-smoke: added debug output

* cmd/swarm/swarm-smoke: addressed PR comments

* cmd/swarm/swarm-smoke: renamed track-timeout and track-chunks
2019-02-18 12:05:22 +01:00
b6ce358a9b Merge pull request #19114 from holiman/update_bigcache
vendor: update bigcache
2019-02-18 12:43:41 +02:00
c0b9c763bb build: explicitly force .xz compression (old debuild picks gzip) (#19118) 2019-02-18 12:12:26 +02:00
75a931470e travis.yml: add launchpad SSH public key (#19115) 2019-02-18 10:24:11 +01:00
37e5a908e7 vendor: update bigcache 2019-02-18 09:33:05 +01:00
50b872bf05 p2p, swarm: fix node up races by granular locking (#18976)
* swarm/network: DRY out repeated giga comment

I do not necessarily agree with the way we wait for event propagation.
But I truly disagree with having duplicated giga comments.

* p2p/simulations: encapsulate Node.Up field so we avoid data races

The Node.Up field was accessed concurrently without "proper" locking.
There was a lock on Network that was sometimes used to access the
field. Other times the locking was missed and we had a data race.

For example: https://github.com/ethereum/go-ethereum/pull/18464
The case above was solved, but there were still intermittent/hard to
reproduce races. So let's solve the issue permanently.

resolves: ethersphere/go-ethereum#1146

* p2p/simulations: fix unmarshal of simulations.Node

Making the Node.Up field private in 13292ee897
broke TestHTTPNetwork and TestHTTPSnapshot, because the default
UnmarshalJSON does not handle unexported fields.

Important: The fix is partial and not proper to my taste. But I cut
scope as I think the fix may require a change to the current
serialization format. New ticket:
https://github.com/ethersphere/go-ethereum/issues/1177

* p2p/simulations: Add a sanity test case for Node.Config UnmarshalJSON

* p2p/simulations: revert back to defer Unlock() pattern for Network

It's a good pattern to call `defer Unlock()` right after `Lock()` so
(new) error cases won't forget to unlock. Let's get back to that pattern.

The pattern was abandoned in 85a79b3ad3,
while fixing a data race. That data race does not exist anymore,
since the Node.Up field got hidden behind its own lock.

* p2p/simulations: consistent naming for test providers Node.UnmarshalJSON

* p2p/simulations: remove JSON annotation from private fields of Node

As unexported fields are not serialized.

* p2p/simulations: fix deadlock in Network.GetRandomDownNode()

Problem: GetRandomDownNode() locks -> getDownNodeIDs() ->
GetNodes() tries to lock -> deadlock

On Network type, unexported functions must assume that `net.lock`
is already acquired and should not call exported functions which
might try to lock again.

* p2p/simulations: ensure method conformity for Network

Connect* methods were moved to p2p/simulations.Network from
swarm/network/simulation. However, these new methods did not follow
the pattern of Network methods, i.e., every exported method locks
the whole Network either for read or write.

* p2p/simulations: fix deadlock during network shutdown

`TestDiscoveryPersistenceSimulationSimAdapter` often got into deadlock.
The execution was stuck on two locks, i.e, `Kademlia.lock` and
`p2p/simulations.Network.lock`. Usually the test got stuck about once
in every 20 executions.

`Kademlia` was stuck in `Kademlia.EachAddr()` and `Network` in
`Network.Stop()`.

Solution: in `Network.Stop()` `net.lock` must be released before
calling `node.Stop()` as stopping a node (somehow - I did not find
the exact code path) causes `Network.InitConn()` to be called from
`Kademlia.SuggestPeer()` and that blocks on `net.lock`.

Related ticket: https://github.com/ethersphere/go-ethereum/issues/1223

* swarm/state: simplify if statement in DBStore.Put()

* p2p/simulations: remove faulty godoc from private function

The comment started with the wrong method name.

The method is simple and self-explanatory. Also, it's private.
=> Let's just remove the comment.
2019-02-18 07:38:14 +01:00
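The encapsulation fix, roughly (a sketch of the pattern, not the exact simulations code): the field becomes private and is only reachable through accessors that take its own lock.

    package simulations

    import "sync"

    type Node struct {
        upMu sync.RWMutex
        up   bool // only touched while holding upMu
    }

    // Up reads the flag under the read lock.
    func (n *Node) Up() bool {
        n.upMu.RLock()
        defer n.upMu.RUnlock()
        return n.up
    }

    // SetUp flips the flag under the write lock.
    func (n *Node) SetUp(up bool) {
        n.upMu.Lock()
        defer n.upMu.Unlock()
        n.up = up
    }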
12ca3b172a swarm/pss: refactoring (#19110)
* swarm/pss: split pss and keystore

* swarm/pss: moved whisper to keystore

* swarm/pss: goimports fixed
2019-02-17 06:29:41 +01:00
4f85c2b88b trie: fix error in node decoding (#19111) 2019-02-16 16:16:12 +01:00
5b8ae7885e swarm/storage: fix influxdb gc metrics report (#19102) 2019-02-15 07:41:42 +01:00
7f6b05aa43 Merge pull request #19087 from karalabe/ios-fix-take-2
vendor: pull in upstream syscall fixes for non-linux/arm64
2019-02-15 02:01:28 +02:00
fa87929a2f cmd: prefer nil slices over zero-length slices (#19077) 2019-02-15 01:02:11 +02:00
e26a119c9b console: prefer nil slices over zero-length slices (#19076) 2019-02-15 00:59:54 +02:00
9d3ea8df1c vendor: pull in upstream syscall fixes for non-linux/arm64 2019-02-15 00:51:34 +02:00
2b75fa9d61 core: enforce camel case variable names (#19058) 2019-02-14 21:14:05 +02:00
2af24724dd swarm/network: Saturation check for healthy networks (#19071)
* swarm/network: new saturation func implementation

* swarm/network: re-added saturation func in Kademlia as it is used elsewhere

* swarm/network: saturation with higher MinBinSize

* swarm/network: PeersPerBin with depth check

* swarm/network: edited tests to pass new saturated check

* swarm/network: minor fix saturated check

* swarm/network/simulations/discovery: fixed renamed RPC call

* swarm/network: renamed to isSaturated and returns bool

* swarm/network: early depth check
2019-02-14 19:01:50 +01:00
fab8c5a1cd Merge pull request #19072 from karalabe/update-syscalls
vendor: update syscalls dependency
2019-02-14 18:42:02 +02:00
dcc045f03c vendor: update syscalls dependency 2019-02-14 18:14:28 +02:00
ba90a4aaa4 common/fdlimit: fix windows build (#19068) 2019-02-14 17:32:22 +02:00
325334f61a light: enforce camel case variable names (#19054) 2019-02-14 17:18:32 +02:00
a8ddf7ad83 build: avoid dput and upload with sftp directly (#19067) 2019-02-14 17:10:09 +02:00
7d24a73192 eth/tracers: enforce camel case variable names (#19057) 2019-02-14 16:38:55 +02:00
e6c06a1da8 console, internal: enforce camel case variable names (#19059) 2019-02-14 16:38:07 +02:00
3ee09ba035 swarm/storage/netstore: add fetcher cancellation on shutdown (#19049)
swarm/network/stream: remove netstore internal wg
swarm/network/stream: run individual tests with t.Run
2019-02-14 07:51:57 +01:00
e9f70c9064 clef: documentation generator + docs (#19020)
* clef: implement documentation generation + remove unused struct

* clef: formatting + spelling

* clef: updates to doc
2019-02-13 21:37:59 +01:00
3fd6db2bf6 swarm: fix network/stream data races (#19051)
* swarm/network/stream: newStreamerTester cleanup only if err is nil

* swarm/network/stream: raise newStreamerTester waitForPeers timeout

* swarm/network/stream: fix data races in GetPeerSubscriptions

* swarm/storage: prevent data race on LDBStore.batchesC

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-461775049

* swarm/network/stream: fix TestGetSubscriptionsRPC data race

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-461768477

* swarm/network/stream: correctly use Simulation.Run callback

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-461783804

* swarm/network: protect addrCountC in Kademlia.AddrCountC function

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-462273444

* p2p/simulations: fix a deadlock calling getRandomNode with lock

https://github.com/ethersphere/go-ethereum/issues/1198#issuecomment-462317407

* swarm/network/stream: terminate disconnect goruotines in tests

* swarm/network/stream: reduce memory consumption when testing data races

* swarm/network/stream: add watchDisconnections helper function

* swarm/network/stream: add concurrent counter for tests

* swarm/network/stream: rename race/norace test files and use const

* swarm/network/stream: remove watchSim and its panic

* swarm/network/stream: pass context in watchDisconnections

* swarm/network/stream: add concurrent safe bool for watchDisconnections

* swarm/storage: fix LDBStore.batchesC data race by not closing it
2019-02-13 13:03:23 +01:00
d596bea2d5 swarm: fix uptime gauge update goroutine leak by introducing cleanup functions (#19040) 2019-02-13 08:15:03 +01:00
555b3652dc accounts/abi/bind/backends: add TransactionByHash to SimulatedBackend (#19026) 2019-02-13 00:23:10 +01:00
3d22a46c94 swarm/storage: fix HashExplore concurrency bug ethersphere#1211 (#19028)
* swarm/storage: fix HashExplore concurrency bug ethersphere#1211

* swarm/storage: lock as value not pointer

* swarm/storage: wait for  to complete

* swarm/storage: fix linter problems

* swarm/storage: append to nil slice
2019-02-13 00:17:44 +01:00
b30109df3c swarm/pss: mutex lifecycle fixed (#19045) 2019-02-13 00:12:41 +01:00
8771fbf3c8 rpc: make stdio usable over custom channels (#19046) 2019-02-12 18:05:28 +01:00
b5d471a739 clef: bidirectional communication with UI (#19018)
* clef: initial implementation of bidirectional RPC communication for the UI

* signer: fix tests to pass + formatting

* clef: fix unused import + formatting

* signer: gosimple nitpicks
2019-02-12 17:38:46 +01:00
75d292bcf6 clef: external signing fixes + signing data (#19003)
* signer/clef: make use of json-rpc notification

* signer: tidy up output of OnApprovedTx

* accounts/external, signer: implement remote signing of text, make accounts_sign take hexdata

* clef: added basic testscript

* signer, external, api: add clique signing test to debug rpc, fix clique signing in clef

* signer: fix clique interoperability between geth and clef

* clef: rename networkid switch to chainid

* clef: enable chainid flag

* clef, signer: minor changes from review

* clef: more tests for signer
2019-02-12 14:00:02 +01:00
edf976ee8e .travis.yml: fix upload destination (#19043) 2019-02-12 13:02:40 +01:00
f48da43bae common/fdlimit: cap on MacOS file limits, fixes #18994 (#19035)
* common/fdlimit: cap on MacOS file limits, fixes #18994

* common/fdlimit: fix Maximum-check to respect OPEN_MAX

* common/fdlimit: return error if OPEN_MAX is exceeded in Raise()

* common/fdlimit: goimports

* common/fdlimit: check value after setting fdlimit

* common/fdlimit: make comment a bit more descriptive

* cmd/utils: make fdlimit happy path a bit cleaner
2019-02-12 12:29:05 +02:00
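The macOS cap in outline (a sketch; constant and function names assumed, and the real code reads the current limits via syscalls):

    package fdlimit

    import "errors"

    // hardOpenMax mirrors macOS's compile-time OPEN_MAX limit.
    const hardOpenMax = 10240

    // capRequest refuses fd limit requests that macOS cannot honour,
    // matching the "return error if OPEN_MAX is exceeded" bullet above.
    func capRequest(req uint64) (uint64, error) {
        if req > hardOpenMax {
            return 0, errors.New("file descriptor limit above OPEN_MAX")
        }
        return req, nil
    }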
3de19c8b31 build: use SFTP for launchpad uploads (#19037)
* build: use sftp for launchpad uploads

* .travis.yml: configure sftp export

* build: update CI docs
2019-02-12 11:55:25 +02:00
6cb7d52a29 swarm/docker: add global-store and split docker images (#19038) 2019-02-12 08:34:08 +01:00
27e3f96819 swarm: CI race detector test adjustments (#19017) 2019-02-08 17:07:11 +01:00
cde02e017e swarm/pss: transition to whisper v6 (#19023) 2019-02-08 17:05:10 +01:00
0c10d37606 swarm/network, swarm/storage: Preserve opentracing contexts (#19022) 2019-02-08 16:57:48 +01:00
0436412412 Merge pull request #18988 from holiman/repro18977
core: repro #18977
2019-02-08 12:27:15 +02:00
940e317094 core: fix pruner panic when importing low-diff-large-sidechain 2019-02-08 11:56:25 +02:00
da1efdae0c core: repro #18977 2019-02-08 10:29:51 +02:00
4f3d22f06c swarm/storage/localstore: new localstore package (#19015) 2019-02-07 18:40:26 +01:00
41597c2856 swarm: Debug API and HasChunks() API endpoint (#18980) 2019-02-07 15:49:19 +01:00
33d0a0efa6 cmd/swarm/global-store: global store cmd (#19014) 2019-02-07 15:46:58 +01:00
685eec3128 Merge pull request #19012 from holiman/default155
ethapi: default to use eip-155 protected transactions
2019-02-07 16:19:28 +02:00
9fa4c3ce94 Merge pull request #18991 from karalabe/archive-write-cache
cmd/utils, eth: relinquish GC cache to read cache in archive mode
2019-02-07 16:03:43 +02:00
d212535ddd cmd/swarm/swarm-smoke: refactor generateEndpoints (#19006) 2019-02-07 14:38:32 +01:00
7f55b0cbd8 cmd/swarm: hashes command (#19008) 2019-02-07 13:51:24 +01:00
d6225ab846 cmd/utils, eth: relinquish GC cache to read cache in archive mode 2019-02-07 14:36:25 +02:00
85b3b1c8d6 Merge pull request #16619 from kielbarry/contractsgolint
contracts/*: golint updates for this or self warning
2019-02-07 14:07:03 +02:00
159e991196 contracts/chequebook: polishes and naked return removals 2019-02-07 13:16:55 +02:00
53b823afc8 contracts/*: golint updates for this or self warning 2019-02-07 13:15:14 +02:00
2ac61a9914 ethapi: default to use eip-155 protected transactions 2019-02-07 11:59:17 +01:00
6f714ed73e light: make chain receiver names consistent (#18997) 2019-02-07 12:53:45 +02:00
7c339ff442 light: make transaction pool receiver names consistent (#19000) 2019-02-07 12:53:00 +02:00
4097ba6a00 mobile: add ability to create transactions for deploying contracts (#16104) 2019-02-07 12:49:22 +02:00
26aea73673 cmd, node, p2p/simulations: fix node account manager leak (#19004)
* node: close AccountsManager in new Close method

* p2p/simulations, p2p/simulations/adapters: handle node close on shutdown

* node: move node ephemeralKeystore cleanup to stop method

* node: call Stop in Node.Close method

* cmd/geth: close node.Node created with makeFullNode in cli commands

* node: close Node instances in tests

* cmd/geth, node: minor code style fixes

* cmd, console, miner, mobile: proper node Close() termination
2019-02-07 12:40:36 +02:00
81801ccc2b core/state: more memory efficient preimage allocation (#16663) 2019-02-07 10:44:45 +01:00
c57166caa3 Merge pull request #19005 from karalabe/puppeth-fix-petersburg
cmd/puppeth: handle pre-set Petersburg number, save changed fork rules
2019-02-06 16:11:41 +02:00
9747df295e cmd/puppeth: handle pre-set Petersburg number, save changed fork rules 2019-02-06 14:34:08 +02:00
3eff652a7b swarm/storage: Get all chunk references for a given file (#19002) 2019-02-06 12:16:43 +01:00
75c9570c31 docs: add audit reports (#18996)
* docs: add audit reports

* docs: better filenames
2019-02-06 11:47:54 +02:00
572baae10a signer, clef: implement EIP191/712 (#17789)
* Named functions and defined a basic EIP191 content type list

* Written basic content type functions

* Added ecRecover method in the clef api

* Updated the extapi changelog and added indications in the README

* Changed the version of the external API

* Added tests for 0x45

* Implementing UnmarshalJSON() for TypedData

* Working on TypedData

* Solved the auditlog issue

* Changed method to signTypedData

* Changed mimes and implemented the 'encodeType' function for EIP-712

* Polished docstrings, ran goimports and swapped fmt.Errorf with errors.New where possible

* Drafted recursive encodeData

* Ran goimports and gofmt

* Drafted first version of EIP-712, including tests

* Temporarily switched to using common.Address in tests

* Drafted text/validator and rewritten []byte as hexutil.Bytes

* Solved stringified address encoding issue

* Changed the property type required by signData from bytes to interface{}

* Fixed bugs in 'data/typed' signs

* Brought legal warning back after temporarily disabling it for development

* Added example RPC calls for account_signData and account_signTypedData

* Polished and fixed PR

* Solved malformed data panics and also wrote tests

* Added alphabetical sorting to type dependencies

* Added pretty print to data/typed UI

* signer: more tests for typed data

* Fixed TestMalformedData4 errors and renamed IsValid to Validate

* Fixed more new failing tests and deanonymised some functions

* Added types to EIP712 output in cliui

* Fixed regexp issues

* Added pseudo-failing test

* Fixed false positive test

* Added PrettyPrint method

* signer: refactor formatting and UI

* signer: make ui use new message format for signing

* Fixed breaking changes

* Fixed rules_test failing test

* Added extra regexp for reference types

* signer: more hard types

* Fixed failing test, formatted files

* signer: use golang/x keccak

* Fixed goimports error

* clef, signer: address some review concerns

* Implemented latest recommendations

* Fixed comments and uintint256 issue

* accounts, signer: fix mimetypes, add interface to sign data with passphrase

* signer, accounts: remove duplicated code, pass hash preimages to signing

* signer: prevent panic in type assertions, make cliui print rawdata as quotable-safe

* signer: linter fixes, remove deprecated crypto dependency

* accounts: fix goimport
2019-02-06 08:30:49 +01:00
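The 'encodeType' step mentioned above renders a struct type into the canonical string that EIP-712 hashes; a simplified sketch that handles only the primary type (the full version also appends referenced struct types, sorted alphabetically, per the bullets above):

    package main

    import (
        "fmt"
        "strings"
    )

    type field struct{ Type, Name string }

    // encodeType renders `Name(type1 name1,type2 name2,...)` as the
    // EIP-712 spec prescribes for a struct type.
    func encodeType(name string, fields []field) string {
        parts := make([]string, len(fields))
        for i, f := range fields {
            parts[i] = f.Type + " " + f.Name
        }
        return name + "(" + strings.Join(parts, ",") + ")"
    }

    func main() {
        fmt.Println(encodeType("Mail", []field{
            {"Person", "from"}, {"Person", "to"}, {"string", "contents"},
        }))
        // Output: Mail(Person from,Person to,string contents)
    }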
7c60d0a6a2 swarm/pss: Remove pss service leak in test (#18992) 2019-02-05 14:35:20 +01:00
1c3aa8d9b1 swarm/storage: fix test timeout with -race by increasing mget timeout 2019-02-05 14:34:34 +01:00
f413a3dbb2 vendor: update github.com/peterh/liner (#18990)
Fixes #16286
2019-02-05 13:00:42 +02:00
43e8efe895 accounts, eth, clique, signer: support for external signer API (#18079)
* accounts, eth, clique: implement external backend + move sighash calc to backend

* signer: implement account_Version on external API

* accounts/external: enable ipc, add copyright

* accounts, internal, signer: formatting

* node: go fmt

* flags: disallow --dev in combo with --externalsigner

* accounts: remove clique-specific signing method, replace with more generic

* accounts, consensus: formatting + fix error in tests

* signer/core: remove (test-) import cycle

* clique: remove unused import

* accounts: remove CliqueHash and avoid dependency on package crypto

* consensus/clique: unduplicate header encoding
2019-02-05 11:23:57 +01:00
520024dfd6 README: fix some grammar mistakes (#18981) 2019-02-04 15:28:46 +01:00
a0fd4b7e57 core/vm: unshadow err to make it visible in tracers(#18504) 2019-02-04 15:26:21 +01:00
822dc1bfb1 Merge pull request #18121 from karalabe/goerli
cmd, core, params: add support for Goerli
2019-02-04 15:21:24 +02:00
b0ed083ead cmd, core, params: add support for Goerli 2019-02-04 14:53:12 +02:00
245f3146c2 rpc: implement full bi-directional communication (#18471)
New APIs added:

    client.RegisterName(namespace, service) // makes service available to server
    client.Notify(ctx, method, args...)     // sends a notification
    ClientFromContext(ctx)                  // to get a client in handler method

This is essentially a rewrite of the server-side code. JSON-RPC
processing code is now the same on both server and client side. Many
minor issues were fixed in the process and there is a new test suite for
JSON-RPC spec compliance (and non-compliance in some cases).

List of behavior changes:

- Method handlers are now called with a per-request context instead of a
  per-connection context. The context is canceled right after the method
  returns.
- Subscription error channels are always closed when the connection
  ends. There is no need to also wait on the Notifier's Closed channel
  to detect whether the subscription has ended.
- Client now omits "params" instead of sending "params": null when there
  are no arguments to a call. The previous behavior was not compliant
  with the spec. The server still accepts "params": null.
- Floating point numbers are allowed as "id". The spec doesn't allow
  them, but we handle request "id" as json.RawMessage and guarantee that
  the same number will be sent back.
- Logging is improved significantly. There is now a message at DEBUG
  level for each RPC call served.
2019-02-04 13:47:34 +01:00
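Since the entry above documents a public API, a short sketch may help show how the three new calls fit together. The PingService type, its "ping" namespace, and the ping_hello method name are all illustrative, not part of the library; a local node with the WebSocket endpoint enabled is assumed.

```go
package main

import (
	"context"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

// PingService is an illustrative service the client exposes back to the
// server over the same connection (reverse calls).
type PingService struct{}

func (s *PingService) Ping(msg string) string { return "pong: " + msg }

func main() {
	client, err := rpc.Dial("ws://127.0.0.1:8546") // assumed local endpoint
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// RegisterName makes PingService callable from the server side of this
	// connection, under the "ping" namespace.
	if err := client.RegisterName("ping", new(PingService)); err != nil {
		log.Fatal(err)
	}

	// Notify sends a notification: no "id" member, so no response expected.
	if err := client.Notify(context.Background(), "ping_hello", "world"); err != nil {
		log.Fatal(err)
	}

	// Inside a server-side handler method, the per-request context gives
	// access to the calling client:
	//
	//	if caller, ok := rpc.ClientFromContext(ctx); ok { ... }
}
```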
ec3432bccb core: fix error in block iterator (#18986) 2019-02-04 13:30:19 +01:00
bb7c786b09 trie: add missing unlock call in error case (#18985) 2019-02-04 12:42:46 +01:00
98e0bedcd7 common/compiler: fixed testSource (#18978) 2019-02-03 12:41:38 +01:00
05d21438de eth: make tracers respect pre- EIP 158/161 rule 2019-02-01 10:30:59 +01:00
597597e8b2 swarm/network: refactor simulation tests bootstrap (#18975) 2019-02-01 09:58:46 +01:00
a89170cfb2 p2p/discover: improve table addition code (#18974)
This change clears up confusion around the two ways in which nodes
can be added to the table.

When a neighbors packet is received as a reply to findnode, the nodes
contained in the reply are added as 'seen' entries if sufficient space
is available.

When a ping is received and the endpoint verification has taken place,
the remote node is added as a 'verified' entry or moved to the front of
the bucket if present. This also updates the node's IP address and port
if they have changed.
2019-01-31 11:48:54 +01:00
43e1b7b124 swarm: GetPeerSubscriptions RPC (#18972) 2019-01-30 21:03:08 +01:00
8cfe1a6832 README: Fix typo (#18966) 2019-01-30 15:16:12 +01:00
592bf6a59c swarm: fix flaky delivery tests (#18971) 2019-01-30 14:03:11 +01:00
c5c9cef5c0 cmd/swarm/swarm-smoke: remove wrong metrics (#18970) 2019-01-30 14:02:15 +01:00
f9401ae011 swarm/network: Remove extra random peer, connect test sanity, comments (#18964) 2019-01-30 09:49:58 +01:00
b91bf08876 cmd/swarm/swarm-smoke: sliding window test (#18967) 2019-01-30 09:46:44 +01:00
d88441025f cmd,eth: 16400 Add an option to stop geth once in sync. WIP for light mode (#17321)
* cmd, eth: Added the flag to stop geth once in sync based on input

* cmd, eth: 16400 Add an option to stop geth once in sync.

* cmd: 16400 Add an option to stop geth once in sync. WIP

* cmd/geth/main, les/fletcher: added in light mode support

* cmd/geth/main, les/fletcher: Cleaned Comments and code for light mode

* cmd: 16400 Fixed formatting issue and cleaned code

* cmd, eth, les: 16400 Fixed formatting issues

* cmd, eth, les: Performed gofmt to update formatting

* cmd, eth, les: Fixed bugs resulting from formatting

* cmd/geth, eth/, les: switched to downloader event

* eth: Fixed styling and gen_config

* eth/: Fix nil error in config file

* cmd/geth: Updated countdown log

* les/fetcher.go: Removed deprecated channel

* eth/downloader.go: Removed deprecated select

* cmd/geth, cmd/utils: Fixed minor issues

* eth: Reverted config files to proper format

* eth: Fixed typo in config file

* cmd/geth, eth/down: Updated code to use header time stamp

* eth/downloader: Changed the time threshold to 10 minutes

* cmd/geth, eth/downloader: Updated downloading event to pass latest header

* cmd/geth: Updated main to use right timer object

* cmd/geth: Removed unused failed event

* cmd/geth: added in correct time field with type assertion

* cmd/geth, cmd/utils: Updated flag to use boolean

* cmd/geth, cmd/utils, eth/downloader: Cleaned up code based on recommendations

* cmd/geth: Removed unneeded import

* cmd/geth, eth/downloader: fixed event field and suggested changes

* cmd/geth, cmd/utils: Updated flag and linting issue
2019-01-30 08:40:36 +01:00
f4094d09cd params, swarm/version: Geth v1.9.0 unstable, Swarm v0.3.11-unstable 2019-01-29 17:43:13 +01:00
1a19c596bf params: new CHTs (#18577) 2019-01-29 17:39:57 +01:00
f0c6f92140 p2p/discover, p2p/enode: rework endpoint proof handling, packet logging (#18963)
This change resolves multiple issues around handling of endpoint proofs.
The proof is now done separately for each IP and completing the proof
requires a matching ping hash.

Also remove waitping because it's equivalent to sleep. waitping was
slightly more efficient, but that may cause issues with findnode if
packets are reordered and the remote end sees findnode before pong.

Logging of received packets was hitherto done after handling the packet,
which meant that sent replies were logged before the packet that
generated them. This change splits up packet handling into 'preverify'
and 'handle'. The error from 'preverify' is logged, but 'handle' happens
after the message is logged. This fixes the order. Packet logs now
contain the node ID.
2019-01-29 17:39:20 +01:00
74c38902ec p2p/protocols: fix possible metrics loss in AccountingMetrics (#18956) 2019-01-29 15:19:54 +01:00
a0ac3b6a1a p2p/protocols: fix rare data race in Peer.Handshake() (#18951) 2019-01-29 14:51:13 +01:00
e9daed189e build: tweak debian source package build/upload options (#18962)
dput --passive should make repo pushes from Travis work again.
dput --no-upload-log works around an issue I had while uploading locally.

debuild -d says that debuild shouldn't check for build dependencies when
creating the source package. This option is needed to make builds work
in environments where the installed Go version doesn't match the
declared dependency in the source package.
2019-01-29 13:20:21 +01:00
4eee99aac5 core/types: remove use of package unsafe 2019-01-29 10:17:40 +01:00
21acf0bc8d cmd/utils: allow for multiple influxdb tags (#18520)
This PR replaces the metrics.influxdb.host.tag cmd-line flag with metrics.influxdb.tags - a comma-separated list of key/value tags that is passed to the InfluxDB reporter, so that we can index measurements with multiple tags, not just one host tag.

This will be useful for Swarm, where we want to index measurements not just with the host tag, but also with bzzkey and git commit version (for long-running deployments).
2019-01-29 09:14:24 +01:00
b04da9d482 ethereum: improve FilterQuery comment (#18955) 2019-01-28 17:50:31 +01:00
104e6b2050 swarm/pss/notify: shutdown net in TestStart to fix OOM issue (#18953) 2019-01-28 16:08:33 +01:00
616cf78203 travis, appveyor: bump to Go 1.11.5 (#18947) 2019-01-27 18:54:11 +01:00
277a8c975b eth/fetcher: blockFilter is not used anymore (#17971) 2019-01-26 14:32:46 +01:00
dddd6ef006 accounts/usbwallet/trezor: expose protobuf package (#17980)
When some of the same messages are redefined anywhere in a Go project,
the protobuf package panics (see
https://github.com/golang/protobuf/issues/178).

Since this package is internal, there is no way to work around it, as
one cannot use it directly, but also cannot define the same messages.

There is no downside in making the package accessible.
2019-01-26 14:30:47 +01:00
2209fede4e swarm/pss: fix data race on topicHandlerCaps map (#18523) 2019-01-25 20:18:28 +01:00
17723a5294 cmd/bootnode: print node URL on startup (#18516)
Also say that cmd/bootnode is not for production use.
2019-01-25 12:39:52 +01:00
f28da4f602 swarm/metrics: Send the accounting registry to InfluxDB (#18470) 2019-01-24 18:57:20 +01:00
2abeb35d54 p2p/testing, swarm: remove unused testing.T in protocol tester (#18500) 2019-01-24 17:23:34 +01:00
c512d6054d github: codeowners for p2p/testing (#18517) 2019-01-24 17:22:46 +01:00
6167dd65b5 swarm/pss: fix data race in notify_test.go (TestStart) (#18518) 2019-01-24 17:07:43 +01:00
684facedb8 light: fix disableCheckFreq locking (#18515)
This change unbreaks the build and removes racy access to
disableCheckFreq. Even though the field is set while holding
the lock, it was read outside of the protected section.
2019-01-24 13:40:37 +01:00
0a454554ae cmd/utils: allow empty bootnodes flag override (#18509) 2019-01-24 13:02:30 +01:00
ad13d2d407 swarm/version: commit version added (#18510) 2019-01-24 12:35:10 +01:00
3591fc603f swarm/storage: Fix race in TestLDBStoreCollectGarbage. Disable testLDBStoreRemoveThenCollectGarbage (#18512) 2019-01-24 12:34:12 +01:00
6f45fa66d8 accounts/usbwallet: support trezor passphrases (#16503)
When opening the wallet, ask for passphrase as well as for the PIN
and return the relevant error (PIN/passphrase required). Open must then
be called again with either PIN or passphrase to advance the process.

This also updates the console bridge to support passphrase authentication.
2019-01-24 12:21:38 +01:00
769657060e les: implement ultralight client (#16904)
For more information about this light client mode, read
https://hackmd.io/s/HJy7jjZpm
2019-01-24 12:18:26 +01:00
b8f9b3779f core/vm: fix typos and use ExpGas for EXP (#18400)
This replaces the GasSlowStep constant with params.ExpGas.
Both constants have value 10.
2019-01-24 12:14:02 +01:00
fa34429a26 swarm: fix a data race on startTime (#18511) 2019-01-24 12:02:47 +01:00
bbd120354a swarm: bootnode-mode, new bootnodes and no p2p package discovery (#18498) 2019-01-24 12:02:18 +01:00
ecb781297b core, cmd/puppeth: implement constantinople fix, disable EIP-1283 (#18486)
This PR adds a new fork which disables EIP-1283. Internally it's called Petersburg,
but the genesis/config field is ConstantinopleFix.

The block numbers are:

    7280000 for Constantinople on Mainnet
    7280000 for ConstantinopleFix on Mainnet
    4939394 for ConstantinopleFix on Ropsten
    9999999 for ConstantinopleFix on Rinkeby (real number decided later)

This PR also defaults to using the same ConstantinopleFix number as whatever
Constantinople is set to. That is, it will default to mainnet behaviour if ConstantinopleFix
is not set. This means that for private networks which have already transitioned
to Constantinople, this PR will break the network unless ConstantinopleFix is
explicitly set!
2019-01-24 11:36:30 +01:00
dc43ea8d03 tests: tune flaky tests that error in travis occasionally (#18508)
* tests: tune flaky tests that error in travis occasionally

* tests: formatting
2019-01-23 16:09:29 +01:00
a50b471b6b accounts/abi: allow interface as the destination (#18490) 2019-01-23 14:36:49 +01:00
ad849c01d3 .github: add @janos as codeowner for p2p/simulations and p2p/protocols (#18506) 2019-01-23 14:23:45 +01:00
9bc0138ded eth: properly flush files in standardTraceBlockToFile (#18502) 2019-01-23 11:13:13 +01:00
85a79b3ad3 p2p/simulations: fix data race on swarm/network/simulations (#18464) 2019-01-22 20:11:43 +01:00
a6e0285320 .github: reinstate swarm codeowners to p2p package submodules (#18466) 2019-01-22 14:28:28 +01:00
f91312dbdb GraphQL master FF for review (#18445)
* Initial work on a graphql API

* Added receipts, and more transaction fields.

* Finish receipts, add logs

* Add transactionCount to block

* Add types  and .

* Update Block type to be compatible with ethql

* Rename nonce to transactionCount in Account, to be compatible with ethql

* Update transaction, receipt and log to match ethql

* Add  query operator, for a range of blocks

* Added ommerCount to Block

* Add transactionAt and ommerAt to Block

* Added sendRawTransaction mutation

* Add Call and EstimateGas to graphQL API

* Refactored to use hexutil.Bytes instead of HexBytes

* Replace BigNum with hexutil.Big

* Refactor call and estimateGas to use ethapi struct type

* Replace ethgraphql.Address with common.Address

* Replace ethgraphql.Hash with common.Hash

* Converted most quantities to Long instead of Int

* Add support for logs

* Fix bug in runFilter

* Restructured Transaction to work primarily with headers, so uncle data is reported properly

* Add gasPrice API

* Add protocolVersion API

* Add syncing API

* Moved schema into its own source file

* Move some single use args types into anonymous structs

* Add doc-comments

* Fixed backend fetching to use context

* Added (very) basic tests

* Add documentation to the graphql schema

* Fix reversion for formatting of big numbers

* Correct spelling error

* s/BigInt/Long/

* Update common/types.go

* Fixes in response to review

* Fix lint error

* Updated calls on private functions

* Fix typo in graphql.go

* Rollback ethapi breaking changes for graphql support
Co-Authored-By: Arachnid <arachnid@notdot.net>
2019-01-21 15:38:13 +01:00
105008b6a1 swarm/pss: fixing race condition (#18487) 2019-01-21 15:22:51 +01:00
15b9b39e6c swarm/network: unskip tests previously skipped due to suggestPeer issues (#18477) 2019-01-19 08:12:57 +01:00
560957799a cmd/swarm/swarm-smoke: use ResettingTimer instead of Counters for times (#18479) 2019-01-18 18:14:06 +01:00
a0b0db6305 cmd/swarm: use resetting timer to measure fetch time (#18474) 2019-01-18 13:27:27 +01:00
632135ce4c cmd/swarm/swarm-snapshot: disable tests on windows (#18478) 2019-01-18 13:22:05 +01:00
257bfff316 Upload speed (#18442) 2019-01-17 17:25:27 +01:00
66f0c464cc core: only cache non-nil receipts from the database (#18447)
Receipts may be null for a very short time under some conditions. In that case we should not add the null value to the cache, because repeated requests for that receipt would then keep returning the wrong (cached) result.
2019-01-17 17:49:12 +02:00
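The rationale above boils down to a one-line guard; a generic, self-contained sketch of the pattern follows (invented names, not the actual core package code):

```go
package main

import "fmt"

type receipt struct{ Status uint64 }

var cache = map[string][]*receipt{}

// readReceipts stands in for the database read; shortly after a block is
// written it can briefly return nil.
func readReceipts(hash string) []*receipt { return nil }

// getReceipts memoises only non-nil results, so a transient nil can never
// poison the cache and get served back on later requests.
func getReceipts(hash string) []*receipt {
	if cached, ok := cache[hash]; ok {
		return cached
	}
	receipts := readReceipts(hash)
	if receipts != nil { // the fix: never cache a nil lookup
		cache[hash] = receipts
	}
	return receipts
}

func main() {
	fmt.Println(getReceipts("0xabc")) // nil, and nothing was cached
}
```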
19bfcbf911 swarm/network: fix data race in fetcher_test.go (#18469) 2019-01-17 16:45:36 +01:00
4f8ec44565 swarm/network: fix data race in stream.(*Peer).handleOfferedHashesMsg() (#18468)
* swarm/network: fix data race in stream.(*Peer).handleOfferedHashesMsg()

handleOfferedHashesMsg() contained a data race:
- read => in a goroutine, call to c.batchDone()
- write => in the main thread, write to c.sessionAt

c.batchDone() contained a call to c.AddInterval(). Client was a value
receiver for AddInterval. So on c.AddInterval() call the whole client
struct got copied (read) while one of its field was modified in
handleOfferedHashesMsg() (write).

fixes ethersphere/go-ethereum#1086

* swarm/network: simplify some trivial statements
2019-01-17 14:44:29 +01:00
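The root cause described here, a value receiver silently copying the whole struct, is a recurring Go race pattern. A stripped-down illustration with invented names (run with `go run -race` to see the report):

```go
package main

import "time"

type client struct {
	sessionAt uint64
	intervals []uint64
}

// Value receiver: each call copies the whole client struct, which is an
// unsynchronised read of every field, sessionAt included. The commit's fix
// is a pointer receiver, so no whole-struct copy occurs.
func (c client) AddInterval(v uint64) {
	_ = append(c.intervals, v)
}

func main() {
	c := &client{}
	go func() { c.AddInterval(1) }() // struct copy = hidden read of c.sessionAt
	c.sessionAt = 42                 // concurrent write: the detector flags the pair
	time.Sleep(10 * time.Millisecond)
}
```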
81e26d5a48 swarm/network: fix data race warning on TestBzzHandshakeLightNode (#18459) 2019-01-17 11:38:23 +01:00
ba6349d39a Merge pull request #18436 from karalabe/chainmu-dedup
core, light: get rid of the dual mutexes, hard to reason with
2019-01-17 12:15:46 +02:00
bcb2594151 swarm/network: rewrite of peer suggestion engine, fix skipped tests (#18404)
* swarm/network: fix skipped tests related to suggestPeer

* swarm/network: rename depth to radius

* swarm/network: uncomment assertHealth and improve comments

* swarm/network: remove commented code

* swarm/network: kademlia suggestPeer algo correction

* swarm/network: kademlia suggest peer

 * simplify suggest Peer code
 * improve peer suggestion algo
 * add comments
 * kademlia testing improvements
   * assertHealth -> checkHealth (test helper)
   * testSuggestPeer -> checkSuggestPeer (test helper)
   * remove testSuggestPeerBug and TestKademliaCase

* swarm/network: kademlia suggestPeer cleanup, improved comments

* swarm/network: minor comment, discovery test default arg
2019-01-17 07:29:34 +01:00
34f11e752f cmd/swarm/swarm-snapshot: swarm snapshot generator (#18453)
* cmd/swarm/swarm-snapshot: add binary to create network snapshots

* cmd/swarm/swarm-snapshot: refactor and extend tests

* p2p/simulations: remove unused triggerChecks func and fix linter

* internal/cmdtest: raise the timeout for killing TestCmd

* cmd/swarm/swarm-snapshot: add more comments and other minor adjustments

* cmd/swarm/swarm-snapshot: remove redundant check in createSnapshot

* cmd/swarm/swarm-snapshot: change comment wording

* p2p/simulations: revert Simulation.Run from master

https://github.com/ethersphere/go-ethereum/pull/1077/files#r247078904

* cmd/swarm/swarm-snapshot: address pr comments

* swarm/network/simulations/discovery: removed snapshot write to file

* cmd/swarm/swarm-snapshot, swarm/network/simulations: removed redundant connection event check, fixed lint error
2019-01-16 14:33:02 +01:00
f728837ee6 swarm/storage: fix mockNetFetcher data races (#18462)
fixes: ethersphere/go-ethereum#1117
2019-01-16 14:31:32 +01:00
96c7c18b18 swarm/network: fix data race in TestNetworkID test (#18460) 2019-01-16 12:56:34 +01:00
d37f987639 cmd/evm: Add --vm.evm flag to support EVMC (#18457) 2019-01-16 11:43:41 +01:00
f50d66f2d8 cmd/geth: update cli copyright years (#18455)
* Update copyright

2018 -> 2019

* Update copyright

2018 -> 2019
2019-01-16 01:21:50 +02:00
24d66944cb params, swarm: begin Geth v1.8.22 and Swarm v0.3.10 cycle 2019-01-15 22:52:47 +02:00
9dc5d1a915 params, swarm: release Geth v1.8.21 and Swarm v0.3.9 2019-01-15 22:51:31 +02:00
c03f694be5 Merge pull request #18454 from karalabe/postpone-constantinople
params: postpone Constantinople due to net SSTORE reentrancy
2019-01-15 22:25:47 +02:00
2a2fd5adf8 params: postpone Constantinople due to net SSTORE reentrancy 2019-01-15 22:06:17 +02:00
115b1c38ac accounts/abi: Add tests for reflection ahead of refactor (#18434) 2019-01-15 16:45:52 +01:00
4aeeecfded swarm/pot: each() functions refactored (#18452) 2019-01-15 11:51:33 +01:00
1636d9574b swarm/pot: pot.remove fixed (#18431)
* swarm/pot: refactored pot.remove(), updated comments

* swarm/pot: comments updated
2019-01-11 20:42:33 +01:00
88168ff5c5 Stream subscriptions (#18355)
* swarm/network: eachBin now starts at kaddepth for nn

* swarm/network: fix Kademlia.EachBin

* swarm/network: fix kademlia.EachBin

* swarm/network: correct EachBin implementation according to requirements

* swarm/network: less addresses simplified tests

* swarm: calc kad depth outside loop in EachBin test

* swarm/network: removed printResults

* swarm/network: cleanup imports

* swarm/network: remove kademlia.EachBin; fix RequestSubscriptions and add unit test

* swarm/network/stream: address PR comments

* swarm/network/stream: package-wide subscriptionFunc

* swarm/network/stream: refactor to kad.EachConn
2019-01-11 15:08:09 +01:00
f25f776c9f core, light: get rid of the dual mutexes, hard to reason with 2019-01-11 15:27:47 +02:00
d5cad488be core, eth: fix database version (#18429)
* core, eth: fix database version

* eth: polish error message
2019-01-11 13:49:12 +02:00
2eb838ed97 p2p/simulations: eliminate concept of pivot (#18426) 2019-01-11 10:23:45 +01:00
38cce9ac33 accounts/abi: Extra slice tests (#18424)
Co-authored-by: weimumu <934657014@qq.com>
2019-01-10 16:27:54 +01:00
7240f4d800 swarm/network: Rename minproxbinsize, add as member of simulation (#18408)
* swarm/network: Rename minproxbinsize, add as member of simulation

* swarm/network: Deactivate WaitTillHealthy, unreliable pending suggestpeer
2019-01-10 12:33:51 +01:00
7ca40306af accounts/abi: tuple support (#18406) 2019-01-10 09:59:37 +01:00
6df3e4eeb0 swarm/network: remove isproxbin bool from kad.Each* iterfunc (#18239)
* swarm/network, swarm/pss: remove isproxbin bool from kad.Each* iterfunc

* swarm/network: restore comment and unskip snapshot sync tests
2019-01-10 03:36:19 +01:00
d70c4faf20 swarm: Fix T.Fatal inside a goroutine in tests (#18409)
* swarm/storage: fix T.Fatal inside a goroutine

* swarm/network/simulation: fix T.Fatal inside a goroutine

* swarm/network/stream: fix T.Fatal inside a goroutine

* swarm/network/simulation: consistent failures in TestPeerEventsTimeout

* swarm/network/simulation: rename sendRunSignal to triggerSimulationRun
2019-01-09 07:05:55 +01:00
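For context: t.Fatal calls runtime.Goexit, which stops only the goroutine that calls it, so invoking it from a spawned goroutine leaves the test running and skips cleanup. The usual fix, sketched here with an invented doWork helper, is to ship errors back to the test goroutine:

```go
package example

import (
	"errors"
	"testing"
)

// doWork stands in for whatever the spawned goroutine actually does.
func doWork() error { return errors.New("boom") }

func TestSomething(t *testing.T) {
	errc := make(chan error, 1)
	go func() {
		// Wrong here: t.Fatal(err) would exit only this goroutine.
		errc <- doWork()
	}()
	if err := <-errc; err != nil {
		t.Fatal(err) // fail from the test goroutine, where Fatal is safe
	}
}
```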
81f04fa606 github: remove swarm github codeowners (#18412) 2019-01-08 22:50:15 +01:00
ae857e74bf swarm, p2p/protocols: Stream accounting (#18337)
* swarm: completed 1st phase of swap accounting

* swarm, p2p/protocols: added stream pricing

* swarm/network/stream: gofmt simplify stream.go

* swarm: fixed review comments

* swarm: used snapshots for swap tests

* swarm: custom retrieve for swap (less cascaded requests at any one time)

* swarm: addressed PR comments

* swarm: log output formatting

* swarm: removed parallelism in swap tests

* swarm: swap tests simplification

* swarm: removed swap_test.go

* swarm/network/stream: added prefix space for comments

* swarm/network/stream: unit test for prices

* swarm/network/stream: don't hardcode price

* swarm/network/stream: fixed invalid price check
2019-01-08 00:59:00 +01:00
56a3f6c03c swarm/storage/mock/test: fix T.Fatal inside a goroutine (#18399) 2019-01-07 14:32:01 +01:00
356c49fa7e swarm: Shed Index and Uint64Field additions (#18398) 2019-01-07 13:20:11 +01:00
428eabe28d cmd/geth: support dumpconfig optionally saving to file (#18327)
* Changed dumpConfig function to optionally save to file

* Added O_TRUNC flag to file open and cleaned up code
2019-01-07 10:56:50 +02:00
e05d468075 internal/ethapi: ask transaction pool for pending nonce (#15794) 2019-01-07 10:47:11 +02:00
aca588a8e4 accounts/keystore: small code simplification (#18394) 2019-01-07 10:35:44 +02:00
fe03b76ffe A few minor code inspection fixes (#18393)
* swarm/network: fix code inspection problems

- typos
- redundant import alias

* p2p/simulations: fix code inspection problems

- typos
- unused function parameters
- redundant import alias
- code style issue: snake case

* swarm/network: fix unused method parameters inspections
2019-01-06 11:58:57 +01:00
072c95fb74 accounts/keystore: fix comment typo (#18395) 2019-01-05 21:27:57 +01:00
e8ff318205 eth/tracer: extend create2 (#18318)
* eth/tracer: extend create2

* eth/tracers: fix create2-flaw in prestate_tracer

* eth/tracers: fix test

* eth/tracers: update assets
2019-01-05 21:26:50 +01:00
c1c4301121 Merge pull request #18371 from jeremyschlatter/patch-1
core/types: update incorrect comment
2019-01-04 10:14:17 +02:00
391d4cb9b5 Merge pull request #18390 from realdave/remove-sha3-pkg
vendor, crypto, swarm: switch over to upstream sha3 package
2019-01-04 09:51:12 +02:00
3f421aca54 cmd/puppeth: fix panic when exporting aleth genesis w/o precompile addresses (#18344)
* cmd/puppeth: fix panic when exporting aleth genesis w/o precompile addresses

* cmd/puppeth: don't need to handle duplicate set
2019-01-04 09:48:15 +02:00
8ec344bf60 vendor: update the entire golang.org/x/crypto dependency 2019-01-04 09:26:07 +02:00
33d233d3e1 vendor, crypto, swarm: switch over to upstream sha3 package 2019-01-04 09:26:07 +02:00
49975264a8 swarm/docker: Dockerfile for swarm:edge docker image (#18386) 2019-01-03 15:32:58 +01:00
1ea5279d5d vendor: vendor/github.com/mattn/go-isatty - add missing files (reported by mksully22) (#18376) 2019-01-03 13:31:20 +01:00
27913dd226 accounts/abi/bind: add optional block number for calls (#17942) 2019-01-03 12:54:24 +01:00
ddaf48bf84 travis, appveyor: bump to Go 1.11.4 (#18314)
* travis, appveyor: bump to Go 1.11.4

* internal/build: revert comment changes
2019-01-03 11:32:12 +02:00
57a90ad450 build: add LGPL license at update-license.go (#18377)
* add LGPL license at update-license.go

* add empty line
2019-01-03 10:09:04 +02:00
1d284c201d swarm/storage: change Proximity function and add TestProximity test (#18379) 2019-01-03 06:17:59 +01:00
b025053ab0 rpc: Warn the user when the path name is too long for the Unix ipc endpoint (#18330) 2019-01-02 17:33:17 +01:00
9bfd0b60cc accounts/abi: fix case of generated java functions (#18372) 2019-01-02 10:22:10 +01:00
a4af734328 accounts/abi: change unpacking of abi fields w/ underscores (#16513)
* accounts/abi: fix name styling when unpacking abi fields w/ underscores

ABI fields with underscores that are being unpacked
into structs expect structs with following form:

int_one -> Int_one

whereas in abigen the generated structs are camelcased

int_one -> IntOne

so updated the unpack method to expect camelcased structs as well.
2018-12-29 11:32:58 +01:00
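The mapping above is easy to picture with a small helper; this conversion is illustrative of what abigen-style camel-casing does, not the library's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// toCamelCase turns an ABI field name like int_one into IntOne, the struct
// field name that unpacking now also accepts.
func toCamelCase(s string) string {
	parts := strings.Split(s, "_")
	for i, p := range parts {
		if p != "" {
			parts[i] = strings.ToUpper(p[:1]) + p[1:]
		}
	}
	return strings.Join(parts, "")
}

func main() {
	fmt.Println(toCamelCase("int_one")) // IntOne
}
```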
6537ab5dd3 core/types: update incorrect comment 2018-12-28 17:58:03 -08:00
735343430d fix string array unpack bug in accounts/abi (#18364) 2018-12-28 08:43:55 +01:00
9e9fc87e70 swarm: remove unused/dead code (#18351) 2018-12-23 17:31:32 +01:00
335760bf06 accounts/abi: Brings out the msg defined at require statement in SC function (#17328) 2018-12-22 11:39:08 +01:00
7df52e324c accounts/abi: add support for unpacking returned bytesN arrays (#15242) 2018-12-22 11:26:49 +01:00
5e4fd8e7db swarm/network: Revised depth and health for Kademlia (#18354)
* swarm/network: Revised depth calculation with tests

* swarm/network: WIP remove redundant "full" function

* swarm/network: WIP peerpot refactor

* swarm/network: Make test methods submethod of peerpot and embed kad

* swarm/network: Remove commented out code

* swarm/network: Rename health test functions

* swarm/network: Too many n's

* swarm/network: Change hive Healthy func to accept addresses

* swarm/network: Add Healthy proxy method for api in hive

* swarm/network: Skip failing test out of scope for PR

* swarm/network: Skip all tests dependent on SuggestPeers

* swarm/network: Remove commented code and useless kad Pof member

* swarm/network: Remove more unused code, add counter on depth test errors

* swarm/network: WIP Create Healthy assertion tests

* swarm/network: Roll back health related methods receiver change

* swarm/network: Hardwire network minproxbinsize in swarm sim

* swarm/network: Rework Health test to strict

Pending: add a test for saturation, and a test for as many peers as possible up to saturation

* swarm/network: Skip discovery tests (dependent on SuggestPeer)

* swarm/network: Remove useless minProxBinSize in stream

* swarm/network: Remove unnecessary testing.T param to assert health

* swarm/network: Implement t.Helper() in checkHealth

* swarm/network: Rename check back to assert now that we have helper magic

* swarm/network: Revert WaitTillHealthy change (deferred to nxt PR)

* swarm/network: Kademlia tests GotNN => ConnectNN

* swarm/network: Renames and comments

* swarm/network: Add comments
2018-12-22 06:53:30 +01:00
880de230b4 p2p/protocols: accounting metrics rpc (#18336)
* p2p/protocols: accounting metrics rpc added (#847)

* p2p/protocols: accounting api documentation added (#847)

* p2p/protocols: accounting api doc updated (#847)

* p2p/protocols: accounting api doc update (#847)

* p2p/protocols: accounting api doc update (#847)

* p2p/protocols: fix file is not gofmted

* fix lint error

* updated comments after review

* add account balance to rpc

* naming changed after review
2018-12-22 06:04:03 +01:00
81c3dc728f eth/downloader: progress in stateSync not used anymore (#17998) 2018-12-21 23:36:14 +01:00
ca7c13ba8f swarm/pss: forwarding function refactoring (#18353) 2018-12-21 18:04:18 +01:00
e1edfe0689 p2p/simulation: Test snapshot correctness and minimal benchmark (#18287)
* p2p/simulation: WIP minimal snapshot test

* p2p/simulation: Add snapshot create, load and verify to snapshot test

* build: add test tag for tests

* p2p/simulations, build: Revert travis change, build test sym always

* p2p/simulations: Add comments, timeout check on additional events

* p2p/simulation: Add benchmark template for minimal peer protocol init

* p2p/simulations: Remove unused code

* p2p/simulation: Correct timer reset

* p2p/simulations: Put snapshot check events in buffer and call blocking

* p2p/simulations: TestSnapshot fail if Load function returns early

* p2p/simulations: TestSnapshot wait for all connections before returning

* p2p/simulation: Revert to before wait for snap load (5e75594)

* p2p/simulations: add "conns after load" subtest to TestSnapshot

and nudge
2018-12-21 06:22:11 +01:00
27ce4eb78b core: sanitize more TxPoolConfig fields (#17210)
* core: sanitize more TxPoolConfig fields

* core: fix TestTransactionPendingMinimumAllowance
2018-12-20 14:00:58 +01:00
5f251a6448 downloader: fix edgecase where returned index is OOB for downloader (#18335)
* downloader: fix edgecase where returned index is OOB for downloader

* downloader: documentation

Co-Authored-By: holiman <martin@swende.se>
2018-12-20 10:46:08 +01:00
fe86a707d8 swarm/storage: remove unused methods from Chunk interface (#18283) 2018-12-18 15:25:02 +01:00
b01cfce362 swarm/pss: Reduce input vulnerabilities (#18304) 2018-12-18 15:23:32 +01:00
de4265fa02 swarm/network/simulation: comment out unreachable code to avoid vet errors (#18263) 2018-12-18 07:24:59 +01:00
90ea542e9e Update visualized snapshot test (#18286)
* swarm/network/stream: fix visualized_snapshot_sync_sim_test

* swarm/network/stream: updated visualized snapshot-test;data in p2p event

* swarm/network/stream: cleanup visualized snapshot sync test

* swarm/network/stream: re-enable t.Skip for visualized test

* swarm/network/stream: addressed PR comments
2018-12-18 07:20:59 +01:00
472c23a801 p2p/simulation: move connection methods from swarm/network/simulation (#18323) 2018-12-17 12:19:01 +01:00
d322c9d550 swarm/storage/feed: remove unused code (#18324) 2018-12-17 11:32:55 +01:00
3ad73443c7 fix slice unpack bug in accounts/abi (#18321)
* fix slice unpack bug in accounts/abi
2018-12-17 09:50:52 +01:00
7dbb075c07 Change issue labels in bot configs to the new prefixed version (#18311) 2018-12-14 15:06:06 +01:00
aebf9e2fe7 .github: add @gballet as abi codeowner (#18306) 2018-12-14 15:05:22 +01:00
aad3c67a92 p2p/discv5: don't hash findnode target in lookup against table (#18309) 2018-12-14 14:55:51 +01:00
fe26b2f366 core/state: rename 'new' variable (#18301) 2018-12-14 14:55:03 +01:00
88d7d4fed4 Change issue labels in bot configs to the new prefixed version 2018-12-14 14:50:10 +01:00
9940d93a43 Comment error (#18303) 2018-12-14 11:15:31 +01:00
3796751efc rpc: add application/json-rpc as accepted content type, fixes #18293 (#18310) 2018-12-14 11:08:11 +01:00
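A quick way to exercise the newly accepted content type; the endpoint and method below are the customary defaults and are assumed, not taken from the commit:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

func main() {
	body := `{"jsonrpc":"2.0","id":1,"method":"web3_clientVersion","params":[]}`
	// Before this change, content types other than application/json were rejected.
	resp, err := http.Post("http://127.0.0.1:8545", "application/json-rpc",
		strings.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}
```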
e79821cabe accounts/abi: argument type and name were reversed (#17947)
argument type and name were reversed
2018-12-13 15:12:19 +01:00
e57e4571d3 crypto/secp256k1: Fix invalid document link (#18297) 2018-12-13 10:25:13 +01:00
b3be9b7cd8 usbwallet: check returned error when decoding hexstr (#18056)
* usbwallet: check returned error when decoding hexstr

* Update accounts/usbwallet/ledger.go

Co-Authored-By: CoreyLin <514971757@qq.com>

* usbwallet: check hex decode error
2018-12-13 10:21:52 +01:00
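The pattern being enforced is simply to check hex.DecodeString's error instead of discarding it; a minimal illustration:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	hexstr := "0xzz42" // deliberately malformed input
	raw, err := hex.DecodeString(strings.TrimPrefix(hexstr, "0x"))
	if err != nil {
		fmt.Println("invalid hex:", err) // handle it; don't use raw blindly
		return
	}
	fmt.Printf("%x\n", raw)
}
```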
4e6f53ac33 swarm/storage: simplify ChunkValidator interface (#18285) 2018-12-12 16:22:17 +01:00
ebbf3dfafb swarm/shed: add metrics to each shed db (#18277)
* swarm/shed: add metrics to each shed db

* swarm/shed: push metrics prefix up

* swarm/shed: rename prefix to metricsPrefix

* swarm/shed: unexport Meter, remove Mutex for quit channel
2018-12-12 07:51:29 +01:00
1e190a3b1c params, swarm: begin Geth v1.9.0 family, Swarm v0.3.9 cycle 2018-12-11 14:23:57 +02:00
4016 changed files with 751254 additions and 291955 deletions


@ -1,9 +1 @@
**/.git
.git
!.git/HEAD
!.git/refs/heads
**/*_test.go
build/_workspace
build/_bin
tests/testdata
.github

.github/CODEOWNERS vendored

@ -1,35 +1 @@
# Lines starting with '#' are comments.
# Each line is a file pattern followed by one or more owners.
accounts/usbwallet @karalabe
consensus @karalabe
core/ @karalabe @holiman
eth/ @karalabe
les/ @zsfelfoldi
light/ @zsfelfoldi
mobile/ @karalabe
p2p/ @fjl @zsfelfoldi
p2p/simulations @lmars
p2p/protocols @zelig
swarm/api/http @justelad
swarm/bmt @zelig
swarm/dev @lmars
swarm/fuse @jmozah @holisticode
swarm/grafana_dashboards @nonsense
swarm/metrics @nonsense @holisticode
swarm/multihash @nolash
swarm/network/bitvector @zelig @janos
swarm/network/priorityqueue @zelig @janos
swarm/network/simulations @zelig @janos
swarm/network/stream @janos @zelig @holisticode @justelad
swarm/network/stream/intervals @janos
swarm/network/stream/testing @zelig
swarm/pot @zelig
swarm/pss @nolash @zelig @nonsense
swarm/services @zelig
swarm/state @justelad
swarm/storage/encryption @zelig @nagydani
swarm/storage/mock @janos
swarm/storage/feed @nolash @jpeletier
swarm/testutil @lmars
whisper/ @gballet @gluk256
# To be defined


@ -1,40 +1,20 @@
# Contributing
## Contributing
Thank you for considering to help out with the source code! We welcome
contributions from anyone on the internet, and are grateful for even the
smallest of fixes!
Thank you for considering to help out with the source code! We welcome contributions from
anyone on the internet, and are grateful for even the smallest of fixes!
If you'd like to contribute to go-ethereum, please fork, fix, commit and send a
pull request for the maintainers to review and merge into the main code base. If
you wish to submit more complex changes though, please check up with the core
devs first on [our gitter channel](https://gitter.im/ethereum/go-ethereum) to
ensure those changes are in line with the general philosophy of the project
and/or get some early feedback which can make both your efforts much lighter as
well as our review and merge procedures quick and simple.
## Coding guidelines
If you'd like to contribute to Swarm, please fork, fix, commit and send a pull request
for the maintainers to review and merge into the main code base. If you wish to submit more
complex changes though, please check up with the core devs first on [our Swarm gitter channel](https://gitter.im/ethersphere/orange-lounge)
to ensure those changes are in line with the general philosophy of the project and/or get some
early feedback which can make both your efforts much lighter as well as our review and merge
procedures quick and simple.
Please make sure your contributions adhere to our coding guidelines:
* Code must adhere to the official Go
[formatting](https://golang.org/doc/effective_go.html#formatting) guidelines
(i.e. uses [gofmt](https://golang.org/cmd/gofmt/)).
* Code must be documented adhering to the official Go
[commentary](https://golang.org/doc/effective_go.html#commentary) guidelines.
* Code must adhere to the official Go [formatting](https://golang.org/doc/effective_go.html#formatting) guidelines (i.e. uses [gofmt](https://golang.org/cmd/gofmt/)).
* Code must be documented adhering to the official Go [commentary](https://golang.org/doc/effective_go.html#commentary) guidelines.
* Pull requests need to be based on and opened against the `master` branch.
* [Code review guidelines](https://github.com/ethersphere/swarm/blob/master/docs/Code-Review-Guidelines.md).
* Commit messages should be prefixed with the package(s) they modify.
* E.g. "eth, rpc: make trace configs optional"
## Can I have feature X
Before you submit a feature request, please check and make sure that it isn't
possible through some other means. The JavaScript-enabled console is a powerful
feature in the right hands. Please check our
[Wiki page](https://github.com/ethereum/go-ethereum/wiki) for more info
and help.
## Configuration, dependencies, and tests
Please see the [Developers' Guide](https://github.com/ethereum/go-ethereum/wiki/Developers'-Guide)
for more details on configuring your environment, managing project dependencies
and testing procedures.
* E.g. "fuse: ignore default manifest entry"


@ -1,12 +1,8 @@
Hi there,
please note that this is an issue tracker reserved for bug reports and feature requests.
For general questions please use the gitter channel or the Ethereum stack exchange at https://ethereum.stackexchange.com.
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions. It's helpful to search the existing GitHub issues first. It's likely that another user has already reported the issue you're facing, or it's a known issue that we're already aware of. Please note that this is an issue tracker reserved for bug reports and feature requests. For general questions please use the gitter channel https://gitter.im/ethereum/swarm or the ethereum stack exchange at https://ethereum.stackexchange.com. -->
#### System information
Geth version: `geth version`
Swarm version: `swarm version`
OS & Version: Windows/Linux/OSX
Commit hash : (if `develop`)


@ -1,7 +1,7 @@
# Number of days of inactivity before an Issue is closed for lack of response
daysUntilClose: 30
# Label requiring a response
responseRequiredLabel: more-information-needed
responseRequiredLabel: "need:more-information"
# Comment to post when closing an Issue for lack of response. Set to `false` to disable
closeComment: >
This issue has been automatically closed because there has been no response

.github/stale.yml vendored

@ -7,7 +7,7 @@ exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: stale
staleLabel: "status:inactive"
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had

.gitignore vendored

@ -42,6 +42,7 @@ profile.cov
/dashboard/assets/node_modules
/dashboard/assets/stats.json
/dashboard/assets/bundle.js
/dashboard/assets/bundle.js.map
/dashboard/assets/package-lock.json
**/yarn-error.log

.gitmodules vendored

@ -1,3 +0,0 @@
[submodule "tests"]
path = tests/testdata
url = https://github.com/ethereum/tests

.mailmap

@ -1,123 +0,0 @@
Jeffrey Wilcke <jeffrey@ethereum.org>
Jeffrey Wilcke <jeffrey@ethereum.org> <geffobscura@gmail.com>
Jeffrey Wilcke <jeffrey@ethereum.org> <obscuren@obscura.com>
Jeffrey Wilcke <jeffrey@ethereum.org> <obscuren@users.noreply.github.com>
Viktor Trón <viktor.tron@gmail.com>
Joseph Goulden <joegoulden@gmail.com>
Nick Savers <nicksavers@gmail.com>
Maran Hidskes <maran.hidskes@gmail.com>
Taylor Gerring <taylor.gerring@gmail.com>
Taylor Gerring <taylor.gerring@gmail.com> <taylor.gerring@ethereum.org>
Bas van Kervel <bas@ethdev.com>
Bas van Kervel <bas@ethdev.com> <basvankervel@ziggo.nl>
Bas van Kervel <bas@ethdev.com> <basvankervel@gmail.com>
Bas van Kervel <bas@ethdev.com> <bas-vk@users.noreply.github.com>
Sven Ehlert <sven@ethdev.com>
Vitalik Buterin <v@buterin.com>
Marian Oancea <contact@siteshop.ro>
Christoph Jentzsch <jentzsch.software@gmail.com>
Heiko Hees <heiko@heiko.org>
Alex Leverington <alex@ethdev.com>
Alex Leverington <alex@ethdev.com> <subtly@users.noreply.github.com>
Zsolt Felföldi <zsfelfoldi@gmail.com>
Gavin Wood <i@gavwood.com>
Martin Becze <mjbecze@gmail.com>
Martin Becze <mjbecze@gmail.com> <wanderer@users.noreply.github.com>
Dimitry Khokhlov <winsvega@mail.ru>
Roman Mandeleil <roman.mandeleil@gmail.com>
Alec Perseghin <aperseghin@gmail.com>
Alon Muroch <alonmuroch@gmail.com>
Arkadiy Paronyan <arkadiy@ethdev.com>
Jae Kwon <jkwon.work@gmail.com>
Aaron Kumavis <kumavis@users.noreply.github.com>
Nick Dodson <silentcicero@outlook.com>
Jason Carver <jacarver@linkedin.com>
Jason Carver <jacarver@linkedin.com> <ut96caarrs@snkmail.com>
Joseph Chow <ethereum@outlook.com>
Joseph Chow <ethereum@outlook.com> ethers <TODO>
Enrique Fynn <enriquefynn@gmail.com>
Vincent G <caktux@gmail.com>
RJ Catalano <catalanor0220@gmail.com>
RJ Catalano <catalanor0220@gmail.com> <rj@erisindustries.com>
Nchinda Nchinda <nchinda2@gmail.com>
Aron Fischer <github@aron.guru> <homotopycolimit@users.noreply.github.com>
Vlad Gluhovsky <gluk256@users.noreply.github.com>
Ville Sundell <github@solarius.fi>
Elliot Shepherd <elliot@identitii.com>
Yohann Léon <sybiload@gmail.com>
Gregg Dourgarian <greggd@tempworks.com>
Casey Detrio <cdetrio@gmail.com>
Jens Agerberg <github@agerberg.me>
Nick Johnson <arachnid@notdot.net>
Henning Diedrich <hd@eonblast.com>
Henning Diedrich <hd@eonblast.com> Drake Burroughs <wildfyre@hotmail.com>
Felix Lange <fjl@twurst.com>
Felix Lange <fjl@twurst.com> <fjl@users.noreply.github.com>
Максим Чусовлянов <mchusovlianov@gmail.com>
Louis Holbrook <dev@holbrook.no>
Louis Holbrook <dev@holbrook.no> <nolash@users.noreply.github.com>
Thomas Bocek <tom@tomp2p.net>
Victor Tran <vu.tran54@gmail.com>
Justin Drake <drakefjustin@gmail.com>
Frank Wang <eternnoir@gmail.com>
Gary Rong <garyrong0905@gmail.com>
Guillaume Nicolas <guin56@gmail.com>
Sorin Neacsu <sorin.neacsu@gmail.com>
Sorin Neacsu <sorin.neacsu@gmail.com> <sorin@users.noreply.github.com>
Valentin Wüstholz <wuestholz@gmail.com>
Valentin Wüstholz <wuestholz@gmail.com> <wuestholz@users.noreply.github.com>
Armin Braun <me@obrown.io>
Ernesto del Toro <ernesto.deltoro@gmail.com>
Ernesto del Toro <ernesto.deltoro@gmail.com> <ernestodeltoro@users.noreply.github.com>

.readthedocs.yml Normal file

@ -0,0 +1,14 @@
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

version: 2

sphinx:
  configuration: docs/swarm-guide/contents/conf.py
  builder: html

python:
  version: 3.7
  install:
    - requirements: docs/swarm-guide/requirements.txt


@ -1,20 +1,12 @@
language: go
go_import_path: github.com/ethereum/go-ethereum
go_import_path: github.com/ethersphere/swarm
sudo: false
branches:
only:
- master
- /v(\d+\.)(\d+\.)(\d)/
matrix:
include:
- os: linux
dist: trusty
sudo: required
go: 1.10.x
script:
- sudo modprobe fuse
- sudo chmod 666 /dev/fuse
- sudo chown root:$USER /etc/fuse.conf
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
# These are the latest Go versions.
- os: linux
dist: trusty
sudo: required
@ -26,8 +18,20 @@ matrix:
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
# These are the latest Go versions.
- os: linux
dist: trusty
sudo: required
go: 1.12.x
script:
- sudo modprobe fuse
- sudo chmod 666 /dev/fuse
- sudo chown root:$USER /etc/fuse.conf
- go run build/ci.go install
- go run build/ci.go test -coverage $TEST_PACKAGES
- os: osx
go: 1.11.x
go: 1.12.x
script:
- echo "Increase the maximum number of open file descriptors on macOS"
- NOFILE=20480
@ -44,11 +48,9 @@ matrix:
# This builder only tests code linters on latest version of Go
- os: linux
dist: trusty
go: 1.11.x
go: 1.12.x
env:
- lint
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go lint
@ -56,11 +58,9 @@ matrix:
- if: type = push
os: linux
dist: trusty
go: 1.11.x
go: 1.12.x
env:
- ubuntu-ppa
git:
submodules: false # avoid cloning ethereum/tests
addons:
apt:
packages:
@ -68,19 +68,20 @@ matrix:
- debhelper
- dput
- fakeroot
- python-bzrlib
- python-paramiko
script:
- go run build/ci.go debsrc -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>" -upload ppa:ethereum/ethereum
- echo '|1|7SiYPr9xl3uctzovOTj4gMwAC1M=|t6ReES75Bo/PxlOPJ6/GsGbTrM0= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0aKz5UTUndYgIGG7dQBV+HaeuEZJ2xPHo2DS2iSKvUL4xNMSAY4UguNW+pX56nAQmZKIZZ8MaEvSj6zMEDiq6HFfn5JcTlM80UwlnyKe8B8p7Nk06PPQLrnmQt5fh0HmEcZx+JU9TZsfCHPnX7MNz4ELfZE6cFsclClrKim3BHUIGq//t93DllB+h4O9LHjEUsQ1Sr63irDLSutkLJD6RXchjROXkNirlcNVHH/jwLWR5RcYilNX7S5bIkK8NlWPjsn/8Ua5O7I9/YoE97PpO6i73DTGLh5H9JN/SITwCKBkgSDWUt61uPK3Y11Gty7o2lWsBjhBUm2Y38CBsoGmBw==' >> ~/.ssh/known_hosts
- go run build/ci.go debsrc -upload ethereum/ethereum -sftp-user ethswarm -signer "Ethereum Swarm Linux Builder <swarm@ethereum.org>"
# This builder does the Linux Azure uploads
- if: type = push
os: linux
dist: trusty
sudo: required
go: 1.11.x
go: 1.12.x
env:
- azure-linux
git:
submodules: false # avoid cloning ethereum/tests
addons:
apt:
packages:
@ -88,22 +89,22 @@ matrix:
script:
# Build for the primary platforms that Trusty can manage
- go run build/ci.go install
- go run build/ci.go archive -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -type tar -signer LINUX_SIGNING_KEY -upload ethswarm/builds
- go run build/ci.go install -arch 386
- go run build/ci.go archive -arch 386 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch 386 -type tar -signer LINUX_SIGNING_KEY -upload ethswarm/builds
# Switch over GCC to cross compilation (breaks 386, hence why do it here only)
- sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install gcc-arm-linux-gnueabi libc6-dev-armel-cross gcc-arm-linux-gnueabihf libc6-dev-armhf-cross gcc-aarch64-linux-gnu libc6-dev-arm64-cross
- sudo ln -s /usr/include/asm-generic /usr/include/asm
- GOARM=5 go run build/ci.go install -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=5 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- GOARM=5 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload ethswarm/builds
- GOARM=6 go run build/ci.go install -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=6 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- GOARM=6 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload ethswarm/builds
- GOARM=7 go run build/ci.go install -arch arm -cc arm-linux-gnueabihf-gcc
- GOARM=7 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- GOARM=7 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload ethswarm/builds
- go run build/ci.go install -arch arm64 -cc aarch64-linux-gnu-gcc
- go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -upload ethswarm/builds
# This builder does the Linux Azure MIPS xgo uploads
- if: type = push
@ -111,103 +112,43 @@ matrix:
dist: trusty
services:
- docker
go: 1.11.x
go: 1.12.x
env:
- azure-linux-mips
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go xgo --alltools -- --targets=linux/mips --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mips; do mv -f "${bin}" "${bin/-linux-mips/}"; done
- go run build/ci.go archive -arch mips -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch mips -type tar -signer LINUX_SIGNING_KEY -upload ethswarm/builds
- go run build/ci.go xgo --alltools -- --targets=linux/mipsle --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mipsle; do mv -f "${bin}" "${bin/-linux-mipsle/}"; done
- go run build/ci.go archive -arch mipsle -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch mipsle -type tar -signer LINUX_SIGNING_KEY -upload ethswarm/builds
- go run build/ci.go xgo --alltools -- --targets=linux/mips64 --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mips64; do mv -f "${bin}" "${bin/-linux-mips64/}"; done
- go run build/ci.go archive -arch mips64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -arch mips64 -type tar -signer LINUX_SIGNING_KEY -upload ethswarm/builds
- go run build/ci.go xgo --alltools -- --targets=linux/mips64le --ldflags '-extldflags "-static"' -v
- for bin in build/bin/*-linux-mips64le; do mv -f "${bin}" "${bin/-linux-mips64le/}"; done
- go run build/ci.go archive -arch mips64le -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
# This builder does the Android Maven and Azure uploads
- if: type = push
os: linux
dist: trusty
addons:
apt:
packages:
- oracle-java8-installer
- oracle-java8-set-default
language: android
android:
components:
- platform-tools
- tools
- android-15
- android-19
- android-24
env:
- azure-android
- maven-android
git:
submodules: false # avoid cloning ethereum/tests
before_install:
- curl https://storage.googleapis.com/golang/go1.11.2.linux-amd64.tar.gz | tar -xz
- export PATH=`pwd`/go/bin:$PATH
- export GOROOT=`pwd`/go
- export GOPATH=$HOME/go
script:
# Build the Android archive and upload it to Maven Central and Azure
- curl https://dl.google.com/android/repository/android-ndk-r17b-linux-x86_64.zip -o android-ndk-r17b.zip
- unzip -q android-ndk-r17b.zip && rm android-ndk-r17b.zip
- mv android-ndk-r17b $HOME
- export ANDROID_NDK=$HOME/android-ndk-r17b
- mkdir -p $GOPATH/src/github.com/ethereum
- ln -s `pwd` $GOPATH/src/github.com/ethereum
- go run build/ci.go aar -signer ANDROID_SIGNING_KEY -deploy https://oss.sonatype.org -upload gethstore/builds
- go run build/ci.go archive -arch mips64le -type tar -signer LINUX_SIGNING_KEY -upload ethswarm/builds
# This builder does the OSX Azure, iOS CocoaPods and iOS Azure uploads
- if: type = push
os: osx
go: 1.11.x
go: 1.12.x
env:
- azure-osx
- azure-ios
- cocoapods-ios
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go install
- go run build/ci.go archive -type tar -signer OSX_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go archive -type tar -signer OSX_SIGNING_KEY -upload ethswarm/builds
# Build the iOS framework and upload it to CocoaPods and Azure
- gem uninstall cocoapods -a -x
- gem install cocoapods
- mv ~/.cocoapods/repos/master ~/.cocoapods/repos/master.bak
- sed -i '.bak' 's/repo.join/!repo.join/g' $(dirname `gem which cocoapods`)/cocoapods/sources_manager.rb
- if [ "$TRAVIS_PULL_REQUEST" = "false" ]; then git clone --depth=1 https://github.com/CocoaPods/Specs.git ~/.cocoapods/repos/master && pod setup --verbose; fi
- xctool -version
- xcrun simctl list
# Workaround for https://github.com/golang/go/issues/23749
- export CGO_CFLAGS_ALLOW='-fmodules|-fblocks|-fobjc-arc'
- go run build/ci.go xcode -signer IOS_SIGNING_KEY -deploy trunk -upload gethstore/builds
# This builder does the Azure archive purges to avoid accumulating junk
- if: type = cron
os: linux
dist: trusty
go: 1.11.x
go: 1.12.x
env:
- azure-purge
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go purge -store gethstore/builds -days 14
- go run build/ci.go purge -store ethswarm/builds -days 14

AUTHORS

@ -1,174 +1,35 @@
# This is the official list of go-ethereum authors for copyright purposes.
# Core team members
Afri Schoedon <5chdn@users.noreply.github.com>
Agustin Armellini Fischer <armellini13@gmail.com>
Airead <fgh1987168@gmail.com>
Alan Chen <alanchchen@users.noreply.github.com>
Alejandro Isaza <alejandro.isaza@gmail.com>
Ales Katona <ales@coinbase.com>
Alex Leverington <alex@ethdev.com>
Alex Wu <wuyiding@gmail.com>
Alexandre Van de Sande <alex.vandesande@ethdev.com>
Ali Hajimirza <Ali92hm@users.noreply.github.com>
Anton Evangelatov <anton.evangelatov@gmail.com>
Arba Sasmoyo <arba.sasmoyo@gmail.com>
Armani Ferrante <armaniferrante@berkeley.edu>
Armin Braun <me@obrown.io>
Aron Fischer <github@aron.guru>
Bas van Kervel <bas@ethdev.com>
Benjamin Brent <benjamin@benjaminbrent.com>
Benoit Verkindt <benoit.verkindt@gmail.com>
Bo <bohende@gmail.com>
Bo Ye <boy.e.computer.1982@outlook.com>
Bob Glickstein <bobg@users.noreply.github.com>
Brian Schroeder <bts@gmail.com>
Casey Detrio <cdetrio@gmail.com>
Chase Wright <mysticryuujin@gmail.com>
Christoph Jentzsch <jentzsch.software@gmail.com>
Daniel A. Nagy <nagy.da@gmail.com>
Daniel Sloof <goapsychadelic@gmail.com>
Darrel Herbst <dherbst@gmail.com>
Dave Appleton <calistralabs@gmail.com>
Diego Siqueira <DiSiqueira@users.noreply.github.com>
Dmitry Shulyak <yashulyak@gmail.com>
Egon Elbre <egonelbre@gmail.com>
Elias Naur <elias.naur@gmail.com>
Elliot Shepherd <elliot@identitii.com>
Enrique Fynn <enriquefynn@gmail.com>
Ernesto del Toro <ernesto.deltoro@gmail.com>
Ethan Buchman <ethan@coinculture.info>
Eugene Valeyev <evgen.povt@gmail.com>
Evangelos Pappas <epappas@evalonlabs.com>
Evgeny Danilenko <6655321@bk.ru>
Fabian Vogelsteller <fabian@frozeman.de>
Fabio Barone <fabio.barone.co@gmail.com>
Fabio Berger <fabioberger1991@gmail.com>
FaceHo <facehoshi@gmail.com>
Felix Lange <fjl@twurst.com>
Fiisio <liangcszzu@163.com>
Frank Wang <eternnoir@gmail.com>
Furkan KAMACI <furkankamaci@gmail.com>
Gary Rong <garyrong0905@gmail.com>
George Ornbo <george@shapeshed.com>
Gregg Dourgarian <greggd@tempworks.com>
Guillaume Ballet <gballet@gmail.com>
Guillaume Nicolas <guin56@gmail.com>
Gustav Simonsson <gustav.simonsson@gmail.com>
Hao Bryan Cheng <haobcheng@gmail.com>
Henning Diedrich <hd@eonblast.com>
Isidoro Ghezzi <isidoro.ghezzi@icloud.com>
Ivan Daniluk <ivan.daniluk@gmail.com>
Jae Kwon <jkwon.work@gmail.com>
Jamie Pitts <james.pitts@gmail.com>
Janoš Guljaš <janos@users.noreply.github.com>
Jason Carver <jacarver@linkedin.com>
Jay Guo <guojiannan1101@gmail.com>
Jeff R. Allen <jra@nella.org>
Jeffrey Wilcke <jeffrey@ethereum.org>
Jens Agerberg <github@agerberg.me>
Jia Chenhui <jiachenhui1989@gmail.com>
Jim McDonald <Jim@mcdee.net>
Joel Burget <joelburget@gmail.com>
Jonathan Brown <jbrown@bluedroplet.com>
Joseph Chow <ethereum@outlook.com>
Justin Clark-Casey <justincc@justincc.org>
Justin Drake <drakefjustin@gmail.com>
Kenji Siu <kenji@isuntv.com>
Kobi Gurkan <kobigurk@gmail.com>
Konrad Feldmeier <konrad@brainbot.com>
Kurkó Mihály <kurkomisi@users.noreply.github.com>
Kyuntae Ethan Kim <ethan.kyuntae.kim@gmail.com>
Lefteris Karapetsas <lefteris@refu.co>
Leif Jurvetson <leijurv@gmail.com>
Leo Shklovskii <leo@thermopylae.net>
Lewis Marshall <lewis@lmars.net>
Lio李欧 <lionello@users.noreply.github.com>
Louis Holbrook <dev@holbrook.no>
Luca Zeug <luclu@users.noreply.github.com>
Magicking <s@6120.eu>
Maran Hidskes <maran.hidskes@gmail.com>
Marek Kotewicz <marek.kotewicz@gmail.com>
Mark <markya0616@gmail.com>
Martin Holst Swende <martin@swende.se>
Matthew Di Ferrante <mattdf@users.noreply.github.com>
Matthew Wampler-Doty <matthew.wampler.doty@gmail.com>
Maximilian Meister <mmeister@suse.de>
Micah Zoltu <micah@zoltu.net>
Michael Ruminer <michael.ruminer+github@gmail.com>
Miguel Mota <miguelmota2@gmail.com>
Miya Chen <miyatlchen@gmail.com>
Nchinda Nchinda <nchinda2@gmail.com>
Nick Dodson <silentcicero@outlook.com>
Nick Johnson <arachnid@notdot.net>
Nicolas Guillaume <gunicolas@sqli.com>
Noman <noman@noman.land>
Oli Bye <olibye@users.noreply.github.com>
Paul Litvak <litvakpol@012.net.il>
Paulo L F Casaretto <pcasaretto@gmail.com>
Paweł Bylica <chfast@gmail.com>
Peter Pratscher <pratscher@gmail.com>
Petr Mikusek <petr@mikusek.info>
Péter Szilágyi <peterke@gmail.com>
RJ Catalano <catalanor0220@gmail.com>
Ramesh Nair <ram@hiddentao.com>
Ricardo Catalinas Jiménez <r@untroubled.be>
Ricardo Domingos <ricardohsd@gmail.com>
Richard Hart <richardhart92@gmail.com>
Rob <robert@rojotek.com>
Robert Zaremba <robert.zaremba@scale-it.pl>
Russ Cox <rsc@golang.org>
Rémy Roy <remyroy@remyroy.com>
S. Matthew English <s-matthew-english@users.noreply.github.com>
Shintaro Kaneko <kaneshin0120@gmail.com>
Sorin Neacsu <sorin.neacsu@gmail.com>
Stein Dekker <dekker.stein@gmail.com>
Steve Waldman <swaldman@mchange.com>
Steven Roose <stevenroose@gmail.com>
Taylor Gerring <taylor.gerring@gmail.com>
Thomas Bocek <tom@tomp2p.net>
Ti Zhou <tizhou1986@gmail.com>
Tosh Camille <tochecamille@gmail.com>
Valentin Wüstholz <wuestholz@gmail.com>
Victor Farazdagi <simple.square@gmail.com>
Victor Tran <vu.tran54@gmail.com>
Viktor Trón <viktor.tron@gmail.com>
Ville Sundell <github@solarius.fi>
Vincent G <caktux@gmail.com>
Vitalik Buterin <v@buterin.com>
Vitaly V <vvelikodny@gmail.com>
Vivek Anand <vivekanand1101@users.noreply.github.com>
Vlad Gluhovsky <gluk256@users.noreply.github.com>
Yohann Léon <sybiload@gmail.com>
Yoichi Hirai <i@yoichihirai.com>
Yondon Fu <yondon.fu@gmail.com>
Zach <zach.ramsay@gmail.com>
Zahoor Mohamed <zahoor@zahoor.in>
Zoe Nolan <github@zoenolan.org>
Zsolt Felföldi <zsfelfoldi@gmail.com>
am2rican5 <am2rican5@gmail.com>
ayeowch <ayeowch@gmail.com>
b00ris <b00ris@mail.ru>
bailantaotao <Edwin@maicoin.com>
baizhenxuan <nkbai@163.com>
bloonfield <bloonfield@163.com>
changhong <changhong.yu@shanbay.com>
evgk <evgeniy.kamyshev@gmail.com>
ferhat elmas <elmas.ferhat@gmail.com>
holisticode <holistic.computing@gmail.com>
jtakalai <juuso.takalainen@streamr.com>
ken10100147 <sunhongping@kanjian.com>
ligi <ligi@ligi.de>
mark.lin <mark@maicoin.com>
necaremus <necaremus@gmail.com>
njupt-moon <1015041018@njupt.edu.cn>
nkbai <nkbai@163.com>
rhaps107 <dod-source@yandex.ru>
slumber1122 <slumber1122@gmail.com>
sunxiaojun2014 <sunxiaojun-xy@360.cn>
terasum <terasum@163.com>
tsarpaul <Litvakpol@012.net.il>
xiekeyang <xiekeyang@users.noreply.github.com>
yoza <yoza.is12s@gmail.com>
ΞTHΞЯSPHΞЯΞ <{viktor.tron,nagydani,zsfelfoldi}@gmail.com>
Максим Чусовлянов <mchusovlianov@gmail.com>
Ralph Caraveo <deckarep@gmail.com>
Viktor Trón - @zelig
Louis Holbrook - @nolash
Lewis Marshall - @lmars
Anton Evangelatov - @nonsense
Janoš Guljaš - @janos
Balint Gabor - @gbalint
Elad Nachmias - @justelad
Daniel A. Nagy - @nagydani
Aron Fischer - @homotopycolimit
Fabio Barone - @holisticode
Zahoor Mohamed - @jmozah
Zsolt Felföldi - @zsfelfoldi
# External contributors
Kiel Barry
Gary Rong
Jared Wasinger
Leon Stanko
Javier Peletier [epiclabs.io]
Bartek Borkowski [tungsten-labs.com]
Shane Howley [mainframe.com]
Doug Leonard [mainframe.com]
Ivan Daniluk [status.im]
Felix Lange [EF]
Martin Holst Swende [EF]
Guillaume Ballet [EF]
ligi [EF]
Christopher Dro [blick-labs.com]
Sergii Bomko [ledgerleopard.com]
Domino Valdano
Rafael Matias
Coogan Brennan

CHANGELOG.md
View File

@@ -0,0 +1,83 @@
## v0.4.3 (July 25, 2019)
### Notes
- **Docker users:** The `$PASSWORD` and `$DATADIR` environment variables are no longer supported as of this release. From now on you should mount the password or data directories as a volume. For example:
```bash
$ docker run -it -v $PWD/hostdata:/data \
-v $PWD/password:/password \
ethersphere/swarm:0.4.3 \
--datadir /data \
--password /password
```
### Bug fixes and improvements
* [#1586](https://github.com/ethersphere/swarm/pull/1586): network: structured output for kademlia table
* [#1582](https://github.com/ethersphere/swarm/pull/1582): client: add bzz client, update smoke tests
* [#1578](https://github.com/ethersphere/swarm/pull/1578): swarm-smoke: fix check max prox hosts for pull/push sync modes
* [#1557](https://github.com/ethersphere/swarm/pull/1557): cmd/swarm: allow using a network interface by name for nat purposes
* [#1534](https://github.com/ethersphere/swarm/pull/1534): api, network: count chunk deliveries per peer
* [#1537](https://github.com/ethersphere/swarm/pull/1537): swarm: fix bzz_info.port when using dynamic port allocation
* [#1531](https://github.com/ethersphere/swarm/pull/1531): cmd/swarm: make bzzaccount flag optional and add bzzkeyhex flag
* [#1536](https://github.com/ethersphere/swarm/pull/1536): cmd/swarm: use only one function to parse flags
* [#1530](https://github.com/ethersphere/swarm/pull/1530): network/bitvector: Multibit set/unset + string rep
* [#1555](https://github.com/ethersphere/swarm/pull/1555): PoC: Network simulation framework
## v0.4.2 (June 28, 2019)
### Notes
This release is not backward compatible with the previous versions of Swarm due to changes to the wire protocol of the Retrieve Request messages. Please update your nodes.
### Bug fixes and improvements
* [#1503](https://github.com/ethersphere/swarm/pull/1503): network/simulation: add ExecAdapter capability to swarm simulations
* [#1495](https://github.com/ethersphere/swarm/pull/1495): build: enable ubuntu ppa disco (19.04) builds
* [#1395](https://github.com/ethersphere/swarm/pull/1395): swarm/storage: support for uploading 100gb files
* [#1344](https://github.com/ethersphere/swarm/pull/1344): swarm/network, swarm/storage: simplification of fetchers
* [#1488](https://github.com/ethersphere/swarm/pull/1488): docker: include git commit hash in swarm version
## v0.4.1 (June 13, 2019)
### Improvements
* [#1465](https://github.com/ethersphere/swarm/pull/1465): network: bump proto versions due to change in OfferedHashesMsg
* [#1428](https://github.com/ethersphere/swarm/pull/1428): swarm-smoke: add debug flag
* [#1422](https://github.com/ethersphere/swarm/pull/1422): swarm/network/stream: remove dead code
* [#1463](https://github.com/ethersphere/swarm/pull/1463): docker: create new dockerfiles that are context aware
* [#1466](https://github.com/ethersphere/swarm/pull/1466): changelog for releases
### Bug fixes
* [#1460](https://github.com/ethersphere/swarm/pull/1460): storage: fix alignment panics on 32 bit arch
* [#1422](https://github.com/ethersphere/swarm/pull/1422), [#19650](https://github.com/ethereum/go-ethereum/pull/19650): swarm/network/stream: remove dead code
* [#1420](https://github.com/ethersphere/swarm/pull/1420): swarm, cmd: fix migration link, change loglevel severity
* [#19594](https://github.com/ethereum/go-ethereum/pull/19594): swarm/api/http: fix bzz-hash to return ens resolved hash directly
* [#19599](https://github.com/ethereum/go-ethereum/pull/19599): swarm/storage: fix SubscribePull to not skip chunks
### Notes
* Swarm has split the codebase ([go-ethereum#19661](https://github.com/ethereum/go-ethereum/pull/19661), [#1405](https://github.com/ethersphere/swarm/pull/1405)) from [ethereum/go-ethereum](https://github.com/ethereum/go-ethereum). The code is now under [ethersphere/swarm](https://github.com/ethersphere/swarm)
* New docker images (>=0.4.0) can now be found under https://hub.docker.com/r/ethersphere/swarm
## v0.4.0 (May 17, 2019)
### Changes
* Implemented parallel feed lookups within Swarm Feeds
* Updated syncing protocol subscription algorithm
* Implemented EIP-1577 - Multiaddr support for ENS
* Improved LocalStore implementation
* Added support for syncing tags which provide the ability to measure how long it will take for an uploaded file to sync to the network
* Fixed data race bugs within PSS
* Improved end-to-end integration tests
* Various performance improvements and bug fixes
* Improved instrumentation - metrics and OpenTracing traces
### Notes
This release is not backward compatible with the previous versions of Swarm due to the new LocalStore implementation. If you wish to keep your data, you should run a data migration prior to running this version.
BZZ network ID has been updated to 4.
Swarm v0.4.0 introduces major changes to the existing codebase. Among other things, the storage layer has been rewritten to be more modular and flexible in a manner that will accommodate our future needs. Since Swarm at this point does not provide any storage guarantees, we have decided not to impose any migrations on the nodes that we maintain as part of the public test network, nor on our users. We have provided a [manual](https://github.com/ethersphere/swarm/blob/master/docs/Migration-v0.3-to-v0.4.md) for those of you who are running private deployments and would like to migrate your data to the new local storage schema.

View File

@@ -1,16 +1,13 @@
# Build Geth in a stock Go builder container
FROM golang:1.11-alpine as builder
FROM golang:1.12-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git
ADD . /swarm
WORKDIR /swarm
RUN make swarm
RUN apk add --no-cache make gcc musl-dev linux-headers
FROM ethereum/client-go:v1.8.27 as geth
ADD . /go-ethereum
RUN cd /go-ethereum && make geth
# Pull Geth into a second stage deploy alpine container
FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder /go-ethereum/build/bin/geth /usr/local/bin/
EXPOSE 8545 8546 30303 30303/udp
ENTRYPOINT ["geth"]
FROM alpine:3.9
RUN apk --no-cache add ca-certificates && update-ca-certificates
COPY --from=builder /swarm/build/bin/swarm /usr/local/bin/
COPY --from=geth /usr/local/bin/geth /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/swarm"]

View File

@@ -1,15 +1,14 @@
# Build Geth in a stock Go builder container
FROM golang:1.11-alpine as builder
FROM golang:1.12-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git
ADD . /swarm
WORKDIR /swarm
RUN make alltools
RUN apk add --no-cache make gcc musl-dev linux-headers
FROM ethereum/client-go:v1.8.27 as geth
ADD . /go-ethereum
RUN cd /go-ethereum && make all
# Pull all binaries into a second stage deploy alpine container
FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder /go-ethereum/build/bin/* /usr/local/bin/
EXPOSE 8545 8546 30303 30303/udp
FROM alpine:3.9
RUN apk --no-cache add ca-certificates
COPY --from=builder /swarm/build/bin/* /usr/local/bin/
COPY --from=geth /usr/local/bin/geth /usr/local/bin/
COPY docker/run-smoke.sh /run-smoke.sh
ENTRYPOINT ["/usr/local/bin/swarm"]

Makefile
View File

@@ -2,152 +2,12 @@
# with Go source code. If you know what GOPATH is then you probably
# don't need to bother with make.
.PHONY: geth android ios geth-cross swarm evm all test clean
.PHONY: geth-linux geth-linux-386 geth-linux-amd64 geth-linux-mips64 geth-linux-mips64le
.PHONY: geth-linux-arm geth-linux-arm-5 geth-linux-arm-6 geth-linux-arm-7 geth-linux-arm64
.PHONY: geth-darwin geth-darwin-386 geth-darwin-amd64
.PHONY: geth-windows geth-windows-386 geth-windows-amd64
GOBIN = $(shell pwd)/build/bin
GO ?= latest
geth:
build/env.sh go run build/ci.go install ./cmd/geth
@echo "Done building."
@echo "Run \"$(GOBIN)/geth\" to launch geth."
swarm:
build/env.sh go run build/ci.go install ./cmd/swarm
@echo "Done building."
@echo "Run \"$(GOBIN)/swarm\" to launch swarm."
all:
build/env.sh go run build/ci.go install
android:
build/env.sh go run build/ci.go aar --local
@echo "Done building."
@echo "Import \"$(GOBIN)/geth.aar\" to use the library."
ios:
build/env.sh go run build/ci.go xcode --local
@echo "Done building."
@echo "Import \"$(GOBIN)/Geth.framework\" to use the library."
test: all
build/env.sh go run build/ci.go test
lint: ## Run linters.
build/env.sh go run build/ci.go lint
clean:
./build/clean_go_build_cache.sh
rm -fr build/_workspace/pkg/ $(GOBIN)/*
# The devtools target installs tools required for 'go generate'.
# You need to put $GOBIN (or $GOPATH/bin) in your PATH to use 'go generate'.
devtools:
env GOBIN= go get -u golang.org/x/tools/cmd/stringer
env GOBIN= go get -u github.com/kevinburke/go-bindata/go-bindata
env GOBIN= go get -u github.com/fjl/gencodec
env GOBIN= go get -u github.com/golang/protobuf/protoc-gen-go
env GOBIN= go install ./cmd/abigen
@type "npm" 2> /dev/null || echo 'Please install node.js and npm'
@type "solc" 2> /dev/null || echo 'Please install solc'
@type "protoc" 2> /dev/null || echo 'Please install protoc'
swarm-devtools:
env GOBIN= go install ./cmd/swarm/mimegen
# Cross Compilation Targets (xgo)
geth-cross: geth-linux geth-darwin geth-windows geth-android geth-ios
@echo "Full cross compilation done:"
@ls -ld $(GOBIN)/geth-*
geth-linux: geth-linux-386 geth-linux-amd64 geth-linux-arm geth-linux-mips64 geth-linux-mips64le
@echo "Linux cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-*
geth-linux-386:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/386 -v ./cmd/geth
@echo "Linux 386 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep 386
geth-linux-amd64:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/amd64 -v ./cmd/geth
@echo "Linux amd64 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep amd64
geth-linux-arm: geth-linux-arm-5 geth-linux-arm-6 geth-linux-arm-7 geth-linux-arm64
@echo "Linux ARM cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep arm
geth-linux-arm-5:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/arm-5 -v ./cmd/geth
@echo "Linux ARMv5 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep arm-5
geth-linux-arm-6:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/arm-6 -v ./cmd/geth
@echo "Linux ARMv6 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep arm-6
geth-linux-arm-7:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/arm-7 -v ./cmd/geth
@echo "Linux ARMv7 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep arm-7
geth-linux-arm64:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/arm64 -v ./cmd/geth
@echo "Linux ARM64 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep arm64
geth-linux-mips:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/mips --ldflags '-extldflags "-static"' -v ./cmd/geth
@echo "Linux MIPS cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep mips
geth-linux-mipsle:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/mipsle --ldflags '-extldflags "-static"' -v ./cmd/geth
@echo "Linux MIPSle cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep mipsle
geth-linux-mips64:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/mips64 --ldflags '-extldflags "-static"' -v ./cmd/geth
@echo "Linux MIPS64 cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep mips64
geth-linux-mips64le:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=linux/mips64le --ldflags '-extldflags "-static"' -v ./cmd/geth
@echo "Linux MIPS64le cross compilation done:"
@ls -ld $(GOBIN)/geth-linux-* | grep mips64le
geth-darwin: geth-darwin-386 geth-darwin-amd64
@echo "Darwin cross compilation done:"
@ls -ld $(GOBIN)/geth-darwin-*
geth-darwin-386:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=darwin/386 -v ./cmd/geth
@echo "Darwin 386 cross compilation done:"
@ls -ld $(GOBIN)/geth-darwin-* | grep 386
geth-darwin-amd64:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=darwin/amd64 -v ./cmd/geth
@echo "Darwin amd64 cross compilation done:"
@ls -ld $(GOBIN)/geth-darwin-* | grep amd64
geth-windows: geth-windows-386 geth-windows-amd64
@echo "Windows cross compilation done:"
@ls -ld $(GOBIN)/geth-windows-*
geth-windows-386:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=windows/386 -v ./cmd/geth
@echo "Windows 386 cross compilation done:"
@ls -ld $(GOBIN)/geth-windows-* | grep 386
geth-windows-amd64:
build/env.sh go run build/ci.go xgo -- --go=$(GO) --targets=windows/amd64 -v ./cmd/geth
@echo "Windows amd64 cross compilation done:"
@ls -ld $(GOBIN)/geth-windows-* | grep amd64
alltools:
build/env.sh go run build/ci.go install ./cmd/...

README.md
View File

@@ -1,290 +1,314 @@
## Go Ethereum
## Swarm <!-- omit in toc -->
Official golang implementation of the Ethereum protocol.
[https://swarm.ethereum.org](https://swarm.ethereum.org)
[![API Reference](
https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667
)](https://godoc.org/github.com/ethereum/go-ethereum)
[![Go Report Card](https://goreportcard.com/badge/github.com/ethereum/go-ethereum)](https://goreportcard.com/report/github.com/ethereum/go-ethereum)
[![Travis](https://travis-ci.org/ethereum/go-ethereum.svg?branch=master)](https://travis-ci.org/ethereum/go-ethereum)
[![Discord](https://img.shields.io/badge/discord-join%20chat-blue.svg)](https://discord.gg/nthXNEv)
Swarm is a distributed storage platform and content distribution service, a native base layer service of the ethereum web3 stack. The primary objective of Swarm is to provide a decentralized and redundant store for dapp code and data as well as blockchain and state data. Swarm also sets out to provide various base layer services for web3, including node-to-node messaging, media streaming, decentralised database services and scalable state-channel infrastructure for decentralised service economies.
Automated builds are available for stable releases and the unstable master branch.
Binary archives are published at https://geth.ethereum.org/downloads/.
[![Travis](https://travis-ci.org/ethersphere/swarm.svg?branch=master)](https://travis-ci.org/ethersphere/swarm)
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/ethersphere/orange-lounge?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
## Table of Contents <!-- omit in toc -->
- [Building the source](#Building-the-source)
- [Running Swarm](#Running-Swarm)
- [Verifying that your local Swarm node is running](#Verifying-that-your-local-Swarm-node-is-running)
- [Ethereum Name Service resolution](#Ethereum-Name-Service-resolution)
- [Documentation](#Documentation)
- [Docker](#Docker)
- [Docker tags](#Docker-tags)
- [Swarm command line arguments](#Swarm-command-line-arguments)
- [Developers Guide](#Developers-Guide)
- [Go Environment](#Go-Environment)
- [Vendored Dependencies](#Vendored-Dependencies)
- [Testing](#Testing)
- [Profiling Swarm](#Profiling-Swarm)
- [Metrics and Instrumentation in Swarm](#Metrics-and-Instrumentation-in-Swarm)
- [Visualizing metrics](#Visualizing-metrics)
- [Public Gateways](#Public-Gateways)
- [Swarm Dapps](#Swarm-Dapps)
- [Contributing](#Contributing)
- [License](#License)
## Building the source
For prerequisites and detailed build instructions please read the
[Installation Instructions](https://github.com/ethereum/go-ethereum/wiki/Building-Ethereum)
on the wiki.
Building Swarm requires Go (version 1.11 or later).
Building geth requires both Go (version 1.9 or later) and a C compiler.
You can install them using your favourite package manager.
Once the dependencies are installed, run
To simply compile the `swarm` binary without a `GOPATH`:
make geth
or, to build the full suite of utilities:
make all
## Executables
The go-ethereum project comes with several wrappers/executables found in the `cmd` directory.
| Command | Description |
|:----------:|-------------|
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options) for command line options. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI) with expanded functionality if the contract bytecode is also available. However it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://github.com/ethereum/go-ethereum/wiki/Native-DApps:-Go-bindings-to-Ethereum-contracts) wiki page for details. |
| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug`). |
| `gethrpctest` | Developer utility tool to support our [ethereum/rpc-test](https://github.com/ethereum/rpc-tests) test suite which validates baseline conformity to the [Ethereum JSON RPC](https://github.com/ethereum/wiki/wiki/JSON-RPC) specs. Please see the [test suite's readme](https://github.com/ethereum/rpc-tests/blob/master/README.md) for details. |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://github.com/ethereum/wiki/wiki/RLP)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
| `swarm` | Swarm daemon and tools. This is the entrypoint for the Swarm network. `swarm --help` for command line options and subcommands. See [Swarm README](https://github.com/ethereum/go-ethereum/tree/master/swarm) for more information. |
| `puppeth` | a CLI wizard that aids in creating a new Ethereum network. |
## Running geth
Going through all the possible command line flags is out of scope here (please consult our
[CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options)), but we've
enumerated a few common parameter combos to get you up to speed quickly on how you can run your
own Geth instance.
### Full node on the main Ethereum network
By far the most common scenario is people wanting to simply interact with the Ethereum network:
create accounts; transfer funds; deploy and interact with contracts. For this particular use-case
the user doesn't care about years-old historical data, so we can fast-sync quickly to the current
state of the network. To do so:
```
$ geth console
```bash
$ git clone https://github.com/ethersphere/swarm
$ cd swarm
$ make swarm
```
This command will:
You will find the binary under `./build/bin/swarm`.
* Start geth in fast sync mode (default, can be changed with the `--syncmode` flag), causing it to
download more data in exchange for avoiding processing the entire history of the Ethereum network,
which is very CPU intensive.
* Start up Geth's built-in interactive [JavaScript console](https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console),
(via the trailing `console` subcommand) through which you can invoke all official [`web3` methods](https://github.com/ethereum/wiki/wiki/JavaScript-API)
as well as Geth's own [management APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs).
This tool is optional and if you leave it out you can always attach to an already running Geth instance
with `geth attach`.
To build a vendored `swarm` using `go get` you must have `GOPATH` set. Then run:
### Full node on the Ethereum test network
Transitioning towards developers, if you'd like to play around with creating Ethereum contracts, you
almost certainly would like to do that without any real money involved until you get the hang of the
entire system. In other words, instead of attaching to the main network, you want to join the **test**
network with your node, which is fully equivalent to the main network, but with play-Ether only.
```
$ geth --testnet console
```bash
$ go get -d github.com/ethersphere/swarm
$ go install github.com/ethersphere/swarm/cmd/swarm
```
The `console` subcommand has the exact same meaning as above and is equally useful on the
testnet too. Please see above for its explanation if you've skipped to here.
## Running Swarm
Specifying the `--testnet` flag however will reconfigure your Geth instance a bit:
* Instead of using the default data directory (`~/.ethereum` on Linux for example), Geth will nest
itself one level deeper into a `testnet` subfolder (`~/.ethereum/testnet` on Linux). Note, on OSX
and Linux this also means that attaching to a running testnet node requires the use of a custom
endpoint since `geth attach` will try to attach to a production node endpoint by default. E.g.
`geth attach <datadir>/testnet/geth.ipc`. Windows users are not affected by this.
* Instead of connecting the main Ethereum network, the client will connect to the test network,
which uses different P2P bootnodes, different network IDs and genesis states.
*Note: Although there are some internal protective measures to prevent transactions from crossing
over between the main network and test network, you should make sure to always use separate accounts
for play-money and real-money. Unless you manually move accounts, Geth will by default correctly
separate the two networks and will not make any accounts available between them.*
### Full node on the Rinkeby test network
The above test network is a cross client one based on the ethash proof-of-work consensus algorithm. As such, it has certain extra overhead and is more susceptible to reorganization attacks due to the network's low difficulty / security. Go Ethereum also supports connecting to a proof-of-authority based test network called [*Rinkeby*](https://www.rinkeby.io) (operated by members of the community). This network is lighter, more secure, but is only supported by go-ethereum.
```
$ geth --rinkeby console
```bash
$ swarm
```
### Configuration
As an alternative to passing the numerous flags to the `geth` binary, you can also pass a configuration file via:
If you don't have an account yet, then you will be prompted to create one and secure it with a password:
```
$ geth --config /path/to/your_config.toml
Your new account is locked with a password. Please give a password. Do not forget this password.
Passphrase:
Repeat passphrase:
```
To get an idea how the file should look like you can use the `dumpconfig` subcommand to export your existing configuration:
If you have multiple accounts created, then you'll have to choose one of the accounts by using the `--bzzaccount` flag.
```
$ geth --your-favourite-flags dumpconfig
```bash
$ swarm --bzzaccount <your-account-here>
# example
$ swarm --bzzaccount 2f1cd699b0bf461dcfbf0098ad8f5587b038f0f1
```
*Note: This works only with geth v1.6.0 and above.*
### Verifying that your local Swarm node is running
#### Docker quick start
When running, Swarm is accessible through an HTTP API on port 8500.
One of the quickest ways to get Ethereum up and running on your machine is by using Docker:
Confirm that it is up and running by pointing your browser to http://localhost:8500
```
docker run -d --name ethereum-node -v /Users/alice/ethereum:/root \
-p 8545:8545 -p 30303:30303 \
ethereum/client-go
```
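A quick check from the command line works too (a minimal sketch, assuming the node is running with the default HTTP API port 8500):

```bash
# any successful response from the local HTTP API means the node is up
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8500/
```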
### Ethereum Name Service resolution
The Ethereum Name Service is the Ethereum equivalent of DNS in the classic web. In order to use ENS to resolve names to Swarm content hashes (e.g. `bzz://theswarm.eth`), `swarm` has to connect to a `geth` instance, which is synced with the Ethereum mainnet. This is done using the `--ens-api` flag.
```bash
$ swarm --bzzaccount <your-account-here> \
--ens-api "$HOME/.ethereum/geth.ipc"
# in our example
$ swarm --bzzaccount 2f1cd699b0bf461dcfbf0098ad8f5587b038f0f1 \
--ens-api "$HOME/.ethereum/geth.ipc"
```
This will start geth in fast-sync mode with a DB memory allowance of 1GB just as the above command does. It will also create a persistent volume in your home directory for saving your blockchain as well as map the default ports. There is also an `alpine` tag available for a slim version of the image.
For more information on usage, features or command line flags, please consult the Documentation.
Do not forget `--rpcaddr 0.0.0.0` if you want to access the RPC from other containers and/or hosts. By default, `geth` binds to the local interface and the RPC endpoints are not accessible from the outside.
## Documentation
### Programmatically interfacing Geth nodes
Swarm documentation can be found at [https://swarm-guide.readthedocs.io](https://swarm-guide.readthedocs.io).
As a developer, sooner rather than later you'll want to start interacting with Geth and the Ethereum
network via your own programs and not manually through the console. To aid this, Geth has built-in
support for a JSON-RPC based APIs ([standard APIs](https://github.com/ethereum/wiki/wiki/JSON-RPC) and
[Geth specific APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs)). These can be
exposed via HTTP, WebSockets and IPC (unix sockets on unix based platforms, and named pipes on Windows).
## Docker
The IPC interface is enabled by default and exposes all the APIs supported by Geth, whereas the HTTP
and WS interfaces need to manually be enabled and only expose a subset of APIs due to security reasons.
These can be turned on/off and configured as you'd expect.
Swarm container images are available at Docker Hub: [ethersphere/swarm](https://hub.docker.com/r/ethersphere/swarm)
HTTP based JSON-RPC API options:
### Docker tags
* `--rpc` Enable the HTTP-RPC server
* `--rpcaddr` HTTP-RPC server listening interface (default: "localhost")
* `--rpcport` HTTP-RPC server listening port (default: 8545)
* `--rpcapi` API's offered over the HTTP-RPC interface (default: "eth,net,web3")
* `--rpccorsdomain` Comma separated list of domains from which to accept cross origin requests (browser enforced)
* `--ws` Enable the WS-RPC server
* `--wsaddr` WS-RPC server listening interface (default: "localhost")
* `--wsport` WS-RPC server listening port (default: 8546)
* `--wsapi` API's offered over the WS-RPC interface (default: "eth,net,web3")
* `--wsorigins` Origins from which to accept websockets requests
* `--ipcdisable` Disable the IPC-RPC server
* `--ipcapi` API's offered over the IPC-RPC interface (default: "admin,debug,eth,miner,net,personal,shh,txpool,web3")
* `--ipcpath` Filename for IPC socket/pipe within the datadir (explicit paths escape it)
* `latest` - latest stable release
* `edge` - latest build from `master`
* `v0.x.y` - specific stable release
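For example, to run a specific image tag (a sketch; `--help` is used here only to verify that the container starts and to list the available options):

```bash
# pull the latest stable image and print the available command line options
$ docker pull ethersphere/swarm:latest
$ docker run -it ethersphere/swarm:latest --help
```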
You'll need to use your own programming environments' capabilities (libraries, tools, etc) to connect
via HTTP, WS or IPC to a Geth node configured with the above flags and you'll need to speak [JSON-RPC](https://www.jsonrpc.org/specification)
on all transports. You can reuse the same connection for multiple requests!
### Swarm command line arguments
**Note: Please understand the security implications of opening up an HTTP/WS based transport before
doing so! Hackers on the internet are actively trying to subvert Ethereum nodes with exposed APIs!
Further, all browser tabs can access locally running webservers, so malicious webpages could try to
subvert locally available APIs!**
All Swarm command line arguments are supported and can be sent as part of the CMD field to the Docker container.
### Operating a private network
**Examples:**
Maintaining your own private network is more involved as a lot of configurations taken for granted in
the official networks need to be manually set up.
Running a Swarm container from the command line
#### Defining the private genesis state
First, you'll need to create the genesis state of your networks, which all nodes need to be aware of
and agree upon. This consists of a small JSON file (e.g. call it `genesis.json`):
```json
{
"config": {
"chainId": 0,
"homesteadBlock": 0,
"eip155Block": 0,
"eip158Block": 0
},
"alloc" : {},
"coinbase" : "0x0000000000000000000000000000000000000000",
"difficulty" : "0x20000",
"extraData" : "",
"gasLimit" : "0x2fefd8",
"nonce" : "0x0000000000000042",
"mixhash" : "0x0000000000000000000000000000000000000000000000000000000000000000",
"parentHash" : "0x0000000000000000000000000000000000000000000000000000000000000000",
"timestamp" : "0x00"
}
```bash
$ docker run -it ethersphere/swarm \
--debug \
--verbosity 4
```
The above fields should be fine for most purposes, although we'd recommend changing the `nonce` to
some random value so you prevent unknown remote nodes from being able to connect to you. If you'd
like to pre-fund some accounts for easier testing, you can populate the `alloc` field with account
configs:
```json
"alloc": {
"0x0000000000000000000000000000000000000001": {"balance": "111111111"},
"0x0000000000000000000000000000000000000002": {"balance": "222222222"}
}
Running a Swarm container with custom ENS endpoint
```bash
$ docker run -it ethersphere/swarm \
--ens-api http://1.2.3.4:8545 \
--debug \
--verbosity 4
```
With the genesis state defined in the above JSON file, you'll need to initialize **every** Geth node
with it prior to starting it up to ensure all blockchain parameters are correctly set:
Running a Swarm container with metrics enabled
```
$ geth init path/to/genesis.json
```bash
$ docker run -it ethersphere/swarm \
--debug \
--metrics \
--metrics.influxdb.export \
--metrics.influxdb.endpoint "http://localhost:8086" \
--metrics.influxdb.username "user" \
--metrics.influxdb.password "pass" \
--metrics.influxdb.database "metrics" \
--metrics.influxdb.host.tag "localhost" \
--verbosity 4
```
#### Creating the rendezvous point
Running a Swarm container with tracing and pprof server enabled
With all nodes that you want to run initialized to the desired genesis state, you'll need to start a
bootstrap node that others can use to find each other in your network and/or over the internet. The
clean way is to configure and run a dedicated bootnode:
```
$ bootnode --genkey=boot.key
$ bootnode --nodekey=boot.key
```bash
$ docker run -it ethersphere/swarm \
--debug \
--tracing \
--tracing.endpoint 127.0.0.1:6831 \
--tracing.svc myswarm \
--pprof \
--pprofaddr 0.0.0.0 \
--pprofport 6060
```
With the bootnode online, it will display an [`enode` URL](https://github.com/ethereum/wiki/wiki/enode-url-format)
that other nodes can use to connect to it and exchange peer information. Make sure to replace the
displayed IP address information (most probably `[::]`) with your externally accessible IP to get the
actual `enode` URL.
Running a Swarm container with a custom data directory mounted from a volume and a password file to unlock the swarm account
*Note: You could also use a full-fledged Geth node as a bootnode, but this is not the recommended way.*
#### Starting up your member nodes
With the bootnode operational and externally reachable (you can try `telnet <ip> <port>` to ensure
it's indeed reachable), start every subsequent Geth node pointed to the bootnode for peer discovery
via the `--bootnodes` flag. It will probably also be desirable to keep the data directory of your
private network separated, so do also specify a custom `--datadir` flag.
```
$ geth --datadir=path/to/custom/data/folder --bootnodes=<bootnode-enode-url-from-above>
```bash
$ docker run -it -v $PWD/hostdata:/data \
-v $PWD/password:/password \
ethersphere/swarm \
--datadir /data \
--password /password \
--debug \
--verbosity 4
```
*Note: Since your network will be completely cut off from the main and test networks, you'll also
need to configure a miner to process transactions and create new blocks for you.*
## Developers Guide
#### Running a private miner
### Go Environment
Mining on the public Ethereum network is a complex task as it's only feasible using GPUs, requiring
an OpenCL or CUDA enabled `ethminer` instance. For information on such a setup, please consult the
[EtherMining subreddit](https://www.reddit.com/r/EtherMining/) and the [Genoil miner](https://github.com/Genoil/cpp-ethereum)
repository.
We assume that you have Go v1.11 installed, and `GOPATH` is set.
In a private network setting however, a single CPU miner instance is more than enough for practical
purposes as it can produce a stable stream of blocks at the correct intervals without needing heavy
resources (consider running on a single thread, no need for multiple ones either). To start a Geth
instance for mining, run it with all your usual flags, extended by:
You must have your working copy under `$GOPATH/src/github.com/ethersphere/swarm`.
```
$ geth <usual-flags> --mine --minerthreads=1 --etherbase=0x0000000000000000000000000000000000000000
Most likely you will be working from your fork of `swarm`, let's say from `github.com/nirname/swarm`. Clone or move your fork into the right place:
```bash
$ git clone git@github.com:nirname/swarm.git $GOPATH/src/github.com/ethersphere/swarm
```
Which will start mining blocks and transactions on a single CPU thread, crediting all proceedings to
the account specified by `--etherbase`. You can further tune the mining by changing the default gas
limit blocks converge to (`--targetgaslimit`) and the price transactions are accepted at (`--gasprice`).
## Contribution
### Vendored Dependencies
All dependencies are tracked in the `vendor` directory. We use `govendor` to manage them.
If you want to add a new dependency, run `govendor fetch <import-path>`, then commit the result.
If you want to update all dependencies to their latest upstream version, run `govendor fetch +v`.
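As a concrete sketch (the import path below is a hypothetical placeholder, not an actual dependency of this project):

```bash
# vendor a hypothetical new dependency, then commit the resulting vendor/ changes
$ govendor fetch github.com/example/somelib
# update all vendored dependencies to their latest upstream versions
$ govendor fetch +v
```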
### Testing
This section explains how to run unit, integration, and end-to-end tests in your development sandbox.
Testing one library:
```bash
$ go test -v -cpu 4 ./api
```
Note: using the options -cpu (the number of cores allowed) and -v (log output even when there is no error) is recommended.
Testing only some methods:
```bash
$ go test -v -cpu 4 ./api -run TestMethod
```
Note: all tests whose names start with the prefix TestMethod will be run, so if both TestMethod and TestMethod1 exist, both will run.
Running benchmarks:
```bash
$ go test -v -cpu 4 -bench . -run BenchmarkJoin
```
### Profiling Swarm
This section explains how to use the Go `pprof` profiler with Swarm.
If `swarm` is started with the `--pprof` option, a debugging HTTP server is made available on port 6060.
You can open http://localhost:6060/debug/pprof to see the heap profile, the running goroutines and more.
Clicking the full goroutine stack dump link (http://localhost:6060/debug/pprof/goroutine?debug=2) generates a trace that is useful for debugging.
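The standard Go tooling can also read these endpoints directly. A minimal sketch, assuming the default pprof address and port:

```bash
# interactive CPU profile sampled over 30 seconds
$ go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
# interactive view of the current heap allocations
$ go tool pprof http://localhost:6060/debug/pprof/heap
```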
### Metrics and Instrumentation in Swarm
This section explains how to visualize and use existing Swarm metrics and how to instrument Swarm with a new metric.
The Swarm metrics system is based on the `go-metrics` library.
The most common types of measurements we use in Swarm are `counters` and `resetting timers`. Consult the `go-metrics` documentation for a full reference of the available types.
```go
// incrementing a counter
metrics.GetOrRegisterCounter("network.stream.received_chunks", nil).Inc(1)
// measuring latency with a resetting timer
start := time.Now()
t := metrics.GetOrRegisterResettingTimer("http.request.GET.time", nil)
...
t.UpdateSince(start)
```
#### Visualizing metrics
Swarm supports an InfluxDB exporter. Consult the help section to learn about the command line arguments used to configure it:
```bash
$ swarm --help | grep metrics
```
We use Grafana and InfluxDB to visualise metrics reported by Swarm. We keep our Grafana dashboards under version control at https://github.com/ethersphere/grafana-dashboards. You could use them or design your own.
We have built a tool, available at https://github.com/nonsense/stateth, that automates starting Grafana and InfluxDB and provisioning the dashboards; it requires that you have Docker installed.
Once you have `stateth` installed, and you have Docker running locally, you have to:
1. Run `stateth` and keep it running in the background
```bash
$ stateth --rm --grafana-dashboards-folder $GOPATH/src/github.com/ethersphere/grafana-dashboards --influxdb-database metrics
```
2. Run `swarm` with at least the following params:
```bash
--metrics \
--metrics.influxdb.export \
--metrics.influxdb.endpoint "http://localhost:8086" \
--metrics.influxdb.username "admin" \
--metrics.influxdb.password "admin" \
--metrics.influxdb.database "metrics"
```
3. Open Grafana at http://localhost:3000 and view the dashboards to gain insight into Swarm.
## Public Gateways
Swarm offers a local HTTP proxy API that Dapps can use to interact with Swarm. The Ethereum Foundation is hosting a public gateway, which allows free access so that people can try Swarm without running their own node.
The Swarm public gateways are temporary and users should not rely on their existence for production services.
The Swarm public gateway can be found at https://swarm-gateways.net and is always running the latest `stable` Swarm release.
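As an illustration of that HTTP API (a sketch; the hash placeholder stands in for whatever manifest hash the upload returns):

```bash
# upload a small piece of content; the response body is the Swarm hash of its manifest
$ curl -H "Content-Type: text/plain" --data "hello swarm" http://localhost:8500/bzz:/
# fetch the content back using that hash, locally or via a public gateway
$ curl http://localhost:8500/bzz:/<returned-hash>/
```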
## Swarm Dapps
You can find a few reference Swarm decentralised applications at: https://swarm-gateways.net/bzz:/swarmapps.eth
Their source code can be found at: https://github.com/ethersphere/swarm-dapps
## Contributing
Thank you for considering helping out with the source code! We welcome contributions from
anyone on the internet, and are grateful for even the smallest of fixes!
If you'd like to contribute to go-ethereum, please fork, fix, commit and send a pull request
If you'd like to contribute to Swarm, please fork, fix, commit and send a pull request
for the maintainers to review and merge into the main code base. If you wish to submit more
complex changes though, please check up with the core devs first on [our gitter channel](https://gitter.im/ethereum/go-ethereum)
complex changes though, please check up with the core devs first on [our Swarm gitter channel](https://gitter.im/ethersphere/orange-lounge)
to ensure those changes are in line with the general philosophy of the project and/or get some
early feedback which can make both your efforts much lighter as well as our review and merge
procedures quick and simple.
@@ -294,18 +318,17 @@ Please make sure your contributions adhere to our coding guidelines:
* Code must adhere to the official Go [formatting](https://golang.org/doc/effective_go.html#formatting) guidelines (i.e. uses [gofmt](https://golang.org/cmd/gofmt/)).
* Code must be documented adhering to the official Go [commentary](https://golang.org/doc/effective_go.html#commentary) guidelines.
* Pull requests need to be based on and opened against the `master` branch.
* [Code review guidelines](https://github.com/ethersphere/swarm/blob/master/docs/Code-Review-Guidelines.md).
* Commit messages should be prefixed with the package(s) they modify.
* E.g. "eth, rpc: make trace configs optional"
* E.g. "fuse: ignore default manifest entry"
Please see the [Developers' Guide](https://github.com/ethereum/go-ethereum/wiki/Developers'-Guide)
for more details on configuring your environment, managing project dependencies and testing procedures.
## License
The go-ethereum library (i.e. all code outside of the `cmd` directory) is licensed under the
The swarm library (i.e. all code outside of the `cmd` directory) is licensed under the
[GNU Lesser General Public License v3.0](https://www.gnu.org/licenses/lgpl-3.0.en.html), also
included in our repository in the `COPYING.LESSER` file.
The go-ethereum binaries (i.e. all code inside of the `cmd` directory) are licensed under the
The swarm binaries (i.e. all code inside of the `cmd` directory) are licensed under the
[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html), also included
in our repository in the `COPYING` file.

View File

@@ -1,724 +0,0 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"bytes"
"encoding/hex"
"fmt"
"log"
"math/big"
"strings"
"testing"
"reflect"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
const jsondata = `
[
{ "type" : "function", "name" : "balance", "constant" : true },
{ "type" : "function", "name" : "send", "constant" : false, "inputs" : [ { "name" : "amount", "type" : "uint256" } ] }
]`
const jsondata2 = `
[
{ "type" : "function", "name" : "balance", "constant" : true },
{ "type" : "function", "name" : "send", "constant" : false, "inputs" : [ { "name" : "amount", "type" : "uint256" } ] },
{ "type" : "function", "name" : "test", "constant" : false, "inputs" : [ { "name" : "number", "type" : "uint32" } ] },
{ "type" : "function", "name" : "string", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "string" } ] },
{ "type" : "function", "name" : "bool", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "bool" } ] },
{ "type" : "function", "name" : "address", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "address" } ] },
{ "type" : "function", "name" : "uint64[2]", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint64[2]" } ] },
{ "type" : "function", "name" : "uint64[]", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint64[]" } ] },
{ "type" : "function", "name" : "foo", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint32" } ] },
{ "type" : "function", "name" : "bar", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint32" }, { "name" : "string", "type" : "uint16" } ] },
{ "type" : "function", "name" : "slice", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint32[2]" } ] },
{ "type" : "function", "name" : "slice256", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "sliceAddress", "constant" : false, "inputs" : [ { "name" : "inputs", "type" : "address[]" } ] },
{ "type" : "function", "name" : "sliceMultiAddress", "constant" : false, "inputs" : [ { "name" : "a", "type" : "address[]" }, { "name" : "b", "type" : "address[]" } ] }
]`
func TestReader(t *testing.T) {
Uint256, _ := NewType("uint256")
exp := ABI{
Methods: map[string]Method{
"balance": {
"balance", true, nil, nil,
},
"send": {
"send", false, []Argument{
{"amount", Uint256, false},
}, nil,
},
},
}
abi, err := JSON(strings.NewReader(jsondata))
if err != nil {
t.Error(err)
}
// deep equal fails for some reason
for name, expM := range exp.Methods {
gotM, exist := abi.Methods[name]
if !exist {
t.Errorf("Missing expected method %v", name)
}
if !reflect.DeepEqual(gotM, expM) {
t.Errorf("\nGot abi method: \n%v\ndoes not match expected method\n%v", gotM, expM)
}
}
for name, gotM := range abi.Methods {
expM, exist := exp.Methods[name]
if !exist {
t.Errorf("Found extra method %v", name)
}
if !reflect.DeepEqual(gotM, expM) {
t.Errorf("\nGot abi method: \n%v\ndoes not match expected method\n%v", gotM, expM)
}
}
}
func TestTestNumbers(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Error(err)
t.FailNow()
}
if _, err := abi.Pack("balance"); err != nil {
t.Error(err)
}
if _, err := abi.Pack("balance", 1); err == nil {
t.Error("expected error for balance(1)")
}
if _, err := abi.Pack("doesntexist", nil); err == nil {
t.Errorf("doesntexist shouldn't exist")
}
if _, err := abi.Pack("doesntexist", 1); err == nil {
t.Errorf("doesntexist(1) shouldn't exist")
}
if _, err := abi.Pack("send", big.NewInt(1000)); err != nil {
t.Error(err)
}
i := new(int)
*i = 1000
if _, err := abi.Pack("send", i); err == nil {
t.Errorf("expected send( ptr ) to throw, requires *big.Int instead of *int")
}
if _, err := abi.Pack("test", uint32(1000)); err != nil {
t.Error(err)
}
}
func TestTestString(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Error(err)
t.FailNow()
}
if _, err := abi.Pack("string", "hello world"); err != nil {
t.Error(err)
}
}
func TestTestBool(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Error(err)
t.FailNow()
}
if _, err := abi.Pack("bool", true); err != nil {
t.Error(err)
}
}
func TestTestSlice(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Error(err)
t.FailNow()
}
slice := make([]uint64, 2)
if _, err := abi.Pack("uint64[2]", slice); err != nil {
t.Error(err)
}
if _, err := abi.Pack("uint64[]", slice); err != nil {
t.Error(err)
}
}
func TestMethodSignature(t *testing.T) {
String, _ := NewType("string")
m := Method{"foo", false, []Argument{{"bar", String, false}, {"baz", String, false}}, nil}
exp := "foo(string,string)"
if m.Sig() != exp {
t.Error("signature mismatch", exp, "!=", m.Sig())
}
idexp := crypto.Keccak256([]byte(exp))[:4]
if !bytes.Equal(m.Id(), idexp) {
t.Errorf("expected ids to match %x != %x", m.Id(), idexp)
}
uintt, _ := NewType("uint256")
m = Method{"foo", false, []Argument{{"bar", uintt, false}}, nil}
exp = "foo(uint256)"
if m.Sig() != exp {
t.Error("signature mismatch", exp, "!=", m.Sig())
}
}
func TestMultiPack(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Error(err)
t.FailNow()
}
sig := crypto.Keccak256([]byte("bar(uint32,uint16)"))[:4]
sig = append(sig, make([]byte, 64)...)
sig[35] = 10
sig[67] = 11
packed, err := abi.Pack("bar", uint32(10), uint16(11))
if err != nil {
t.Error(err)
t.FailNow()
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
}
func ExampleJSON() {
const definition = `[{"constant":true,"inputs":[{"name":"","type":"address"}],"name":"isBar","outputs":[{"name":"","type":"bool"}],"type":"function"}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
log.Fatalln(err)
}
out, err := abi.Pack("isBar", common.HexToAddress("01"))
if err != nil {
log.Fatalln(err)
}
fmt.Printf("%x\n", out)
// Output:
// 1f2c40920000000000000000000000000000000000000000000000000000000000000001
}
func TestInputVariableInputLength(t *testing.T) {
const definition = `[
{ "type" : "function", "name" : "strOne", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" } ] },
{ "type" : "function", "name" : "bytesOne", "constant" : true, "inputs" : [ { "name" : "str", "type" : "bytes" } ] },
{ "type" : "function", "name" : "strTwo", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "str1", "type" : "string" } ] }
]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
// test one string
strin := "hello world"
strpack, err := abi.Pack("strOne", strin)
if err != nil {
t.Error(err)
}
offset := make([]byte, 32)
offset[31] = 32
length := make([]byte, 32)
length[31] = byte(len(strin))
value := common.RightPadBytes([]byte(strin), 32)
exp := append(offset, append(length, value...)...)
// ignore first 4 bytes of the output. This is the function identifier
strpack = strpack[4:]
if !bytes.Equal(strpack, exp) {
t.Errorf("expected %x, got %x\n", exp, strpack)
}
// test one bytes
btspack, err := abi.Pack("bytesOne", []byte(strin))
if err != nil {
t.Error(err)
}
// ignore first 4 bytes of the output. This is the function identifier
btspack = btspack[4:]
if !bytes.Equal(btspack, exp) {
t.Errorf("expected %x, got %x\n", exp, btspack)
}
// test two strings
str1 := "hello"
str2 := "world"
str2pack, err := abi.Pack("strTwo", str1, str2)
if err != nil {
t.Error(err)
}
offset1 := make([]byte, 32)
offset1[31] = 64
length1 := make([]byte, 32)
length1[31] = byte(len(str1))
value1 := common.RightPadBytes([]byte(str1), 32)
offset2 := make([]byte, 32)
offset2[31] = 128
length2 := make([]byte, 32)
length2[31] = byte(len(str2))
value2 := common.RightPadBytes([]byte(str2), 32)
exp2 := append(offset1, offset2...)
exp2 = append(exp2, append(length1, value1...)...)
exp2 = append(exp2, append(length2, value2...)...)
// ignore first 4 bytes of the output. This is the function identifier
str2pack = str2pack[4:]
if !bytes.Equal(str2pack, exp2) {
t.Errorf("expected %x, got %x\n", exp, str2pack)
}
// test two strings, first > 32, second < 32
str1 = strings.Repeat("a", 33)
str2pack, err = abi.Pack("strTwo", str1, str2)
if err != nil {
t.Error(err)
}
offset1 = make([]byte, 32)
offset1[31] = 64
length1 = make([]byte, 32)
length1[31] = byte(len(str1))
value1 = common.RightPadBytes([]byte(str1), 64)
offset2[31] = 160
exp2 = append(offset1, offset2...)
exp2 = append(exp2, append(length1, value1...)...)
exp2 = append(exp2, append(length2, value2...)...)
// ignore first 4 bytes of the output. This is the function identifier
str2pack = str2pack[4:]
if !bytes.Equal(str2pack, exp2) {
t.Errorf("expected %x, got %x\n", exp, str2pack)
}
// test two strings, first > 32, second >32
str1 = strings.Repeat("a", 33)
str2 = strings.Repeat("a", 33)
str2pack, err = abi.Pack("strTwo", str1, str2)
if err != nil {
t.Error(err)
}
offset1 = make([]byte, 32)
offset1[31] = 64
length1 = make([]byte, 32)
length1[31] = byte(len(str1))
value1 = common.RightPadBytes([]byte(str1), 64)
offset2 = make([]byte, 32)
offset2[31] = 160
length2 = make([]byte, 32)
length2[31] = byte(len(str2))
value2 = common.RightPadBytes([]byte(str2), 64)
exp2 = append(offset1, offset2...)
exp2 = append(exp2, append(length1, value1...)...)
exp2 = append(exp2, append(length2, value2...)...)
// ignore first 4 bytes of the output. This is the function identifier
str2pack = str2pack[4:]
if !bytes.Equal(str2pack, exp2) {
t.Errorf("expected %x, got %x\n", exp, str2pack)
}
}
func TestInputFixedArrayAndVariableInputLength(t *testing.T) {
const definition = `[
{ "type" : "function", "name" : "fixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "fixedArrBytes", "constant" : true, "inputs" : [ { "name" : "str", "type" : "bytes" }, { "name" : "fixedArr", "type" : "uint256[2]" } ] },
{ "type" : "function", "name" : "mixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr", "type": "uint256[2]" }, { "name" : "dynArr", "type": "uint256[]" } ] },
{ "type" : "function", "name" : "doubleFixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr1", "type": "uint256[2]" }, { "name" : "fixedArr2", "type": "uint256[3]" } ] },
{ "type" : "function", "name" : "multipleMixedArrStr", "constant" : true, "inputs" : [ { "name" : "str", "type" : "string" }, { "name" : "fixedArr1", "type": "uint256[2]" }, { "name" : "dynArr", "type" : "uint256[]" }, { "name" : "fixedArr2", "type" : "uint256[3]" } ] }
]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Error(err)
}
// test string, fixed array uint256[2]
strin := "hello world"
arrin := [2]*big.Int{big.NewInt(1), big.NewInt(2)}
fixedArrStrPack, err := abi.Pack("fixedArrStr", strin, arrin)
if err != nil {
t.Error(err)
}
// generate expected output
offset := make([]byte, 32)
offset[31] = 96
length := make([]byte, 32)
length[31] = byte(len(strin))
strvalue := common.RightPadBytes([]byte(strin), 32)
arrinvalue1 := common.LeftPadBytes(arrin[0].Bytes(), 32)
arrinvalue2 := common.LeftPadBytes(arrin[1].Bytes(), 32)
exp := append(offset, arrinvalue1...)
exp = append(exp, arrinvalue2...)
exp = append(exp, append(length, strvalue...)...)
// ignore first 4 bytes of the output. This is the function identifier
fixedArrStrPack = fixedArrStrPack[4:]
if !bytes.Equal(fixedArrStrPack, exp) {
t.Errorf("expected %x, got %x\n", exp, fixedArrStrPack)
}
// test byte array, fixed array uint256[2]
bytesin := []byte(strin)
arrin = [2]*big.Int{big.NewInt(1), big.NewInt(2)}
fixedArrBytesPack, err := abi.Pack("fixedArrBytes", bytesin, arrin)
if err != nil {
t.Error(err)
}
// generate expected output
offset = make([]byte, 32)
offset[31] = 96
length = make([]byte, 32)
length[31] = byte(len(strin))
strvalue = common.RightPadBytes([]byte(strin), 32)
arrinvalue1 = common.LeftPadBytes(arrin[0].Bytes(), 32)
arrinvalue2 = common.LeftPadBytes(arrin[1].Bytes(), 32)
exp = append(offset, arrinvalue1...)
exp = append(exp, arrinvalue2...)
exp = append(exp, append(length, strvalue...)...)
// ignore first 4 bytes of the output. This is the function identifier
fixedArrBytesPack = fixedArrBytesPack[4:]
if !bytes.Equal(fixedArrBytesPack, exp) {
t.Errorf("expected %x, got %x\n", exp, fixedArrBytesPack)
}
// test string, fixed array uint256[2], dynamic array uint256[]
strin = "hello world"
fixedarrin := [2]*big.Int{big.NewInt(1), big.NewInt(2)}
dynarrin := []*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)}
mixedArrStrPack, err := abi.Pack("mixedArrStr", strin, fixedarrin, dynarrin)
if err != nil {
t.Error(err)
}
// generate expected output
stroffset := make([]byte, 32)
stroffset[31] = 128
strlength := make([]byte, 32)
strlength[31] = byte(len(strin))
strvalue = common.RightPadBytes([]byte(strin), 32)
fixedarrinvalue1 := common.LeftPadBytes(fixedarrin[0].Bytes(), 32)
fixedarrinvalue2 := common.LeftPadBytes(fixedarrin[1].Bytes(), 32)
dynarroffset := make([]byte, 32)
dynarroffset[31] = byte(160 + ((len(strin)/32)+1)*32)
dynarrlength := make([]byte, 32)
dynarrlength[31] = byte(len(dynarrin))
dynarrinvalue1 := common.LeftPadBytes(dynarrin[0].Bytes(), 32)
dynarrinvalue2 := common.LeftPadBytes(dynarrin[1].Bytes(), 32)
dynarrinvalue3 := common.LeftPadBytes(dynarrin[2].Bytes(), 32)
exp = append(stroffset, fixedarrinvalue1...)
exp = append(exp, fixedarrinvalue2...)
exp = append(exp, dynarroffset...)
exp = append(exp, append(strlength, strvalue...)...)
dynarrarg := append(dynarrlength, dynarrinvalue1...)
dynarrarg = append(dynarrarg, dynarrinvalue2...)
dynarrarg = append(dynarrarg, dynarrinvalue3...)
exp = append(exp, dynarrarg...)
// ignore first 4 bytes of the output. This is the function identifier
mixedArrStrPack = mixedArrStrPack[4:]
if !bytes.Equal(mixedArrStrPack, exp) {
t.Errorf("expected %x, got %x\n", exp, mixedArrStrPack)
}
// test string, fixed array uint256[2], fixed array uint256[3]
strin = "hello world"
fixedarrin1 := [2]*big.Int{big.NewInt(1), big.NewInt(2)}
fixedarrin2 := [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)}
doubleFixedArrStrPack, err := abi.Pack("doubleFixedArrStr", strin, fixedarrin1, fixedarrin2)
if err != nil {
t.Error(err)
}
// generate expected output
stroffset = make([]byte, 32)
stroffset[31] = 192
strlength = make([]byte, 32)
strlength[31] = byte(len(strin))
strvalue = common.RightPadBytes([]byte(strin), 32)
fixedarrin1value1 := common.LeftPadBytes(fixedarrin1[0].Bytes(), 32)
fixedarrin1value2 := common.LeftPadBytes(fixedarrin1[1].Bytes(), 32)
fixedarrin2value1 := common.LeftPadBytes(fixedarrin2[0].Bytes(), 32)
fixedarrin2value2 := common.LeftPadBytes(fixedarrin2[1].Bytes(), 32)
fixedarrin2value3 := common.LeftPadBytes(fixedarrin2[2].Bytes(), 32)
exp = append(stroffset, fixedarrin1value1...)
exp = append(exp, fixedarrin1value2...)
exp = append(exp, fixedarrin2value1...)
exp = append(exp, fixedarrin2value2...)
exp = append(exp, fixedarrin2value3...)
exp = append(exp, append(strlength, strvalue...)...)
// ignore first 4 bytes of the output. This is the function identifier
doubleFixedArrStrPack = doubleFixedArrStrPack[4:]
if !bytes.Equal(doubleFixedArrStrPack, exp) {
t.Errorf("expected %x, got %x\n", exp, doubleFixedArrStrPack)
}
// test string, fixed array uint256[2], dynamic array uint256[], fixed array uint256[3]
strin = "hello world"
fixedarrin1 = [2]*big.Int{big.NewInt(1), big.NewInt(2)}
dynarrin = []*big.Int{big.NewInt(1), big.NewInt(2)}
fixedarrin2 = [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)}
multipleMixedArrStrPack, err := abi.Pack("multipleMixedArrStr", strin, fixedarrin1, dynarrin, fixedarrin2)
if err != nil {
t.Error(err)
}
// generate expected output
stroffset = make([]byte, 32)
stroffset[31] = 224
strlength = make([]byte, 32)
strlength[31] = byte(len(strin))
strvalue = common.RightPadBytes([]byte(strin), 32)
fixedarrin1value1 = common.LeftPadBytes(fixedarrin1[0].Bytes(), 32)
fixedarrin1value2 = common.LeftPadBytes(fixedarrin1[1].Bytes(), 32)
dynarroffset = U256(big.NewInt(int64(256 + ((len(strin)/32)+1)*32)))
dynarrlength = make([]byte, 32)
dynarrlength[31] = byte(len(dynarrin))
dynarrinvalue1 = common.LeftPadBytes(dynarrin[0].Bytes(), 32)
dynarrinvalue2 = common.LeftPadBytes(dynarrin[1].Bytes(), 32)
fixedarrin2value1 = common.LeftPadBytes(fixedarrin2[0].Bytes(), 32)
fixedarrin2value2 = common.LeftPadBytes(fixedarrin2[1].Bytes(), 32)
fixedarrin2value3 = common.LeftPadBytes(fixedarrin2[2].Bytes(), 32)
exp = append(stroffset, fixedarrin1value1...)
exp = append(exp, fixedarrin1value2...)
exp = append(exp, dynarroffset...)
exp = append(exp, fixedarrin2value1...)
exp = append(exp, fixedarrin2value2...)
exp = append(exp, fixedarrin2value3...)
exp = append(exp, append(strlength, strvalue...)...)
dynarrarg = append(dynarrlength, dynarrinvalue1...)
dynarrarg = append(dynarrarg, dynarrinvalue2...)
exp = append(exp, dynarrarg...)
// ignore first 4 bytes of the output. This is the function identifier
multipleMixedArrStrPack = multipleMixedArrStrPack[4:]
if !bytes.Equal(multipleMixedArrStrPack, exp) {
t.Errorf("expected %x, got %x\n", exp, multipleMixedArrStrPack)
}
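// Editor's note: the same head/tail arithmetic as above -- seven head slots
// (1 string offset + 2 fixed words + 1 dynamic offset + 3 fixed words) put
// the string tail at 7*32 = 224, and the dynamic array tail lands after the
// string's length word and padded data at 224 + 64 = 288, the value the
// U256(...) expression evaluates to.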
}
func TestDefaultFunctionParsing(t *testing.T) {
const definition = `[{ "name" : "balance" }]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
if _, ok := abi.Methods["balance"]; !ok {
t.Error("expected 'balance' to be present")
}
}
func TestBareEvents(t *testing.T) {
const definition = `[
{ "type" : "event", "name" : "balance" },
{ "type" : "event", "name" : "anon", "anonymous" : true},
{ "type" : "event", "name" : "args", "inputs" : [{ "indexed":false, "name":"arg0", "type":"uint256" }, { "indexed":true, "name":"arg1", "type":"address" }] }
]`
arg0, _ := NewType("uint256")
arg1, _ := NewType("address")
expectedEvents := map[string]struct {
Anonymous bool
Args []Argument
}{
"balance": {false, nil},
"anon": {true, nil},
"args": {false, []Argument{
{Name: "arg0", Type: arg0, Indexed: false},
{Name: "arg1", Type: arg1, Indexed: true},
}},
}
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
if len(abi.Events) != len(expectedEvents) {
t.Fatalf("invalid number of events after parsing, want %d, got %d", len(expectedEvents), len(abi.Events))
}
for name, exp := range expectedEvents {
got, ok := abi.Events[name]
if !ok {
t.Errorf("could not found event %s", name)
continue
}
if got.Anonymous != exp.Anonymous {
t.Errorf("invalid anonymous indication for event %s, want %v, got %v", name, exp.Anonymous, got.Anonymous)
}
if len(got.Inputs) != len(exp.Args) {
t.Errorf("invalid number of args, want %d, got %d", len(exp.Args), len(got.Inputs))
continue
}
for i, arg := range exp.Args {
if arg.Name != got.Inputs[i].Name {
t.Errorf("events[%s].Input[%d] has an invalid name, want %s, got %s", name, i, arg.Name, got.Inputs[i].Name)
}
if arg.Indexed != got.Inputs[i].Indexed {
t.Errorf("events[%s].Input[%d] has an invalid indexed indication, want %v, got %v", name, i, arg.Indexed, got.Inputs[i].Indexed)
}
if arg.Type.T != got.Inputs[i].Type.T {
t.Errorf("events[%s].Input[%d] has an invalid type, want %x, got %x", name, i, arg.Type.T, got.Inputs[i].Type.T)
}
}
}
}
// TestUnpackEvent is based on this contract:
// contract T {
// event received(address sender, uint amount, bytes memo);
// event receivedAddr(address sender);
// function receive(bytes memo) external payable {
// received(msg.sender, msg.value, memo);
// receivedAddr(msg.sender);
// }
// }
// When receive("X") is called with sender 0x00... and value 1, it produces this tx receipt:
// receipt{status=1 cgas=23949 bloom=00000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000040200000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 logs=[log: b6818c8064f645cd82d99b59a1a267d6d61117ef [75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed] 000000000000000000000000376c47978271565f56deb45495afa69e59c16ab200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000158 9ae378b6d4409eada347a5dc0c180f186cb62dc68fcc0f043425eb917335aa28 0 95d429d309bb9d753954195fe2d69bd140b4ae731b9b5b605c34323de162cf00 0]}
func TestUnpackEvent(t *testing.T) {
const abiJSON = `[{"constant":false,"inputs":[{"name":"memo","type":"bytes"}],"name":"receive","outputs":[],"payable":true,"stateMutability":"payable","type":"function"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"}],"name":"receivedAddr","type":"event"}]`
abi, err := JSON(strings.NewReader(abiJSON))
if err != nil {
t.Fatal(err)
}
const hexdata = `000000000000000000000000376c47978271565f56deb45495afa69e59c16ab200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000158`
data, err := hex.DecodeString(hexdata)
if err != nil {
t.Fatal(err)
}
if len(data)%32 == 0 {
t.Errorf("len(data) is %d, want a non-multiple of 32", len(data))
}
type ReceivedEvent struct {
Address common.Address
Amount *big.Int
Memo []byte
}
var ev ReceivedEvent
err = abi.Unpack(&ev, "received", data)
if err != nil {
t.Error(err)
} else {
t.Logf("len(data): %d; received event: %+v", len(data), ev)
}
type ReceivedAddrEvent struct {
Address common.Address
}
var receivedAddrEv ReceivedAddrEvent
err = abi.Unpack(&receivedAddrEv, "receivedAddr", data)
if err != nil {
t.Error(err)
} else {
t.Logf("len(data): %d; received event: %+v", len(data), receivedAddrEv)
}
}
func TestABI_MethodById(t *testing.T) {
const abiJSON = `[
{"type":"function","name":"receive","constant":false,"inputs":[{"name":"memo","type":"bytes"}],"outputs":[],"payable":true,"stateMutability":"payable"},
{"type":"event","name":"received","anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}]},
{"type":"function","name":"fixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr","type":"uint256[2]"}]},
{"type":"function","name":"fixedArrBytes","constant":true,"inputs":[{"name":"str","type":"bytes"},{"name":"fixedArr","type":"uint256[2]"}]},
{"type":"function","name":"mixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr","type":"uint256[2]"},{"name":"dynArr","type":"uint256[]"}]},
{"type":"function","name":"doubleFixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr1","type":"uint256[2]"},{"name":"fixedArr2","type":"uint256[3]"}]},
{"type":"function","name":"multipleMixedArrStr","constant":true,"inputs":[{"name":"str","type":"string"},{"name":"fixedArr1","type":"uint256[2]"},{"name":"dynArr","type":"uint256[]"},{"name":"fixedArr2","type":"uint256[3]"}]},
{"type":"function","name":"balance","constant":true},
{"type":"function","name":"send","constant":false,"inputs":[{"name":"amount","type":"uint256"}]},
{"type":"function","name":"test","constant":false,"inputs":[{"name":"number","type":"uint32"}]},
{"type":"function","name":"string","constant":false,"inputs":[{"name":"inputs","type":"string"}]},
{"type":"function","name":"bool","constant":false,"inputs":[{"name":"inputs","type":"bool"}]},
{"type":"function","name":"address","constant":false,"inputs":[{"name":"inputs","type":"address"}]},
{"type":"function","name":"uint64[2]","constant":false,"inputs":[{"name":"inputs","type":"uint64[2]"}]},
{"type":"function","name":"uint64[]","constant":false,"inputs":[{"name":"inputs","type":"uint64[]"}]},
{"type":"function","name":"foo","constant":false,"inputs":[{"name":"inputs","type":"uint32"}]},
{"type":"function","name":"bar","constant":false,"inputs":[{"name":"inputs","type":"uint32"},{"name":"string","type":"uint16"}]},
{"type":"function","name":"_slice","constant":false,"inputs":[{"name":"inputs","type":"uint32[2]"}]},
{"type":"function","name":"__slice256","constant":false,"inputs":[{"name":"inputs","type":"uint256[2]"}]},
{"type":"function","name":"sliceAddress","constant":false,"inputs":[{"name":"inputs","type":"address[]"}]},
{"type":"function","name":"sliceMultiAddress","constant":false,"inputs":[{"name":"a","type":"address[]"},{"name":"b","type":"address[]"}]}
]
`
abi, err := JSON(strings.NewReader(abiJSON))
if err != nil {
t.Fatal(err)
}
for name, m := range abi.Methods {
a := fmt.Sprintf("%v", m)
m2, err := abi.MethodById(m.Id())
if err != nil {
t.Fatalf("Failed to look up ABI method: %v", err)
}
b := fmt.Sprintf("%v", m2)
if a != b {
t.Errorf("Method %v (id %v) not 'findable' by id in ABI", name, common.ToHex(m.Id()))
}
}
// Also test empty
if _, err := abi.MethodById([]byte{0x00}); err == nil {
t.Errorf("Expected error, too short to decode data")
}
if _, err := abi.MethodById([]byte{}); err == nil {
t.Errorf("Expected error, too short to decode data")
}
if _, err := abi.MethodById(nil); err == nil {
t.Errorf("Expected error, nil is short to decode data")
}
}
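// Editor's sketch (not part of the diff): MethodById keys off the 4-byte
// selector, i.e. the first four bytes of Keccak256 over the canonical
// signature. For the "send" entry of the ABI above, a lookup could be
// reproduced by hand as:
//
//	id := crypto.Keccak256([]byte("send(uint256)"))[:4]
//	m, err := abi.MethodById(id) // m.Name == "send" on success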


@@ -1,285 +0,0 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"encoding/json"
"fmt"
"reflect"
"strings"
)
// Argument holds the name of the argument and the corresponding type.
// Types are used when packing and testing arguments.
type Argument struct {
Name string
Type Type
Indexed bool // indexed is only used by events
}
type Arguments []Argument
// UnmarshalJSON implements json.Unmarshaler interface
func (argument *Argument) UnmarshalJSON(data []byte) error {
var extarg struct {
Name string
Type string
Indexed bool
}
err := json.Unmarshal(data, &extarg)
if err != nil {
return fmt.Errorf("argument json err: %v", err)
}
argument.Type, err = NewType(extarg.Type)
if err != nil {
return err
}
argument.Name = extarg.Name
argument.Indexed = extarg.Indexed
return nil
}
// LengthNonIndexed returns the number of arguments when not counting 'indexed' ones. Only events
// can ever have 'indexed' arguments; it should always be false on method input/output arguments.
func (arguments Arguments) LengthNonIndexed() int {
out := 0
for _, arg := range arguments {
if !arg.Indexed {
out++
}
}
return out
}
// NonIndexed returns the arguments with indexed arguments filtered out
func (arguments Arguments) NonIndexed() Arguments {
var ret []Argument
for _, arg := range arguments {
if !arg.Indexed {
ret = append(ret, arg)
}
}
return ret
}
// isTuple returns true for non-atomic constructs, like (uint,uint) or uint[]
func (arguments Arguments) isTuple() bool {
return len(arguments) > 1
}
// Unpack performs the operation hexdata -> Go format
func (arguments Arguments) Unpack(v interface{}, data []byte) error {
// make sure the passed value is a pointer
if reflect.Ptr != reflect.ValueOf(v).Kind() {
return fmt.Errorf("abi: Unpack(non-pointer %T)", v)
}
marshalledValues, err := arguments.UnpackValues(data)
if err != nil {
return err
}
if arguments.isTuple() {
return arguments.unpackTuple(v, marshalledValues)
}
return arguments.unpackAtomic(v, marshalledValues)
}
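// Editor's sketch (hypothetical caller of Unpack above, assuming a Method
// whose Outputs field is an Arguments value): for a method with two outputs,
// isTuple is true and the tuple path fills a struct by field name; for a
// single output the atomic path assigns the value directly.
//
//	var out struct {
//		Balance *big.Int
//		Owner   common.Address
//	}
//	err := method.Outputs.Unpack(&out, returnData)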
func (arguments Arguments) unpackTuple(v interface{}, marshalledValues []interface{}) error {
var (
value = reflect.ValueOf(v).Elem()
typ = value.Type()
kind = value.Kind()
)
if err := requireUnpackKind(value, typ, kind, arguments); err != nil {
return err
}
// If the interface is a struct, get the abi->struct_field mapping
var abi2struct map[string]string
if kind == reflect.Struct {
var err error
abi2struct, err = mapAbiToStructFields(arguments, value)
if err != nil {
return err
}
}
for i, arg := range arguments.NonIndexed() {
reflectValue := reflect.ValueOf(marshalledValues[i])
switch kind {
case reflect.Struct:
if structField, ok := abi2struct[arg.Name]; ok {
if err := set(value.FieldByName(structField), reflectValue, arg); err != nil {
return err
}
}
case reflect.Slice, reflect.Array:
if value.Len() < i {
return fmt.Errorf("abi: insufficient number of arguments for unpack, want %d, got %d", len(arguments), value.Len())
}
v := value.Index(i)
if err := requireAssignable(v, reflectValue); err != nil {
return err
}
if err := set(v.Elem(), reflectValue, arg); err != nil {
return err
}
default:
return fmt.Errorf("abi:[2] cannot unmarshal tuple in to %v", typ)
}
}
return nil
}
// unpackAtomic unpacks ( hexdata -> go ) a single value
func (arguments Arguments) unpackAtomic(v interface{}, marshalledValues []interface{}) error {
if len(marshalledValues) != 1 {
return fmt.Errorf("abi: wrong length, expected single value, got %d", len(marshalledValues))
}
elem := reflect.ValueOf(v).Elem()
kind := elem.Kind()
reflectValue := reflect.ValueOf(marshalledValues[0])
var abi2struct map[string]string
if kind == reflect.Struct {
var err error
if abi2struct, err = mapAbiToStructFields(arguments, elem); err != nil {
return err
}
arg := arguments.NonIndexed()[0]
if structField, ok := abi2struct[arg.Name]; ok {
return set(elem.FieldByName(structField), reflectValue, arg)
}
return nil
}
return set(elem, reflectValue, arguments.NonIndexed()[0])
}
// getArraySize computes the full size of an array, counting nested arrays,
// which all contribute to the size used for unpacking.
func getArraySize(arr *Type) int {
size := arr.Size
// Arrays can be nested, with each element being the same size
arr = arr.Elem
for arr.T == ArrayTy {
// Keep multiplying by elem.Size while the elem is an array.
size *= arr.Size
arr = arr.Elem
}
// Now we have the full array size, including its children.
return size
}
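// Editor's example (traced through getArraySize above): for the nested
// static type [2][3]uint256 the size is 2*3 = 6, so UnpackValues below
// advances its virtual argument counter by 6 - 1 = 5 after decoding it.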
// UnpackValues can be used to unpack ABI-encoded hexdata according to the ABI-specification,
// without supplying a struct to unpack into. Instead, this method returns a list containing the
// values. An atomic argument will be a list with one element.
func (arguments Arguments) UnpackValues(data []byte) ([]interface{}, error) {
retval := make([]interface{}, 0, arguments.LengthNonIndexed())
virtualArgs := 0
for index, arg := range arguments.NonIndexed() {
marshalledValue, err := toGoType((index+virtualArgs)*32, arg.Type, data)
if arg.Type.T == ArrayTy {
// If we have a static array, like [3]uint256, these are coded as
// just like uint256,uint256,uint256.
// This means that we need to add two 'virtual' arguments when
// we count the index from now on.
//
// Array values nested multiple levels deep are also encoded inline:
// [2][3]uint256: uint256,uint256,uint256,uint256,uint256,uint256
//
// Calculate the full array size to get the correct offset for the next argument.
// Decrement it by 1, as the normal index increment is still applied.
virtualArgs += getArraySize(&arg.Type) - 1
}
if err != nil {
return nil, err
}
retval = append(retval, marshalledValue)
}
return retval, nil
}
// PackValues performs the operation Go format -> Hexdata
// It is the semantic opposite of UnpackValues
func (arguments Arguments) PackValues(args []interface{}) ([]byte, error) {
return arguments.Pack(args...)
}
// Pack performs the operation Go format -> Hexdata
func (arguments Arguments) Pack(args ...interface{}) ([]byte, error) {
// Make sure arguments match up and pack them
abiArgs := arguments
if len(args) != len(abiArgs) {
return nil, fmt.Errorf("argument count mismatch: %d for %d", len(args), len(abiArgs))
}
// variableInput is the output appended at the end of the packed
// output. This is used for string and bytes type inputs.
var variableInput []byte
// input offset is the bytes offset for packed output
inputOffset := 0
for _, abiArg := range abiArgs {
inputOffset += getDynamicTypeOffset(abiArg.Type)
}
var ret []byte
for i, a := range args {
input := abiArgs[i]
// pack the input
packed, err := input.Type.pack(reflect.ValueOf(a))
if err != nil {
return nil, err
}
// check for dynamic types
if isDynamicType(input.Type) {
// set the offset
ret = append(ret, packNum(reflect.ValueOf(inputOffset))...)
// calculate next offset
inputOffset += len(packed)
// append to variable input
variableInput = append(variableInput, packed...)
} else {
// append the packed value to the input
ret = append(ret, packed...)
}
}
// append the variable input at the end of the packed input
ret = append(ret, variableInput...)
return ret, nil
}
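// Editor's note (worked example of the head/tail layout produced by Pack,
// using hypothetical inputs): packing ("hello", uint256(1)) emits
//
//	word 0: offset of the string tail = 2*32 = 64
//	word 1: the uint256 value, left-padded
//	word 2: the string length (5)
//	word 3: "hello" right-padded to 32 bytes
//
// Static values sit inline in the head; dynamic values contribute an offset
// word to the head and their data to the tail.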
// capitalise makes the first character of a string upper case, also removing any
// prefixed underscores from the variable names.
func capitalise(input string) string {
for len(input) > 0 && input[0] == '_' {
input = input[1:]
}
if len(input) == 0 {
return ""
}
return strings.ToUpper(input[:1]) + input[1:]
}


@@ -1,350 +0,0 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package bind
import (
"context"
"errors"
"fmt"
"math/big"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/event"
)
// SignerFn is a signer function callback when a contract requires a method to
// sign the transaction before submission.
type SignerFn func(types.Signer, common.Address, *types.Transaction) (*types.Transaction, error)
// CallOpts is the collection of options to fine tune a contract call request.
type CallOpts struct {
Pending bool // Whether to operate on the pending state or the last known one
From common.Address // Optionally the sender address; otherwise the first account is used
Context context.Context // Network context to support cancellation and timeouts (nil = no timeout)
}
// TransactOpts is the collection of authorization data required to create a
// valid Ethereum transaction.
type TransactOpts struct {
From common.Address // Ethereum account to send the transaction from
Nonce *big.Int // Nonce to use for the transaction execution (nil = use pending state)
Signer SignerFn // Method to use for signing the transaction (mandatory)
Value *big.Int // Funds to transfer along with the transaction (nil = 0 = no funds)
GasPrice *big.Int // Gas price to use for the transaction execution (nil = gas price oracle)
GasLimit uint64 // Gas limit to set for the transaction execution (0 = estimate)
Context context.Context // Network context to support cancellation and timeouts (nil = no timeout)
}
// FilterOpts is the collection of options to fine tune filtering for events
// within a bound contract.
type FilterOpts struct {
Start uint64 // Start of the queried range
End *uint64 // End of the range (nil = latest)
Context context.Context // Network context to support cancellation and timeouts (nil = no timeout)
}
// WatchOpts is the collection of options to fine tune subscribing for events
// within a bound contract.
type WatchOpts struct {
Start *uint64 // Start of the queried range (nil = latest)
Context context.Context // Network context to support cancellation and timeouts (nil = no timeout)
}
// BoundContract is the base wrapper object that reflects a contract on the
// Ethereum network. It contains a collection of methods that are used by the
// higher level contract bindings to operate.
type BoundContract struct {
address common.Address // Deployment address of the contract on the Ethereum blockchain
abi abi.ABI // Reflect based ABI to access the correct Ethereum methods
caller ContractCaller // Read interface to interact with the blockchain
transactor ContractTransactor // Write interface to interact with the blockchain
filterer ContractFilterer // Event filtering to interact with the blockchain
}
// NewBoundContract creates a low level contract interface through which calls
// and transactions may be made.
func NewBoundContract(address common.Address, abi abi.ABI, caller ContractCaller, transactor ContractTransactor, filterer ContractFilterer) *BoundContract {
return &BoundContract{
address: address,
abi: abi,
caller: caller,
transactor: transactor,
filterer: filterer,
}
}
// DeployContract deploys a contract onto the Ethereum blockchain and binds the
// deployment address with a Go wrapper.
func DeployContract(opts *TransactOpts, abi abi.ABI, bytecode []byte, backend ContractBackend, params ...interface{}) (common.Address, *types.Transaction, *BoundContract, error) {
// Try to deploy the contract
c := NewBoundContract(common.Address{}, abi, backend, backend, backend)
input, err := c.abi.Pack("", params...)
if err != nil {
return common.Address{}, nil, nil, err
}
tx, err := c.transact(opts, nil, append(bytecode, input...))
if err != nil {
return common.Address{}, nil, nil, err
}
c.address = crypto.CreateAddress(opts.From, tx.Nonce())
return c.address, tx, c, nil
}
// Call invokes the (constant) contract method with params as input values and
// sets the output to result. The result type might be a single field for simple
// returns, a slice of interfaces for anonymous returns and a struct for named
// returns.
func (c *BoundContract) Call(opts *CallOpts, result interface{}, method string, params ...interface{}) error {
// Don't crash on a lazy user
if opts == nil {
opts = new(CallOpts)
}
// Pack the input, call and unpack the results
input, err := c.abi.Pack(method, params...)
if err != nil {
return err
}
var (
msg = ethereum.CallMsg{From: opts.From, To: &c.address, Data: input}
ctx = ensureContext(opts.Context)
code []byte
output []byte
)
if opts.Pending {
pb, ok := c.caller.(PendingContractCaller)
if !ok {
return ErrNoPendingState
}
output, err = pb.PendingCallContract(ctx, msg)
if err == nil && len(output) == 0 {
// Make sure we have a contract to operate on, and bail out otherwise.
if code, err = pb.PendingCodeAt(ctx, c.address); err != nil {
return err
} else if len(code) == 0 {
return ErrNoCode
}
}
} else {
output, err = c.caller.CallContract(ctx, msg, nil)
if err == nil && len(output) == 0 {
// Make sure we have a contract to operate on, and bail out otherwise.
if code, err = c.caller.CodeAt(ctx, c.address, nil); err != nil {
return err
} else if len(code) == 0 {
return ErrNoCode
}
}
}
if err != nil {
return err
}
return c.abi.Unpack(result, method, output)
}
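// Editor's usage sketch (hypothetical names; "parsedABI", "caller", "token"
// and "balanceOf" are not part of the diff): reading a constant method
// through the raw wrapper rather than a generated binding.
//
//	contract := NewBoundContract(token, parsedABI, caller, nil, nil)
//	out := new(*big.Int)
//	err := contract.Call(&CallOpts{}, out, "balanceOf", holder)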
// Transact invokes the (paid) contract method with params as input values.
func (c *BoundContract) Transact(opts *TransactOpts, method string, params ...interface{}) (*types.Transaction, error) {
// Pack up the parameters and invoke the contract
input, err := c.abi.Pack(method, params...)
if err != nil {
return nil, err
}
return c.transact(opts, &c.address, input)
}
// Transfer initiates a plain transaction to move funds to the contract, calling
// its default method if one is available.
func (c *BoundContract) Transfer(opts *TransactOpts) (*types.Transaction, error) {
return c.transact(opts, &c.address, nil)
}
// transact executes an actual transaction invocation, first deriving any missing
// authorization fields, and then scheduling the transaction for execution.
func (c *BoundContract) transact(opts *TransactOpts, contract *common.Address, input []byte) (*types.Transaction, error) {
var err error
// Ensure a valid value field and resolve the account nonce
value := opts.Value
if value == nil {
value = new(big.Int)
}
var nonce uint64
if opts.Nonce == nil {
nonce, err = c.transactor.PendingNonceAt(ensureContext(opts.Context), opts.From)
if err != nil {
return nil, fmt.Errorf("failed to retrieve account nonce: %v", err)
}
} else {
nonce = opts.Nonce.Uint64()
}
// Figure out the gas allowance and gas price values
gasPrice := opts.GasPrice
if gasPrice == nil {
gasPrice, err = c.transactor.SuggestGasPrice(ensureContext(opts.Context))
if err != nil {
return nil, fmt.Errorf("failed to suggest gas price: %v", err)
}
}
gasLimit := opts.GasLimit
if gasLimit == 0 {
// Gas estimation cannot succeed without code for method invocations
if contract != nil {
if code, err := c.transactor.PendingCodeAt(ensureContext(opts.Context), c.address); err != nil {
return nil, err
} else if len(code) == 0 {
return nil, ErrNoCode
}
}
// If the contract surely has code (or code is not needed), estimate the transaction
msg := ethereum.CallMsg{From: opts.From, To: contract, Value: value, Data: input}
gasLimit, err = c.transactor.EstimateGas(ensureContext(opts.Context), msg)
if err != nil {
return nil, fmt.Errorf("failed to estimate gas needed: %v", err)
}
}
// Create the transaction, sign it and schedule it for execution
var rawTx *types.Transaction
if contract == nil {
rawTx = types.NewContractCreation(nonce, value, gasLimit, gasPrice, input)
} else {
rawTx = types.NewTransaction(nonce, c.address, value, gasLimit, gasPrice, input)
}
if opts.Signer == nil {
return nil, errors.New("no signer to authorize the transaction with")
}
signedTx, err := opts.Signer(types.HomesteadSigner{}, opts.From, rawTx)
if err != nil {
return nil, err
}
if err := c.transactor.SendTransaction(ensureContext(opts.Context), signedTx); err != nil {
return nil, err
}
return signedTx, nil
}
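// Editor's note: per the resolution chain above, a caller only has to supply
// From and Signer -- Nonce falls back to PendingNonceAt, GasPrice to
// SuggestGasPrice and GasLimit to EstimateGas. A minimal sketch with
// hypothetical values:
//
//	opts := &TransactOpts{From: from, Signer: signerFn}
//	tx, err := boundContract.Transact(opts, "send", big.NewInt(1))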
// FilterLogs filters contract logs for past blocks, returning the necessary
// channels to construct a strongly typed bound iterator on top of them.
func (c *BoundContract) FilterLogs(opts *FilterOpts, name string, query ...[]interface{}) (chan types.Log, event.Subscription, error) {
// Don't crash on a lazy user
if opts == nil {
opts = new(FilterOpts)
}
// Append the event selector to the query parameters and construct the topic set
query = append([][]interface{}{{c.abi.Events[name].Id()}}, query...)
topics, err := makeTopics(query...)
if err != nil {
return nil, nil, err
}
// Start the background filtering
logs := make(chan types.Log, 128)
config := ethereum.FilterQuery{
Addresses: []common.Address{c.address},
Topics: topics,
FromBlock: new(big.Int).SetUint64(opts.Start),
}
if opts.End != nil {
config.ToBlock = new(big.Int).SetUint64(*opts.End)
}
/* TODO(karalabe): Replace the rest of the method below with this when supported
sub, err := c.filterer.SubscribeFilterLogs(ensureContext(opts.Context), config, logs)
*/
buff, err := c.filterer.FilterLogs(ensureContext(opts.Context), config)
if err != nil {
return nil, nil, err
}
sub, err := event.NewSubscription(func(quit <-chan struct{}) error {
for _, log := range buff {
select {
case logs <- log:
case <-quit:
return nil
}
}
return nil
}), nil
if err != nil {
return nil, nil, err
}
return logs, sub, nil
}
// WatchLogs subscribes to contract logs for future blocks, returning a
// subscription object that can be used to tear down the watcher.
func (c *BoundContract) WatchLogs(opts *WatchOpts, name string, query ...[]interface{}) (chan types.Log, event.Subscription, error) {
// Don't crash on a lazy user
if opts == nil {
opts = new(WatchOpts)
}
// Append the event selector to the query parameters and construct the topic set
query = append([][]interface{}{{c.abi.Events[name].Id()}}, query...)
topics, err := makeTopics(query...)
if err != nil {
return nil, nil, err
}
// Start the background filtering
logs := make(chan types.Log, 128)
config := ethereum.FilterQuery{
Addresses: []common.Address{c.address},
Topics: topics,
}
if opts.Start != nil {
config.FromBlock = new(big.Int).SetUint64(*opts.Start)
}
sub, err := c.filterer.SubscribeFilterLogs(ensureContext(opts.Context), config, logs)
if err != nil {
return nil, nil, err
}
return logs, sub, nil
}
// UnpackLog unpacks a retrieved log into the provided output structure.
func (c *BoundContract) UnpackLog(out interface{}, event string, log types.Log) error {
if len(log.Data) > 0 {
if err := c.abi.Unpack(out, event, log.Data); err != nil {
return err
}
}
var indexed abi.Arguments
for _, arg := range c.abi.Events[event].Inputs {
if arg.Indexed {
indexed = append(indexed, arg)
}
}
return parseTopics(out, indexed, log.Topics[1:])
}
// ensureContext is a helper method to ensure a context is not nil, even if the
// user specified it as such.
func ensureContext(ctx context.Context) context.Context {
if ctx == nil {
return context.TODO()
}
return ctx
}


@@ -1,455 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package bind generates Ethereum contract Go bindings.
//
// Detailed usage document and tutorial available on the go-ethereum Wiki page:
// https://github.com/ethereum/go-ethereum/wiki/Native-DApps:-Go-bindings-to-Ethereum-contracts
package bind
import (
"bytes"
"fmt"
"go/format"
"regexp"
"strings"
"text/template"
"unicode"
"github.com/ethereum/go-ethereum/accounts/abi"
)
// Lang is a target programming language selector to generate bindings for.
type Lang int
const (
LangGo Lang = iota
LangJava
LangObjC
)
// Bind generates a Go wrapper around a contract ABI. This wrapper isn't meant
// to be used as is in client code, but rather as an intermediate struct which
// enforces compile time type safety and naming conventions, as opposed to
// having to manually maintain hard coded strings that break at runtime.
func Bind(types []string, abis []string, bytecodes []string, pkg string, lang Lang) (string, error) {
// Process each individual contract requested binding
contracts := make(map[string]*tmplContract)
for i := 0; i < len(types); i++ {
// Parse the actual ABI to generate the binding for
evmABI, err := abi.JSON(strings.NewReader(abis[i]))
if err != nil {
return "", err
}
// Strip any whitespace from the JSON ABI
strippedABI := strings.Map(func(r rune) rune {
if unicode.IsSpace(r) {
return -1
}
return r
}, abis[i])
// Extract the call and transact methods; events; and sort them alphabetically
var (
calls = make(map[string]*tmplMethod)
transacts = make(map[string]*tmplMethod)
events = make(map[string]*tmplEvent)
)
for _, original := range evmABI.Methods {
// Normalize the method for capital cases and non-anonymous inputs/outputs
normalized := original
normalized.Name = methodNormalizer[lang](original.Name)
normalized.Inputs = make([]abi.Argument, len(original.Inputs))
copy(normalized.Inputs, original.Inputs)
for j, input := range normalized.Inputs {
if input.Name == "" {
normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j)
}
}
normalized.Outputs = make([]abi.Argument, len(original.Outputs))
copy(normalized.Outputs, original.Outputs)
for j, output := range normalized.Outputs {
if output.Name != "" {
normalized.Outputs[j].Name = capitalise(output.Name)
}
}
// Append the methods to the call or transact lists
if original.Const {
calls[original.Name] = &tmplMethod{Original: original, Normalized: normalized, Structured: structured(original.Outputs)}
} else {
transacts[original.Name] = &tmplMethod{Original: original, Normalized: normalized, Structured: structured(original.Outputs)}
}
}
for _, original := range evmABI.Events {
// Skip anonymous events as they don't support explicit filtering
if original.Anonymous {
continue
}
// Normalize the event for capital cases and non-anonymous outputs
normalized := original
normalized.Name = methodNormalizer[lang](original.Name)
normalized.Inputs = make([]abi.Argument, len(original.Inputs))
copy(normalized.Inputs, original.Inputs)
for j, input := range normalized.Inputs {
// Indexed fields are input, non-indexed ones are outputs
if input.Indexed {
if input.Name == "" {
normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j)
}
}
}
// Append the event to the accumulator list
events[original.Name] = &tmplEvent{Original: original, Normalized: normalized}
}
contracts[types[i]] = &tmplContract{
Type: capitalise(types[i]),
InputABI: strings.Replace(strippedABI, "\"", "\\\"", -1),
InputBin: strings.TrimSpace(bytecodes[i]),
Constructor: evmABI.Constructor,
Calls: calls,
Transacts: transacts,
Events: events,
}
}
// Generate the contract template data content and render it
data := &tmplData{
Package: pkg,
Contracts: contracts,
}
buffer := new(bytes.Buffer)
funcs := map[string]interface{}{
"bindtype": bindType[lang],
"bindtopictype": bindTopicType[lang],
"namedtype": namedType[lang],
"capitalise": capitalise,
"decapitalise": decapitalise,
}
tmpl := template.Must(template.New("").Funcs(funcs).Parse(tmplSource[lang]))
if err := tmpl.Execute(buffer, data); err != nil {
return "", err
}
// For Go bindings pass the code through gofmt to clean it up
if lang == LangGo {
code, err := format.Source(buffer.Bytes())
if err != nil {
return "", fmt.Errorf("%v\n%s", err, buffer)
}
return string(code), nil
}
// For all others just return as is for now
return buffer.String(), nil
}
// bindType is a set of type binders that convert Solidity types to some supported
// programming language types.
var bindType = map[Lang]func(kind abi.Type) string{
LangGo: bindTypeGo,
LangJava: bindTypeJava,
}
// Helper function for the binding generators.
// It reads the unmatched characters after the inner type-match,
// (since the inner type is a prefix of the total type declaration),
// looks for valid arrays (possibly a dynamic one) wrapping the inner type,
// and returns the sizes of these arrays.
//
// Returned array sizes are in the same order as Solidity signatures; inner array size first.
// Array sizes may also be "", indicating a dynamic array.
func wrapArray(stringKind string, innerLen int, innerMapping string) (string, []string) {
remainder := stringKind[innerLen:]
//find all the sizes
matches := regexp.MustCompile(`\[(\d*)\]`).FindAllStringSubmatch(remainder, -1)
parts := make([]string, 0, len(matches))
for _, match := range matches {
//get group 1 from the regex match
parts = append(parts, match[1])
}
return innerMapping, parts
}
// Translates the array sizes to a Go declaration of a (nested) array of the inner type.
// Simply returns the inner type if arraySizes is empty.
func arrayBindingGo(inner string, arraySizes []string) string {
out := ""
//prepend all array sizes, from outer (end arraySizes) to inner (start arraySizes)
for i := len(arraySizes) - 1; i >= 0; i-- {
out += "[" + arraySizes[i] + "]"
}
out += inner
return out
}
// bindTypeGo converts a Solidity type to a Go one. Since there is no clear mapping
// from all Solidity types to Go ones (e.g. uint17), those that cannot be exactly
// mapped will use an upscaled type (e.g. *big.Int).
func bindTypeGo(kind abi.Type) string {
stringKind := kind.String()
innerLen, innerMapping := bindUnnestedTypeGo(stringKind)
return arrayBindingGo(wrapArray(stringKind, innerLen, innerMapping))
}
// The inner function of bindTypeGo, this finds the inner type of stringKind.
// (Or just the type itself if it is not an array or slice)
// The length of the matched part is returned, with the translated type.
func bindUnnestedTypeGo(stringKind string) (int, string) {
switch {
case strings.HasPrefix(stringKind, "address"):
return len("address"), "common.Address"
case strings.HasPrefix(stringKind, "bytes"):
parts := regexp.MustCompile(`bytes([0-9]*)`).FindStringSubmatch(stringKind)
return len(parts[0]), fmt.Sprintf("[%s]byte", parts[1])
case strings.HasPrefix(stringKind, "int") || strings.HasPrefix(stringKind, "uint"):
parts := regexp.MustCompile(`(u)?int([0-9]*)`).FindStringSubmatch(stringKind)
switch parts[2] {
case "8", "16", "32", "64":
return len(parts[0]), fmt.Sprintf("%sint%s", parts[1], parts[2])
}
return len(parts[0]), "*big.Int"
case strings.HasPrefix(stringKind, "bool"):
return len("bool"), "bool"
case strings.HasPrefix(stringKind, "string"):
return len("string"), "string"
default:
return len(stringKind), stringKind
}
}
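// Editor's example (traced through the helpers above): for the Solidity type
// "uint256[3][]", bindUnnestedTypeGo maps the inner "uint256" to "*big.Int",
// wrapArray extracts the sizes ["3", ""], and arrayBindingGo prepends them
// outermost-first, yielding "[][3]*big.Int".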
// Translates the array sizes to a Java declaration of a (nested) array of the inner type.
// Simply returns the inner type if arraySizes is empty.
func arrayBindingJava(inner string, arraySizes []string) string {
// Java array type declarations do not include the length.
return inner + strings.Repeat("[]", len(arraySizes))
}
// bindTypeJava converts a Solidity type to a Java one. Since there is no clear mapping
// from all Solidity types to Java ones (e.g. uint17), those that cannot be exactly
// mapped will use an upscaled type (e.g. BigDecimal).
func bindTypeJava(kind abi.Type) string {
stringKind := kind.String()
innerLen, innerMapping := bindUnnestedTypeJava(stringKind)
return arrayBindingJava(wrapArray(stringKind, innerLen, innerMapping))
}
// The inner function of bindTypeJava, this finds the inner type of stringKind.
// (Or just the type itself if it is not an array or slice)
// The length of the matched part is returned, with the translated type.
func bindUnnestedTypeJava(stringKind string) (int, string) {
switch {
case strings.HasPrefix(stringKind, "address"):
parts := regexp.MustCompile(`address(\[[0-9]*\])?`).FindStringSubmatch(stringKind)
if len(parts) != 2 {
return len(stringKind), stringKind
}
if parts[1] == "" {
return len("address"), "Address"
}
return len(parts[0]), "Addresses"
case strings.HasPrefix(stringKind, "bytes"):
parts := regexp.MustCompile(`bytes([0-9]*)`).FindStringSubmatch(stringKind)
if len(parts) != 2 {
return len(stringKind), stringKind
}
return len(parts[0]), "byte[]"
case strings.HasPrefix(stringKind, "int") || strings.HasPrefix(stringKind, "uint"):
// Note that uint and int (without digits) are also matched;
// these are size 256 and will translate to BigInt (the default).
parts := regexp.MustCompile(`(u)?int([0-9]*)`).FindStringSubmatch(stringKind)
if len(parts) != 3 {
return len(stringKind), stringKind
}
namedSize := map[string]string{
"8": "byte",
"16": "short",
"32": "int",
"64": "long",
}[parts[2]]
//default to BigInt
if namedSize == "" {
namedSize = "BigInt"
}
return len(parts[0]), namedSize
case strings.HasPrefix(stringKind, "bool"):
return len("bool"), "boolean"
case strings.HasPrefix(stringKind, "string"):
return len("string"), "String"
default:
return len(stringKind), stringKind
}
}
// bindTopicType is a set of type binders that convert Solidity types to some
// supported programming language topic types.
var bindTopicType = map[Lang]func(kind abi.Type) string{
LangGo: bindTopicTypeGo,
LangJava: bindTopicTypeJava,
}
// bindTopicTypeGo converts a Solidity topic type to a Go one. It is almost the same
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeGo(kind abi.Type) string {
bound := bindTypeGo(kind)
if bound == "string" || bound == "[]byte" {
bound = "common.Hash"
}
return bound
}
// bindTopicTypeJava converts a Solidity topic type to a Java one. It is almost the same
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeJava(kind abi.Type) string {
bound := bindTypeJava(kind)
if bound == "String" || bound == "Bytes" {
bound = "Hash"
}
return bound
}
// namedType is a set of functions that transform language specific types to
// named versions that may be used inside method names.
var namedType = map[Lang]func(string, abi.Type) string{
LangGo: func(string, abi.Type) string { panic("this shouldn't be needed") },
LangJava: namedTypeJava,
}
// namedTypeJava converts some primitive data types to named variants that can
// be used as parts of method names.
func namedTypeJava(javaKind string, solKind abi.Type) string {
switch javaKind {
case "byte[]":
return "Binary"
case "byte[][]":
return "Binaries"
case "string":
return "String"
case "string[]":
return "Strings"
case "boolean":
return "Bool"
case "boolean[]":
return "Bools"
case "BigInt[]":
return "BigInts"
default:
parts := regexp.MustCompile(`(u)?int([0-9]*)(\[[0-9]*\])?`).FindStringSubmatch(solKind.String())
if len(parts) != 4 {
return javaKind
}
switch parts[2] {
case "8", "16", "32", "64":
if parts[3] == "" {
return capitalise(fmt.Sprintf("%sint%s", parts[1], parts[2]))
}
return capitalise(fmt.Sprintf("%sint%ss", parts[1], parts[2]))
default:
return javaKind
}
}
}
// methodNormalizer is a name transformer that modifies Solidity method names to
// conform to target language naming conventions.
var methodNormalizer = map[Lang]func(string) string{
LangGo: capitalise,
LangJava: decapitalise,
}
// capitalise makes a camel-case string which starts with an upper case character.
func capitalise(input string) string {
for len(input) > 0 && input[0] == '_' {
input = input[1:]
}
if len(input) == 0 {
return ""
}
return toCamelCase(strings.ToUpper(input[:1]) + input[1:])
}
// decapitalise makes a camel-case string which starts with a lower case character.
func decapitalise(input string) string {
for len(input) > 0 && input[0] == '_' {
input = input[1:]
}
if len(input) == 0 {
return ""
}
return toCamelCase(strings.ToLower(input[:1]) + input[1:])
}
// toCamelCase converts an under-score string to a camel-case string
func toCamelCase(input string) string {
toupper := false
result := ""
for k, v := range input {
switch {
case k == 0:
result = strings.ToUpper(string(input[0]))
case toupper:
result += strings.ToUpper(string(v))
toupper = false
case v == '_':
toupper = true
default:
result += string(v)
}
}
return result
}
// structured checks whether a list of ABI data types has enough information to
// operate through a proper Go struct or if flat returns are needed.
func structured(args abi.Arguments) bool {
if len(args) < 2 {
return false
}
exists := make(map[string]bool)
for _, out := range args {
// If the name is anonymous, we can't organize into a struct
if out.Name == "" {
return false
}
// If the field name is empty when normalized or collides (var, Var, _var, _Var),
// we can't organize into a struct
field := capitalise(out.Name)
if field == "" || exists[field] {
return false
}
exists[field] = true
}
return true
}
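// Editor's example (traced through capitalise/toCamelCase above):
// capitalise("_under_score") strips the prefixed underscore, upper-cases the
// first rune and camel-cases the rest, yielding "UnderScore"; structured
// would therefore reject outputs named "_value" and "value" together, since
// both normalize to "Value".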

File diff suppressed because one or more lines are too long


@@ -1,94 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package bind_test
import (
"context"
"math/big"
"testing"
"time"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
)
var testKey, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
var waitDeployedTests = map[string]struct {
code string
gas uint64
wantAddress common.Address
wantErr error
}{
"successful deploy": {
code: `6060604052600a8060106000396000f360606040526008565b00`,
gas: 3000000,
wantAddress: common.HexToAddress("0x3a220f351252089d385b29beca14e27f204c296a"),
},
"empty code": {
code: ``,
gas: 300000,
wantErr: bind.ErrNoCodeAfterDeploy,
wantAddress: common.HexToAddress("0x3a220f351252089d385b29beca14e27f204c296a"),
},
}
func TestWaitDeployed(t *testing.T) {
for name, test := range waitDeployedTests {
backend := backends.NewSimulatedBackend(
core.GenesisAlloc{
crypto.PubkeyToAddress(testKey.PublicKey): {Balance: big.NewInt(10000000000)},
}, 10000000,
)
// Create the transaction.
tx := types.NewContractCreation(0, big.NewInt(0), test.gas, big.NewInt(1), common.FromHex(test.code))
tx, _ = types.SignTx(tx, types.HomesteadSigner{}, testKey)
// Wait for it to get mined in the background.
var (
err error
address common.Address
mined = make(chan struct{})
ctx = context.Background()
)
go func() {
address, err = bind.WaitDeployed(ctx, backend, tx)
close(mined)
}()
// Send and mine the transaction.
backend.SendTransaction(ctx, tx)
backend.Commit()
select {
case <-mined:
if err != test.wantErr {
t.Errorf("test %q: error mismatch: got %q, want %q", name, err, test.wantErr)
}
if address != test.wantAddress {
t.Errorf("test %q: unexpected contract address %s", name, address.Hex())
}
case <-time.After(2 * time.Second):
t.Errorf("test %q: timeout", name)
}
}
}


@@ -1,57 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"fmt"
"strings"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
// Event is an event potentially triggered by the EVM's LOG mechanism. The Event
// holds type information (inputs) about the yielded output. Anonymous events
// don't get the canonical signature representation as the first LOG topic.
type Event struct {
Name string
Anonymous bool
Inputs Arguments
}
func (e Event) String() string {
inputs := make([]string, len(e.Inputs))
for i, input := range e.Inputs {
inputs[i] = fmt.Sprintf("%v %v", input.Name, input.Type)
if input.Indexed {
inputs[i] = fmt.Sprintf("%v indexed %v", input.Name, input.Type)
}
}
return fmt.Sprintf("e %v(%v)", e.Name, strings.Join(inputs, ", "))
}
// Id returns the canonical representation of the event's signature used by the
// abi definition to identify event names and types.
func (e Event) Id() common.Hash {
types := make([]string, len(e.Inputs))
i := 0
for _, input := range e.Inputs {
types[i] = input.Type.String()
i++
}
return common.BytesToHash(crypto.Keccak256([]byte(fmt.Sprintf("%v(%v)", e.Name, strings.Join(types, ",")))))
}
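// Editor's example (derived from Id above): for
// `event Transfer(address indexed from, address indexed to, uint256 value)`
// the canonical signature is "Transfer(address,address,uint256)", and its
// Keccak256 hash is what appears as topics[0] of every non-anonymous log.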


@@ -1,395 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"bytes"
"encoding/hex"
"encoding/json"
"math/big"
"reflect"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var jsonEventTransfer = []byte(`{
"anonymous": false,
"inputs": [
{
"indexed": true, "name": "from", "type": "address"
}, {
"indexed": true, "name": "to", "type": "address"
}, {
"indexed": false, "name": "value", "type": "uint256"
}],
"name": "Transfer",
"type": "event"
}`)
var jsonEventPledge = []byte(`{
"anonymous": false,
"inputs": [{
"indexed": false, "name": "who", "type": "address"
}, {
"indexed": false, "name": "wad", "type": "uint128"
}, {
"indexed": false, "name": "currency", "type": "bytes3"
}],
"name": "Pledge",
"type": "event"
}`)
var jsonEventMixedCase = []byte(`{
"anonymous": false,
"inputs": [{
"indexed": false, "name": "value", "type": "uint256"
}, {
"indexed": false, "name": "_value", "type": "uint256"
}, {
"indexed": false, "name": "Value", "type": "uint256"
}],
"name": "MixedCase",
"type": "event"
}`)
// 1000000
var transferData1 = "00000000000000000000000000000000000000000000000000000000000f4240"
// "0x00Ce0d46d924CC8437c806721496599FC3FFA268", 2218516807680, "usd"
var pledgeData1 = "00000000000000000000000000ce0d46d924cc8437c806721496599fc3ffa2680000000000000000000000000000000000000000000000000000020489e800007573640000000000000000000000000000000000000000000000000000000000"
// 1000000,2218516807680,1000001
var mixedCaseData1 = "00000000000000000000000000000000000000000000000000000000000f42400000000000000000000000000000000000000000000000000000020489e8000000000000000000000000000000000000000000000000000000000000000f4241"
func TestEventId(t *testing.T) {
var table = []struct {
definition string
expectations map[string]common.Hash
}{
{
definition: `[
{ "type" : "event", "name" : "balance", "inputs": [{ "name" : "in", "type": "uint256" }] },
{ "type" : "event", "name" : "check", "inputs": [{ "name" : "t", "type": "address" }, { "name": "b", "type": "uint256" }] }
]`,
expectations: map[string]common.Hash{
"balance": crypto.Keccak256Hash([]byte("balance(uint256)")),
"check": crypto.Keccak256Hash([]byte("check(address,uint256)")),
},
},
}
for _, test := range table {
abi, err := JSON(strings.NewReader(test.definition))
if err != nil {
t.Fatal(err)
}
for name, event := range abi.Events {
if event.Id() != test.expectations[name] {
t.Errorf("expected id to be %x, got %x", test.expectations[name], event.Id())
}
}
}
}
// TestEventMultiValueWithArrayUnpack verifies that array fields are counted correctly after parsing an array.
func TestEventMultiValueWithArrayUnpack(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": false, "name":"value1", "type":"uint8[2]"},{"indexed": false, "name":"value2", "type":"uint8"}]}]`
type testStruct struct {
Value1 [2]uint8
Value2 uint8
}
abi, err := JSON(strings.NewReader(definition))
require.NoError(t, err)
var b bytes.Buffer
var i uint8 = 1
for ; i <= 3; i++ {
b.Write(packNum(reflect.ValueOf(i)))
}
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.Equal(t, [2]uint8{1, 2}, rst.Value1)
require.Equal(t, uint8(3), rst.Value2)
}
func TestEventTupleUnpack(t *testing.T) {
type EventTransfer struct {
Value *big.Int
}
type EventTransferWithTag struct {
// this is valid because `value` is not exported,
// so value is only unmarshalled into `Value1`.
value *big.Int
Value1 *big.Int `abi:"value"`
}
type BadEventTransferWithSameFieldAndTag struct {
Value *big.Int
Value1 *big.Int `abi:"value"`
}
type BadEventTransferWithDuplicatedTag struct {
Value1 *big.Int `abi:"value"`
Value2 *big.Int `abi:"value"`
}
type BadEventTransferWithEmptyTag struct {
Value *big.Int `abi:""`
}
type EventPledge struct {
Who common.Address
Wad *big.Int
Currency [3]byte
}
type BadEventPledge struct {
Who string
Wad int
Currency [3]byte
}
type EventMixedCase struct {
Value1 *big.Int `abi:"value"`
Value2 *big.Int `abi:"_value"`
Value3 *big.Int `abi:"Value"`
}
bigint := new(big.Int)
bigintExpected := big.NewInt(1000000)
bigintExpected2 := big.NewInt(2218516807680)
bigintExpected3 := big.NewInt(1000001)
addr := common.HexToAddress("0x00Ce0d46d924CC8437c806721496599FC3FFA268")
var testCases = []struct {
data string
dest interface{}
expected interface{}
jsonLog []byte
error string
name string
}{{
transferData1,
&EventTransfer{},
&EventTransfer{Value: bigintExpected},
jsonEventTransfer,
"",
"Can unpack ERC20 Transfer event into structure",
}, {
transferData1,
&[]interface{}{&bigint},
&[]interface{}{&bigintExpected},
jsonEventTransfer,
"",
"Can unpack ERC20 Transfer event into slice",
}, {
transferData1,
&EventTransferWithTag{},
&EventTransferWithTag{Value1: bigintExpected},
jsonEventTransfer,
"",
"Can unpack ERC20 Transfer event into structure with abi: tag",
}, {
transferData1,
&BadEventTransferWithDuplicatedTag{},
&BadEventTransferWithDuplicatedTag{},
jsonEventTransfer,
"struct: abi tag in 'Value2' already mapped",
"Can not unpack ERC20 Transfer event with duplicated abi tag",
}, {
transferData1,
&BadEventTransferWithSameFieldAndTag{},
&BadEventTransferWithSameFieldAndTag{},
jsonEventTransfer,
"abi: multiple variables maps to the same abi field 'value'",
"Can not unpack ERC20 Transfer event with a field and a tag mapping to the same abi variable",
}, {
transferData1,
&BadEventTransferWithEmptyTag{},
&BadEventTransferWithEmptyTag{},
jsonEventTransfer,
"struct: abi tag in 'Value' is empty",
"Can not unpack ERC20 Transfer event with an empty tag",
}, {
pledgeData1,
&EventPledge{},
&EventPledge{
addr,
bigintExpected2,
[3]byte{'u', 's', 'd'}},
jsonEventPledge,
"",
"Can unpack Pledge event into structure",
}, {
pledgeData1,
&[]interface{}{&common.Address{}, &bigint, &[3]byte{}},
&[]interface{}{
&addr,
&bigintExpected2,
&[3]byte{'u', 's', 'd'}},
jsonEventPledge,
"",
"Can unpack Pledge event into slice",
}, {
pledgeData1,
&[3]interface{}{&common.Address{}, &bigint, &[3]byte{}},
&[3]interface{}{
&addr,
&bigintExpected2,
&[3]byte{'u', 's', 'd'}},
jsonEventPledge,
"",
"Can unpack Pledge event into an array",
}, {
pledgeData1,
&[]interface{}{new(int), 0, 0},
&[]interface{}{},
jsonEventPledge,
"abi: cannot unmarshal common.Address in to int",
"Can not unpack Pledge event into slice with wrong types",
}, {
pledgeData1,
&BadEventPledge{},
&BadEventPledge{},
jsonEventPledge,
"abi: cannot unmarshal common.Address in to string",
"Can not unpack Pledge event into struct with wrong filed types",
}, {
pledgeData1,
&[]interface{}{common.Address{}, new(big.Int)},
&[]interface{}{},
jsonEventPledge,
"abi: insufficient number of elements in the list/array for unpack, want 3, got 2",
"Can not unpack Pledge event into too short slice",
}, {
pledgeData1,
new(map[string]interface{}),
&[]interface{}{},
jsonEventPledge,
"abi: cannot unmarshal tuple into map[string]interface {}",
"Can not unpack Pledge event into map",
}, {
mixedCaseData1,
&EventMixedCase{},
&EventMixedCase{Value1: bigintExpected, Value2: bigintExpected2, Value3: bigintExpected3},
jsonEventMixedCase,
"",
"Can unpack abi variables with mixed case",
}}
for _, tc := range testCases {
assert := assert.New(t)
tc := tc
t.Run(tc.name, func(t *testing.T) {
err := unpackTestEventData(tc.dest, tc.data, tc.jsonLog, assert)
if tc.error == "" {
assert.Nil(err, "Should be able to unpack event data.")
assert.Equal(tc.expected, tc.dest, tc.name)
} else {
assert.EqualError(err, tc.error, tc.name)
}
})
}
}
func unpackTestEventData(dest interface{}, hexData string, jsonEvent []byte, assert *assert.Assertions) error {
data, err := hex.DecodeString(hexData)
assert.NoError(err, "Hex data should be a correct hex-string")
var e Event
assert.NoError(json.Unmarshal(jsonEvent, &e), "Should be able to unmarshal event ABI")
a := ABI{Events: map[string]Event{"e": e}}
return a.Unpack(dest, "e", data)
}
/*
Taken from
https://github.com/ethereum/go-ethereum/pull/15568
*/
type testResult struct {
Values [2]*big.Int
Value1 *big.Int
Value2 *big.Int
}
type testCase struct {
definition string
want testResult
}
func (tc testCase) encoded(intType, arrayType Type) []byte {
var b bytes.Buffer
if tc.want.Value1 != nil {
val, _ := intType.pack(reflect.ValueOf(tc.want.Value1))
b.Write(val)
}
if !reflect.DeepEqual(tc.want.Values, [2]*big.Int{nil, nil}) {
val, _ := arrayType.pack(reflect.ValueOf(tc.want.Values))
b.Write(val)
}
if tc.want.Value2 != nil {
val, _ := intType.pack(reflect.ValueOf(tc.want.Value2))
b.Write(val)
}
return b.Bytes()
}
// TestEventUnpackIndexed verifies that indexed fields are skipped by the event decoder.
func TestEventUnpackIndexed(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": true, "name":"value1", "type":"uint8"},{"indexed": false, "name":"value2", "type":"uint8"}]}]`
type testStruct struct {
Value1 uint8
Value2 uint8
}
abi, err := JSON(strings.NewReader(definition))
require.NoError(t, err)
var b bytes.Buffer
b.Write(packNum(reflect.ValueOf(uint8(8))))
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.Equal(t, uint8(0), rst.Value1)
require.Equal(t, uint8(8), rst.Value2)
}
// TestEventIndexedWithArrayUnpack verifies that the decoder does not overflow when a static array is an indexed input.
func TestEventIndexedWithArrayUnpack(t *testing.T) {
definition := `[{"name": "test", "type": "event", "inputs": [{"indexed": true, "name":"value1", "type":"uint8[2]"},{"indexed": false, "name":"value2", "type":"string"}]}]`
type testStruct struct {
Value1 [2]uint8
Value2 string
}
abi, err := JSON(strings.NewReader(definition))
require.NoError(t, err)
var b bytes.Buffer
stringOut := "abc"
// offset of the string data: number of encoded head fields * 32
b.Write(packNum(reflect.ValueOf(32)))
b.Write(packNum(reflect.ValueOf(len(stringOut))))
b.Write(common.RightPadBytes([]byte(stringOut), 32))
var rst testStruct
require.NoError(t, abi.Unpack(&rst, "test", b.Bytes()))
require.Equal(t, [2]uint8{0, 0}, rst.Value1)
require.Equal(t, stringOut, rst.Value2)
}


@@ -1,33 +0,0 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"bytes"
"math/big"
"testing"
)
func TestNumberTypes(t *testing.T) {
ubytes := make([]byte, 32)
ubytes[31] = 1
unsigned := U256(big.NewInt(1))
if !bytes.Equal(unsigned, ubytes) {
t.Errorf("expected %x got %x", ubytes, unsigned)
}
}


@@ -1,503 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"bytes"
"math"
"math/big"
"reflect"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common"
)
func TestPack(t *testing.T) {
for i, test := range []struct {
typ string
input interface{}
output []byte
}{
{
"uint8",
uint8(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint8[]",
[]uint8{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint16",
uint16(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint16[]",
[]uint16{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint32",
uint32(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint32[]",
[]uint32{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint64",
uint64(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint64[]",
[]uint64{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint256",
big.NewInt(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"uint256[]",
[]*big.Int{big.NewInt(1), big.NewInt(2)},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int8",
int8(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int8[]",
[]int8{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int16",
int16(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int16[]",
[]int16{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int32",
int32(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int32[]",
[]int32{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int64",
int64(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int64[]",
[]int64{1, 2},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int256",
big.NewInt(2),
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"),
},
{
"int256[]",
[]*big.Int{big.NewInt(1), big.NewInt(2)},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002"),
},
{
"bytes1",
[1]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes2",
[2]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes3",
[3]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes4",
[4]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes5",
[5]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes6",
[6]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes7",
[7]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes8",
[8]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes9",
[9]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes10",
[10]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes11",
[11]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes12",
[12]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes13",
[13]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes14",
[14]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes15",
[15]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes16",
[16]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes17",
[17]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes18",
[18]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes19",
[19]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes20",
[20]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes21",
[21]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes22",
[22]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes23",
[23]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes24",
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes25",
[25]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes26",
[26]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes27",
[27]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes28",
[28]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes29",
[29]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes30",
[30]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes31",
[31]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"bytes32",
[32]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"uint32[2][3][4]",
[4][3][2]uint32{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}, {{13, 14}, {15, 16}, {17, 18}}, {{19, 20}, {21, 22}, {23, 24}}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000050000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000700000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000009000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000b000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000000d000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000000f000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000110000000000000000000000000000000000000000000000000000000000000012000000000000000000000000000000000000000000000000000000000000001300000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000015000000000000000000000000000000000000000000000000000000000000001600000000000000000000000000000000000000000000000000000000000000170000000000000000000000000000000000000000000000000000000000000018"),
},
{
"address[]",
[]common.Address{{1}, {2}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000200000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000"),
},
{
"bytes32[]",
[]common.Hash{{1}, {2}},
common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000201000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000"),
},
{
"function",
[24]byte{1},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
"string",
"foobar",
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000006666f6f6261720000000000000000000000000000000000000000000000000000"),
},
{
"string[]",
[]string{"hello", "foobar"},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset 128 to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000"), // str[1]
},
{
"string[2]",
[]string{"hello", "foobar"},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // offset to i = 0
"0000000000000000000000000000000000000000000000000000000000000080" + // offset to i = 1
"0000000000000000000000000000000000000000000000000000000000000005" + // len(str[0]) = 5
"68656c6c6f000000000000000000000000000000000000000000000000000000" + // str[0]
"0000000000000000000000000000000000000000000000000000000000000006" + // len(str[1]) = 6
"666f6f6261720000000000000000000000000000000000000000000000000000"), // str[1]
},
{
"bytes32[][]",
[][]common.Hash{{{1}, {2}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002" + // len(array) = 2
"0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
{
"bytes32[][2]",
[][]common.Hash{{{1}, {2}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040" + // offset 64 to i = 0
"00000000000000000000000000000000000000000000000000000000000000a0" + // offset 160 to i = 1
"0000000000000000000000000000000000000000000000000000000000000002" + // len(array[0]) = 2
"0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0000000000000000000000000000000000000000000000000000000000000003" + // len(array[1]) = 3
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
{
"bytes32[3][2]",
[][]common.Hash{{{1}, {2}, {3}}, {{3}, {4}, {5}}},
common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000" + // array[0][0]
"0200000000000000000000000000000000000000000000000000000000000000" + // array[0][1]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[0][2]
"0300000000000000000000000000000000000000000000000000000000000000" + // array[1][0]
"0400000000000000000000000000000000000000000000000000000000000000" + // array[1][1]
"0500000000000000000000000000000000000000000000000000000000000000"), // array[1][2]
},
} {
typ, err := NewType(test.typ)
if err != nil {
t.Fatalf("%v failed. Unexpected parse error: %v", i, err)
}
output, err := typ.pack(reflect.ValueOf(test.input))
if err != nil {
t.Fatalf("%v failed. Unexpected pack error: %v", i, err)
}
if !bytes.Equal(output, test.output) {
t.Errorf("input %d for typ: %v failed. Expected bytes: '%x' Got: '%x'", i, typ.String(), test.output, output)
}
}
}
func TestMethodPack(t *testing.T) {
abi, err := JSON(strings.NewReader(jsondata2))
if err != nil {
t.Fatal(err)
}
sig := abi.Methods["slice"].Id()
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
packed, err := abi.Pack("slice", []uint32{1, 2})
if err != nil {
t.Error(err)
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
var addrA, addrB = common.Address{1}, common.Address{2}
sig = abi.Methods["sliceAddress"].Id()
sig = append(sig, common.LeftPadBytes([]byte{32}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes(addrA[:], 32)...)
sig = append(sig, common.LeftPadBytes(addrB[:], 32)...)
packed, err = abi.Pack("sliceAddress", []common.Address{addrA, addrB})
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
var addrC, addrD = common.Address{3}, common.Address{4}
sig = abi.Methods["sliceMultiAddress"].Id()
sig = append(sig, common.LeftPadBytes([]byte{64}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{160}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes(addrA[:], 32)...)
sig = append(sig, common.LeftPadBytes(addrB[:], 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
sig = append(sig, common.LeftPadBytes(addrC[:], 32)...)
sig = append(sig, common.LeftPadBytes(addrD[:], 32)...)
packed, err = abi.Pack("sliceMultiAddress", []common.Address{addrA, addrB}, []common.Address{addrC, addrD})
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
sig = abi.Methods["slice256"].Id()
sig = append(sig, common.LeftPadBytes([]byte{1}, 32)...)
sig = append(sig, common.LeftPadBytes([]byte{2}, 32)...)
packed, err = abi.Pack("slice256", []*big.Int{big.NewInt(1), big.NewInt(2)})
if err != nil {
t.Error(err)
}
if !bytes.Equal(packed, sig) {
t.Errorf("expected %x got %x", sig, packed)
}
}
func TestPackNumber(t *testing.T) {
tests := []struct {
value reflect.Value
packed []byte
}{
// Protocol limits
{reflect.ValueOf(0), common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000000")},
{reflect.ValueOf(1), common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")},
{reflect.ValueOf(-1), common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff")},
// Type corner cases
{reflect.ValueOf(uint8(math.MaxUint8)), common.Hex2Bytes("00000000000000000000000000000000000000000000000000000000000000ff")},
{reflect.ValueOf(uint16(math.MaxUint16)), common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000ffff")},
{reflect.ValueOf(uint32(math.MaxUint32)), common.Hex2Bytes("00000000000000000000000000000000000000000000000000000000ffffffff")},
{reflect.ValueOf(uint64(math.MaxUint64)), common.Hex2Bytes("000000000000000000000000000000000000000000000000ffffffffffffffff")},
{reflect.ValueOf(int8(math.MaxInt8)), common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000007f")},
{reflect.ValueOf(int16(math.MaxInt16)), common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000007fff")},
{reflect.ValueOf(int32(math.MaxInt32)), common.Hex2Bytes("000000000000000000000000000000000000000000000000000000007fffffff")},
{reflect.ValueOf(int64(math.MaxInt64)), common.Hex2Bytes("0000000000000000000000000000000000000000000000007fffffffffffffff")},
{reflect.ValueOf(int8(math.MinInt8)), common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff80")},
{reflect.ValueOf(int16(math.MinInt16)), common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff8000")},
{reflect.ValueOf(int32(math.MinInt32)), common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffffffffffff80000000")},
{reflect.ValueOf(int64(math.MinInt64)), common.Hex2Bytes("ffffffffffffffffffffffffffffffffffffffffffffffff8000000000000000")},
}
for i, tt := range tests {
packed := packNum(tt.value)
if !bytes.Equal(packed, tt.packed) {
t.Errorf("test %d: pack mismatch: have %x, want %x", i, packed, tt.packed)
}
}
}


@@ -1,212 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"fmt"
"reflect"
"strings"
)
// indirect recursively dereferences the value until it reaches a non-pointer
// value or finds a big.Int
func indirect(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Ptr && v.Elem().Type() != derefbigT {
return indirect(v.Elem())
}
return v
}
// reflectIntKindAndType returns the reflect.Kind and reflect.Type of an
// integer with the given size and signedness.
func reflectIntKindAndType(unsigned bool, size int) (reflect.Kind, reflect.Type) {
switch size {
case 8:
if unsigned {
return reflect.Uint8, uint8T
}
return reflect.Int8, int8T
case 16:
if unsigned {
return reflect.Uint16, uint16T
}
return reflect.Int16, int16T
case 32:
if unsigned {
return reflect.Uint32, uint32T
}
return reflect.Int32, int32T
case 64:
if unsigned {
return reflect.Uint64, uint64T
}
return reflect.Int64, int64T
}
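// sizes above 64 bits (e.g. int256/uint256) are represented as *big.Int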
return reflect.Ptr, bigT
}
// mustArrayToByteSlice creates a new byte slice with the exact same size as value
// and copies the bytes in value to the new slice.
func mustArrayToByteSlice(value reflect.Value) reflect.Value {
slice := reflect.MakeSlice(reflect.TypeOf([]byte{}), value.Len(), value.Len())
reflect.Copy(slice, value)
return slice
}
// set attempts to assign src to dst by either setting, copying or otherwise.
//
// set is a bit more lenient when it comes to assignment and doesn't enforce
// as strict a ruleset as bare `reflect` does.
func set(dst, src reflect.Value, output Argument) error {
dstType := dst.Type()
srcType := src.Type()
switch {
case dstType.AssignableTo(srcType):
dst.Set(src)
case dstType.Kind() == reflect.Interface:
dst.Set(src)
case dstType.Kind() == reflect.Ptr:
return set(dst.Elem(), src, output)
default:
return fmt.Errorf("abi: cannot unmarshal %v in to %v", src.Type(), dst.Type())
}
return nil
}
// requireAssignable ensures that `dst` is a pointer or an interface.
func requireAssignable(dst, src reflect.Value) error {
if dst.Kind() != reflect.Ptr && dst.Kind() != reflect.Interface {
return fmt.Errorf("abi: cannot unmarshal %v into %v", src.Type(), dst.Type())
}
return nil
}
// requireUnpackKind verifies preconditions for unpacking `args` into `kind`
func requireUnpackKind(v reflect.Value, t reflect.Type, k reflect.Kind,
args Arguments) error {
switch k {
case reflect.Struct:
case reflect.Slice, reflect.Array:
if minLen := args.LengthNonIndexed(); v.Len() < minLen {
return fmt.Errorf("abi: insufficient number of elements in the list/array for unpack, want %d, got %d",
minLen, v.Len())
}
default:
return fmt.Errorf("abi: cannot unmarshal tuple into %v", t)
}
return nil
}
// mapAbiToStructFields maps abi arguments to struct fields.
// first round: for each exported struct field that carries an `abi:""` tag,
// if the tag name exists in the arguments, pair them together.
// second round: for each argument that has not already been linked,
// find the struct field it is expected to map to; if that field exists and
// has not been used yet, pair them.
func mapAbiToStructFields(args Arguments, value reflect.Value) (map[string]string, error) {
typ := value.Type()
abi2struct := make(map[string]string)
struct2abi := make(map[string]string)
// first round ~~~
for i := 0; i < typ.NumField(); i++ {
structFieldName := typ.Field(i).Name
// skip private struct fields.
if structFieldName[:1] != strings.ToUpper(structFieldName[:1]) {
continue
}
// skip fields that have no abi:"" tag.
var ok bool
var tagName string
if tagName, ok = typ.Field(i).Tag.Lookup("abi"); !ok {
continue
}
// check if tag is empty.
if tagName == "" {
return nil, fmt.Errorf("struct: abi tag in '%s' is empty", structFieldName)
}
// check which argument field matches with the abi tag.
found := false
for _, abiField := range args.NonIndexed() {
if abiField.Name == tagName {
if abi2struct[abiField.Name] != "" {
return nil, fmt.Errorf("struct: abi tag in '%s' already mapped", structFieldName)
}
// pair them
abi2struct[abiField.Name] = structFieldName
struct2abi[structFieldName] = abiField.Name
found = true
}
}
// check if this tag has been mapped.
if !found {
return nil, fmt.Errorf("struct: abi tag '%s' defined but not found in abi", tagName)
}
}
// second round ~~~
for _, arg := range args {
abiFieldName := arg.Name
structFieldName := capitalise(abiFieldName)
if structFieldName == "" {
return nil, fmt.Errorf("abi: purely underscored output cannot unpack to struct")
}
// this abi has already been paired, skip it... unless there exists another, yet unassigned
// struct field with the same field name. If so, raise an error:
// abi: [ { "name": "value" } ]
// struct { Value *big.Int , Value1 *big.Int `abi:"value"`}
if abi2struct[abiFieldName] != "" {
if abi2struct[abiFieldName] != structFieldName &&
struct2abi[structFieldName] == "" &&
value.FieldByName(structFieldName).IsValid() {
return nil, fmt.Errorf("abi: multiple variables maps to the same abi field '%s'", abiFieldName)
}
continue
}
// return an error if this struct field has already been paired.
if struct2abi[structFieldName] != "" {
return nil, fmt.Errorf("abi: multiple outputs mapping to the same struct field '%s'", structFieldName)
}
if value.FieldByName(structFieldName).IsValid() {
// pair them
abi2struct[abiFieldName] = structFieldName
struct2abi[structFieldName] = abiFieldName
} else {
// not paired, but annotate as used, to detect cases like
// abi : [ { "name": "value" }, { "name": "_value" } ]
// struct { Value *big.Int }
struct2abi[structFieldName] = abiFieldName
}
}
return abi2struct, nil
}


@@ -1,249 +0,0 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"fmt"
"reflect"
"regexp"
"strconv"
"strings"
)
// Type enumerator
const (
IntTy byte = iota
UintTy
BoolTy
StringTy
SliceTy
ArrayTy
AddressTy
FixedBytesTy
BytesTy
HashTy
FixedPointTy
FunctionTy
)
// Type is the reflection of the supported argument type
type Type struct {
Elem *Type
Kind reflect.Kind
Type reflect.Type
Size int
T byte // Our own type checking
stringKind string // holds the unparsed string for deriving signatures
}
var (
// typeRegex parses the abi sub types
typeRegex = regexp.MustCompile("([a-zA-Z]+)(([0-9]+)(x([0-9]+))?)?")
)
// NewType creates a new reflection type of abi type given in t.
func NewType(t string) (typ Type, err error) {
// check that the number of opening and closing array brackets match
if strings.Count(t, "[") != strings.Count(t, "]") {
return Type{}, fmt.Errorf("invalid arg type in abi")
}
typ.stringKind = t
// if there are brackets, get ready to go into slice/array mode and
// recursively create the type
if strings.Count(t, "[") != 0 {
i := strings.LastIndex(t, "[")
// recursively embed the type
embeddedType, err := NewType(t[:i])
if err != nil {
return Type{}, err
}
// grab the last cell and create a type from there
sliced := t[i:]
// grab the slice size with regexp
re := regexp.MustCompile("[0-9]+")
intz := re.FindAllString(sliced, -1)
if len(intz) == 0 {
// is a slice
typ.T = SliceTy
typ.Kind = reflect.Slice
typ.Elem = &embeddedType
typ.Type = reflect.SliceOf(embeddedType.Type)
} else if len(intz) == 1 {
// is an array
typ.T = ArrayTy
typ.Kind = reflect.Array
typ.Elem = &embeddedType
typ.Size, err = strconv.Atoi(intz[0])
if err != nil {
return Type{}, fmt.Errorf("abi: error parsing variable size: %v", err)
}
typ.Type = reflect.ArrayOf(typ.Size, embeddedType.Type)
} else {
return Type{}, fmt.Errorf("invalid formatting of array type")
}
return typ, err
}
// parse the type and size of the abi-type.
matches := typeRegex.FindAllStringSubmatch(t, -1)
if len(matches) == 0 {
return Type{}, fmt.Errorf("invalid type '%v'", t)
}
parsedType := matches[0]
// varSize is the size of the variable
var varSize int
if len(parsedType[3]) > 0 {
var err error
varSize, err = strconv.Atoi(parsedType[2])
if err != nil {
return Type{}, fmt.Errorf("abi: error parsing variable size: %v", err)
}
} else {
if parsedType[0] == "uint" || parsedType[0] == "int" {
// this should fail because it means that there's something wrong with
// the abi type (the compiler should always format it with an explicit size)
return Type{}, fmt.Errorf("unsupported arg type: %s", t)
}
}
// varType is the parsed abi type
switch varType := parsedType[1]; varType {
case "int":
typ.Kind, typ.Type = reflectIntKindAndType(false, varSize)
typ.Size = varSize
typ.T = IntTy
case "uint":
typ.Kind, typ.Type = reflectIntKindAndType(true, varSize)
typ.Size = varSize
typ.T = UintTy
case "bool":
typ.Kind = reflect.Bool
typ.T = BoolTy
typ.Type = reflect.TypeOf(bool(false))
case "address":
typ.Kind = reflect.Array
typ.Type = addressT
typ.Size = 20
typ.T = AddressTy
case "string":
typ.Kind = reflect.String
typ.Type = reflect.TypeOf("")
typ.T = StringTy
case "bytes":
if varSize == 0 {
typ.T = BytesTy
typ.Kind = reflect.Slice
typ.Type = reflect.SliceOf(reflect.TypeOf(byte(0)))
} else {
typ.T = FixedBytesTy
typ.Kind = reflect.Array
typ.Size = varSize
typ.Type = reflect.ArrayOf(varSize, reflect.TypeOf(byte(0)))
}
case "function":
typ.Kind = reflect.Array
typ.T = FunctionTy
typ.Size = 24
typ.Type = reflect.ArrayOf(24, reflect.TypeOf(byte(0)))
default:
return Type{}, fmt.Errorf("unsupported arg type: %s", t)
}
return
}
// String implements Stringer
func (t Type) String() (out string) {
return t.stringKind
}
func (t Type) pack(v reflect.Value) ([]byte, error) {
// dereference pointer first if it's a pointer
v = indirect(v)
if err := typeCheck(t, v); err != nil {
return nil, err
}
switch t.T {
case SliceTy, ArrayTy:
var ret []byte
if t.requiresLengthPrefix() {
// append length
ret = append(ret, packNum(reflect.ValueOf(v.Len()))...)
}
// calculate offset if any
offset := 0
offsetReq := isDynamicType(*t.Elem)
if offsetReq {
offset = getDynamicTypeOffset(*t.Elem) * v.Len()
}
var tail []byte
for i := 0; i < v.Len(); i++ {
val, err := t.Elem.pack(v.Index(i))
if err != nil {
return nil, err
}
if !offsetReq {
ret = append(ret, val...)
continue
}
ret = append(ret, packNum(reflect.ValueOf(offset))...)
offset += len(val)
tail = append(tail, val...)
}
return append(ret, tail...), nil
default:
return packElement(t, v), nil
}
}
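// Layout sketch (matches the string[] vector in TestPack): packing
// []string{"hello", "foobar"} emits the length word (2), one offset word per
// element (0x40, 0x80), then the tail holding each string's own
// length-prefixed, right-padded bytes.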
// requiresLengthPrefix returns whether the type requires any sort of length
// prefixing.
func (t Type) requiresLengthPrefix() bool {
return t.T == StringTy || t.T == BytesTy || t.T == SliceTy
}
// isDynamicType returns true if the type is dynamic.
// StringTy, BytesTy, and SliceTy (irrespective of the slice element type) are dynamic types.
// ArrayTy is considered dynamic if and only if the Array element is a dynamic type.
// This function recursively checks the type for slice and array elements.
func isDynamicType(t Type) bool {
// dynamic types
// array is also a dynamic type if the array type is dynamic
return t.T == StringTy || t.T == BytesTy || t.T == SliceTy || (t.T == ArrayTy && isDynamicType(*t.Elem))
}
// getDynamicTypeOffset returns the offset for the type.
// See `isDynamicType` to know which types are considered dynamic.
// If the type t is an array and its element type is not dynamic, then we consider it a static type and
// return 32 * the array size, since no length prefix is required.
// If t is a dynamic type or its element type (for slices and arrays) is dynamic, then we simply return 32 as the offset.
func getDynamicTypeOffset(t Type) int {
// if it is an array and there are no dynamic types
// then the array is static type
if t.T == ArrayTy && !isDynamicType(*t.Elem) {
return 32 * t.Size
}
return 32
}
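// A minimal usage sketch (not part of the original file; it would live in a
// _test.go file): parse a nested type and inspect the derived layout. The
// expected values follow the TestTypeRegexp table.
func ExampleNewType() {
typ, err := NewType("uint16[2]")
if err != nil {
panic(err)
}
// a fixed-size array of a static element type is itself static, so its
// offset contribution is 32 bytes per element
fmt.Println(typ.T == ArrayTy, typ.Size, typ.Elem.T == UintTy, getDynamicTypeOffset(typ))
// Output: true 2 true 64
}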


@@ -1,283 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"math/big"
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/ethereum/go-ethereum/common"
)
// typeWithoutStringer is an alias for the Type type which simply doesn't implement
// the stringer interface to allow printing type details in the tests below.
type typeWithoutStringer Type
// Tests that all allowed types get recognized by the type parser.
func TestTypeRegexp(t *testing.T) {
tests := []struct {
blob string
kind Type
}{
{"bool", Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}},
{"bool[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool(nil)), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}},
{"bool[2]", Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}},
{"bool[2][]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}},
{"bool[][]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}},
{"bool[][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}},
{"bool[2][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}},
{"bool[2][][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][][2]bool{}), Elem: &Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][]"}, stringKind: "bool[2][][2]"}},
{"bool[2][2][2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][2]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[2]"}, stringKind: "bool[2][2]"}, stringKind: "bool[2][2][2]"}},
{"bool[][][]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][]"}, stringKind: "bool[][][]"}},
{"bool[][2][]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][2][]bool{}), Elem: &Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]bool{}), Elem: &Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]bool{}), Elem: &Type{Kind: reflect.Bool, T: BoolTy, Type: reflect.TypeOf(bool(false)), stringKind: "bool"}, stringKind: "bool[]"}, stringKind: "bool[][2]"}, stringKind: "bool[][2][]"}},
{"int8", Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}},
{"int16", Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}},
{"int32", Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}},
{"int64", Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}},
{"int256", Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}},
{"int8[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[]"}},
{"int8[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int8{}), Elem: &Type{Kind: reflect.Int8, Type: int8T, Size: 8, T: IntTy, stringKind: "int8"}, stringKind: "int8[2]"}},
{"int16[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[]"}},
{"int16[2]", Type{Size: 2, Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]int16{}), Elem: &Type{Kind: reflect.Int16, Type: int16T, Size: 16, T: IntTy, stringKind: "int16"}, stringKind: "int16[2]"}},
{"int32[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[]"}},
{"int32[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int32{}), Elem: &Type{Kind: reflect.Int32, Type: int32T, Size: 32, T: IntTy, stringKind: "int32"}, stringKind: "int32[2]"}},
{"int64[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[]"}},
{"int64[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]int64{}), Elem: &Type{Kind: reflect.Int64, Type: int64T, Size: 64, T: IntTy, stringKind: "int64"}, stringKind: "int64[2]"}},
{"int256[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[]"}},
{"int256[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: IntTy, stringKind: "int256"}, stringKind: "int256[2]"}},
{"uint8", Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}},
{"uint16", Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}},
{"uint32", Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}},
{"uint64", Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}},
{"uint256", Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}},
{"uint8[]", Type{Kind: reflect.Slice, T: SliceTy, Type: reflect.TypeOf([]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[]"}},
{"uint8[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint8{}), Elem: &Type{Kind: reflect.Uint8, Type: uint8T, Size: 8, T: UintTy, stringKind: "uint8"}, stringKind: "uint8[2]"}},
{"uint16[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[]"}},
{"uint16[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint16{}), Elem: &Type{Kind: reflect.Uint16, Type: uint16T, Size: 16, T: UintTy, stringKind: "uint16"}, stringKind: "uint16[2]"}},
{"uint32[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[]"}},
{"uint32[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint32{}), Elem: &Type{Kind: reflect.Uint32, Type: uint32T, Size: 32, T: UintTy, stringKind: "uint32"}, stringKind: "uint32[2]"}},
{"uint64[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[]"}},
{"uint64[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]uint64{}), Elem: &Type{Kind: reflect.Uint64, Type: uint64T, Size: 64, T: UintTy, stringKind: "uint64"}, stringKind: "uint64[2]"}},
{"uint256[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]*big.Int{}), Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[]"}},
{"uint256[2]", Type{Kind: reflect.Array, T: ArrayTy, Type: reflect.TypeOf([2]*big.Int{}), Size: 2, Elem: &Type{Kind: reflect.Ptr, Type: bigT, Size: 256, T: UintTy, stringKind: "uint256"}, stringKind: "uint256[2]"}},
{"bytes32", Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}},
{"bytes[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][]byte{}), Elem: &Type{Kind: reflect.Slice, Type: reflect.TypeOf([]byte{}), T: BytesTy, stringKind: "bytes"}, stringKind: "bytes[]"}},
{"bytes[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][]byte{}), Elem: &Type{T: BytesTy, Type: reflect.TypeOf([]byte{}), Kind: reflect.Slice, stringKind: "bytes"}, stringKind: "bytes[2]"}},
{"bytes32[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([][32]byte{}), Elem: &Type{Kind: reflect.Array, Type: reflect.TypeOf([32]byte{}), T: FixedBytesTy, Size: 32, stringKind: "bytes32"}, stringKind: "bytes32[]"}},
{"bytes32[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2][32]byte{}), Elem: &Type{Kind: reflect.Array, T: FixedBytesTy, Size: 32, Type: reflect.TypeOf([32]byte{}), stringKind: "bytes32"}, stringKind: "bytes32[2]"}},
{"string", Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}},
{"string[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]string{}), Elem: &Type{Kind: reflect.String, Type: reflect.TypeOf(""), T: StringTy, stringKind: "string"}, stringKind: "string[]"}},
{"string[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]string{}), Elem: &Type{Kind: reflect.String, T: StringTy, Type: reflect.TypeOf(""), stringKind: "string"}, stringKind: "string[2]"}},
{"address", Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}},
{"address[]", Type{T: SliceTy, Kind: reflect.Slice, Type: reflect.TypeOf([]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[]"}},
{"address[2]", Type{Kind: reflect.Array, T: ArrayTy, Size: 2, Type: reflect.TypeOf([2]common.Address{}), Elem: &Type{Kind: reflect.Array, Type: addressT, Size: 20, T: AddressTy, stringKind: "address"}, stringKind: "address[2]"}},
// TODO when fixed types are implemented properly
// {"fixed", Type{}},
// {"fixed128x128", Type{}},
// {"fixed[]", Type{}},
// {"fixed[2]", Type{}},
// {"fixed128x128[]", Type{}},
// {"fixed128x128[2]", Type{}},
}
for _, tt := range tests {
typ, err := NewType(tt.blob)
if err != nil {
t.Errorf("type %q: failed to parse type string: %v", tt.blob, err)
}
if !reflect.DeepEqual(typ, tt.kind) {
t.Errorf("type %q: parsed type mismatch:\nGOT %s\nWANT %s ", tt.blob, spew.Sdump(typeWithoutStringer(typ)), spew.Sdump(typeWithoutStringer(tt.kind)))
}
}
}
func TestTypeCheck(t *testing.T) {
for i, test := range []struct {
typ string
input interface{}
err string
}{
{"uint", big.NewInt(1), "unsupported arg type: uint"},
{"int", big.NewInt(1), "unsupported arg type: int"},
{"uint256", big.NewInt(1), ""},
{"uint256[][3][]", [][3][]*big.Int{{{}}}, ""},
{"uint256[][][3]", [3][][]*big.Int{{{}}}, ""},
{"uint256[3][][]", [][][3]*big.Int{{{}}}, ""},
{"uint256[3][3][3]", [3][3][3]*big.Int{{{}}}, ""},
{"uint8[][]", [][]uint8{}, ""},
{"int256", big.NewInt(1), ""},
{"uint8", uint8(1), ""},
{"uint16", uint16(1), ""},
{"uint32", uint32(1), ""},
{"uint64", uint64(1), ""},
{"int8", int8(1), ""},
{"int16", int16(1), ""},
{"int32", int32(1), ""},
{"int64", int64(1), ""},
{"uint24", big.NewInt(1), ""},
{"uint40", big.NewInt(1), ""},
{"uint48", big.NewInt(1), ""},
{"uint56", big.NewInt(1), ""},
{"uint72", big.NewInt(1), ""},
{"uint80", big.NewInt(1), ""},
{"uint88", big.NewInt(1), ""},
{"uint96", big.NewInt(1), ""},
{"uint104", big.NewInt(1), ""},
{"uint112", big.NewInt(1), ""},
{"uint120", big.NewInt(1), ""},
{"uint128", big.NewInt(1), ""},
{"uint136", big.NewInt(1), ""},
{"uint144", big.NewInt(1), ""},
{"uint152", big.NewInt(1), ""},
{"uint160", big.NewInt(1), ""},
{"uint168", big.NewInt(1), ""},
{"uint176", big.NewInt(1), ""},
{"uint184", big.NewInt(1), ""},
{"uint192", big.NewInt(1), ""},
{"uint200", big.NewInt(1), ""},
{"uint208", big.NewInt(1), ""},
{"uint216", big.NewInt(1), ""},
{"uint224", big.NewInt(1), ""},
{"uint232", big.NewInt(1), ""},
{"uint240", big.NewInt(1), ""},
{"uint248", big.NewInt(1), ""},
{"int24", big.NewInt(1), ""},
{"int40", big.NewInt(1), ""},
{"int48", big.NewInt(1), ""},
{"int56", big.NewInt(1), ""},
{"int72", big.NewInt(1), ""},
{"int80", big.NewInt(1), ""},
{"int88", big.NewInt(1), ""},
{"int96", big.NewInt(1), ""},
{"int104", big.NewInt(1), ""},
{"int112", big.NewInt(1), ""},
{"int120", big.NewInt(1), ""},
{"int128", big.NewInt(1), ""},
{"int136", big.NewInt(1), ""},
{"int144", big.NewInt(1), ""},
{"int152", big.NewInt(1), ""},
{"int160", big.NewInt(1), ""},
{"int168", big.NewInt(1), ""},
{"int176", big.NewInt(1), ""},
{"int184", big.NewInt(1), ""},
{"int192", big.NewInt(1), ""},
{"int200", big.NewInt(1), ""},
{"int208", big.NewInt(1), ""},
{"int216", big.NewInt(1), ""},
{"int224", big.NewInt(1), ""},
{"int232", big.NewInt(1), ""},
{"int240", big.NewInt(1), ""},
{"int248", big.NewInt(1), ""},
{"uint30", uint8(1), "abi: cannot use uint8 as type ptr as argument"},
{"uint8", uint16(1), "abi: cannot use uint16 as type uint8 as argument"},
{"uint8", uint32(1), "abi: cannot use uint32 as type uint8 as argument"},
{"uint8", uint64(1), "abi: cannot use uint64 as type uint8 as argument"},
{"uint8", int8(1), "abi: cannot use int8 as type uint8 as argument"},
{"uint8", int16(1), "abi: cannot use int16 as type uint8 as argument"},
{"uint8", int32(1), "abi: cannot use int32 as type uint8 as argument"},
{"uint8", int64(1), "abi: cannot use int64 as type uint8 as argument"},
{"uint16", uint16(1), ""},
{"uint16", uint8(1), "abi: cannot use uint8 as type uint16 as argument"},
{"uint16[]", []uint16{1, 2, 3}, ""},
{"uint16[]", [3]uint16{1, 2, 3}, ""},
{"uint16[]", []uint32{1, 2, 3}, "abi: cannot use []uint32 as type [0]uint16 as argument"},
{"uint16[3]", [3]uint32{1, 2, 3}, "abi: cannot use [3]uint32 as type [3]uint16 as argument"},
{"uint16[3]", [4]uint16{1, 2, 3}, "abi: cannot use [4]uint16 as type [3]uint16 as argument"},
{"uint16[3]", []uint16{1, 2, 3}, ""},
{"uint16[3]", []uint16{1, 2, 3, 4}, "abi: cannot use [4]uint16 as type [3]uint16 as argument"},
{"address[]", []common.Address{{1}}, ""},
{"address[1]", []common.Address{{1}}, ""},
{"address[1]", [1]common.Address{{1}}, ""},
{"address[2]", [1]common.Address{{1}}, "abi: cannot use [1]array as type [2]array as argument"},
{"bytes32", [32]byte{}, ""},
{"bytes31", [31]byte{}, ""},
{"bytes30", [30]byte{}, ""},
{"bytes29", [29]byte{}, ""},
{"bytes28", [28]byte{}, ""},
{"bytes27", [27]byte{}, ""},
{"bytes26", [26]byte{}, ""},
{"bytes25", [25]byte{}, ""},
{"bytes24", [24]byte{}, ""},
{"bytes23", [23]byte{}, ""},
{"bytes22", [22]byte{}, ""},
{"bytes21", [21]byte{}, ""},
{"bytes20", [20]byte{}, ""},
{"bytes19", [19]byte{}, ""},
{"bytes18", [18]byte{}, ""},
{"bytes17", [17]byte{}, ""},
{"bytes16", [16]byte{}, ""},
{"bytes15", [15]byte{}, ""},
{"bytes14", [14]byte{}, ""},
{"bytes13", [13]byte{}, ""},
{"bytes12", [12]byte{}, ""},
{"bytes11", [11]byte{}, ""},
{"bytes10", [10]byte{}, ""},
{"bytes9", [9]byte{}, ""},
{"bytes8", [8]byte{}, ""},
{"bytes7", [7]byte{}, ""},
{"bytes6", [6]byte{}, ""},
{"bytes5", [5]byte{}, ""},
{"bytes4", [4]byte{}, ""},
{"bytes3", [3]byte{}, ""},
{"bytes2", [2]byte{}, ""},
{"bytes1", [1]byte{}, ""},
{"bytes32", [33]byte{}, "abi: cannot use [33]uint8 as type [32]uint8 as argument"},
{"bytes32", common.Hash{1}, ""},
{"bytes31", common.Hash{1}, "abi: cannot use common.Hash as type [31]uint8 as argument"},
{"bytes31", [32]byte{}, "abi: cannot use [32]uint8 as type [31]uint8 as argument"},
{"bytes", []byte{0, 1}, ""},
{"bytes", [2]byte{0, 1}, "abi: cannot use array as type slice as argument"},
{"bytes", common.Hash{1}, "abi: cannot use array as type slice as argument"},
{"string", "hello world", ""},
{"string", string(""), ""},
{"string", []byte{}, "abi: cannot use slice as type string as argument"},
{"bytes32[]", [][32]byte{{}}, ""},
{"function", [24]byte{}, ""},
{"bytes20", common.Address{}, ""},
{"address", [20]byte{}, ""},
{"address", common.Address{}, ""},
{"bytes32[]]", "", "invalid arg type in abi"},
{"invalidType", "", "unsupported arg type: invalidType"},
{"invalidSlice[]", "", "unsupported arg type: invalidSlice"},
} {
typ, err := NewType(test.typ)
if err != nil && len(test.err) == 0 {
t.Fatal("unexpected parse error:", err)
} else if err != nil && len(test.err) != 0 {
if err.Error() != test.err {
t.Errorf("%d failed. Expected err: '%v' got err: '%v'", i, test.err, err)
}
continue
}
err = typeCheck(typ, reflect.ValueOf(test.input))
if err != nil && len(test.err) == 0 {
t.Errorf("%d failed. Expected no err but got: %v", i, err)
continue
}
if err == nil && len(test.err) != 0 {
t.Errorf("%d failed. Expected err: %v but got none", i, test.err)
continue
}
if err != nil && len(test.err) != 0 && err.Error() != test.err {
t.Errorf("%d failed. Expected err: '%v' got err: '%v'", i, test.err, err)
}
}
}


@@ -1,252 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"encoding/binary"
"fmt"
"math/big"
"reflect"
"github.com/ethereum/go-ethereum/common"
)
var (
maxUint256 = big.NewInt(0).Add(
big.NewInt(0).Exp(big.NewInt(2), big.NewInt(256), nil),
big.NewInt(-1))
maxInt256 = big.NewInt(0).Add(
big.NewInt(0).Exp(big.NewInt(2), big.NewInt(255), nil),
big.NewInt(-1))
)
// reads the integer based on its kind
func readInteger(typ byte, kind reflect.Kind, b []byte) interface{} {
switch kind {
case reflect.Uint8:
return b[len(b)-1]
case reflect.Uint16:
return binary.BigEndian.Uint16(b[len(b)-2:])
case reflect.Uint32:
return binary.BigEndian.Uint32(b[len(b)-4:])
case reflect.Uint64:
return binary.BigEndian.Uint64(b[len(b)-8:])
case reflect.Int8:
return int8(b[len(b)-1])
case reflect.Int16:
return int16(binary.BigEndian.Uint16(b[len(b)-2:]))
case reflect.Int32:
return int32(binary.BigEndian.Uint32(b[len(b)-4:]))
case reflect.Int64:
return int64(binary.BigEndian.Uint64(b[len(b)-8:]))
default:
// the only case left for integer is int256/uint256.
// big.Int.SetBytes can't tell if a number is negative or positive on its own.
// In the EVM, if the returned number > max int256, it is negative.
ret := new(big.Int).SetBytes(b)
if typ == UintTy {
return ret
}
if ret.Cmp(maxInt256) > 0 {
ret.Add(maxUint256, big.NewInt(0).Neg(ret))
ret.Add(ret, big.NewInt(1))
ret.Neg(ret)
}
return ret
}
}
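// exampleNegativeInt256 is a hypothetical illustration (not part of the
// original file) of the default branch above: thirty-two 0xff bytes decode to
// 2^256-1, which exceeds maxInt256, so the value folds back to
// -(maxUint256 - ret + 1) = -1.
func exampleNegativeInt256() *big.Int {
	word := make([]byte, 32)
	for i := range word {
		word[i] = 0xff
	}
	return readInteger(IntTy, reflect.Ptr, word).(*big.Int) // -1
}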
// reads a bool
func readBool(word []byte) (bool, error) {
for _, b := range word[:31] {
if b != 0 {
return false, errBadBool
}
}
switch word[31] {
case 0:
return false, nil
case 1:
return true, nil
default:
return false, errBadBool
}
}
// A function type is simply the address with the function selector at the end.
// This enforces that standard by always presenting it as a 24-byte array (20-byte address + 4-byte selector).
func readFunctionType(t Type, word []byte) (funcTy [24]byte, err error) {
if t.T != FunctionTy {
return [24]byte{}, fmt.Errorf("abi: invalid type in call to make function type byte array")
}
if garbage := binary.BigEndian.Uint64(word[24:32]); garbage != 0 {
err = fmt.Errorf("abi: got improperly encoded function type, got %v", word)
} else {
copy(funcTy[:], word[0:24])
}
return
}
// through reflection, creates a fixed-size byte array to be read from
func readFixedBytes(t Type, word []byte) (interface{}, error) {
if t.T != FixedBytesTy {
return nil, fmt.Errorf("abi: invalid type in call to make fixed byte array")
}
// convert
array := reflect.New(t.Type).Elem()
reflect.Copy(array, reflect.ValueOf(word[0:t.Size]))
return array.Interface(), nil
}
func getFullElemSize(elem *Type) int {
// all others should be counted as 32 bytes (slices hold 32-byte heads pointing to their elements)
size := 32
//arrays wrap it, each element being the same size
for elem.T == ArrayTy {
size *= elem.Size
elem = elem.Elem
}
return size
}
// iteratively unpack elements
func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error) {
if size < 0 {
return nil, fmt.Errorf("cannot marshal input to array, size is negative (%d)", size)
}
if start+32*size > len(output) {
return nil, fmt.Errorf("abi: cannot marshal in to go array: offset %d would go over slice boundary (len=%d)", len(output), start+32*size)
}
// this value will become our slice or our array, depending on the type
var refSlice reflect.Value
if t.T == SliceTy {
// declare our slice
refSlice = reflect.MakeSlice(t.Type, size, size)
} else if t.T == ArrayTy {
// declare our array
refSlice = reflect.New(t.Type).Elem()
} else {
return nil, fmt.Errorf("abi: invalid type in array/slice unpacking stage")
}
// Arrays have packed elements, resulting in longer unpack steps.
// Slices have just 32 bytes per element (pointing to the contents).
elemSize := 32
if t.T == ArrayTy {
elemSize = getFullElemSize(t.Elem)
}
for i, j := start, 0; j < size; i, j = i+elemSize, j+1 {
inter, err := toGoType(i, *t.Elem, output)
if err != nil {
return nil, err
}
// append the item to our reflect slice
refSlice.Index(j).Set(reflect.ValueOf(inter))
}
// return the interface
return refSlice.Interface(), nil
}
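// exampleElemSize is a hypothetical illustration (not part of the original
// file) of the sizing rule above: every uint32 occupies a full 32-byte word,
// so a packed uint32[2][3] element spans 32 * 3 * 2 = 192 bytes, and
// forEachUnpack advances the read offset by 192 between such elements.
func exampleElemSize() int {
	typ, _ := NewType("uint32[2][3]")
	return getFullElemSize(&typ) // 192
}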
// toGoType parses the output bytes and recursively assigns the value of these bytes
// into a go type in accordance with the ABI spec.
func toGoType(index int, t Type, output []byte) (interface{}, error) {
if index+32 > len(output) {
return nil, fmt.Errorf("abi: cannot marshal in to go type: length insufficient %d require %d", len(output), index+32)
}
var (
returnOutput []byte
begin, end int
err error
)
// if we require a length prefix, find the beginning word and size returned.
if t.requiresLengthPrefix() {
begin, end, err = lengthPrefixPointsTo(index, output)
if err != nil {
return nil, err
}
} else {
returnOutput = output[index : index+32]
}
switch t.T {
case SliceTy:
return forEachUnpack(t, output, begin, end)
case ArrayTy:
return forEachUnpack(t, output, index, t.Size)
case StringTy: // variable arrays are written at the end of the return bytes
return string(output[begin : begin+end]), nil
case IntTy, UintTy:
return readInteger(t.T, t.Kind, returnOutput), nil
case BoolTy:
return readBool(returnOutput)
case AddressTy:
return common.BytesToAddress(returnOutput), nil
case HashTy:
return common.BytesToHash(returnOutput), nil
case BytesTy:
return output[begin : begin+end], nil
case FixedBytesTy:
return readFixedBytes(t, returnOutput)
case FunctionTy:
return readFunctionType(t, returnOutput)
default:
return nil, fmt.Errorf("abi: unknown type %v", t.T)
}
}
// interprets a 32-byte slice as an offset and then determines which index to look at to decode the type.
func lengthPrefixPointsTo(index int, output []byte) (start int, length int, err error) {
bigOffsetEnd := big.NewInt(0).SetBytes(output[index : index+32])
bigOffsetEnd.Add(bigOffsetEnd, common.Big32)
outputLength := big.NewInt(int64(len(output)))
if bigOffsetEnd.Cmp(outputLength) > 0 {
return 0, 0, fmt.Errorf("abi: cannot marshal in to go slice: offset %v would go over slice boundary (len=%v)", bigOffsetEnd, outputLength)
}
if bigOffsetEnd.BitLen() > 63 {
return 0, 0, fmt.Errorf("abi offset larger than int64: %v", bigOffsetEnd)
}
offsetEnd := int(bigOffsetEnd.Uint64())
lengthBig := big.NewInt(0).SetBytes(output[offsetEnd-32 : offsetEnd])
totalSize := big.NewInt(0)
totalSize.Add(totalSize, bigOffsetEnd)
totalSize.Add(totalSize, lengthBig)
if totalSize.BitLen() > 63 {
return 0, 0, fmt.Errorf("abi length larger than int64: %v", totalSize)
}
if totalSize.Cmp(outputLength) > 0 {
return 0, 0, fmt.Errorf("abi: cannot marshal in to go type: length insufficient %v require %v", outputLength, totalSize)
}
start = int(bigOffsetEnd.Uint64())
length = int(lengthBig.Uint64())
return
}
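// decodeBytesExample is a hypothetical walk-through (not part of the original
// file) of the head/tail layout handled above: word 0 holds the offset (0x20)
// to the tail, the word at that offset holds the length (5), and the payload
// follows, right-padded to 32 bytes. lengthPrefixPointsTo thus returns
// start=64 and length=5, and toGoType slices output[64:69] == "hello".
func decodeBytesExample() ([]byte, error) {
	output := make([]byte, 96)
	output[31] = 0x20                  // offset word: tail starts at byte 32
	output[63] = 0x05                  // length word: 5 payload bytes
	copy(output[64:], []byte("hello")) // payload, zero-padded to word size
	typ, _ := NewType("bytes")
	res, err := toGoType(0, typ, output)
	if err != nil {
		return nil, err
	}
	return res.([]byte), nil // []byte("hello")
}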


@@ -1,822 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"bytes"
"encoding/hex"
"fmt"
"math/big"
"reflect"
"strconv"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/stretchr/testify/require"
)
type unpackTest struct {
def string // ABI definition JSON
enc string // EVM return data
want interface{} // the expected output
err string // empty, or the expected error string
}
func (test unpackTest) checkError(err error) error {
if err != nil {
if len(test.err) == 0 {
return fmt.Errorf("expected no err but got: %v", err)
} else if err.Error() != test.err {
return fmt.Errorf("expected err: '%v' got err: %q", test.err, err)
}
} else if len(test.err) > 0 {
return fmt.Errorf("expected err: %v but got none", test.err)
}
return nil
}
var unpackTests = []unpackTest{
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: true,
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000000",
want: false,
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000001000000000001",
want: false,
err: "abi: improperly encoded boolean value",
},
{
def: `[{ "type": "bool" }]`,
enc: "0000000000000000000000000000000000000000000000000000000000000003",
want: false,
err: "abi: improperly encoded boolean value",
},
{
def: `[{"type": "uint32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: uint32(1),
},
{
def: `[{"type": "uint32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: uint16(0),
err: "abi: cannot unmarshal uint32 in to uint16",
},
{
def: `[{"type": "uint17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: uint16(0),
err: "abi: cannot unmarshal *big.Int in to uint16",
},
{
def: `[{"type": "uint17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: big.NewInt(1),
},
{
def: `[{"type": "int32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: int32(1),
},
{
def: `[{"type": "int32"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: int16(0),
err: "abi: cannot unmarshal int32 in to int16",
},
{
def: `[{"type": "int17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: int16(0),
err: "abi: cannot unmarshal *big.Int in to int16",
},
{
def: `[{"type": "int17"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001",
want: big.NewInt(1),
},
{
def: `[{"type": "int256"}]`,
enc: "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
want: big.NewInt(-1),
},
{
def: `[{"type": "address"}]`,
enc: "0000000000000000000000000100000000000000000000000000000000000000",
want: common.Address{1},
},
{
def: `[{"type": "bytes32"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
def: `[{"type": "bytes"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200100000000000000000000000000000000000000000000000000000000000000",
want: common.Hex2Bytes("0100000000000000000000000000000000000000000000000000000000000000"),
},
{
def: `[{"type": "bytes"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{},
err: "abi: cannot unmarshal []uint8 in to [32]uint8",
},
{
def: `[{"type": "bytes32"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200100000000000000000000000000000000000000000000000000000000000000",
want: []byte(nil),
err: "abi: cannot unmarshal [32]uint8 in to []uint8",
},
{
def: `[{"type": "bytes32"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [32]byte{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
def: `[{"type": "function"}]`,
enc: "0100000000000000000000000000000000000000000000000000000000000000",
want: [24]byte{1},
},
// slices
{
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint8{1, 2},
},
{
def: `[{"type": "uint8[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint8{1, 2},
},
// multi-dimensional; if these pass, all types that don't require a length prefix should pass
{
def: `[{"type": "uint8[][]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000E0000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[2][2]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2][2]uint8{{1, 2}, {1, 2}},
},
{
def: `[{"type": "uint8[][2]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001",
want: [2][]uint8{{1}, {1}},
},
{
def: `[{"type": "uint8[2][]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [][2]uint8{{1, 2}},
},
{
def: `[{"type": "uint16[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint16{1, 2},
},
{
def: `[{"type": "uint16[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint16{1, 2},
},
{
def: `[{"type": "uint32[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint32{1, 2},
},
{
def: `[{"type": "uint32[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint32{1, 2},
},
{
def: `[{"type": "uint32[2][3][4]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000050000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000700000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000009000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000b000000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000000d000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000000f000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000110000000000000000000000000000000000000000000000000000000000000012000000000000000000000000000000000000000000000000000000000000001300000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000015000000000000000000000000000000000000000000000000000000000000001600000000000000000000000000000000000000000000000000000000000000170000000000000000000000000000000000000000000000000000000000000018",
want: [4][3][2]uint32{{{1, 2}, {3, 4}, {5, 6}}, {{7, 8}, {9, 10}, {11, 12}}, {{13, 14}, {15, 16}, {17, 18}}, {{19, 20}, {21, 22}, {23, 24}}},
},
{
def: `[{"type": "uint64[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []uint64{1, 2},
},
{
def: `[{"type": "uint64[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]uint64{1, 2},
},
{
def: `[{"type": "uint256[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "uint256[3]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
{
def: `[{"type": "int8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int8{1, 2},
},
{
def: `[{"type": "int8[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int8{1, 2},
},
{
def: `[{"type": "int16[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int16{1, 2},
},
{
def: `[{"type": "int16[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int16{1, 2},
},
{
def: `[{"type": "int32[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int32{1, 2},
},
{
def: `[{"type": "int32[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int32{1, 2},
},
{
def: `[{"type": "int64[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []int64{1, 2},
},
{
def: `[{"type": "int64[2]"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: [2]int64{1, 2},
},
{
def: `[{"type": "int256[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: []*big.Int{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"type": "int256[3]"}]`,
enc: "000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003",
want: [3]*big.Int{big.NewInt(1), big.NewInt(2), big.NewInt(3)},
},
// struct outputs
{
def: `[{"name":"int1","type":"int256"},{"name":"int2","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{big.NewInt(1), big.NewInt(2)},
},
{
def: `[{"name":"int","type":"int256"},{"name":"Int","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{},
err: "abi: multiple outputs mapping to the same struct field 'Int'",
},
{
def: `[{"name":"int","type":"int256"},{"name":"_int","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{},
err: "abi: multiple outputs mapping to the same struct field 'Int'",
},
{
def: `[{"name":"Int","type":"int256"},{"name":"_int","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{},
err: "abi: multiple outputs mapping to the same struct field 'Int'",
},
{
def: `[{"name":"Int","type":"int256"},{"name":"_","type":"int256"}]`,
enc: "00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000002",
want: struct {
Int1 *big.Int
Int2 *big.Int
}{},
err: "abi: purely underscored output cannot unpack to struct",
},
}
func TestUnpack(t *testing.T) {
for i, test := range unpackTests {
t.Run(strconv.Itoa(i), func(t *testing.T) {
def := fmt.Sprintf(`[{ "name" : "method", "outputs": %s}]`, test.def)
abi, err := JSON(strings.NewReader(def))
if err != nil {
t.Fatalf("invalid ABI definition %s: %v", def, err)
}
encb, err := hex.DecodeString(test.enc)
if err != nil {
t.Fatalf("invalid hex: %s" + test.enc)
}
outptr := reflect.New(reflect.TypeOf(test.want))
err = abi.Unpack(outptr.Interface(), "method", encb)
if err := test.checkError(err); err != nil {
t.Errorf("test %d (%v) failed: %v", i, test.def, err)
return
}
out := outptr.Elem().Interface()
if !reflect.DeepEqual(test.want, out) {
t.Errorf("test %d (%v) failed: expected %v, got %v", i, test.def, test.want, out)
}
})
}
}
type methodMultiOutput struct {
Int *big.Int
String string
}
func methodMultiReturn(require *require.Assertions) (ABI, []byte, methodMultiOutput) {
const definition = `[
{ "name" : "multi", "constant" : false, "outputs": [ { "name": "Int", "type": "uint256" }, { "name": "String", "type": "string" } ] }]`
var expected = methodMultiOutput{big.NewInt(1), "hello"}
abi, err := JSON(strings.NewReader(definition))
require.NoError(err)
// using buff to make the code readable
buff := new(bytes.Buffer)
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000005"))
buff.Write(common.RightPadBytes([]byte(expected.String), 32))
return abi, buff.Bytes(), expected
}
func TestMethodMultiReturn(t *testing.T) {
type reversed struct {
String string
Int *big.Int
}
abi, data, expected := methodMultiReturn(require.New(t))
bigint := new(big.Int)
var testCases = []struct {
dest interface{}
expected interface{}
error string
name string
}{{
&methodMultiOutput{},
&expected,
"",
"Can unpack into structure",
}, {
&reversed{},
&reversed{expected.String, expected.Int},
"",
"Can unpack into reversed structure",
}, {
&[]interface{}{&bigint, new(string)},
&[]interface{}{&expected.Int, &expected.String},
"",
"Can unpack into a slice",
}, {
&[2]interface{}{&bigint, new(string)},
&[2]interface{}{&expected.Int, &expected.String},
"",
"Can unpack into an array",
}, {
&[]interface{}{new(int), new(int)},
&[]interface{}{&expected.Int, &expected.String},
"abi: cannot unmarshal *big.Int in to int",
"Can not unpack into a slice with wrong types",
}, {
&[]interface{}{new(int)},
&[]interface{}{},
"abi: insufficient number of elements in the list/array for unpack, want 2, got 1",
"Can not unpack into a slice with wrong types",
}}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
require := require.New(t)
err := abi.Unpack(tc.dest, "multi", data)
if tc.error == "" {
require.Nil(err, "Should be able to unpack method outputs.")
require.Equal(tc.expected, tc.dest)
} else {
require.EqualError(err, tc.error)
}
})
}
}
func TestMultiReturnWithArray(t *testing.T) {
const definition = `[{"name" : "multi", "outputs": [{"type": "uint64[3]"}, {"type": "uint64"}]}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
buff := new(bytes.Buffer)
buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000900000000000000000000000000000000000000000000000000000000000000090000000000000000000000000000000000000000000000000000000000000009"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000008"))
ret1, ret1Exp := new([3]uint64), [3]uint64{9, 9, 9}
ret2, ret2Exp := new(uint64), uint64(8)
if err := abi.Unpack(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
t.Error("array result", *ret1, "!= Expected", ret1Exp)
}
if *ret2 != ret2Exp {
t.Error("int result", *ret2, "!= Expected", ret2Exp)
}
}
func TestMultiReturnWithDeeplyNestedArray(t *testing.T) {
// Similar to TestMultiReturnWithArray, but with a special case in mind:
// values of nested static arrays count towards the size as well, and any element following
// after such nested array argument should be read with the correct offset,
// so that it does not read content from the previous array argument.
const definition = `[{"name" : "multi", "outputs": [{"type": "uint64[3][2][4]"}, {"type": "uint64"}]}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
buff := new(bytes.Buffer)
// construct the test array, each 3 char element is joined with 61 '0' chars,
// to form the ((3 + 61) / 2) = 32 byte elements in the array.
buff.Write(common.Hex2Bytes(strings.Join([]string{
"", //empty, to apply the 61-char separator to the first element as well.
"111", "112", "113", "121", "122", "123",
"211", "212", "213", "221", "222", "223",
"311", "312", "313", "321", "322", "323",
"411", "412", "413", "421", "422", "423",
}, "0000000000000000000000000000000000000000000000000000000000000")))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000009876"))
ret1, ret1Exp := new([4][2][3]uint64), [4][2][3]uint64{
{{0x111, 0x112, 0x113}, {0x121, 0x122, 0x123}},
{{0x211, 0x212, 0x213}, {0x221, 0x222, 0x223}},
{{0x311, 0x312, 0x313}, {0x321, 0x322, 0x323}},
{{0x411, 0x412, 0x413}, {0x421, 0x422, 0x423}},
}
ret2, ret2Exp := new(uint64), uint64(0x9876)
if err := abi.Unpack(&[]interface{}{ret1, ret2}, "multi", buff.Bytes()); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(*ret1, ret1Exp) {
t.Error("array result", *ret1, "!= Expected", ret1Exp)
}
if *ret2 != ret2Exp {
t.Error("int result", *ret2, "!= Expected", ret2Exp)
}
}
func TestUnmarshal(t *testing.T) {
const definition = `[
{ "name" : "int", "constant" : false, "outputs": [ { "type": "uint256" } ] },
{ "name" : "bool", "constant" : false, "outputs": [ { "type": "bool" } ] },
{ "name" : "bytes", "constant" : false, "outputs": [ { "type": "bytes" } ] },
{ "name" : "fixed", "constant" : false, "outputs": [ { "type": "bytes32" } ] },
{ "name" : "multi", "constant" : false, "outputs": [ { "type": "bytes" }, { "type": "bytes" } ] },
{ "name" : "intArraySingle", "constant" : false, "outputs": [ { "type": "uint256[3]" } ] },
{ "name" : "addressSliceSingle", "constant" : false, "outputs": [ { "type": "address[]" } ] },
{ "name" : "addressSliceDouble", "constant" : false, "outputs": [ { "name": "a", "type": "address[]" }, { "name": "b", "type": "address[]" } ] },
{ "name" : "mixedBytes", "constant" : true, "outputs": [ { "name": "a", "type": "bytes" }, { "name": "b", "type": "bytes32" } ] }]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
buff := new(bytes.Buffer)
// marshal mixed bytes (mixedBytes)
p0, p0Exp := []byte{}, common.Hex2Bytes("01020000000000000000")
p1, p1Exp := [32]byte{}, common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000ddeeff")
mixedBytes := []interface{}{&p0, &p1}
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000ddeeff"))
buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000000a"))
buff.Write(common.Hex2Bytes("0102000000000000000000000000000000000000000000000000000000000000"))
err = abi.Unpack(&mixedBytes, "mixedBytes", buff.Bytes())
if err != nil {
t.Error(err)
} else {
if !bytes.Equal(p0, p0Exp) {
t.Errorf("unexpected value unpacked: want %x, got %x", p0Exp, p0)
}
if !bytes.Equal(p1[:], p1Exp) {
t.Errorf("unexpected value unpacked: want %x, got %x", p1Exp, p1)
}
}
// marshal int
var Int *big.Int
err = abi.Unpack(&Int, "int", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
if err != nil {
t.Error(err)
}
if Int == nil || Int.Cmp(big.NewInt(1)) != 0 {
t.Error("expected Int to be 1 got", Int)
}
// marshal bool
var Bool bool
err = abi.Unpack(&Bool, "bool", common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
if err != nil {
t.Error(err)
}
if !Bool {
t.Error("expected Bool to be true")
}
// marshal dynamic bytes max length 32
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
bytesOut := common.RightPadBytes([]byte("hello"), 32)
buff.Write(bytesOut)
var Bytes []byte
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
if !bytes.Equal(Bytes, bytesOut) {
t.Errorf("expected %x got %x", bytesOut, Bytes)
}
// marshal dynamic bytes max length 64
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040"))
bytesOut = common.RightPadBytes([]byte("hello"), 64)
buff.Write(bytesOut)
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
if !bytes.Equal(Bytes, bytesOut) {
t.Errorf("expected %x got %x", bytesOut, Bytes)
}
// marshal dynamic bytes with length 63 (0x3f), data padded to 64 bytes
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000000000003f"))
bytesOut = common.RightPadBytes([]byte("hello"), 64)
buff.Write(bytesOut)
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
if !bytes.Equal(Bytes, bytesOut[:len(bytesOut)-1]) {
t.Errorf("expected %x got %x", bytesOut[:len(bytesOut)-1], Bytes)
}
// marshal dynamic bytes output empty
err = abi.Unpack(&Bytes, "bytes", nil)
if err == nil {
t.Error("expected error")
}
// marshal dynamic bytes length 5
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000005"))
buff.Write(common.RightPadBytes([]byte("hello"), 32))
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
if err != nil {
t.Error(err)
}
if !bytes.Equal(Bytes, []byte("hello")) {
t.Errorf("expected %x got %x", bytesOut, Bytes)
}
// marshal fixed-size bytes (bytes32)
buff.Reset()
buff.Write(common.RightPadBytes([]byte("hello"), 32))
var hash common.Hash
err = abi.Unpack(&hash, "fixed", buff.Bytes())
if err != nil {
t.Error(err)
}
helloHash := common.BytesToHash(common.RightPadBytes([]byte("hello"), 32))
if hash != helloHash {
t.Errorf("Expected %x to equal %x", hash, helloHash)
}
// marshal error
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020"))
err = abi.Unpack(&Bytes, "bytes", buff.Bytes())
if err == nil {
t.Error("expected error")
}
err = abi.Unpack(&Bytes, "multi", make([]byte, 64))
if err == nil {
t.Error("expected error")
}
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000003"))
// marshal int array
var intArray [3]*big.Int
err = abi.Unpack(&intArray, "intArraySingle", buff.Bytes())
if err != nil {
t.Error(err)
}
var testAgainstIntArray [3]*big.Int
testAgainstIntArray[0] = big.NewInt(1)
testAgainstIntArray[1] = big.NewInt(2)
testAgainstIntArray[2] = big.NewInt(3)
for i, Int := range intArray {
if Int.Cmp(testAgainstIntArray[i]) != 0 {
t.Errorf("expected %v, got %v", testAgainstIntArray[i], Int)
}
}
// marshal address slice
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000020")) // offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // size
buff.Write(common.Hex2Bytes("0000000000000000000000000100000000000000000000000000000000000000"))
var outAddr []common.Address
err = abi.Unpack(&outAddr, "addressSliceSingle", buff.Bytes())
if err != nil {
t.Fatal("didn't expect error:", err)
}
if len(outAddr) != 1 {
t.Fatal("expected 1 item, got", len(outAddr))
}
if outAddr[0] != (common.Address{1}) {
t.Errorf("expected %x, got %x", common.Address{1}, outAddr[0])
}
// marshal multiple address slice
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000040")) // offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000080")) // offset
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")) // size
buff.Write(common.Hex2Bytes("0000000000000000000000000100000000000000000000000000000000000000"))
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000002")) // size
buff.Write(common.Hex2Bytes("0000000000000000000000000200000000000000000000000000000000000000"))
buff.Write(common.Hex2Bytes("0000000000000000000000000300000000000000000000000000000000000000"))
var outAddrStruct struct {
A []common.Address
B []common.Address
}
err = abi.Unpack(&outAddrStruct, "addressSliceDouble", buff.Bytes())
if err != nil {
t.Fatal("didn't expect error:", err)
}
if len(outAddrStruct.A) != 1 {
t.Fatal("expected 1 item, got", len(outAddrStruct.A))
}
if outAddrStruct.A[0] != (common.Address{1}) {
t.Errorf("expected %x, got %x", common.Address{1}, outAddrStruct.A[0])
}
if len(outAddrStruct.B) != 2 {
t.Fatal("expected 1 item, got", len(outAddrStruct.B))
}
if outAddrStruct.B[0] != (common.Address{2}) {
t.Errorf("expected %x, got %x", common.Address{2}, outAddrStruct.B[0])
}
if outAddrStruct.B[1] != (common.Address{3}) {
t.Errorf("expected %x, got %x", common.Address{3}, outAddrStruct.B[1])
}
// marshal invalid address slice
buff.Reset()
buff.Write(common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000100"))
err = abi.Unpack(&outAddr, "addressSliceSingle", buff.Bytes())
if err == nil {
t.Fatal("expected error:", err)
}
}
func TestOOMMaliciousInput(t *testing.T) {
oomTests := []unpackTest{
{
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020" + // offset
"0000000000000000000000000000000000000000000000000000000000000003" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Length larger than 64 bits
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020" + // offset
"00ffffffffffffffffffffffffffffffffffffffffffffff0000000000000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Offset very large (over 64 bits)
def: `[{"type": "uint8[]"}]`,
enc: "00ffffffffffffffffffffffffffffffffffffffffffffff0000000000000020" + // offset
"0000000000000000000000000000000000000000000000000000000000000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Offset very large (below 64 bits)
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000007ffffffffff00020" + // offset
"0000000000000000000000000000000000000000000000000000000000000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Offset negative (as 64 bit)
def: `[{"type": "uint8[]"}]`,
enc: "000000000000000000000000000000000000000000000000f000000000000020" + // offset
"0000000000000000000000000000000000000000000000000000000000000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Negative length
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020" + // offset
"000000000000000000000000000000000000000000000000f000000000000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
{ // Very large length
def: `[{"type": "uint8[]"}]`,
enc: "0000000000000000000000000000000000000000000000000000000000000020" + // offset
"0000000000000000000000000000000000000000000000007fffffffff000002" + // num elems
"0000000000000000000000000000000000000000000000000000000000000001" + // elem 1
"0000000000000000000000000000000000000000000000000000000000000002", // elem 2
},
}
for i, test := range oomTests {
def := fmt.Sprintf(`[{ "name" : "method", "outputs": %s}]`, test.def)
abi, err := JSON(strings.NewReader(def))
if err != nil {
t.Fatalf("invalid ABI definition %s: %v", def, err)
}
encb, err := hex.DecodeString(test.enc)
if err != nil {
t.Fatalf("invalid hex: %s" + test.enc)
}
_, err = abi.Methods["method"].Outputs.UnpackValues(encb)
if err == nil {
t.Fatalf("Expected error on malicious input, test %d", i)
}
}
}
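// exampleUnpackFlow is a hypothetical in-package sketch (not part of the
// original file) of the public flow the harnesses above drive: parse an ABI
// definition with JSON, then Unpack a 32-byte return word into a Go value.
func exampleUnpackFlow() (*big.Int, error) {
	def := `[{ "name" : "get", "outputs": [ { "type": "uint256" } ] }]`
	parsed, err := JSON(strings.NewReader(def))
	if err != nil {
		return nil, err
	}
	ret := common.Hex2Bytes("0000000000000000000000000000000000000000000000000000000000000001")
	var out *big.Int
	if err := parsed.Unpack(&out, "get", ret); err != nil {
		return nil, err
	}
	return out, nil // 1
}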


@@ -1,173 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package accounts implements high level Ethereum account management.
package accounts
import (
"math/big"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/event"
)
// Account represents an Ethereum account located at a specific location defined
// by the optional URL field.
type Account struct {
Address common.Address `json:"address"` // Ethereum account address derived from the key
URL URL `json:"url"` // Optional resource locator within a backend
}
// Wallet represents a software or hardware wallet that might contain one or more
// accounts (derived from the same seed).
type Wallet interface {
// URL retrieves the canonical path under which this wallet is reachable. It is
// used by upper layers to define a sorting order over all wallets from multiple
// backends.
URL() URL
// Status returns a textual status to aid the user in the current state of the
// wallet. It also returns an error indicating any failure the wallet might have
// encountered.
Status() (string, error)
// Open initializes access to a wallet instance. It is not meant to unlock or
// decrypt account keys, rather simply to establish a connection to hardware
// wallets and/or to access derivation seeds.
//
// The passphrase parameter may or may not be used by the implementation of a
// particular wallet instance. The reason there is no passwordless open method
// is to strive towards a uniform wallet handling, oblivious to the different
// backend providers.
//
// Please note, if you open a wallet, you must close it to release any allocated
// resources (especially important when working with hardware wallets).
Open(passphrase string) error
// Close releases any resources held by an open wallet instance.
Close() error
// Accounts retrieves the list of signing accounts the wallet is currently aware
// of. For hierarchical deterministic wallets, the list will not be exhaustive,
// rather only contain the accounts explicitly pinned during account derivation.
Accounts() []Account
// Contains returns whether an account is part of this particular wallet or not.
Contains(account Account) bool
// Derive attempts to explicitly derive a hierarchical deterministic account at
// the specified derivation path. If requested, the derived account will be added
// to the wallet's tracked account list.
Derive(path DerivationPath, pin bool) (Account, error)
// SelfDerive sets a base account derivation path from which the wallet attempts
// to discover non-zero accounts and automatically add them to the list of tracked
// accounts.
//
// Note, self derivation will increment the last component of the specified path
// as opposed to descending into a child path, to allow discovering accounts
// starting from non-zero components.
//
// You can disable automatic account discovery by calling SelfDerive with a nil
// chain state reader.
SelfDerive(base DerivationPath, chain ethereum.ChainStateReader)
// SignHash requests the wallet to sign the given hash.
//
// It looks up the account specified either solely via its address contained within,
// or optionally with the aid of any location metadata from the embedded URL field.
//
// If the wallet requires additional authentication to sign the request (e.g.
// a password to decrypt the account, or a PIN code to verify the transaction),
// an AuthNeededError instance will be returned, containing infos for the user
// about which fields or actions are needed. The user may retry by providing
// the needed details via SignHashWithPassphrase, or by other means (e.g. unlock
// the account in a keystore).
SignHash(account Account, hash []byte) ([]byte, error)
// SignTx requests the wallet to sign the given transaction.
//
// It looks up the account specified either solely via its address contained within,
// or optionally with the aid of any location metadata from the embedded URL field.
//
// If the wallet requires additional authentication to sign the request (e.g.
// a password to decrypt the account, or a PIN code to verify the transaction),
// an AuthNeededError instance will be returned, containing infos for the user
// about which fields or actions are needed. The user may retry by providing
// the needed details via SignTxWithPassphrase, or by other means (e.g. unlock
// the account in a keystore).
SignTx(account Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error)
// SignHashWithPassphrase requests the wallet to sign the given hash with the
// given passphrase as extra authentication information.
//
// It looks up the account specified either solely via its address contained within,
// or optionally with the aid of any location metadata from the embedded URL field.
SignHashWithPassphrase(account Account, passphrase string, hash []byte) ([]byte, error)
// SignTxWithPassphrase requests the wallet to sign the given transaction, with the
// given passphrase as extra authentication information.
//
// It looks up the account specified either solely via its address contained within,
// or optionally with the aid of any location metadata from the embedded URL field.
SignTxWithPassphrase(account Account, passphrase string, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error)
}
// Backend is a "wallet provider" that may contain a batch of accounts it can
// sign transactions with and, upon request, does so.
type Backend interface {
// Wallets retrieves the list of wallets the backend is currently aware of.
//
// The returned wallets are not opened by default. For software HD wallets this
// means that no base seeds are decrypted, and for hardware wallets that no actual
// connection is established.
//
// The resulting wallet list will be sorted alphabetically based on its internal
// URL assigned by the backend. Since wallets (especially hardware) may come and
// go, the same wallet might appear at different positions in the list during
// subsequent retrievals.
Wallets() []Wallet
// Subscribe creates an async subscription to receive notifications when the
// backend detects the arrival or departure of a wallet.
Subscribe(sink chan<- WalletEvent) event.Subscription
}
// WalletEventType represents the different event types that can be fired by
// the wallet subscription subsystem.
type WalletEventType int
const (
// WalletArrived is fired when a new wallet is detected either via USB or via
// a filesystem event in the keystore.
WalletArrived WalletEventType = iota
// WalletOpened is fired when a wallet is successfully opened with the purpose
// of starting any background processes such as automatic key derivation.
WalletOpened
// WalletDropped is fired when a wallet is removed or disconnected.
WalletDropped
)
// WalletEvent is an event fired by an account backend when a wallet arrival or
// departure is detected.
type WalletEvent struct {
Wallet Wallet // Wallet instance arrived or departed
Kind WalletEventType // Event type that happened in the system
}
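// watchWallets is a hypothetical sketch (not part of the original file)
// showing how a consumer of the Backend interface above would subscribe to
// wallet arrivals and departures.
func watchWallets(backend Backend) {
	sink := make(chan WalletEvent, 16)
	sub := backend.Subscribe(sink)
	defer sub.Unsubscribe()
	for {
		select {
		case ev := <-sink:
			switch ev.Kind {
			case WalletArrived:
				// Wallets must be opened before use and closed afterwards
				// to release any held resources.
				_ = ev.Wallet.Open("")
			case WalletDropped:
				ev.Wallet.Close()
			}
		case <-sub.Err():
			return
		}
	}
}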


@@ -1,79 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
import (
"reflect"
"testing"
)
// Tests that HD derivation paths can be correctly parsed into our internal binary
// representation.
func TestHDPathParsing(t *testing.T) {
tests := []struct {
input string
output DerivationPath
}{
// Plain absolute derivation paths
{"m/44'/60'/0'/0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"m/44'/60'/0'/128", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 128}},
{"m/44'/60'/0'/0'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
{"m/44'/60'/0'/128'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 128}},
{"m/2147483692/2147483708/2147483648/0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"m/2147483692/2147483708/2147483648/2147483648", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
// Plain relative derivation paths
{"0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0}},
{"128", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 128}},
{"0'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 0}},
{"128'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 128}},
{"2147483648", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 0}},
// Hexadecimal absolute derivation paths
{"m/0x2C'/0x3c'/0x00'/0x00", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"m/0x2C'/0x3c'/0x00'/0x80", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 128}},
{"m/0x2C'/0x3c'/0x00'/0x00'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
{"m/0x2C'/0x3c'/0x00'/0x80'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 128}},
{"m/0x8000002C/0x8000003c/0x80000000/0x00", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
{"m/0x8000002C/0x8000003c/0x80000000/0x80000000", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0x80000000 + 0}},
// Hexadecimal relative derivation paths
{"0x00", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0}},
{"0x80", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 128}},
{"0x00'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 0}},
{"0x80'", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 128}},
{"0x80000000", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0x80000000 + 0}},
// Weird inputs just to ensure they work
{" m / 44 '\n/\n 60 \n\n\t' /\n0 ' /\t\t 0", DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}},
// Invalid derivation paths
{"", nil}, // Empty relative derivation path
{"m", nil}, // Empty absolute derivation path
{"m/", nil}, // Missing last derivation component
{"/44'/60'/0'/0", nil}, // Absolute path without m prefix, might be user error
{"m/2147483648'", nil}, // Overflows 32 bit integer
{"m/-1'", nil}, // Cannot contain negative number
}
for i, tt := range tests {
if path, err := ParseDerivationPath(tt.input); !reflect.DeepEqual(path, tt.output) {
t.Errorf("test %d: parse mismatch: have %v (%v), want %v", i, path, err, tt.output)
} else if path == nil && err == nil {
t.Errorf("test %d: nil path and error: %v", i, err)
}
}
}
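// exampleParseDefaultPath is a hypothetical sketch (not part of the original
// file): the conventional Ethereum path m/44'/60'/0'/0 parses into four
// uint32 components, with 0x80000000 added to each hardened (') component, as
// the table above shows.
func exampleParseDefaultPath() (DerivationPath, error) {
	// => DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000, 0}
	return ParseDerivationPath("m/44'/60'/0'/0")
}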


@@ -1,406 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package keystore
import (
"fmt"
"io/ioutil"
"math/rand"
"os"
"path/filepath"
"reflect"
"sort"
"testing"
"time"
"github.com/cespare/cp"
"github.com/davecgh/go-spew/spew"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
)
var (
cachetestDir, _ = filepath.Abs(filepath.Join("testdata", "keystore"))
cachetestAccounts = []accounts.Account{
{
Address: common.HexToAddress("7ef5a6135f1fd6a02593eedc869c6d41d934aef8"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(cachetestDir, "UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8")},
},
{
Address: common.HexToAddress("f466859ead1932d743d622cb74fc058882e8648a"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(cachetestDir, "aaa")},
},
{
Address: common.HexToAddress("289d485d9771714cce91d3393d764e1311907acc"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(cachetestDir, "zzz")},
},
}
)
func TestWatchNewFile(t *testing.T) {
t.Parallel()
dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
// Ensure the watcher is started before adding any files.
ks.Accounts()
time.Sleep(1000 * time.Millisecond)
// Move in the files.
wantAccounts := make([]accounts.Account, len(cachetestAccounts))
for i := range cachetestAccounts {
wantAccounts[i] = accounts.Account{
Address: cachetestAccounts[i].Address,
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, filepath.Base(cachetestAccounts[i].URL.Path))},
}
if err := cp.CopyFile(wantAccounts[i].URL.Path, cachetestAccounts[i].URL.Path); err != nil {
t.Fatal(err)
}
}
// ks should see the accounts.
var list []accounts.Account
for d := 200 * time.Millisecond; d < 5*time.Second; d *= 2 {
list = ks.Accounts()
if reflect.DeepEqual(list, wantAccounts) {
// ks should have also received change notifications
select {
case <-ks.changes:
default:
t.Fatalf("wasn't notified of new accounts")
}
return
}
time.Sleep(d)
}
t.Errorf("got %s, want %s", spew.Sdump(list), spew.Sdump(wantAccounts))
}
func TestWatchNoDir(t *testing.T) {
t.Parallel()
// Create ks but not the directory that it watches.
rand.Seed(time.Now().UnixNano())
dir := filepath.Join(os.TempDir(), fmt.Sprintf("eth-keystore-watch-test-%d-%d", os.Getpid(), rand.Int()))
ks := NewKeyStore(dir, LightScryptN, LightScryptP)
list := ks.Accounts()
if len(list) > 0 {
t.Error("initial account list not empty:", list)
}
time.Sleep(100 * time.Millisecond)
// Create the directory and copy a key file into it.
os.MkdirAll(dir, 0700)
defer os.RemoveAll(dir)
file := filepath.Join(dir, "aaa")
if err := cp.CopyFile(file, cachetestAccounts[0].URL.Path); err != nil {
t.Fatal(err)
}
// ks should see the account.
wantAccounts := []accounts.Account{cachetestAccounts[0]}
wantAccounts[0].URL = accounts.URL{Scheme: KeyStoreScheme, Path: file}
for d := 200 * time.Millisecond; d < 8*time.Second; d *= 2 {
list = ks.Accounts()
if reflect.DeepEqual(list, wantAccounts) {
// ks should have also received change notifications
select {
case <-ks.changes:
default:
t.Fatalf("wasn't notified of new accounts")
}
return
}
time.Sleep(d)
}
t.Errorf("\ngot %v\nwant %v", list, wantAccounts)
}
func TestCacheInitialReload(t *testing.T) {
cache, _ := newAccountCache(cachetestDir)
accounts := cache.accounts()
if !reflect.DeepEqual(accounts, cachetestAccounts) {
t.Fatalf("got initial accounts: %swant %s", spew.Sdump(accounts), spew.Sdump(cachetestAccounts))
}
}
func TestCacheAddDeleteOrder(t *testing.T) {
cache, _ := newAccountCache("testdata/no-such-dir")
cache.watcher.running = true // prevent unexpected reloads
accs := []accounts.Account{
{
Address: common.HexToAddress("095e7baea6a6c7c4c2dfeb977efac326af552d87"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "-309830980"},
},
{
Address: common.HexToAddress("2cac1adea150210703ba75ed097ddfe24e14f213"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "ggg"},
},
{
Address: common.HexToAddress("8bda78331c916a08481428e4b07c96d3e916d165"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "zzzzzz-the-very-last-one.keyXXX"},
},
{
Address: common.HexToAddress("d49ff4eeb0b2686ed89c0fc0f2b6ea533ddbbd5e"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "SOMETHING.key"},
},
{
Address: common.HexToAddress("7ef5a6135f1fd6a02593eedc869c6d41d934aef8"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8"},
},
{
Address: common.HexToAddress("f466859ead1932d743d622cb74fc058882e8648a"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "aaa"},
},
{
Address: common.HexToAddress("289d485d9771714cce91d3393d764e1311907acc"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: "zzz"},
},
}
for _, a := range accs {
cache.add(a)
}
// Add some of them twice to check that they don't get reinserted.
cache.add(accs[0])
cache.add(accs[2])
// Check that the account list is sorted by filename.
wantAccounts := make([]accounts.Account, len(accs))
copy(wantAccounts, accs)
sort.Sort(accountsByURL(wantAccounts))
list := cache.accounts()
if !reflect.DeepEqual(list, wantAccounts) {
t.Fatalf("got accounts: %s\nwant %s", spew.Sdump(accs), spew.Sdump(wantAccounts))
}
for _, a := range accs {
if !cache.hasAddress(a.Address) {
t.Errorf("expected hasAccount(%x) to return true", a.Address)
}
}
if cache.hasAddress(common.HexToAddress("fd9bd350f08ee3c0c19b85a8e16114a11a60aa4e")) {
t.Errorf("expected hasAccount(%x) to return false", common.HexToAddress("fd9bd350f08ee3c0c19b85a8e16114a11a60aa4e"))
}
// Delete a few keys from the cache.
for i := 0; i < len(accs); i += 2 {
cache.delete(wantAccounts[i])
}
cache.delete(accounts.Account{Address: common.HexToAddress("fd9bd350f08ee3c0c19b85a8e16114a11a60aa4e"), URL: accounts.URL{Scheme: KeyStoreScheme, Path: "something"}})
// Check content again after deletion.
wantAccountsAfterDelete := []accounts.Account{
wantAccounts[1],
wantAccounts[3],
wantAccounts[5],
}
list = cache.accounts()
if !reflect.DeepEqual(list, wantAccountsAfterDelete) {
t.Fatalf("got accounts after delete: %s\nwant %s", spew.Sdump(list), spew.Sdump(wantAccountsAfterDelete))
}
for _, a := range wantAccountsAfterDelete {
if !cache.hasAddress(a.Address) {
t.Errorf("expected hasAccount(%x) to return true", a.Address)
}
}
if cache.hasAddress(wantAccounts[0].Address) {
t.Errorf("expected hasAccount(%x) to return false", wantAccounts[0].Address)
}
}
func TestCacheFind(t *testing.T) {
dir := filepath.Join("testdata", "dir")
cache, _ := newAccountCache(dir)
cache.watcher.running = true // prevent unexpected reloads
accs := []accounts.Account{
{
Address: common.HexToAddress("095e7baea6a6c7c4c2dfeb977efac326af552d87"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, "a.key")},
},
{
Address: common.HexToAddress("2cac1adea150210703ba75ed097ddfe24e14f213"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, "b.key")},
},
{
Address: common.HexToAddress("d49ff4eeb0b2686ed89c0fc0f2b6ea533ddbbd5e"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, "c.key")},
},
{
Address: common.HexToAddress("d49ff4eeb0b2686ed89c0fc0f2b6ea533ddbbd5e"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, "c2.key")},
},
}
for _, a := range accs {
cache.add(a)
}
nomatchAccount := accounts.Account{
Address: common.HexToAddress("f466859ead1932d743d622cb74fc058882e8648a"),
URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Join(dir, "something")},
}
tests := []struct {
Query accounts.Account
WantResult accounts.Account
WantError error
}{
// by address
{Query: accounts.Account{Address: accs[0].Address}, WantResult: accs[0]},
// by file
{Query: accounts.Account{URL: accs[0].URL}, WantResult: accs[0]},
// by basename
{Query: accounts.Account{URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Base(accs[0].URL.Path)}}, WantResult: accs[0]},
// by file and address
{Query: accs[0], WantResult: accs[0]},
// ambiguous address, tie resolved by file
{Query: accs[2], WantResult: accs[2]},
// ambiguous address error
{
Query: accounts.Account{Address: accs[2].Address},
WantError: &AmbiguousAddrError{
Addr: accs[2].Address,
Matches: []accounts.Account{accs[2], accs[3]},
},
},
// no match error
{Query: nomatchAccount, WantError: ErrNoMatch},
{Query: accounts.Account{URL: nomatchAccount.URL}, WantError: ErrNoMatch},
{Query: accounts.Account{URL: accounts.URL{Scheme: KeyStoreScheme, Path: filepath.Base(nomatchAccount.URL.Path)}}, WantError: ErrNoMatch},
{Query: accounts.Account{Address: nomatchAccount.Address}, WantError: ErrNoMatch},
}
for i, test := range tests {
a, err := cache.find(test.Query)
if !reflect.DeepEqual(err, test.WantError) {
t.Errorf("test %d: error mismatch for query %v\ngot %q\nwant %q", i, test.Query, err, test.WantError)
continue
}
if a != test.WantResult {
t.Errorf("test %d: result mismatch for query %v\ngot %v\nwant %v", i, test.Query, a, test.WantResult)
continue
}
}
}
func waitForAccounts(wantAccounts []accounts.Account, ks *KeyStore) error {
var list []accounts.Account
for d := 200 * time.Millisecond; d < 8*time.Second; d *= 2 {
list = ks.Accounts()
if reflect.DeepEqual(list, wantAccounts) {
// ks should have also received change notifications
select {
case <-ks.changes:
default:
return fmt.Errorf("wasn't notified of new accounts")
}
return nil
}
time.Sleep(d)
}
return fmt.Errorf("\ngot %v\nwant %v", list, wantAccounts)
}
// TestUpdatedKeyfileContents tests that updating the contents of a keystore file
// is noticed by the watcher, and the account cache is updated accordingly
func TestUpdatedKeyfileContents(t *testing.T) {
t.Parallel()
// Create a temporary keystore to test with
rand.Seed(time.Now().UnixNano())
dir := filepath.Join(os.TempDir(), fmt.Sprintf("eth-keystore-watch-test-%d-%d", os.Getpid(), rand.Int()))
ks := NewKeyStore(dir, LightScryptN, LightScryptP)
list := ks.Accounts()
if len(list) > 0 {
t.Error("initial account list not empty:", list)
}
time.Sleep(100 * time.Millisecond)
// Create the directory and copy a key file into it.
os.MkdirAll(dir, 0700)
defer os.RemoveAll(dir)
file := filepath.Join(dir, "aaa")
// Place one of our testfiles in there
if err := cp.CopyFile(file, cachetestAccounts[0].URL.Path); err != nil {
t.Fatal(err)
}
// ks should see the account.
wantAccounts := []accounts.Account{cachetestAccounts[0]}
wantAccounts[0].URL = accounts.URL{Scheme: KeyStoreScheme, Path: file}
if err := waitForAccounts(wantAccounts, ks); err != nil {
t.Error(err)
return
}
// needed so that modTime of `file` is different from its current value after forceCopyFile
time.Sleep(1000 * time.Millisecond)
// Now replace file contents
if err := forceCopyFile(file, cachetestAccounts[1].URL.Path); err != nil {
t.Fatal(err)
return
}
wantAccounts = []accounts.Account{cachetestAccounts[1]}
wantAccounts[0].URL = accounts.URL{Scheme: KeyStoreScheme, Path: file}
if err := waitForAccounts(wantAccounts, ks); err != nil {
t.Errorf("First replacement failed")
t.Error(err)
return
}
// needed so that modTime of `file` is different from its current value after forceCopyFile
time.Sleep(1000 * time.Millisecond)
// Now replace file contents again
if err := forceCopyFile(file, cachetestAccounts[2].URL.Path); err != nil {
t.Fatal(err)
return
}
wantAccounts = []accounts.Account{cachetestAccounts[2]}
wantAccounts[0].URL = accounts.URL{Scheme: KeyStoreScheme, Path: file}
if err := waitForAccounts(wantAccounts, ks); err != nil {
t.Errorf("Second replacement failed")
t.Error(err)
return
}
// needed so that modTime of `file` is different from its current value after ioutil.WriteFile
time.Sleep(1000 * time.Millisecond)
// Now replace file contents with crap
if err := ioutil.WriteFile(file, []byte("foo"), 0644); err != nil {
t.Fatal(err)
return
}
if err := waitForAccounts([]accounts.Account{}, ks); err != nil {
t.Errorf("Emptying account file failed")
t.Error(err)
return
}
}
// forceCopyFile is like cp.CopyFile, but doesn't complain if the destination exists.
func forceCopyFile(dst, src string) error {
data, err := ioutil.ReadFile(src)
if err != nil {
return err
}
return ioutil.WriteFile(dst, data, 0644)
}


@ -1,493 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package keystore implements encrypted storage of secp256k1 private keys.
//
// Keys are stored as encrypted JSON files according to the Web3 Secret Storage specification.
// See https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition for more information.
package keystore
import (
"crypto/ecdsa"
crand "crypto/rand"
"errors"
"fmt"
"math/big"
"os"
"path/filepath"
"reflect"
"runtime"
"sync"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/event"
)
var (
ErrLocked = accounts.NewAuthNeededError("password or unlock")
ErrNoMatch = errors.New("no key for given address or file")
ErrDecrypt = errors.New("could not decrypt key with given passphrase")
)
// KeyStoreType is the reflect type of a keystore backend.
var KeyStoreType = reflect.TypeOf(&KeyStore{})
// KeyStoreScheme is the protocol scheme prefixing account and wallet URLs.
const KeyStoreScheme = "keystore"
// Maximum time between wallet refreshes (if filesystem notifications don't work).
const walletRefreshCycle = 3 * time.Second
// KeyStore manages a key storage directory on disk.
type KeyStore struct {
storage keyStore // Storage backend, might be cleartext or encrypted
cache *accountCache // In-memory account cache over the filesystem storage
changes chan struct{} // Channel receiving change notifications from the cache
unlocked map[common.Address]*unlocked // Currently unlocked accounts (decrypted private keys)
wallets []accounts.Wallet // Wallet wrappers around the individual key files
updateFeed event.Feed // Event feed to notify wallet additions/removals
updateScope event.SubscriptionScope // Subscription scope tracking current live listeners
updating bool // Whether the event notification loop is running
mu sync.RWMutex
}
type unlocked struct {
*Key
abort chan struct{}
}
// NewKeyStore creates a keystore for the given directory.
func NewKeyStore(keydir string, scryptN, scryptP int) *KeyStore {
keydir, _ = filepath.Abs(keydir)
ks := &KeyStore{storage: &keyStorePassphrase{keydir, scryptN, scryptP, false}}
ks.init(keydir)
return ks
}
// NewPlaintextKeyStore creates a keystore for the given directory.
// Deprecated: Use NewKeyStore.
func NewPlaintextKeyStore(keydir string) *KeyStore {
keydir, _ = filepath.Abs(keydir)
ks := &KeyStore{storage: &keyStorePlain{keydir}}
ks.init(keydir)
return ks
}
func (ks *KeyStore) init(keydir string) {
// Lock the mutex since the account cache might call back with events
ks.mu.Lock()
defer ks.mu.Unlock()
// Initialize the set of unlocked keys and the account cache
ks.unlocked = make(map[common.Address]*unlocked)
ks.cache, ks.changes = newAccountCache(keydir)
// TODO: In order for this finalizer to work, there must be no references
// to ks. addressCache doesn't keep a reference but unlocked keys do,
// so the finalizer will not trigger until all timed unlocks have expired.
runtime.SetFinalizer(ks, func(m *KeyStore) {
m.cache.close()
})
// Create the initial list of wallets from the cache
accs := ks.cache.accounts()
ks.wallets = make([]accounts.Wallet, len(accs))
for i := 0; i < len(accs); i++ {
ks.wallets[i] = &keystoreWallet{account: accs[i], keystore: ks}
}
}
// Wallets implements accounts.Backend, returning all single-key wallets from the
// keystore directory.
func (ks *KeyStore) Wallets() []accounts.Wallet {
// Make sure the list of wallets is in sync with the account cache
ks.refreshWallets()
ks.mu.RLock()
defer ks.mu.RUnlock()
cpy := make([]accounts.Wallet, len(ks.wallets))
copy(cpy, ks.wallets)
return cpy
}
// refreshWallets retrieves the current account list and based on that does any
// necessary wallet refreshes.
func (ks *KeyStore) refreshWallets() {
// Retrieve the current list of accounts
ks.mu.Lock()
accs := ks.cache.accounts()
// Transform the current list of wallets into the new one
wallets := make([]accounts.Wallet, 0, len(accs))
events := []accounts.WalletEvent{}
for _, account := range accs {
// Drop any wallets sorted before the next account
for len(ks.wallets) > 0 && ks.wallets[0].URL().Cmp(account.URL) < 0 {
events = append(events, accounts.WalletEvent{Wallet: ks.wallets[0], Kind: accounts.WalletDropped})
ks.wallets = ks.wallets[1:]
}
// If there are no more wallets or the account is before the next, wrap new wallet
if len(ks.wallets) == 0 || ks.wallets[0].URL().Cmp(account.URL) > 0 {
wallet := &keystoreWallet{account: account, keystore: ks}
events = append(events, accounts.WalletEvent{Wallet: wallet, Kind: accounts.WalletArrived})
wallets = append(wallets, wallet)
continue
}
// If the account is the same as the first wallet, keep it
if ks.wallets[0].Accounts()[0] == account {
wallets = append(wallets, ks.wallets[0])
ks.wallets = ks.wallets[1:]
continue
}
}
// Drop any leftover wallets and set the new batch
for _, wallet := range ks.wallets {
events = append(events, accounts.WalletEvent{Wallet: wallet, Kind: accounts.WalletDropped})
}
ks.wallets = wallets
ks.mu.Unlock()
// Fire all wallet events and return
for _, event := range events {
ks.updateFeed.Send(event)
}
}
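// An illustrative sketch, not part of the original file: refreshWallets
// relies on both lists being sorted by URL, so a single ordered-merge pass
// classifies every entry as dropped, arrived, or kept. The same pattern in
// isolation, with plain strings standing in for wallet URLs:
func diffSortedSketch(prev, cur []string) (arrived, dropped []string) {
    i, j := 0, 0
    for i < len(prev) && j < len(cur) {
        switch {
        case prev[i] < cur[j]: // in prev only: the wallet was dropped
            dropped = append(dropped, prev[i])
            i++
        case prev[i] > cur[j]: // in cur only: the wallet just arrived
            arrived = append(arrived, cur[j])
            j++
        default: // in both: keep it, no event
            i, j = i+1, j+1
        }
    }
    dropped = append(dropped, prev[i:]...)
    arrived = append(arrived, cur[j:]...)
    return arrived, dropped
}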
// Subscribe implements accounts.Backend, creating an async subscription to
// receive notifications on the addition or removal of keystore wallets.
func (ks *KeyStore) Subscribe(sink chan<- accounts.WalletEvent) event.Subscription {
// We need the mutex to reliably start/stop the update loop
ks.mu.Lock()
defer ks.mu.Unlock()
// Subscribe the caller and track the subscriber count
sub := ks.updateScope.Track(ks.updateFeed.Subscribe(sink))
// Subscribers require an active notification loop, start it
if !ks.updating {
ks.updating = true
go ks.updater()
}
return sub
}
// updater is responsible for maintaining an up-to-date list of wallets stored in
// the keystore, and for firing wallet addition/removal events. It listens for
// account change events from the underlying account cache, and also periodically
// forces a manual refresh (only triggers for systems where the filesystem notifier
// is not running).
func (ks *KeyStore) updater() {
for {
// Wait for an account update or a refresh timeout
select {
case <-ks.changes:
case <-time.After(walletRefreshCycle):
}
// Run the wallet refresher
ks.refreshWallets()
// If all our subscribers left, stop the updater
ks.mu.Lock()
if ks.updateScope.Count() == 0 {
ks.updating = false
ks.mu.Unlock()
return
}
ks.mu.Unlock()
}
}
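// An illustrative usage sketch, not part of the original file: draining the
// subscription feed set up by Subscribe above. The directory name and channel
// buffer size are hypothetical; Err() fires once the subscription ends.
func ExampleKeyStore_Subscribe() {
    ks := NewKeyStore("./keys", LightScryptN, LightScryptP)
    updates := make(chan accounts.WalletEvent, 16)
    sub := ks.Subscribe(updates)
    defer sub.Unsubscribe()
    for {
        select {
        case ev := <-updates:
            switch ev.Kind {
            case accounts.WalletArrived:
                fmt.Println("wallet arrived:", ev.Wallet.URL())
            case accounts.WalletDropped:
                fmt.Println("wallet dropped:", ev.Wallet.URL())
            }
        case <-sub.Err():
            return
        }
    }
}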
// HasAddress reports whether a key with the given address is present.
func (ks *KeyStore) HasAddress(addr common.Address) bool {
return ks.cache.hasAddress(addr)
}
// Accounts returns all key files present in the directory.
func (ks *KeyStore) Accounts() []accounts.Account {
return ks.cache.accounts()
}
// Delete deletes the key matched by account if the passphrase is correct.
// If the account contains no filename, the address must match a unique key.
func (ks *KeyStore) Delete(a accounts.Account, passphrase string) error {
// Decrypting the key isn't really necessary, but we do
// it anyway to check the password and zero out the key
// immediately afterwards.
a, key, err := ks.getDecryptedKey(a, passphrase)
if key != nil {
zeroKey(key.PrivateKey)
}
if err != nil {
return err
}
// The order is crucial here. The key is dropped from the
// cache after the file is gone so that a reload happening in
// between won't insert it into the cache again.
err = os.Remove(a.URL.Path)
if err == nil {
ks.cache.delete(a)
ks.refreshWallets()
}
return err
}
// SignHash calculates an ECDSA signature for the given hash. The produced
// signature is in the [R || S || V] format where V is 0 or 1.
func (ks *KeyStore) SignHash(a accounts.Account, hash []byte) ([]byte, error) {
// Look up the key to sign with and abort if it cannot be found
ks.mu.RLock()
defer ks.mu.RUnlock()
unlockedKey, found := ks.unlocked[a.Address]
if !found {
return nil, ErrLocked
}
// Sign the hash using plain ECDSA operations
return crypto.Sign(hash, unlockedKey.PrivateKey)
}
// SignTx signs the given transaction with the requested account.
func (ks *KeyStore) SignTx(a accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
// Look up the key to sign with and abort if it cannot be found
ks.mu.RLock()
defer ks.mu.RUnlock()
unlockedKey, found := ks.unlocked[a.Address]
if !found {
return nil, ErrLocked
}
// Depending on the presence of the chain ID, sign with EIP155 or homestead
if chainID != nil {
return types.SignTx(tx, types.NewEIP155Signer(chainID), unlockedKey.PrivateKey)
}
return types.SignTx(tx, types.HomesteadSigner{}, unlockedKey.PrivateKey)
}
// SignHashWithPassphrase signs hash if the private key matching the given address
// can be decrypted with the given passphrase. The produced signature is in the
// [R || S || V] format where V is 0 or 1.
func (ks *KeyStore) SignHashWithPassphrase(a accounts.Account, passphrase string, hash []byte) (signature []byte, err error) {
_, key, err := ks.getDecryptedKey(a, passphrase)
if err != nil {
return nil, err
}
defer zeroKey(key.PrivateKey)
return crypto.Sign(hash, key.PrivateKey)
}
// SignTxWithPassphrase signs the transaction if the private key matching the
// given address can be decrypted with the given passphrase.
func (ks *KeyStore) SignTxWithPassphrase(a accounts.Account, passphrase string, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
_, key, err := ks.getDecryptedKey(a, passphrase)
if err != nil {
return nil, err
}
defer zeroKey(key.PrivateKey)
// Depending on the presence of the chain ID, sign with EIP155 or homestead
if chainID != nil {
return types.SignTx(tx, types.NewEIP155Signer(chainID), key.PrivateKey)
}
return types.SignTx(tx, types.HomesteadSigner{}, key.PrivateKey)
}
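// An illustrative sketch, not part of the original file: the chainID argument
// selects the signer. All values below (directory, passphrase, recipient,
// gas figures) are hypothetical.
func ExampleKeyStore_SignTxWithPassphrase() {
    ks := NewKeyStore("./keys", LightScryptN, LightScryptP)
    acc, _ := ks.NewAccount("secret")
    tx := types.NewTransaction(0, common.Address{1}, big.NewInt(1), 21000, big.NewInt(1), nil)
    // A non-nil chainID (1 = mainnet) selects the replay-protected EIP155
    // signer; passing nil instead would select the legacy homestead signer.
    signed, err := ks.SignTxWithPassphrase(acc, "secret", tx, big.NewInt(1))
    fmt.Println(signed != nil, err)
}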
// Unlock unlocks the given account indefinitely.
func (ks *KeyStore) Unlock(a accounts.Account, passphrase string) error {
return ks.TimedUnlock(a, passphrase, 0)
}
// Lock removes the private key with the given address from memory.
func (ks *KeyStore) Lock(addr common.Address) error {
ks.mu.Lock()
if unl, found := ks.unlocked[addr]; found {
ks.mu.Unlock()
ks.expire(addr, unl, time.Duration(0)*time.Nanosecond)
} else {
ks.mu.Unlock()
}
return nil
}
// TimedUnlock unlocks the given account with the passphrase. The account
// stays unlocked for the duration of timeout. A timeout of 0 unlocks the account
// until the program exits. The account must match a unique key file.
//
// If the account address is already unlocked for a duration, TimedUnlock extends or
// shortens the active unlock timeout. If the address was previously unlocked
// indefinitely the timeout is not altered.
func (ks *KeyStore) TimedUnlock(a accounts.Account, passphrase string, timeout time.Duration) error {
a, key, err := ks.getDecryptedKey(a, passphrase)
if err != nil {
return err
}
ks.mu.Lock()
defer ks.mu.Unlock()
u, found := ks.unlocked[a.Address]
if found {
if u.abort == nil {
// The address was unlocked indefinitely, so unlocking
// it with a timeout would be confusing.
zeroKey(key.PrivateKey)
return nil
}
// Terminate the expire goroutine and replace it below.
close(u.abort)
}
if timeout > 0 {
u = &unlocked{Key: key, abort: make(chan struct{})}
go ks.expire(a.Address, u, timeout)
} else {
u = &unlocked{Key: key}
}
ks.unlocked[a.Address] = u
return nil
}
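// An illustrative sketch, not part of the original file: signing succeeds
// while a timed unlock is live and fails with ErrLocked once the expire
// goroutine has re-locked the account. Durations are hypothetical.
func ExampleKeyStore_TimedUnlock() {
    ks := NewKeyStore("./keys", LightScryptN, LightScryptP)
    acc, _ := ks.NewAccount("secret")
    _ = ks.TimedUnlock(acc, "secret", 100*time.Millisecond)
    _, err := ks.SignHash(acc, make([]byte, 32))
    fmt.Println(err) // <nil>: the account is unlocked
    time.Sleep(250 * time.Millisecond)
    _, err = ks.SignHash(acc, make([]byte, 32))
    fmt.Println(err == ErrLocked) // true: the timeout has expired
}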
// Find resolves the given account into a unique entry in the keystore.
func (ks *KeyStore) Find(a accounts.Account) (accounts.Account, error) {
ks.cache.maybeReload()
ks.cache.mu.Lock()
a, err := ks.cache.find(a)
ks.cache.mu.Unlock()
return a, err
}
func (ks *KeyStore) getDecryptedKey(a accounts.Account, auth string) (accounts.Account, *Key, error) {
a, err := ks.Find(a)
if err != nil {
return a, nil, err
}
key, err := ks.storage.GetKey(a.Address, a.URL.Path, auth)
return a, key, err
}
func (ks *KeyStore) expire(addr common.Address, u *unlocked, timeout time.Duration) {
t := time.NewTimer(timeout)
defer t.Stop()
select {
case <-u.abort:
// just quit
case <-t.C:
ks.mu.Lock()
// only drop if it's still the same key instance that expire
// was launched with. we can check that using pointer equality
// because the map stores a new pointer every time the key is
// unlocked.
if ks.unlocked[addr] == u {
zeroKey(u.PrivateKey)
delete(ks.unlocked, addr)
}
ks.mu.Unlock()
}
}
// NewAccount generates a new key and stores it into the key directory,
// encrypting it with the passphrase.
func (ks *KeyStore) NewAccount(passphrase string) (accounts.Account, error) {
_, account, err := storeNewKey(ks.storage, crand.Reader, passphrase)
if err != nil {
return accounts.Account{}, err
}
// Add the account to the cache immediately rather
// than waiting for file system notifications to pick it up.
ks.cache.add(account)
ks.refreshWallets()
return account, nil
}
// Export exports as a JSON key, encrypted with newPassphrase.
func (ks *KeyStore) Export(a accounts.Account, passphrase, newPassphrase string) (keyJSON []byte, err error) {
_, key, err := ks.getDecryptedKey(a, passphrase)
if err != nil {
return nil, err
}
var N, P int
if store, ok := ks.storage.(*keyStorePassphrase); ok {
N, P = store.scryptN, store.scryptP
} else {
N, P = StandardScryptN, StandardScryptP
}
return EncryptKey(key, newPassphrase, N, P)
}
// Import stores the given encrypted JSON key into the key directory.
func (ks *KeyStore) Import(keyJSON []byte, passphrase, newPassphrase string) (accounts.Account, error) {
key, err := DecryptKey(keyJSON, passphrase)
if key != nil && key.PrivateKey != nil {
defer zeroKey(key.PrivateKey)
}
if err != nil {
return accounts.Account{}, err
}
return ks.importKey(key, newPassphrase)
}
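// An illustrative sketch, not part of the original file: an Export/Import
// round trip between two keystores, re-encrypting the key at every step.
// Directory names and passphrases are hypothetical.
func ExampleKeyStore_Import() {
    src := NewKeyStore("./keys-a", LightScryptN, LightScryptP)
    dst := NewKeyStore("./keys-b", LightScryptN, LightScryptP)
    acc, _ := src.NewAccount("old-pass")
    // Export re-encrypts the key under a transport passphrase...
    blob, _ := src.Export(acc, "old-pass", "transport-pass")
    // ...and Import decrypts that and stores it under a final one.
    imported, err := dst.Import(blob, "transport-pass", "new-pass")
    fmt.Println(imported.Address == acc.Address, err)
}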
// ImportECDSA stores the given key into the key directory, encrypting it with the passphrase.
func (ks *KeyStore) ImportECDSA(priv *ecdsa.PrivateKey, passphrase string) (accounts.Account, error) {
key := newKeyFromECDSA(priv)
if ks.cache.hasAddress(key.Address) {
return accounts.Account{}, fmt.Errorf("account already exists")
}
return ks.importKey(key, passphrase)
}
func (ks *KeyStore) importKey(key *Key, passphrase string) (accounts.Account, error) {
a := accounts.Account{Address: key.Address, URL: accounts.URL{Scheme: KeyStoreScheme, Path: ks.storage.JoinPath(keyFileName(key.Address))}}
if err := ks.storage.StoreKey(a.URL.Path, key, passphrase); err != nil {
return accounts.Account{}, err
}
ks.cache.add(a)
ks.refreshWallets()
return a, nil
}
// Update changes the passphrase of an existing account.
func (ks *KeyStore) Update(a accounts.Account, passphrase, newPassphrase string) error {
a, key, err := ks.getDecryptedKey(a, passphrase)
if err != nil {
return err
}
return ks.storage.StoreKey(a.URL.Path, key, newPassphrase)
}
// ImportPreSaleKey decrypts the given Ethereum presale wallet and stores
// a key file in the key directory. The key file is encrypted with the same passphrase.
func (ks *KeyStore) ImportPreSaleKey(keyJSON []byte, passphrase string) (accounts.Account, error) {
a, _, err := importPreSaleKey(ks.storage, keyJSON, passphrase)
if err != nil {
return a, err
}
ks.cache.add(a)
ks.refreshWallets()
return a, nil
}
// zeroKey zeroes a private key in memory.
func zeroKey(k *ecdsa.PrivateKey) {
b := k.D.Bits()
for i := range b {
b[i] = 0
}
}


@ -1,387 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package keystore
import (
"io/ioutil"
"math/rand"
"os"
"runtime"
"sort"
"strings"
"testing"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/event"
)
var testSigData = make([]byte, 32)
func TestKeyStore(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
a, err := ks.NewAccount("foo")
if err != nil {
t.Fatal(err)
}
if !strings.HasPrefix(a.URL.Path, dir) {
t.Errorf("account file %s doesn't have dir prefix", a.URL)
}
stat, err := os.Stat(a.URL.Path)
if err != nil {
t.Fatalf("account file %s doesn't exist (%v)", a.URL, err)
}
if runtime.GOOS != "windows" && stat.Mode() != 0600 {
t.Fatalf("account file has wrong mode: got %o, want %o", stat.Mode(), 0600)
}
if !ks.HasAddress(a.Address) {
t.Errorf("HasAccount(%x) should've returned true", a.Address)
}
if err := ks.Update(a, "foo", "bar"); err != nil {
t.Errorf("Update error: %v", err)
}
if err := ks.Delete(a, "bar"); err != nil {
t.Errorf("Delete error: %v", err)
}
if common.FileExist(a.URL.Path) {
t.Errorf("account file %s should be gone after Delete", a.URL)
}
if ks.HasAddress(a.Address) {
t.Errorf("HasAccount(%x) should've returned true after Delete", a.Address)
}
}
func TestSign(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
pass := "" // not used but required by API
a1, err := ks.NewAccount(pass)
if err != nil {
t.Fatal(err)
}
if err := ks.Unlock(a1, ""); err != nil {
t.Fatal(err)
}
if _, err := ks.SignHash(accounts.Account{Address: a1.Address}, testSigData); err != nil {
t.Fatal(err)
}
}
func TestSignWithPassphrase(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
pass := "passwd"
acc, err := ks.NewAccount(pass)
if err != nil {
t.Fatal(err)
}
if _, unlocked := ks.unlocked[acc.Address]; unlocked {
t.Fatal("expected account to be locked")
}
_, err = ks.SignHashWithPassphrase(acc, pass, testSigData)
if err != nil {
t.Fatal(err)
}
if _, unlocked := ks.unlocked[acc.Address]; unlocked {
t.Fatal("expected account to be locked")
}
if _, err = ks.SignHashWithPassphrase(acc, "invalid passwd", testSigData); err == nil {
t.Fatal("expected SignHashWithPassphrase to fail with invalid password")
}
}
func TestTimedUnlock(t *testing.T) {
dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
pass := "foo"
a1, err := ks.NewAccount(pass)
if err != nil {
t.Fatal(err)
}
// Signing without passphrase fails because account is locked
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != ErrLocked {
t.Fatal("Signing should've failed with ErrLocked before unlocking, got ", err)
}
// Signing with passphrase works
if err = ks.TimedUnlock(a1, pass, 100*time.Millisecond); err != nil {
t.Fatal(err)
}
// Signing without passphrase works because account is temp unlocked
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != nil {
t.Fatal("Signing shouldn't return an error after unlocking, got ", err)
}
// Signing fails again after automatic locking
time.Sleep(250 * time.Millisecond)
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != ErrLocked {
t.Fatal("Signing should've failed with ErrLocked timeout expired, got ", err)
}
}
func TestOverrideUnlock(t *testing.T) {
dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
pass := "foo"
a1, err := ks.NewAccount(pass)
if err != nil {
t.Fatal(err)
}
// Unlock indefinitely.
if err = ks.TimedUnlock(a1, pass, 5*time.Minute); err != nil {
t.Fatal(err)
}
// Signing without passphrase works because account is temp unlocked
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != nil {
t.Fatal("Signing shouldn't return an error after unlocking, got ", err)
}
// reset unlock to a shorter period, invalidates the previous unlock
if err = ks.TimedUnlock(a1, pass, 100*time.Millisecond); err != nil {
t.Fatal(err)
}
// Signing without passphrase still works because account is temp unlocked
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != nil {
t.Fatal("Signing shouldn't return an error after unlocking, got ", err)
}
// Signing fails again after automatic locking
time.Sleep(250 * time.Millisecond)
_, err = ks.SignHash(accounts.Account{Address: a1.Address}, testSigData)
if err != ErrLocked {
t.Fatal("Signing should've failed with ErrLocked timeout expired, got ", err)
}
}
// This test should fail under -race if signing races the expiration goroutine.
func TestSignRace(t *testing.T) {
dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
// Create a test account.
a1, err := ks.NewAccount("")
if err != nil {
t.Fatal("could not create the test account", err)
}
if err := ks.TimedUnlock(a1, "", 15*time.Millisecond); err != nil {
t.Fatal("could not unlock the test account", err)
}
end := time.Now().Add(500 * time.Millisecond)
for time.Now().Before(end) {
if _, err := ks.SignHash(accounts.Account{Address: a1.Address}, testSigData); err == ErrLocked {
return
} else if err != nil {
t.Errorf("Sign error: %v", err)
return
}
time.Sleep(1 * time.Millisecond)
}
t.Errorf("Account did not lock within the timeout")
}
// Tests that the wallet notifier loop starts and stops correctly based on the
// addition and removal of wallet event subscriptions.
func TestWalletNotifierLifecycle(t *testing.T) {
// Create a temporary keystore to test with
dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
// Ensure that the notification updater is not running yet
time.Sleep(250 * time.Millisecond)
ks.mu.RLock()
updating := ks.updating
ks.mu.RUnlock()
if updating {
t.Errorf("wallet notifier running without subscribers")
}
// Subscribe to the wallet feed and ensure the updater boots up
updates := make(chan accounts.WalletEvent)
subs := make([]event.Subscription, 2)
for i := 0; i < len(subs); i++ {
// Create a new subscription
subs[i] = ks.Subscribe(updates)
// Ensure the notifier comes online
time.Sleep(250 * time.Millisecond)
ks.mu.RLock()
updating = ks.updating
ks.mu.RUnlock()
if !updating {
t.Errorf("sub %d: wallet notifier not running after subscription", i)
}
}
// Unsubscribe and ensure the updater terminates eventually
for i := 0; i < len(subs); i++ {
// Close an existing subscription
subs[i].Unsubscribe()
// Ensure the notifier shuts down at and only at the last close
for k := 0; k < int(walletRefreshCycle/(250*time.Millisecond))+2; k++ {
ks.mu.RLock()
updating = ks.updating
ks.mu.RUnlock()
if i < len(subs)-1 && !updating {
t.Fatalf("sub %d: event notifier stopped prematurely", i)
}
if i == len(subs)-1 && !updating {
return
}
time.Sleep(250 * time.Millisecond)
}
}
t.Errorf("wallet notifier didn't terminate after unsubscribe")
}
type walletEvent struct {
accounts.WalletEvent
a accounts.Account
}
// Tests that wallet notifications are correctly fired when accounts are added
// or deleted from the keystore.
func TestWalletNotifications(t *testing.T) {
dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
// Subscribe to the wallet feed and collect events.
var (
events []walletEvent
updates = make(chan accounts.WalletEvent)
sub = ks.Subscribe(updates)
)
defer sub.Unsubscribe()
go func() {
for {
select {
case ev := <-updates:
events = append(events, walletEvent{ev, ev.Wallet.Accounts()[0]})
case <-sub.Err():
close(updates)
return
}
}
}()
// Randomly add and remove accounts.
var (
live = make(map[common.Address]accounts.Account)
wantEvents []walletEvent
)
for i := 0; i < 1024; i++ {
if create := len(live) == 0 || rand.Int()%4 > 0; create {
// Add a new account and ensure a wallet notification arrives
account, err := ks.NewAccount("")
if err != nil {
t.Fatalf("failed to create test account: %v", err)
}
live[account.Address] = account
wantEvents = append(wantEvents, walletEvent{accounts.WalletEvent{Kind: accounts.WalletArrived}, account})
} else {
// Delete a random account.
var account accounts.Account
for _, a := range live {
account = a
break
}
if err := ks.Delete(account, ""); err != nil {
t.Fatalf("failed to delete test account: %v", err)
}
delete(live, account.Address)
wantEvents = append(wantEvents, walletEvent{accounts.WalletEvent{Kind: accounts.WalletDropped}, account})
}
}
// Shut down the event collector and check events.
sub.Unsubscribe()
<-updates
checkAccounts(t, live, ks.Wallets())
checkEvents(t, wantEvents, events)
}
// checkAccounts checks that all known live accounts are present in the wallet list.
func checkAccounts(t *testing.T, live map[common.Address]accounts.Account, wallets []accounts.Wallet) {
if len(live) != len(wallets) {
t.Errorf("wallet list doesn't match required accounts: have %d, want %d", len(wallets), len(live))
return
}
liveList := make([]accounts.Account, 0, len(live))
for _, account := range live {
liveList = append(liveList, account)
}
sort.Sort(accountsByURL(liveList))
for j, wallet := range wallets {
if accs := wallet.Accounts(); len(accs) != 1 {
t.Errorf("wallet %d: contains invalid number of accounts: have %d, want 1", j, len(accs))
} else if accs[0] != liveList[j] {
t.Errorf("wallet %d: account mismatch: have %v, want %v", j, accs[0], liveList[j])
}
}
}
// checkEvents checks that all events in 'want' are present in 'have'. Events may be present multiple times.
func checkEvents(t *testing.T, want []walletEvent, have []walletEvent) {
for _, wantEv := range want {
nmatch := 0
for ; len(have) > 0; nmatch++ {
if have[0].Kind != wantEv.Kind || have[0].a != wantEv.a {
break
}
have = have[1:]
}
if nmatch == 0 {
t.Fatalf("can't find event with Kind=%v for %x", wantEv.Kind, wantEv.a.Address)
}
}
}
func tmpKeyStore(t *testing.T, encrypted bool) (string, *KeyStore) {
d, err := ioutil.TempDir("", "eth-keystore-test")
if err != nil {
t.Fatal(err)
}
newKS := NewPlaintextKeyStore // renamed from `new` to avoid shadowing the builtin
if encrypted {
newKS = func(kd string) *KeyStore { return NewKeyStore(kd, veryLightScryptN, veryLightScryptP) }
}
return d, newKS(d)
}


@ -1,60 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package keystore
import (
"io/ioutil"
"testing"
"github.com/ethereum/go-ethereum/common"
)
const (
veryLightScryptN = 2
veryLightScryptP = 1
)
// Tests that a json key file can be decrypted and encrypted in multiple rounds.
func TestKeyEncryptDecrypt(t *testing.T) {
keyjson, err := ioutil.ReadFile("testdata/very-light-scrypt.json")
if err != nil {
t.Fatal(err)
}
password := ""
address := common.HexToAddress("45dea0fb0bba44f4fcf290bba71fd57d7117cbb8")
// Do a few rounds of decryption and encryption
for i := 0; i < 3; i++ {
// Try a bad password first
if _, err := DecryptKey(keyjson, password+"bad"); err == nil {
t.Errorf("test %d: json key decrypted with bad password", i)
}
// Decrypt with the correct password
key, err := DecryptKey(keyjson, password)
if err != nil {
t.Fatalf("test %d: json key failed to decrypt: %v", i, err)
}
if key.Address != address {
t.Errorf("test %d: key address mismatch: have %x, want %x", i, key.Address, address)
}
// Recrypt with a new password and start over
password += "new data appended"
if keyjson, err = EncryptKey(key, password, veryLightScryptN, veryLightScryptP); err != nil {
t.Errorf("test %d: failed to recrypt key %v", i, err)
}
}
}
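// An illustrative sketch, not part of the original file: a single
// EncryptKey/DecryptKey round trip distilled from the loop above. The
// passphrase is hypothetical; only the file's existing identifiers are used.
func roundTripSketch(key *Key) (bool, error) {
    blob, err := EncryptKey(key, "pass", veryLightScryptN, veryLightScryptP)
    if err != nil {
        return false, err
    }
    decrypted, err := DecryptKey(blob, "pass")
    if err != nil {
        return false, err
    }
    return decrypted.Address == key.Address, nil
}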


@ -1,266 +0,0 @@
// Copyright 2014 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package keystore
import (
"crypto/rand"
"encoding/hex"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"reflect"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
func tmpKeyStoreIface(t *testing.T, encrypted bool) (dir string, ks keyStore) {
d, err := ioutil.TempDir("", "geth-keystore-test")
if err != nil {
t.Fatal(err)
}
if encrypted {
ks = &keyStorePassphrase{d, veryLightScryptN, veryLightScryptP, true}
} else {
ks = &keyStorePlain{d}
}
return d, ks
}
func TestKeyStorePlain(t *testing.T) {
dir, ks := tmpKeyStoreIface(t, false)
defer os.RemoveAll(dir)
pass := "" // not used but required by API
k1, account, err := storeNewKey(ks, rand.Reader, pass)
if err != nil {
t.Fatal(err)
}
k2, err := ks.GetKey(k1.Address, account.URL.Path, pass)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(k1.Address, k2.Address) {
t.Fatal(err)
}
if !reflect.DeepEqual(k1.PrivateKey, k2.PrivateKey) {
t.Fatal(err)
}
}
func TestKeyStorePassphrase(t *testing.T) {
dir, ks := tmpKeyStoreIface(t, true)
defer os.RemoveAll(dir)
pass := "foo"
k1, account, err := storeNewKey(ks, rand.Reader, pass)
if err != nil {
t.Fatal(err)
}
k2, err := ks.GetKey(k1.Address, account.URL.Path, pass)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(k1.Address, k2.Address) {
t.Fatal(err)
}
if !reflect.DeepEqual(k1.PrivateKey, k2.PrivateKey) {
t.Fatal(err)
}
}
func TestKeyStorePassphraseDecryptionFail(t *testing.T) {
dir, ks := tmpKeyStoreIface(t, true)
defer os.RemoveAll(dir)
pass := "foo"
k1, account, err := storeNewKey(ks, rand.Reader, pass)
if err != nil {
t.Fatal(err)
}
if _, err = ks.GetKey(k1.Address, account.URL.Path, "bar"); err != ErrDecrypt {
t.Fatalf("wrong error for invalid passphrase\ngot %q\nwant %q", err, ErrDecrypt)
}
}
func TestImportPreSaleKey(t *testing.T) {
dir, ks := tmpKeyStoreIface(t, true)
defer os.RemoveAll(dir)
// file content of a presale key file generated with:
// python pyethsaletool.py genwallet
// with password "foo"
fileContent := "{\"encseed\": \"26d87f5f2bf9835f9a47eefae571bc09f9107bb13d54ff12a4ec095d01f83897494cf34f7bed2ed34126ecba9db7b62de56c9d7cd136520a0427bfb11b8954ba7ac39b90d4650d3448e31185affcd74226a68f1e94b1108e6e0a4a91cdd83eba\", \"ethaddr\": \"d4584b5f6229b7be90727b0fc8c6b91bb427821f\", \"email\": \"gustav.simonsson@gmail.com\", \"btcaddr\": \"1EVknXyFC68kKNLkh6YnKzW41svSRoaAcx\"}"
pass := "foo"
account, _, err := importPreSaleKey(ks, []byte(fileContent), pass)
if err != nil {
t.Fatal(err)
}
if account.Address != common.HexToAddress("d4584b5f6229b7be90727b0fc8c6b91bb427821f") {
t.Errorf("imported account has wrong address %x", account.Address)
}
if !strings.HasPrefix(account.URL.Path, dir) {
t.Errorf("imported account file not in keystore directory: %q", account.URL)
}
}
// Tests and utils for the key store tests in the Ethereum JSON tests;
// tests/testdata/KeyStoreTests/basic_tests.json
type KeyStoreTestV3 struct {
Json encryptedKeyJSONV3
Password string
Priv string
}
type KeyStoreTestV1 struct {
Json encryptedKeyJSONV1
Password string
Priv string
}
func TestV3_PBKDF2_1(t *testing.T) {
t.Parallel()
tests := loadKeyStoreTestV3("testdata/v3_test_vector.json", t)
testDecryptV3(tests["wikipage_test_vector_pbkdf2"], t)
}
var testsSubmodule = filepath.Join("..", "..", "tests", "testdata", "KeyStoreTests")
func skipIfSubmoduleMissing(t *testing.T) {
if !common.FileExist(testsSubmodule) {
t.Skipf("can't find JSON tests from submodule at %s", testsSubmodule)
}
}
func TestV3_PBKDF2_2(t *testing.T) {
skipIfSubmoduleMissing(t)
t.Parallel()
tests := loadKeyStoreTestV3(filepath.Join(testsSubmodule, "basic_tests.json"), t)
testDecryptV3(tests["test1"], t)
}
func TestV3_PBKDF2_3(t *testing.T) {
skipIfSubmoduleMissing(t)
t.Parallel()
tests := loadKeyStoreTestV3(filepath.Join(testsSubmodule, "basic_tests.json"), t)
testDecryptV3(tests["python_generated_test_with_odd_iv"], t)
}
func TestV3_PBKDF2_4(t *testing.T) {
skipIfSubmoduleMissing(t)
t.Parallel()
tests := loadKeyStoreTestV3(filepath.Join(testsSubmodule, "basic_tests.json"), t)
testDecryptV3(tests["evilnonce"], t)
}
func TestV3_Scrypt_1(t *testing.T) {
t.Parallel()
tests := loadKeyStoreTestV3("testdata/v3_test_vector.json", t)
testDecryptV3(tests["wikipage_test_vector_scrypt"], t)
}
func TestV3_Scrypt_2(t *testing.T) {
skipIfSubmoduleMissing(t)
t.Parallel()
tests := loadKeyStoreTestV3(filepath.Join(testsSubmodule, "basic_tests.json"), t)
testDecryptV3(tests["test2"], t)
}
func TestV1_1(t *testing.T) {
t.Parallel()
tests := loadKeyStoreTestV1("testdata/v1_test_vector.json", t)
testDecryptV1(tests["test1"], t)
}
func TestV1_2(t *testing.T) {
t.Parallel()
ks := &keyStorePassphrase{"testdata/v1", LightScryptN, LightScryptP, true}
addr := common.HexToAddress("cb61d5a9c4896fb9658090b597ef0e7be6f7b67e")
file := "testdata/v1/cb61d5a9c4896fb9658090b597ef0e7be6f7b67e/cb61d5a9c4896fb9658090b597ef0e7be6f7b67e"
k, err := ks.GetKey(addr, file, "g")
if err != nil {
t.Fatal(err)
}
privHex := hex.EncodeToString(crypto.FromECDSA(k.PrivateKey))
expectedHex := "d1b1178d3529626a1a93e073f65028370d14c7eb0936eb42abef05db6f37ad7d"
if privHex != expectedHex {
t.Fatal(fmt.Errorf("Unexpected privkey: %v, expected %v", privHex, expectedHex))
}
}
func testDecryptV3(test KeyStoreTestV3, t *testing.T) {
privBytes, _, err := decryptKeyV3(&test.Json, test.Password)
if err != nil {
t.Fatal(err)
}
privHex := hex.EncodeToString(privBytes)
if test.Priv != privHex {
t.Fatal(fmt.Errorf("Decrypted bytes not equal to test, expected %v have %v", test.Priv, privHex))
}
}
func testDecryptV1(test KeyStoreTestV1, t *testing.T) {
privBytes, _, err := decryptKeyV1(&test.Json, test.Password)
if err != nil {
t.Fatal(err)
}
privHex := hex.EncodeToString(privBytes)
if test.Priv != privHex {
t.Fatal(fmt.Errorf("Decrypted bytes not equal to test, expected %v have %v", test.Priv, privHex))
}
}
func loadKeyStoreTestV3(file string, t *testing.T) map[string]KeyStoreTestV3 {
tests := make(map[string]KeyStoreTestV3)
err := common.LoadJSON(file, &tests)
if err != nil {
t.Fatal(err)
}
return tests
}
func loadKeyStoreTestV1(file string, t *testing.T) map[string]KeyStoreTestV1 {
tests := make(map[string]KeyStoreTestV1)
err := common.LoadJSON(file, &tests)
if err != nil {
t.Fatal(err)
}
return tests
}
func TestKeyForDirectICAP(t *testing.T) {
t.Parallel()
key := NewKeyForDirectICAP(rand.Reader)
if !strings.HasPrefix(key.Address.Hex(), "0x00") {
t.Errorf("Expected first address byte to be zero, have: %s", key.Address.Hex())
}
}
func TestV3_31_Byte_Key(t *testing.T) {
t.Parallel()
tests := loadKeyStoreTestV3("testdata/v3_test_vector.json", t)
testDecryptV3(tests["31_byte_key"], t)
}
func TestV3_30_Byte_Key(t *testing.T) {
t.Parallel()
tests := loadKeyStoreTestV3("testdata/v3_test_vector.json", t)
testDecryptV3(tests["30_byte_key"], t)
}


@ -1 +0,0 @@
{"address":"f466859ead1932d743d622cb74fc058882e8648a","crypto":{"cipher":"aes-128-ctr","ciphertext":"cb664472deacb41a2e995fa7f96fe29ce744471deb8d146a0e43c7898c9ddd4d","cipherparams":{"iv":"dfd9ee70812add5f4b8f89d0811c9158"},"kdf":"scrypt","kdfparams":{"dklen":32,"n":8,"p":16,"r":8,"salt":"0d6769bf016d45c479213990d6a08d938469c4adad8a02ce507b4a4e7b7739f1"},"mac":"bac9af994b15a45dd39669fc66f9aa8a3b9dd8c22cb16e4d8d7ea089d0f1a1a9"},"id":"472e8b3d-afb6-45b5-8111-72c89895099a","version":3}


@ -1 +0,0 @@
{"address":"f466859ead1932d743d622cb74fc058882e8648a","crypto":{"cipher":"aes-128-ctr","ciphertext":"cb664472deacb41a2e995fa7f96fe29ce744471deb8d146a0e43c7898c9ddd4d","cipherparams":{"iv":"dfd9ee70812add5f4b8f89d0811c9158"},"kdf":"scrypt","kdfparams":{"dklen":32,"n":8,"p":16,"r":8,"salt":"0d6769bf016d45c479213990d6a08d938469c4adad8a02ce507b4a4e7b7739f1"},"mac":"bac9af994b15a45dd39669fc66f9aa8a3b9dd8c22cb16e4d8d7ea089d0f1a1a9"},"id":"472e8b3d-afb6-45b5-8111-72c89895099a","version":3}


@ -1 +0,0 @@
{"address":"7ef5a6135f1fd6a02593eedc869c6d41d934aef8","crypto":{"cipher":"aes-128-ctr","ciphertext":"1d0839166e7a15b9c1333fc865d69858b22df26815ccf601b28219b6192974e1","cipherparams":{"iv":"8df6caa7ff1b00c4e871f002cb7921ed"},"kdf":"scrypt","kdfparams":{"dklen":32,"n":8,"p":16,"r":8,"salt":"e5e6ef3f4ea695f496b643ebd3f75c0aa58ef4070e90c80c5d3fb0241bf1595c"},"mac":"6d16dfde774845e4585357f24bce530528bc69f4f84e1e22880d34fa45c273e5"},"id":"950077c7-71e3-4c44-a4a1-143919141ed4","version":3}


@ -1 +0,0 @@
{"address":"f466859ead1932d743d622cb74fc058882e8648a","crypto":{"cipher":"aes-128-ctr","ciphertext":"cb664472deacb41a2e995fa7f96fe29ce744471deb8d146a0e43c7898c9ddd4d","cipherparams":{"iv":"dfd9ee70812add5f4b8f89d0811c9158"},"kdf":"scrypt","kdfparams":{"dklen":32,"n":8,"p":16,"r":8,"salt":"0d6769bf016d45c479213990d6a08d938469c4adad8a02ce507b4a4e7b7739f1"},"mac":"bac9af994b15a45dd39669fc66f9aa8a3b9dd8c22cb16e4d8d7ea089d0f1a1a9"},"id":"472e8b3d-afb6-45b5-8111-72c89895099a","version":3}


@ -1,21 +0,0 @@
This directory contains accounts for testing.
The passphrase that unlocks them is "foobar".
The "good" key files which are supposed to be loadable are:
- File: UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8
Address: 0x7ef5a6135f1fd6a02593eedc869c6d41d934aef8
- File: aaa
Address: 0xf466859ead1932d743d622cb74fc058882e8648a
- File: zzz
Address: 0x289d485d9771714cce91d3393d764e1311907acc
The other files (including this README) are broken in various ways
and should not be picked up by package accounts:
- File: no-address (missing address field, otherwise same as "aaa")
- File: garbage (file with random data)
- File: empty (file with no content)
- File: swapfile~ (should be skipped)
- File: .hiddenfile (should be skipped)
- File: foo/... (should be skipped because it is a directory)


@ -1 +0,0 @@
{"address":"7ef5a6135f1fd6a02593eedc869c6d41d934aef8","crypto":{"cipher":"aes-128-ctr","ciphertext":"1d0839166e7a15b9c1333fc865d69858b22df26815ccf601b28219b6192974e1","cipherparams":{"iv":"8df6caa7ff1b00c4e871f002cb7921ed"},"kdf":"scrypt","kdfparams":{"dklen":32,"n":8,"p":16,"r":8,"salt":"e5e6ef3f4ea695f496b643ebd3f75c0aa58ef4070e90c80c5d3fb0241bf1595c"},"mac":"6d16dfde774845e4585357f24bce530528bc69f4f84e1e22880d34fa45c273e5"},"id":"950077c7-71e3-4c44-a4a1-143919141ed4","version":3}


@ -1 +0,0 @@
{"address":"f466859ead1932d743d622cb74fc058882e8648a","crypto":{"cipher":"aes-128-ctr","ciphertext":"cb664472deacb41a2e995fa7f96fe29ce744471deb8d146a0e43c7898c9ddd4d","cipherparams":{"iv":"dfd9ee70812add5f4b8f89d0811c9158"},"kdf":"scrypt","kdfparams":{"dklen":32,"n":8,"p":16,"r":8,"salt":"0d6769bf016d45c479213990d6a08d938469c4adad8a02ce507b4a4e7b7739f1"},"mac":"bac9af994b15a45dd39669fc66f9aa8a3b9dd8c22cb16e4d8d7ea089d0f1a1a9"},"id":"472e8b3d-afb6-45b5-8111-72c89895099a","version":3}


@ -1 +0,0 @@
{"address":"fd9bd350f08ee3c0c19b85a8e16114a11a60aa4e","crypto":{"cipher":"aes-128-ctr","ciphertext":"8124d5134aa4a927c79fd852989e4b5419397566f04b0936a1eb1d168c7c68a5","cipherparams":{"iv":"e2febe17176414dd2cda28287947eb2f"},"kdf":"scrypt","kdfparams":{"dklen":32,"n":4096,"p":6,"r":8,"salt":"44b415ede89f3bdd6830390a21b78965f571b347a589d1d943029f016c5e8bd5"},"mac":"5e149ff25bfd9dd45746a84bb2bcd2f015f2cbca2b6d25c5de8c29617f71fe5b"},"id":"d6ac5452-2b2c-4d3c-ad80-4bf0327d971c","version":3}

Binary file not shown.


@ -1 +0,0 @@
{"crypto":{"cipher":"aes-128-ctr","ciphertext":"cb664472deacb41a2e995fa7f96fe29ce744471deb8d146a0e43c7898c9ddd4d","cipherparams":{"iv":"dfd9ee70812add5f4b8f89d0811c9158"},"kdf":"scrypt","kdfparams":{"dklen":32,"n":8,"p":16,"r":8,"salt":"0d6769bf016d45c479213990d6a08d938469c4adad8a02ce507b4a4e7b7739f1"},"mac":"bac9af994b15a45dd39669fc66f9aa8a3b9dd8c22cb16e4d8d7ea089d0f1a1a9"},"id":"472e8b3d-afb6-45b5-8111-72c89895099a","version":3}


@ -1 +0,0 @@
{"address":"0000000000000000000000000000000000000000","crypto":{"cipher":"aes-128-ctr","ciphertext":"cb664472deacb41a2e995fa7f96fe29ce744471deb8d146a0e43c7898c9ddd4d","cipherparams":{"iv":"dfd9ee70812add5f4b8f89d0811c9158"},"kdf":"scrypt","kdfparams":{"dklen":32,"n":8,"p":16,"r":8,"salt":"0d6769bf016d45c479213990d6a08d938469c4adad8a02ce507b4a4e7b7739f1"},"mac":"bac9af994b15a45dd39669fc66f9aa8a3b9dd8c22cb16e4d8d7ea089d0f1a1a9"},"id":"472e8b3d-afb6-45b5-8111-72c89895099a","version":3}


@ -1 +0,0 @@
{"address":"289d485d9771714cce91d3393d764e1311907acc","crypto":{"cipher":"aes-128-ctr","ciphertext":"faf32ca89d286b107f5e6d842802e05263c49b78d46eac74e6109e9a963378ab","cipherparams":{"iv":"558833eec4a665a8c55608d7d503407d"},"kdf":"scrypt","kdfparams":{"dklen":32,"n":8,"p":16,"r":8,"salt":"d571fff447ffb24314f9513f5160246f09997b857ac71348b73e785aab40dc04"},"mac":"21edb85ff7d0dab1767b9bf498f2c3cb7be7609490756bd32300bb213b59effe"},"id":"3279afcf-55ba-43ff-8997-02dcc46a6525","version":3}


@ -1 +0,0 @@
{"address":"cb61d5a9c4896fb9658090b597ef0e7be6f7b67e","Crypto":{"cipher":"aes-128-cbc","ciphertext":"6143d3192db8b66eabd693d9c4e414dcfaee52abda451af79ccf474dafb35f1bfc7ea013aa9d2ee35969a1a2e8d752d0","cipherparams":{"iv":"35337770fc2117994ecdcad026bccff4"},"kdf":"scrypt","kdfparams":{"n":262144,"r":8,"p":1,"dklen":32,"salt":"9afcddebca541253a2f4053391c673ff9fe23097cd8555d149d929e4ccf1257f"},"mac":"3f3d5af884b17a100b0b3232c0636c230a54dc2ac8d986227219b0dd89197644","version":"1"},"id":"e25f7c1f-d318-4f29-b62c-687190d4d299","version":"1"}


@ -1,28 +0,0 @@
{
"test1": {
"json": {
"Crypto": {
"cipher": "aes-128-cbc",
"cipherparams": {
"iv": "35337770fc2117994ecdcad026bccff4"
},
"ciphertext": "6143d3192db8b66eabd693d9c4e414dcfaee52abda451af79ccf474dafb35f1bfc7ea013aa9d2ee35969a1a2e8d752d0",
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "9afcddebca541253a2f4053391c673ff9fe23097cd8555d149d929e4ccf1257f"
},
"mac": "3f3d5af884b17a100b0b3232c0636c230a54dc2ac8d986227219b0dd89197644",
"version": "1"
},
"address": "cb61d5a9c4896fb9658090b597ef0e7be6f7b67e",
"id": "e25f7c1f-d318-4f29-b62c-687190d4d299",
"version": "1"
},
"password": "g",
"priv": "d1b1178d3529626a1a93e073f65028370d14c7eb0936eb42abef05db6f37ad7d"
}
}


@ -1,97 +0,0 @@
{
"wikipage_test_vector_scrypt": {
"json": {
"crypto" : {
"cipher" : "aes-128-ctr",
"cipherparams" : {
"iv" : "83dbcc02d8ccb40e466191a123791e0e"
},
"ciphertext" : "d172bf743a674da9cdad04534d56926ef8358534d458fffccd4e6ad2fbde479c",
"kdf" : "scrypt",
"kdfparams" : {
"dklen" : 32,
"n" : 262144,
"r" : 1,
"p" : 8,
"salt" : "ab0c7876052600dd703518d6fc3fe8984592145b591fc8fb5c6d43190334ba19"
},
"mac" : "2103ac29920d71da29f15d75b4a16dbe95cfd7ff8faea1056c33131d846e3097"
},
"id" : "3198bc9c-6672-5ab3-d995-4942343ae5b6",
"version" : 3
},
"password": "testpassword",
"priv": "7a28b5ba57c53603b0b07b56bba752f7784bf506fa95edc395f5cf6c7514fe9d"
},
"wikipage_test_vector_pbkdf2": {
"json": {
"crypto" : {
"cipher" : "aes-128-ctr",
"cipherparams" : {
"iv" : "6087dab2f9fdbbfaddc31a909735c1e6"
},
"ciphertext" : "5318b4d5bcd28de64ee5559e671353e16f075ecae9f99c7a79a38af5f869aa46",
"kdf" : "pbkdf2",
"kdfparams" : {
"c" : 262144,
"dklen" : 32,
"prf" : "hmac-sha256",
"salt" : "ae3cd4e7013836a3df6bd7241b12db061dbe2c6785853cce422d148a624ce0bd"
},
"mac" : "517ead924a9d0dc3124507e3393d175ce3ff7c1e96529c6c555ce9e51205e9b2"
},
"id" : "3198bc9c-6672-5ab3-d995-4942343ae5b6",
"version" : 3
},
"password": "testpassword",
"priv": "7a28b5ba57c53603b0b07b56bba752f7784bf506fa95edc395f5cf6c7514fe9d"
},
"31_byte_key": {
"json": {
"crypto" : {
"cipher" : "aes-128-ctr",
"cipherparams" : {
"iv" : "e0c41130a323adc1446fc82f724bca2f"
},
"ciphertext" : "9517cd5bdbe69076f9bf5057248c6c050141e970efa36ce53692d5d59a3984",
"kdf" : "scrypt",
"kdfparams" : {
"dklen" : 32,
"n" : 2,
"r" : 8,
"p" : 1,
"salt" : "711f816911c92d649fb4c84b047915679933555030b3552c1212609b38208c63"
},
"mac" : "d5e116151c6aa71470e67a7d42c9620c75c4d23229847dcc127794f0732b0db5"
},
"id" : "fecfc4ce-e956-48fd-953b-30f8b52ed66c",
"version" : 3
},
"password": "foo",
"priv": "fa7b3db73dc7dfdf8c5fbdb796d741e4488628c41fc4febd9160a866ba0f35"
},
"30_byte_key": {
"json": {
"crypto" : {
"cipher" : "aes-128-ctr",
"cipherparams" : {
"iv" : "3ca92af36ad7c2cd92454c59cea5ef00"
},
"ciphertext" : "108b7d34f3442fc26ab1ab90ca91476ba6bfa8c00975a49ef9051dc675aa",
"kdf" : "scrypt",
"kdfparams" : {
"dklen" : 32,
"n" : 2,
"r" : 8,
"p" : 1,
"salt" : "d0769e608fb86cda848065642a9c6fa046845c928175662b8e356c77f914cd3b"
},
"mac" : "75d0e6759f7b3cefa319c3be41680ab6beea7d8328653474bd06706d4cc67420"
},
"id" : "a37e1559-5955-450d-8075-7b8931b392b2",
"version" : 3
},
"password": "foo",
"priv": "81c29e8142bb6a81bef5a92bda7a8328a5c85bb2f9542e76f9b0f94fc018"
}
}


@ -1 +0,0 @@
{"address":"45dea0fb0bba44f4fcf290bba71fd57d7117cbb8","crypto":{"cipher":"aes-128-ctr","ciphertext":"b87781948a1befd247bff51ef4063f716cf6c2d3481163e9a8f42e1f9bb74145","cipherparams":{"iv":"dc4926b48a105133d2f16b96833abf1e"},"kdf":"scrypt","kdfparams":{"dklen":32,"n":2,"p":1,"r":8,"salt":"004244bbdc51cadda545b1cfa43cff9ed2ae88e08c61f1479dbb45410722f8f0"},"mac":"39990c1684557447940d4c69e06b1b82b2aceacb43f284df65c956daf3046b85"},"id":"ce541d8d-c79b-40f8-9f8c-20f59616faba","version":3}


@ -1,139 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package keystore
import (
"math/big"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/core/types"
)
// keystoreWallet implements the accounts.Wallet interface for the original
// keystore.
type keystoreWallet struct {
account accounts.Account // Single account contained in this wallet
keystore *KeyStore // Keystore where the account originates from
}
// URL implements accounts.Wallet, returning the URL of the account within.
func (w *keystoreWallet) URL() accounts.URL {
return w.account.URL
}
// Status implements accounts.Wallet, returning whether the account held by the
// keystore wallet is unlocked or not.
func (w *keystoreWallet) Status() (string, error) {
w.keystore.mu.RLock()
defer w.keystore.mu.RUnlock()
if _, ok := w.keystore.unlocked[w.account.Address]; ok {
return "Unlocked", nil
}
return "Locked", nil
}
// Open implements accounts.Wallet, but is a noop for plain wallets since there
// is no connection or decryption step necessary to access the list of accounts.
func (w *keystoreWallet) Open(passphrase string) error { return nil }
// Close implements accounts.Wallet, but is a noop for plain wallets since there is no
// meaningful open operation.
func (w *keystoreWallet) Close() error { return nil }
// Accounts implements accounts.Wallet, returning an account list consisting of
// a single account that the plain keystore wallet contains.
func (w *keystoreWallet) Accounts() []accounts.Account {
return []accounts.Account{w.account}
}
// Contains implements accounts.Wallet, returning whether a particular account is
// or is not wrapped by this wallet instance.
func (w *keystoreWallet) Contains(account accounts.Account) bool {
return account.Address == w.account.Address && (account.URL == (accounts.URL{}) || account.URL == w.account.URL)
}
// Derive implements accounts.Wallet, but is a noop for plain wallets since there
// is no notion of hierarchical account derivation for plain keystore accounts.
func (w *keystoreWallet) Derive(path accounts.DerivationPath, pin bool) (accounts.Account, error) {
return accounts.Account{}, accounts.ErrNotSupported
}
// SelfDerive implements accounts.Wallet, but is a noop for plain wallets since
// there is no notion of hierarchical account derivation for plain keystore accounts.
func (w *keystoreWallet) SelfDerive(base accounts.DerivationPath, chain ethereum.ChainStateReader) {}
// SignHash implements accounts.Wallet, attempting to sign the given hash with
// the given account. If the wallet does not wrap this particular account, an
// error is returned to avoid account leakage (even though in theory we may be
// able to sign via our shared keystore backend).
func (w *keystoreWallet) SignHash(account accounts.Account, hash []byte) ([]byte, error) {
// Make sure the requested account is contained within
if account.Address != w.account.Address {
return nil, accounts.ErrUnknownAccount
}
if account.URL != (accounts.URL{}) && account.URL != w.account.URL {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign
return w.keystore.SignHash(account, hash)
}
// SignTx implements accounts.Wallet, attempting to sign the given transaction
// with the given account. If the wallet does not wrap this particular account,
// an error is returned to avoid account leakage (even though in theory we may
// be able to sign via our shared keystore backend).
func (w *keystoreWallet) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
// Make sure the requested account is contained within
if account.Address != w.account.Address {
return nil, accounts.ErrUnknownAccount
}
if account.URL != (accounts.URL{}) && account.URL != w.account.URL {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign
return w.keystore.SignTx(account, tx, chainID)
}
// SignHashWithPassphrase implements accounts.Wallet, attempting to sign the
// given hash with the given account using passphrase as extra authentication.
func (w *keystoreWallet) SignHashWithPassphrase(account accounts.Account, passphrase string, hash []byte) ([]byte, error) {
// Make sure the requested account is contained within
if account.Address != w.account.Address {
return nil, accounts.ErrUnknownAccount
}
if account.URL != (accounts.URL{}) && account.URL != w.account.URL {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign
return w.keystore.SignHashWithPassphrase(account, passphrase, hash)
}
// SignTxWithPassphrase implements accounts.Wallet, attempting to sign the given
// transaction with the given account using passphrase as extra authentication.
func (w *keystoreWallet) SignTxWithPassphrase(account accounts.Account, passphrase string, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
// Make sure the requested account is contained within
if account.Address != w.account.Address {
return nil, accounts.ErrUnknownAccount
}
if account.URL != (accounts.URL{}) && account.URL != w.account.URL {
return nil, accounts.ErrUnknownAccount
}
// Account seems valid, request the keystore to sign
return w.keystore.SignTxWithPassphrase(account, passphrase, tx, chainID)
}
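
For orientation, a minimal usage sketch of the wallet above, driven through its KeyStore backend (the key directory and passphrase are illustrative assumptions, not values from this changeset):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/keystore"
)

func main() {
	// Every key file in the directory surfaces as exactly one keystoreWallet.
	ks := keystore.NewKeyStore("/tmp/keys", keystore.StandardScryptN, keystore.StandardScryptP)
	account, err := ks.NewAccount("passphrase")
	if err != nil {
		panic(err)
	}
	for _, wallet := range ks.Wallets() {
		status, _ := wallet.Status() // "Locked" until ks.Unlock is called
		fmt.Println(wallet.URL(), status, wallet.Contains(account))
	}
}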


@@ -1,108 +0,0 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// +build darwin,!ios freebsd linux,!arm64 netbsd solaris
package keystore
import (
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/rjeczalik/notify"
)
type watcher struct {
ac *accountCache
starting bool
running bool
ev chan notify.EventInfo
quit chan struct{}
}
func newWatcher(ac *accountCache) *watcher {
return &watcher{
ac: ac,
ev: make(chan notify.EventInfo, 10),
quit: make(chan struct{}),
}
}
// start launches the watcher loop in the background if it is not already
// starting or running.
// The caller must hold w.ac.mu.
func (w *watcher) start() {
if w.starting || w.running {
return
}
w.starting = true
go w.loop()
}
func (w *watcher) close() {
close(w.quit)
}
func (w *watcher) loop() {
defer func() {
w.ac.mu.Lock()
w.running = false
w.starting = false
w.ac.mu.Unlock()
}()
logger := log.New("path", w.ac.keydir)
if err := notify.Watch(w.ac.keydir, w.ev, notify.All); err != nil {
logger.Trace("Failed to watch keystore folder", "err", err)
return
}
defer notify.Stop(w.ev)
logger.Trace("Started watching keystore folder")
defer logger.Trace("Stopped watching keystore folder")
w.ac.mu.Lock()
w.running = true
w.ac.mu.Unlock()
// Wait for file system events and reload.
// When an event occurs, the reload call is delayed a bit so that
// multiple events arriving quickly only cause a single reload.
var (
debounceDuration = 500 * time.Millisecond
rescanTriggered = false
debounce = time.NewTimer(0)
)
// Ignore initial trigger
if !debounce.Stop() {
<-debounce.C
}
defer debounce.Stop()
for {
select {
case <-w.quit:
return
case <-w.ev:
// Trigger the scan (with delay), if not already triggered
if !rescanTriggered {
debounce.Reset(debounceDuration)
rescanTriggered = true
}
case <-debounce.C:
w.ac.scanAccounts()
rescanTriggered = false
}
}
}
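
The debounce idiom in loop() above generalizes: bursts of file-system events collapse into a single rescan once the timer fires. A standalone sketch (the event channel is an illustrative stand-in for w.ev):

package main

import (
	"fmt"
	"time"
)

func debounce(events <-chan struct{}, quit <-chan struct{}) {
	const debounceDuration = 500 * time.Millisecond
	timer := time.NewTimer(0)
	if !timer.Stop() {
		<-timer.C // drain the initial trigger
	}
	defer timer.Stop()
	pending := false
	for {
		select {
		case <-quit:
			return
		case <-events:
			if !pending { // arm the timer only once per burst
				timer.Reset(debounceDuration)
				pending = true
			}
		case <-timer.C:
			fmt.Println("rescan") // stands in for w.ac.scanAccounts()
			pending = false
		}
	}
}

func main() {
	events, quit := make(chan struct{}, 1), make(chan struct{})
	go debounce(events, quit)
	events <- struct{}{} // two quick events...
	events <- struct{}{} // ...produce a single "rescan"
	time.Sleep(time.Second)
	close(quit)
}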


@@ -1,199 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
import (
"reflect"
"sort"
"sync"
"github.com/ethereum/go-ethereum/event"
)
// Manager is an overarching account manager that can communicate with various
// backends for signing transactions.
type Manager struct {
backends map[reflect.Type][]Backend // Index of backends currently registered
updaters []event.Subscription // Wallet update subscriptions for all backends
updates chan WalletEvent // Subscription sink for backend wallet changes
wallets []Wallet // Cache of all wallets from all registered backends
feed event.Feed // Wallet feed notifying of arrivals/departures
quit chan chan error
lock sync.RWMutex
}
// NewManager creates a generic account manager to sign transactions via various
// supported backends.
func NewManager(backends ...Backend) *Manager {
// Retrieve the initial list of wallets from the backends and sort by URL
var wallets []Wallet
for _, backend := range backends {
wallets = merge(wallets, backend.Wallets()...)
}
// Subscribe to wallet notifications from all backends
updates := make(chan WalletEvent, 4*len(backends))
subs := make([]event.Subscription, len(backends))
for i, backend := range backends {
subs[i] = backend.Subscribe(updates)
}
// Assemble the account manager and return
am := &Manager{
backends: make(map[reflect.Type][]Backend),
updaters: subs,
updates: updates,
wallets: wallets,
quit: make(chan chan error),
}
for _, backend := range backends {
kind := reflect.TypeOf(backend)
am.backends[kind] = append(am.backends[kind], backend)
}
go am.update()
return am
}
// Close terminates the account manager's internal notification processes.
func (am *Manager) Close() error {
errc := make(chan error)
am.quit <- errc
return <-errc
}
// update is the wallet event loop listening for notifications from the backends
// and updating the cache of wallets.
func (am *Manager) update() {
// Close all subscriptions when the manager terminates
defer func() {
am.lock.Lock()
for _, sub := range am.updaters {
sub.Unsubscribe()
}
am.updaters = nil
am.lock.Unlock()
}()
// Loop until termination
for {
select {
case event := <-am.updates:
// Wallet event arrived, update local cache
am.lock.Lock()
switch event.Kind {
case WalletArrived:
am.wallets = merge(am.wallets, event.Wallet)
case WalletDropped:
am.wallets = drop(am.wallets, event.Wallet)
}
am.lock.Unlock()
// Notify any listeners of the event
am.feed.Send(event)
case errc := <-am.quit:
// Manager terminating, return
errc <- nil
return
}
}
}
// Backends retrieves the backend(s) with the given type from the account manager.
func (am *Manager) Backends(kind reflect.Type) []Backend {
return am.backends[kind]
}
// Wallets returns all signer wallets registered under this account manager.
func (am *Manager) Wallets() []Wallet {
am.lock.RLock()
defer am.lock.RUnlock()
cpy := make([]Wallet, len(am.wallets))
copy(cpy, am.wallets)
return cpy
}
// Wallet retrieves the wallet associated with a particular URL.
func (am *Manager) Wallet(url string) (Wallet, error) {
am.lock.RLock()
defer am.lock.RUnlock()
parsed, err := parseURL(url)
if err != nil {
return nil, err
}
for _, wallet := range am.Wallets() {
if wallet.URL() == parsed {
return wallet, nil
}
}
return nil, ErrUnknownWallet
}
// Find attempts to locate the wallet corresponding to a specific account. Since
// accounts can be dynamically added to and removed from wallets, this method has
// a linear runtime in the number of wallets.
func (am *Manager) Find(account Account) (Wallet, error) {
am.lock.RLock()
defer am.lock.RUnlock()
for _, wallet := range am.wallets {
if wallet.Contains(account) {
return wallet, nil
}
}
return nil, ErrUnknownAccount
}
// Subscribe creates an async subscription to receive notifications when the
// manager detects the arrival or departure of a wallet from any of its backends.
func (am *Manager) Subscribe(sink chan<- WalletEvent) event.Subscription {
return am.feed.Subscribe(sink)
}
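// A wiring sketch for the manager above (hedged: the key directory and scrypt
// parameters are illustrative; in real clients the node package performs this
// wiring):
//
//	ks := keystore.NewKeyStore("/tmp/keys", keystore.LightScryptN, keystore.LightScryptP)
//	am := accounts.NewManager(ks) // any number of backends may be passed
//	defer am.Close()
//	for _, w := range am.Wallets() { // merged cache, sorted by URL
//		fmt.Println(w.URL())
//	}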
// merge is a sorted analogue of append for wallets, where the ordering of the
// origin list is preserved by inserting new wallets at the correct position.
//
// The original slice is assumed to be already sorted by URL.
func merge(slice []Wallet, wallets ...Wallet) []Wallet {
for _, wallet := range wallets {
n := sort.Search(len(slice), func(i int) bool { return slice[i].URL().Cmp(wallet.URL()) >= 0 })
if n == len(slice) {
slice = append(slice, wallet)
continue
}
slice = append(slice[:n], append([]Wallet{wallet}, slice[n:]...)...)
}
return slice
}
// drop is the counterpart of merge, which looks up wallets from within the sorted
// cache and removes the ones specified.
func drop(slice []Wallet, wallets ...Wallet) []Wallet {
for _, wallet := range wallets {
n := sort.Search(len(slice), func(i int) bool { return slice[i].URL().Cmp(wallet.URL()) >= 0 })
if n == len(slice) {
// Wallet not found, may happen during startup
continue
}
slice = append(slice[:n], slice[n+1:]...)
}
return slice
}
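
The same sorted-insert/remove idiom as merge and drop, demonstrated on plain strings so it runs standalone (a sketch; the real code operates on interface-typed Wallets keyed by URL):

package main

import (
	"fmt"
	"sort"
)

func insertSorted(s []string, v string) []string {
	n := sort.SearchStrings(s, v) // first index with s[n] >= v
	if n == len(s) {
		return append(s, v)
	}
	return append(s[:n], append([]string{v}, s[n:]...)...)
}

func removeSorted(s []string, v string) []string {
	n := sort.SearchStrings(s, v)
	if n == len(s) || s[n] != v {
		return s // not found, may happen during startup
	}
	return append(s[:n], s[n+1:]...)
}

func main() {
	cache := []string{"keystore:///a", "keystore:///c"}
	cache = insertSorted(cache, "keystore:///b")
	cache = removeSorted(cache, "keystore:///a")
	fmt.Println(cache) // [keystore:///b keystore:///c]
}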


@@ -1,96 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package accounts
import (
"testing"
)
func TestURLParsing(t *testing.T) {
url, err := parseURL("https://ethereum.org")
if err != nil {
t.Errorf("unexpected error: %v", err)
}
if url.Scheme != "https" {
t.Errorf("expected: %v, got: %v", "https", url.Scheme)
}
if url.Path != "ethereum.org" {
t.Errorf("expected: %v, got: %v", "ethereum.org", url.Path)
}
_, err = parseURL("ethereum.org")
if err == nil {
t.Error("expected err, got: nil")
}
}
func TestURLString(t *testing.T) {
url := URL{Scheme: "https", Path: "ethereum.org"}
if url.String() != "https://ethereum.org" {
t.Errorf("expected: %v, got: %v", "https://ethereum.org", url.String())
}
url = URL{Scheme: "", Path: "ethereum.org"}
if url.String() != "ethereum.org" {
t.Errorf("expected: %v, got: %v", "ethereum.org", url.String())
}
}
func TestURLMarshalJSON(t *testing.T) {
url := URL{Scheme: "https", Path: "ethereum.org"}
json, err := url.MarshalJSON()
if err != nil {
t.Errorf("unexpcted error: %v", err)
}
if string(json) != "\"https://ethereum.org\"" {
t.Errorf("expected: %v, got: %v", "\"https://ethereum.org\"", string(json))
}
}
func TestURLUnmarshalJSON(t *testing.T) {
url := &URL{}
err := url.UnmarshalJSON([]byte("\"https://ethereum.org\""))
if err != nil {
t.Errorf("unexpcted error: %v", err)
}
if url.Scheme != "https" {
t.Errorf("expected: %v, got: %v", "https", url.Scheme)
}
if url.Path != "ethereum.org" {
t.Errorf("expected: %v, got: %v", "https", url.Path)
}
}
func TestURLComparison(t *testing.T) {
tests := []struct {
urlA URL
urlB URL
expect int
}{
{URL{"https", "ethereum.org"}, URL{"https", "ethereum.org"}, 0},
{URL{"http", "ethereum.org"}, URL{"https", "ethereum.org"}, -1},
{URL{"https", "ethereum.org/a"}, URL{"https", "ethereum.org"}, 1},
{URL{"https", "abc.org"}, URL{"https", "ethereum.org"}, -1},
}
for i, tt := range tests {
result := tt.urlA.Cmp(tt.urlB)
if result != tt.expect {
t.Errorf("test %d: cmp mismatch: expected: %d, got: %d", i, tt.expect, result)
}
}
}


@@ -1,238 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package usbwallet
import (
"errors"
"runtime"
"sync"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
"github.com/karalabe/hid"
)
// LedgerScheme is the protocol scheme prefixing account and wallet URLs.
const LedgerScheme = "ledger"
// TrezorScheme is the protocol scheme prefixing account and wallet URLs.
const TrezorScheme = "trezor"
// refreshCycle is the maximum time between wallet refreshes (if USB hotplug
// notifications don't work).
const refreshCycle = time.Second
// refreshThrottling is the minimum time between wallet refreshes to avoid USB
// thrashing.
const refreshThrottling = 500 * time.Millisecond
// Hub is an accounts.Backend that can find and handle generic USB hardware wallets.
type Hub struct {
scheme string // Protocol scheme prefixing account and wallet URLs.
vendorID uint16 // USB vendor identifier used for device discovery
productIDs []uint16 // USB product identifiers used for device discovery
usageID uint16 // USB usage page identifier used for macOS device discovery
endpointID int // USB endpoint identifier used for non-macOS device discovery
makeDriver func(log.Logger) driver // Factory method to construct a vendor specific driver
refreshed time.Time // Time instance when the list of wallets was last refreshed
wallets []accounts.Wallet // List of USB wallet devices currently tracking
updateFeed event.Feed // Event feed to notify wallet additions/removals
updateScope event.SubscriptionScope // Subscription scope tracking current live listeners
updating bool // Whether the event notification loop is running
quit chan chan error
stateLock sync.RWMutex // Protects the internals of the hub from racey access
// TODO(karalabe): remove if hotplug lands on Windows
commsPend int // Number of operations blocking enumeration
commsLock sync.Mutex // Lock protecting the pending counter and enumeration
}
// NewLedgerHub creates a new hardware wallet manager for Ledger devices.
func NewLedgerHub() (*Hub, error) {
return newHub(LedgerScheme, 0x2c97, []uint16{0x0000 /* Ledger Blue */, 0x0001 /* Ledger Nano S */}, 0xffa0, 0, newLedgerDriver)
}
// NewTrezorHub creates a new hardware wallet manager for Trezor devices.
func NewTrezorHub() (*Hub, error) {
return newHub(TrezorScheme, 0x534c, []uint16{0x0001 /* Trezor 1 */}, 0xff00, 0, newTrezorDriver)
}
// newHub creates a new hardware wallet manager for generic USB devices.
func newHub(scheme string, vendorID uint16, productIDs []uint16, usageID uint16, endpointID int, makeDriver func(log.Logger) driver) (*Hub, error) {
if !hid.Supported() {
return nil, errors.New("unsupported platform")
}
hub := &Hub{
scheme: scheme,
vendorID: vendorID,
productIDs: productIDs,
usageID: usageID,
endpointID: endpointID,
makeDriver: makeDriver,
quit: make(chan chan error),
}
hub.refreshWallets()
return hub, nil
}
// Wallets implements accounts.Backend, returning all the currently tracked USB
// devices that appear to be hardware wallets.
func (hub *Hub) Wallets() []accounts.Wallet {
// Make sure the list of wallets is up to date
hub.refreshWallets()
hub.stateLock.RLock()
defer hub.stateLock.RUnlock()
cpy := make([]accounts.Wallet, len(hub.wallets))
copy(cpy, hub.wallets)
return cpy
}
// refreshWallets scans the USB devices attached to the machine and updates the
// list of wallets based on the found devices.
func (hub *Hub) refreshWallets() {
// Don't scan the USB like crazy if the user fetches wallets in a loop
hub.stateLock.RLock()
elapsed := time.Since(hub.refreshed)
hub.stateLock.RUnlock()
if elapsed < refreshThrottling {
return
}
// Retrieve the current list of USB wallet devices
var devices []hid.DeviceInfo
if runtime.GOOS == "linux" {
// hidapi on Linux opens the device during enumeration to retrieve some infos,
// breaking the Ledger protocol if that is waiting for user confirmation. This
// is a bug acknowledged at Ledger, but it won't be fixed on old devices so we
// need to prevent concurrent comms ourselves. The more elegant solution would
// be to ditch enumeration in favor of hotplug events, but those don't work yet
// on Windows, so since we need to hack around it anyway, this is more elegant for now.
hub.commsLock.Lock()
if hub.commsPend > 0 { // A confirmation is pending, don't refresh
hub.commsLock.Unlock()
return
}
}
for _, info := range hid.Enumerate(hub.vendorID, 0) {
for _, id := range hub.productIDs {
if info.ProductID == id && (info.UsagePage == hub.usageID || info.Interface == hub.endpointID) {
devices = append(devices, info)
break
}
}
}
if runtime.GOOS == "linux" {
// See rationale before the enumeration why this is needed and only on Linux.
hub.commsLock.Unlock()
}
// Transform the current list of wallets into the new one
hub.stateLock.Lock()
wallets := make([]accounts.Wallet, 0, len(devices))
events := []accounts.WalletEvent{}
for _, device := range devices {
url := accounts.URL{Scheme: hub.scheme, Path: device.Path}
// Drop wallets in front of the next device or those that failed for some reason
for len(hub.wallets) > 0 {
// Abort if we're past the current device and found an operational one
_, failure := hub.wallets[0].Status()
if hub.wallets[0].URL().Cmp(url) >= 0 || failure == nil {
break
}
// Drop the stale and failed devices
events = append(events, accounts.WalletEvent{Wallet: hub.wallets[0], Kind: accounts.WalletDropped})
hub.wallets = hub.wallets[1:]
}
// If there are no more wallets or the device is before the next, wrap new wallet
if len(hub.wallets) == 0 || hub.wallets[0].URL().Cmp(url) > 0 {
logger := log.New("url", url)
wallet := &wallet{hub: hub, driver: hub.makeDriver(logger), url: &url, info: device, log: logger}
events = append(events, accounts.WalletEvent{Wallet: wallet, Kind: accounts.WalletArrived})
wallets = append(wallets, wallet)
continue
}
// If the device is the same as the first wallet, keep it
if hub.wallets[0].URL().Cmp(url) == 0 {
wallets = append(wallets, hub.wallets[0])
hub.wallets = hub.wallets[1:]
continue
}
}
// Drop any leftover wallets and set the new batch
for _, wallet := range hub.wallets {
events = append(events, accounts.WalletEvent{Wallet: wallet, Kind: accounts.WalletDropped})
}
hub.refreshed = time.Now()
hub.wallets = wallets
hub.stateLock.Unlock()
// Fire all wallet events and return
for _, event := range events {
hub.updateFeed.Send(event)
}
}
// Subscribe implements accounts.Backend, creating an async subscription to
// receive notifications on the addition or removal of USB wallets.
func (hub *Hub) Subscribe(sink chan<- accounts.WalletEvent) event.Subscription {
// We need the mutex to reliably start/stop the update loop
hub.stateLock.Lock()
defer hub.stateLock.Unlock()
// Subscribe the caller and track the subscriber count
sub := hub.updateScope.Track(hub.updateFeed.Subscribe(sink))
// Subscribers require an active notification loop, start it
if !hub.updating {
hub.updating = true
go hub.updater()
}
return sub
}
// updater is responsible for maintaining an up-to-date list of wallets managed
// by the USB hub, and for firing wallet addition/removal events.
func (hub *Hub) updater() {
for {
// TODO: Wait for a USB hotplug event (not supported yet) or a refresh timeout
// <-hub.changes
time.Sleep(refreshCycle)
// Run the wallet refresher
hub.refreshWallets()
// If all our subscribers left, stop the updater
hub.stateLock.Lock()
if hub.updateScope.Count() == 0 {
hub.updating = false
hub.stateLock.Unlock()
return
}
hub.stateLock.Unlock()
}
}
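
A minimal subscription sketch against the hub above (hedged: assumes the hid library supports the platform; whether events ever arrive depends on a Ledger actually being plugged in or out):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts"
	"github.com/ethereum/go-ethereum/accounts/usbwallet"
)

func main() {
	hub, err := usbwallet.NewLedgerHub()
	if err != nil {
		panic(err) // e.g. unsupported platform
	}
	sink := make(chan accounts.WalletEvent, 16)
	sub := hub.Subscribe(sink) // the first subscriber starts the updater loop
	defer sub.Unsubscribe()
	for event := range sink { // runs until the process is interrupted
		switch event.Kind {
		case accounts.WalletArrived:
			fmt.Println("arrived:", event.Wallet.URL())
		case accounts.WalletDropped:
			fmt.Println("dropped:", event.Wallet.URL())
		}
	}
}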


@@ -1,562 +0,0 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package usbwallet implements support for USB hardware wallets.
package usbwallet
import (
"context"
"fmt"
"io"
"math/big"
"sync"
"time"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/karalabe/hid"
)
// Maximum time between wallet health checks to detect USB unplugs.
const heartbeatCycle = time.Second
// Minimum time to wait between self derivation attempts, even if the user is
// requesting accounts like crazy.
const selfDeriveThrottling = time.Second
// driver defines the vendor specific functionality hardware wallet instances
// must implement to allow using them with the wallet lifecycle management.
type driver interface {
// Status returns a textual status to aid the user in determining the current
// state of the wallet. It also returns an error indicating any failure the
// wallet might have encountered.
Status() (string, error)
// Open initializes access to a wallet instance. The passphrase parameter may
// or may not be used by the implementation of a particular wallet instance.
Open(device io.ReadWriter, passphrase string) error
// Close releases any resources held by an open wallet instance.
Close() error
// Heartbeat performs a sanity check against the hardware wallet to see if it
// is still online and healthy.
Heartbeat() error
// Derive sends a derivation request to the USB device and returns the Ethereum
// address located on that path.
Derive(path accounts.DerivationPath) (common.Address, error)
// SignTx sends the transaction to the USB device and waits for the user to confirm
// or deny the transaction.
SignTx(path accounts.DerivationPath, tx *types.Transaction, chainID *big.Int) (common.Address, *types.Transaction, error)
}
// wallet represents the common functionality shared by all USB hardware
// wallets to prevent reimplementing the same complex maintenance mechanisms
// for different vendors.
type wallet struct {
hub *Hub // USB hub scanning
driver driver // Hardware implementation of the low level device operations
url *accounts.URL // Textual URL uniquely identifying this wallet
info hid.DeviceInfo // Known USB device infos about the wallet
device *hid.Device // USB device advertising itself as a hardware wallet
accounts []accounts.Account // List of derived accounts pinned on the hardware wallet
paths map[common.Address]accounts.DerivationPath // Known derivation paths for signing operations
deriveNextPath accounts.DerivationPath // Next derivation path for account auto-discovery
deriveNextAddr common.Address // Next derived account address for auto-discovery
deriveChain ethereum.ChainStateReader // Blockchain state reader to discover used accounts with
deriveReq chan chan struct{} // Channel to request a self-derivation on
deriveQuit chan chan error // Channel to terminate the self-deriver with
healthQuit chan chan error
// Locking a hardware wallet is a bit special. Since hardware devices are lower
// performing, any communication with them might take a non-negligible amount of
// time. Worse still, waiting for user confirmation can take arbitrarily long,
// but exclusive communication must be upheld throughout. Locking the entire wallet
// in the meantime, however, would stall any parts of the system that don't want
// to communicate, just read some state (e.g. list the accounts).
//
// As such, a hardware wallet needs two locks to function correctly. A state
// lock can be used to protect the wallet's software-side internal state, which
// must not be held exclusively during hardware communication. A communication
// lock can be used to achieve exclusive access to the device itself, this one
// however should allow "skipping" waiting for operations that might want to
// use the device, but can live without it too (e.g. account self-derivation).
//
// Since we have two locks, it's important to know how to properly use them:
// - Communication requires the `device` to not change, so obtaining the
// commsLock should be done after having a stateLock.
// - Communication must not disable read access to the wallet state, so it
// must only ever hold a *read* lock to stateLock.
commsLock chan struct{} // Mutex (buf=1) for the USB comms without keeping the state locked
stateLock sync.RWMutex // Protects read and write access to the wallet struct fields
log log.Logger // Contextual logger to tag the base with its id
}
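// The acquisition order these rules imply, as distilled from the signing and
// derivation paths below (a sketch of the pattern, not new code):
//
//	w.stateLock.RLock()       // 1. pin the state so w.device cannot change
//	if w.device == nil {      //    bail out if the wallet was closed meanwhile
//		w.stateLock.RUnlock()
//		return
//	}
//	<-w.commsLock             // 2. acquire exclusive device access
//	// ... exchange data with the device; state stays readable to others ...
//	w.commsLock <- struct{}{} // 3. release the device
//	w.stateLock.RUnlock()     // 4. finally release the state read lock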
// URL implements accounts.Wallet, returning the URL of the USB hardware device.
func (w *wallet) URL() accounts.URL {
return *w.url // Immutable, no need for a lock
}
// Status implements accounts.Wallet, returning a custom status message from the
// underlying vendor-specific hardware wallet implementation.
func (w *wallet) Status() (string, error) {
w.stateLock.RLock() // No device communication, state lock is enough
defer w.stateLock.RUnlock()
status, failure := w.driver.Status()
if w.device == nil {
return "Closed", failure
}
return status, failure
}
// Open implements accounts.Wallet, attempting to open a USB connection to the
// hardware wallet.
func (w *wallet) Open(passphrase string) error {
w.stateLock.Lock() // State lock is enough since there's no connection yet at this point
defer w.stateLock.Unlock()
// If the device was already opened once, refuse to try again
if w.paths != nil {
return accounts.ErrWalletAlreadyOpen
}
// Make sure the actual device connection is done only once
if w.device == nil {
device, err := w.info.Open()
if err != nil {
return err
}
w.device = device
w.commsLock = make(chan struct{}, 1)
w.commsLock <- struct{}{} // Enable lock
}
// Delegate device initialization to the underlying driver
if err := w.driver.Open(w.device, passphrase); err != nil {
return err
}
// Connection successful, start life-cycle management
w.paths = make(map[common.Address]accounts.DerivationPath)
w.deriveReq = make(chan chan struct{})
w.deriveQuit = make(chan chan error)
w.healthQuit = make(chan chan error)
go w.heartbeat()
go w.selfDerive()
// Notify anyone listening for wallet events that a new device is accessible
go w.hub.updateFeed.Send(accounts.WalletEvent{Wallet: w, Kind: accounts.WalletOpened})
return nil
}
// heartbeat is a health check loop for the USB wallets to periodically verify
// whether they are still present or if they malfunctioned.
func (w *wallet) heartbeat() {
w.log.Debug("USB wallet health-check started")
defer w.log.Debug("USB wallet health-check stopped")
// Execute heartbeat checks until termination or error
var (
errc chan error
err error
)
for errc == nil && err == nil {
// Wait until termination is requested or the heartbeat cycle arrives
select {
case errc = <-w.healthQuit:
// Termination requested
continue
case <-time.After(heartbeatCycle):
// Heartbeat time
}
// Execute a tiny data exchange to see responsiveness
w.stateLock.RLock()
if w.device == nil {
// Terminated while waiting for the lock
w.stateLock.RUnlock()
continue
}
<-w.commsLock // Don't lock state while resolving version
err = w.driver.Heartbeat()
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
if err != nil {
w.stateLock.Lock() // Lock state to tear the wallet down
w.close()
w.stateLock.Unlock()
}
// Ignore non-hardware related errors
err = nil
}
// In case of error, wait for termination
if err != nil {
w.log.Debug("USB wallet health-check failed", "err", err)
errc = <-w.healthQuit
}
errc <- err
}
// Close implements accounts.Wallet, closing the USB connection to the device.
func (w *wallet) Close() error {
// Ensure the wallet was opened
w.stateLock.RLock()
hQuit, dQuit := w.healthQuit, w.deriveQuit
w.stateLock.RUnlock()
// Terminate the health checks
var herr error
if hQuit != nil {
errc := make(chan error)
hQuit <- errc
herr = <-errc // Save for later, we *must* close the USB
}
// Terminate the self-derivations
var derr error
if dQuit != nil {
errc := make(chan error)
dQuit <- errc
derr = <-errc // Save for later, we *must* close the USB
}
// Terminate the device connection
w.stateLock.Lock()
defer w.stateLock.Unlock()
w.healthQuit = nil
w.deriveQuit = nil
w.deriveReq = nil
if err := w.close(); err != nil {
return err
}
if herr != nil {
return herr
}
return derr
}
// close is the internal wallet closer that terminates the USB connection and
// resets all the fields to their defaults.
//
// Note, close assumes the state lock is held!
func (w *wallet) close() error {
// Allow duplicate closes, especially for health-check failures
if w.device == nil {
return nil
}
// Close the device, clear everything, then return
w.device.Close()
w.device = nil
w.accounts, w.paths = nil, nil
w.driver.Close()
return nil
}
// Accounts implements accounts.Wallet, returning the list of accounts pinned to
// the USB hardware wallet. If self-derivation was enabled, the account list is
// periodically expanded based on current chain state.
func (w *wallet) Accounts() []accounts.Account {
// Attempt self-derivation if it's running
reqc := make(chan struct{}, 1)
select {
case w.deriveReq <- reqc:
// Self-derivation request accepted, wait for it
<-reqc
default:
// Self-derivation offline, throttled or busy, skip
}
// Return whatever account list we ended up with
w.stateLock.RLock()
defer w.stateLock.RUnlock()
cpy := make([]accounts.Account, len(w.accounts))
copy(cpy, w.accounts)
return cpy
}
// selfDerive is an account derivation loop that upon request attempts to find
// new non-zero accounts.
func (w *wallet) selfDerive() {
w.log.Debug("USB wallet self-derivation started")
defer w.log.Debug("USB wallet self-derivation stopped")
// Execute self-derivations until termination or error
var (
reqc chan struct{}
errc chan error
err error
)
for errc == nil && err == nil {
// Wait until either derivation or termination is requested
select {
case errc = <-w.deriveQuit:
// Termination requested
continue
case reqc = <-w.deriveReq:
// Account discovery requested
}
// Derivation needs a chain and device access, skip if either unavailable
w.stateLock.RLock()
if w.device == nil || w.deriveChain == nil {
w.stateLock.RUnlock()
reqc <- struct{}{}
continue
}
select {
case <-w.commsLock:
default:
w.stateLock.RUnlock()
reqc <- struct{}{}
continue
}
// Device lock obtained, derive the next batch of accounts
var (
accs []accounts.Account
paths []accounts.DerivationPath
nextAddr = w.deriveNextAddr
nextPath = w.deriveNextPath
context = context.Background()
)
for empty := false; !empty; {
// Retrieve the next derived Ethereum account
if nextAddr == (common.Address{}) {
if nextAddr, err = w.driver.Derive(nextPath); err != nil {
w.log.Warn("USB wallet account derivation failed", "err", err)
break
}
}
// Check the account's status against the current chain state
var (
balance *big.Int
nonce uint64
)
balance, err = w.deriveChain.BalanceAt(context, nextAddr, nil)
if err != nil {
w.log.Warn("USB wallet balance retrieval failed", "err", err)
break
}
nonce, err = w.deriveChain.NonceAt(context, nextAddr, nil)
if err != nil {
w.log.Warn("USB wallet nonce retrieval failed", "err", err)
break
}
// If the next account is empty, stop self-derivation, but add it nonetheless
if balance.Sign() == 0 && nonce == 0 {
empty = true
}
// We've just self-derived a new account, start tracking it locally
path := make(accounts.DerivationPath, len(nextPath))
copy(path[:], nextPath[:])
paths = append(paths, path)
account := accounts.Account{
Address: nextAddr,
URL: accounts.URL{Scheme: w.url.Scheme, Path: fmt.Sprintf("%s/%s", w.url.Path, path)},
}
accs = append(accs, account)
// Display a log message to the user for new (or previously empty) accounts
if _, known := w.paths[nextAddr]; !known || (!empty && nextAddr == w.deriveNextAddr) {
w.log.Info("USB wallet discovered new account", "address", nextAddr, "path", path, "balance", balance, "nonce", nonce)
}
// Fetch the next potential account
if !empty {
nextAddr = common.Address{}
nextPath[len(nextPath)-1]++
}
}
// Self derivation complete, release device lock
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
// Insert any accounts successfully derived
w.stateLock.Lock()
for i := 0; i < len(accs); i++ {
if _, ok := w.paths[accs[i].Address]; !ok {
w.accounts = append(w.accounts, accs[i])
w.paths[accs[i].Address] = paths[i]
}
}
// Shift the self-derivation forward
// TODO(karalabe): don't overwrite changes from wallet.SelfDerive
w.deriveNextAddr = nextAddr
w.deriveNextPath = nextPath
w.stateLock.Unlock()
// Notify the user of termination and loop after a bit of time (to avoid thrashing)
reqc <- struct{}{}
if err == nil {
select {
case errc = <-w.deriveQuit:
// Termination requested, abort
case <-time.After(selfDeriveThrottling):
// Waited enough, willing to self-derive again
}
}
}
// In case of error, wait for termination
if err != nil {
w.log.Debug("USB wallet self-derivation failed", "err", err)
errc = <-w.deriveQuit
}
errc <- err
}
// Contains implements accounts.Wallet, returning whether a particular account is
// or is not pinned into this wallet instance. Although we could attempt to resolve
// unpinned accounts, that would be a non-negligible hardware operation.
func (w *wallet) Contains(account accounts.Account) bool {
w.stateLock.RLock()
defer w.stateLock.RUnlock()
_, exists := w.paths[account.Address]
return exists
}
// Derive implements accounts.Wallet, deriving a new account at the specific
// derivation path. If pin is set to true, the account will be added to the list
// of tracked accounts.
func (w *wallet) Derive(path accounts.DerivationPath, pin bool) (accounts.Account, error) {
// Try to derive the actual account and update its URL if successful
w.stateLock.RLock() // Avoid device disappearing during derivation
if w.device == nil {
w.stateLock.RUnlock()
return accounts.Account{}, accounts.ErrWalletClosed
}
<-w.commsLock // Avoid concurrent hardware access
address, err := w.driver.Derive(path)
w.commsLock <- struct{}{}
w.stateLock.RUnlock()
// If an error occurred or no pinning was requested, return
if err != nil {
return accounts.Account{}, err
}
account := accounts.Account{
Address: address,
URL: accounts.URL{Scheme: w.url.Scheme, Path: fmt.Sprintf("%s/%s", w.url.Path, path)},
}
if !pin {
return account, nil
}
// Pinning needs to modify the state
w.stateLock.Lock()
defer w.stateLock.Unlock()
if _, ok := w.paths[address]; !ok {
w.accounts = append(w.accounts, account)
w.paths[address] = path
}
return account, nil
}
// SelfDerive implements accounts.Wallet, trying to discover accounts that the
// user used previously (based on the chain state), but ones that he/she did not
// explicitly pin to the wallet manually. To avoid chain head monitoring, self
// derivation only runs during account listing (and even then throttled).
func (w *wallet) SelfDerive(base accounts.DerivationPath, chain ethereum.ChainStateReader) {
w.stateLock.Lock()
defer w.stateLock.Unlock()
w.deriveNextPath = make(accounts.DerivationPath, len(base))
copy(w.deriveNextPath[:], base[:])
w.deriveNextAddr = common.Address{}
w.deriveChain = chain
}
// SignHash implements accounts.Wallet, however signing arbitrary data is not
// supported for hardware wallets, so this method will always return an error.
func (w *wallet) SignHash(account accounts.Account, hash []byte) ([]byte, error) {
return nil, accounts.ErrNotSupported
}
// SignTx implements accounts.Wallet. It sends the transaction over to the Ledger
// wallet to request a confirmation from the user. It returns either the signed
// transaction or a failure if the user denied the transaction.
//
// Note, if the version of the Ethereum application running on the Ledger wallet is
// too old to sign EIP-155 transactions, but such is requested nonetheless, an error
// will be returned, as opposed to silently signing in Homestead mode.
func (w *wallet) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
w.stateLock.RLock() // Comms have own mutex, this is for the state fields
defer w.stateLock.RUnlock()
// If the wallet is closed, abort
if w.device == nil {
return nil, accounts.ErrWalletClosed
}
// Make sure the requested account is contained within
path, ok := w.paths[account.Address]
if !ok {
return nil, accounts.ErrUnknownAccount
}
// All infos gathered and metadata checks out, request signing
<-w.commsLock
defer func() { w.commsLock <- struct{}{} }()
// Ensure the device isn't screwed with while user confirmation is pending
// TODO(karalabe): remove if hotplug lands on Windows
w.hub.commsLock.Lock()
w.hub.commsPend++
w.hub.commsLock.Unlock()
defer func() {
w.hub.commsLock.Lock()
w.hub.commsPend--
w.hub.commsLock.Unlock()
}()
// Sign the transaction and verify the sender to avoid hardware fault surprises
sender, signed, err := w.driver.SignTx(path, tx, chainID)
if err != nil {
return nil, err
}
if sender != account.Address {
return nil, fmt.Errorf("signer mismatch: expected %s, got %s", account.Address.Hex(), sender.Hex())
}
return signed, nil
}
// SignHashWithPassphrase implements accounts.Wallet, however signing arbitrary
// data is not supported for Ledger wallets, so this method will always return
// an error.
func (w *wallet) SignHashWithPassphrase(account accounts.Account, passphrase string, hash []byte) ([]byte, error) {
return w.SignHash(account, hash)
}
// SignTxWithPassphrase implements accounts.Wallet, attempting to sign the given
// transaction with the given account using passphrase as extra authentication.
// Since USB wallets don't rely on passphrases, these are silently ignored.
func (w *wallet) SignTxWithPassphrase(account accounts.Account, passphrase string, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
return w.SignTx(account, tx, chainID)
}
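
To tie the derivation pieces together, a usage sketch for SelfDerive and Accounts (hedged: the RPC endpoint is an illustrative assumption; ethclient.Client satisfies ethereum.ChainStateReader, and the Ledger driver ignores the Open passphrase):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts"
	"github.com/ethereum/go-ethereum/accounts/usbwallet"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://localhost:8545")
	if err != nil {
		panic(err)
	}
	hub, err := usbwallet.NewLedgerHub()
	if err != nil {
		panic(err)
	}
	// The conventional Ethereum base path m/44'/60'/0'/0.
	base := accounts.DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000, 0}
	for _, w := range hub.Wallets() {
		if err := w.Open(""); err != nil {
			continue
		}
		w.SelfDerive(base, client)
		// Listing accounts triggers a (throttled) self-derivation pass.
		fmt.Println(w.Accounts())
	}
}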


@@ -15,11 +15,11 @@ import (
 "github.com/ethereum/go-ethereum/common"
 "github.com/ethereum/go-ethereum/crypto"
 "github.com/ethereum/go-ethereum/crypto/ecies"
-"github.com/ethereum/go-ethereum/crypto/sha3"
-"github.com/ethereum/go-ethereum/swarm/log"
-"github.com/ethereum/go-ethereum/swarm/sctx"
-"github.com/ethereum/go-ethereum/swarm/storage"
+"github.com/ethersphere/swarm/log"
+"github.com/ethersphere/swarm/sctx"
+"github.com/ethersphere/swarm/storage"
 "golang.org/x/crypto/scrypt"
+"golang.org/x/crypto/sha3"
 cli "gopkg.in/urfave/cli.v1"
 )
@@ -33,7 +33,7 @@ var (
 }
 )
-const EMPTY_CREDENTIALS = ""
+const EmptyCredentials = ""
 type AccessEntry struct {
 Type AccessType
@@ -336,7 +336,7 @@ func (a *API) doDecrypt(ctx context.Context, credentials string, pk *ecdsa.Priva
 }
 func (a *API) getACTDecryptionKey(ctx context.Context, actManifestAddress storage.Address, sessionKey []byte) (found bool, ciphertext, decryptionKey []byte, err error) {
-hasher := sha3.NewKeccak256()
+hasher := sha3.NewLegacyKeccak256()
 hasher.Write(append(sessionKey, 0))
 lookupKey := hasher.Sum(nil)
 hasher.Reset()
@@ -462,7 +462,7 @@ func DoACT(ctx *cli.Context, privateKey *ecdsa.PrivateKey, salt []byte, grantees
 return nil, nil, nil, err
 }
-hasher := sha3.NewKeccak256()
+hasher := sha3.NewLegacyKeccak256()
 hasher.Write(append(sessionKey, 0))
 lookupKey := hasher.Sum(nil)
@@ -484,7 +484,7 @@ func DoACT(ctx *cli.Context, privateKey *ecdsa.PrivateKey, salt []byte, grantees
 if err != nil {
 return nil, nil, nil, err
 }
-hasher := sha3.NewKeccak256()
+hasher := sha3.NewLegacyKeccak256()
 hasher.Write(append(sessionKey, 0))
 lookupKey := hasher.Sum(nil)

api/api.go Normal file

@@ -0,0 +1,993 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package api
//go:generate mimegen --types=./../../cmd/swarm/mimegen/mime.types --package=api --out=gen_mime.go
//go:generate gofmt -s -w gen_mime.go
import (
"archive/tar"
"context"
"crypto/ecdsa"
"encoding/hex"
"errors"
"fmt"
"io"
"math/big"
"net/http"
"path"
"strings"
"bytes"
"mime"
"path/filepath"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/contracts/ens"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/spancontext"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/storage/feed"
"github.com/ethersphere/swarm/storage/feed/lookup"
opentracing "github.com/opentracing/opentracing-go"
)
var (
apiResolveCount = metrics.NewRegisteredCounter("api.resolve.count", nil)
apiResolveFail = metrics.NewRegisteredCounter("api.resolve.fail", nil)
apiGetCount = metrics.NewRegisteredCounter("api.get.count", nil)
apiGetNotFound = metrics.NewRegisteredCounter("api.get.notfound", nil)
apiGetHTTP300 = metrics.NewRegisteredCounter("api.get.http.300", nil)
apiManifestUpdateCount = metrics.NewRegisteredCounter("api.manifestupdate.count", nil)
apiManifestUpdateFail = metrics.NewRegisteredCounter("api.manifestupdate.fail", nil)
apiManifestListCount = metrics.NewRegisteredCounter("api.manifestlist.count", nil)
apiManifestListFail = metrics.NewRegisteredCounter("api.manifestlist.fail", nil)
apiDeleteCount = metrics.NewRegisteredCounter("api.delete.count", nil)
apiDeleteFail = metrics.NewRegisteredCounter("api.delete.fail", nil)
apiGetTarCount = metrics.NewRegisteredCounter("api.gettar.count", nil)
apiGetTarFail = metrics.NewRegisteredCounter("api.gettar.fail", nil)
apiUploadTarCount = metrics.NewRegisteredCounter("api.uploadtar.count", nil)
apiUploadTarFail = metrics.NewRegisteredCounter("api.uploadtar.fail", nil)
apiModifyCount = metrics.NewRegisteredCounter("api.modify.count", nil)
apiModifyFail = metrics.NewRegisteredCounter("api.modify.fail", nil)
apiAddFileCount = metrics.NewRegisteredCounter("api.addfile.count", nil)
apiAddFileFail = metrics.NewRegisteredCounter("api.addfile.fail", nil)
apiRmFileCount = metrics.NewRegisteredCounter("api.removefile.count", nil)
apiRmFileFail = metrics.NewRegisteredCounter("api.removefile.fail", nil)
apiAppendFileCount = metrics.NewRegisteredCounter("api.appendfile.count", nil)
apiAppendFileFail = metrics.NewRegisteredCounter("api.appendfile.fail", nil)
apiGetInvalid = metrics.NewRegisteredCounter("api.get.invalid", nil)
)
// Resolver interface resolves a domain name to a hash using ENS
type Resolver interface {
Resolve(string) (common.Hash, error)
}
// ResolveValidator is used to validate the contained Resolver
type ResolveValidator interface {
Resolver
Owner(node [32]byte) (common.Address, error)
HeaderByNumber(context.Context, *big.Int) (*types.Header, error)
}
// NoResolverError is returned by MultiResolver.Resolve if no resolver
// can be found for the address.
type NoResolverError struct {
TLD string
}
// NewNoResolverError creates a NoResolverError for the given top level domain
func NewNoResolverError(tld string) *NoResolverError {
return &NoResolverError{TLD: tld}
}
// Error implements the error interface for NoResolverError
func (e *NoResolverError) Error() string {
if e.TLD == "" {
return "no ENS resolver"
}
return fmt.Sprintf("no ENS endpoint configured to resolve .%s TLD names", e.TLD)
}
// MultiResolver is used to resolve URL addresses based on their TLDs.
// Each TLD can have multiple resolvers, and the resolution from the
// first one in the sequence will be returned.
type MultiResolver struct {
resolvers map[string][]ResolveValidator
nameHash func(string) common.Hash
}
// MultiResolverOption sets options for MultiResolver and is used as
// arguments for its constructor.
type MultiResolverOption func(*MultiResolver)
// MultiResolverOptionWithResolver adds a Resolver to a list of resolvers
// for a specific TLD. If TLD is an empty string, the resolver will be added
// to the list of default resolvers, the ones that will be used for resolution
// of addresses which do not have their TLD resolver specified.
func MultiResolverOptionWithResolver(r ResolveValidator, tld string) MultiResolverOption {
return func(m *MultiResolver) {
m.resolvers[tld] = append(m.resolvers[tld], r)
}
}
// NewMultiResolver creates a new instance of MultiResolver.
func NewMultiResolver(opts ...MultiResolverOption) (m *MultiResolver) {
m = &MultiResolver{
resolvers: make(map[string][]ResolveValidator),
nameHash: ens.EnsNode,
}
for _, o := range opts {
o(m)
}
return m
}
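// A construction sketch for the options above (fakeResolver and
// newExampleMultiResolver are hypothetical helpers for illustration; real
// deployments wire ENS-backed resolvers in here):
type fakeResolver struct{ hash common.Hash }

func (f fakeResolver) Resolve(name string) (common.Hash, error) { return f.hash, nil }
func (f fakeResolver) Owner(node [32]byte) (common.Address, error) { return common.Address{}, nil }
func (f fakeResolver) HeaderByNumber(ctx context.Context, n *big.Int) (*types.Header, error) {
	return nil, nil
}

// newExampleMultiResolver shows the intended wiring: one default fallback
// resolver and one dedicated to ".eth" names.
func newExampleMultiResolver() *MultiResolver {
	return NewMultiResolver(
		MultiResolverOptionWithResolver(fakeResolver{}, ""),    // default fallback
		MultiResolverOptionWithResolver(fakeResolver{}, "eth"), // .eth names only
	)
}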
// Resolve resolves an address by choosing a Resolver by TLD.
// If multiple Resolvers are registered, either as defaults or for a specific
// TLD, the Hash from the first one that does not return an error
// will be returned.
func (m *MultiResolver) Resolve(addr string) (h common.Hash, err error) {
rs, err := m.getResolveValidator(addr)
if err != nil {
return h, err
}
for _, r := range rs {
h, err = r.Resolve(addr)
if err == nil {
return
}
}
return
}
// getResolveValidator uses the hostname to retrieve the resolver associated with the top level domain
func (m *MultiResolver) getResolveValidator(name string) ([]ResolveValidator, error) {
rs := m.resolvers[""]
tld := path.Ext(name)
if tld != "" {
tld = tld[1:]
rstld, ok := m.resolvers[tld]
if ok {
return rstld, nil
}
}
if len(rs) == 0 {
return rs, NewNoResolverError(tld)
}
return rs, nil
}
/*
API implements webserver/file system related content storage and retrieval
on top of the FileStore.
It is the public interface of the FileStore, which is included in the Ethereum stack.
*/
type API struct {
feed *feed.Handler
fileStore *storage.FileStore
dns Resolver
Tags *chunk.Tags
Decryptor func(context.Context, string) DecryptFunc
}
// NewAPI is the api constructor, initialising a new API instance.
func NewAPI(fileStore *storage.FileStore, dns Resolver, feedHandler *feed.Handler, pk *ecdsa.PrivateKey, tags *chunk.Tags) (self *API) {
self = &API{
fileStore: fileStore,
dns: dns,
feed: feedHandler,
Tags: tags,
Decryptor: func(ctx context.Context, credentials string) DecryptFunc {
return self.doDecrypt(ctx, credentials, pk)
},
}
return
}
// Retrieve FileStore reader API
func (a *API) Retrieve(ctx context.Context, addr storage.Address) (reader storage.LazySectionReader, isEncrypted bool) {
return a.fileStore.Retrieve(ctx, addr)
}
// Store wraps the Store API call of the embedded FileStore
func (a *API) Store(ctx context.Context, data io.Reader, size int64, toEncrypt bool) (addr storage.Address, wait func(ctx context.Context) error, err error) {
log.Debug("api.store", "size", size)
return a.fileStore.Store(ctx, data, size, toEncrypt)
}
// Resolve resolves a name into a content-addressed hash,
// where the address could be an ENS name or a content-addressed hash
func (a *API) Resolve(ctx context.Context, address string) (storage.Address, error) {
// if DNS is not configured, return an error
if a.dns == nil {
if hashMatcher.MatchString(address) {
return common.Hex2Bytes(address), nil
}
apiResolveFail.Inc(1)
return nil, fmt.Errorf("no DNS to resolve name: %q", address)
}
// try and resolve the address
resolved, err := a.dns.Resolve(address)
if err != nil {
if hashMatcher.MatchString(address) {
return common.Hex2Bytes(address), nil
}
return nil, err
}
return resolved[:], nil
}
// ResolveURI resolves a URI to an Address using the MultiResolver.
func (a *API) ResolveURI(ctx context.Context, uri *URI, credentials string) (storage.Address, error) {
apiResolveCount.Inc(1)
log.Trace("resolving", "uri", uri.Addr)
var sp opentracing.Span
ctx, sp = spancontext.StartSpan(
ctx,
"api.resolve")
defer sp.Finish()
// if the URI is immutable, check if the address looks like a hash
if uri.Immutable() {
key := uri.Address()
if key == nil {
return nil, fmt.Errorf("immutable address not a content hash: %q", uri.Addr)
}
return key, nil
}
addr, err := a.Resolve(ctx, uri.Addr)
if err != nil {
return nil, err
}
if uri.Path == "" {
return addr, nil
}
walker, err := a.NewManifestWalker(ctx, addr, a.Decryptor(ctx, credentials), nil)
if err != nil {
return nil, err
}
var entry *ManifestEntry
walker.Walk(func(e *ManifestEntry) error {
// if the entry matches the path, set entry and stop
// the walk
if e.Path == uri.Path {
entry = e
// return an error to cancel the walk
return errors.New("found")
}
// ignore non-manifest files
if e.ContentType != ManifestType {
return nil
}
// if the manifest's path is a prefix of the
// requested path, recurse into it by returning
// nil and continuing the walk
if strings.HasPrefix(uri.Path, e.Path) {
return nil
}
return ErrSkipManifest
})
if entry == nil {
return nil, errors.New("not found")
}
addr = storage.Address(common.Hex2Bytes(entry.Hash))
return addr, nil
}
// Get uses iterative manifest retrieval and prefix matching
// to resolve basePath to content using FileStore retrieve.
// It returns a section reader, mimeType, status, the key of the actual content and an error.
func (a *API) Get(ctx context.Context, decrypt DecryptFunc, manifestAddr storage.Address, path string) (reader storage.LazySectionReader, mimeType string, status int, contentAddr storage.Address, err error) {
log.Debug("api.get", "key", manifestAddr, "path", path)
apiGetCount.Inc(1)
trie, err := loadManifest(ctx, a.fileStore, manifestAddr, nil, decrypt)
if err != nil {
apiGetNotFound.Inc(1)
status = http.StatusNotFound
return nil, "", http.StatusNotFound, nil, err
}
log.Debug("trie getting entry", "key", manifestAddr, "path", path)
entry, _ := trie.getEntry(path)
if entry != nil {
log.Debug("trie got entry", "key", manifestAddr, "path", path, "entry.Hash", entry.Hash)
if entry.ContentType == ManifestType {
log.Debug("entry is manifest", "key", manifestAddr, "new key", entry.Hash)
adr, err := hex.DecodeString(entry.Hash)
if err != nil {
return nil, "", 0, nil, err
}
return a.Get(ctx, decrypt, adr, entry.Path)
}
// we need to do some extra work if this is a Swarm feed manifest
if entry.ContentType == FeedContentType {
if entry.Feed == nil {
return reader, mimeType, status, nil, fmt.Errorf("Cannot decode Feed in manifest")
}
_, err := a.feed.Lookup(ctx, feed.NewQueryLatest(entry.Feed, lookup.NoClue))
if err != nil {
apiGetNotFound.Inc(1)
status = http.StatusNotFound
log.Debug(fmt.Sprintf("get feed update content error: %v", err))
return reader, mimeType, status, nil, err
}
// get the data of the update
_, contentAddr, err := a.feed.GetContent(entry.Feed)
if err != nil {
apiGetNotFound.Inc(1)
status = http.StatusNotFound
log.Warn(fmt.Sprintf("get feed update content error: %v", err))
return reader, mimeType, status, nil, err
}
// extract content hash
if len(contentAddr) != storage.AddressLength {
apiGetInvalid.Inc(1)
status = http.StatusUnprocessableEntity
errorMessage := fmt.Sprintf("invalid swarm hash in feed update. Expected %d bytes. Got %d", storage.AddressLength, len(contentAddr))
log.Warn(errorMessage)
return reader, mimeType, status, nil, errors.New(errorMessage)
}
manifestAddr = storage.Address(contentAddr)
log.Trace("feed update contains swarm hash", "key", manifestAddr)
// get the manifest the swarm hash points to
trie, err := loadManifest(ctx, a.fileStore, manifestAddr, nil, NOOPDecrypt)
if err != nil {
apiGetNotFound.Inc(1)
status = http.StatusNotFound
log.Warn(fmt.Sprintf("loadManifestTrie (feed update) error: %v", err))
return reader, mimeType, status, nil, err
}
// finally, get the manifest entry
// it will always be the entry on path ""
entry, _ = trie.getEntry(path)
if entry == nil {
status = http.StatusNotFound
apiGetNotFound.Inc(1)
err = fmt.Errorf("manifest (feed update) entry for '%s' not found", path)
log.Trace("manifest (feed update) entry not found", "key", manifestAddr, "path", path)
return reader, mimeType, status, nil, err
}
}
// regardless of feed update manifests or normal manifests we will converge at this point
// get the key the manifest entry points to and serve it if it's unambiguous
contentAddr = common.Hex2Bytes(entry.Hash)
status = entry.Status
if status == http.StatusMultipleChoices {
apiGetHTTP300.Inc(1)
return nil, entry.ContentType, status, contentAddr, err
}
mimeType = entry.ContentType
log.Debug("content lookup key", "key", contentAddr, "mimetype", mimeType)
reader, _ = a.fileStore.Retrieve(ctx, contentAddr)
} else {
// no entry found
status = http.StatusNotFound
apiGetNotFound.Inc(1)
err = fmt.Errorf("Not found: could not find resource '%s'", path)
log.Trace("manifest entry not found", "key", contentAddr, "path", path)
}
return
}
func (a *API) Delete(ctx context.Context, addr string, path string) (storage.Address, error) {
apiDeleteCount.Inc(1)
uri, err := Parse("bzz:/" + addr)
if err != nil {
apiDeleteFail.Inc(1)
return nil, err
}
key, err := a.ResolveURI(ctx, uri, EmptyCredentials)
if err != nil {
return nil, err
}
newKey, err := a.UpdateManifest(ctx, key, func(mw *ManifestWriter) error {
log.Debug(fmt.Sprintf("removing %s from manifest %s", path, key.Log()))
return mw.RemoveEntry(path)
})
if err != nil {
apiDeleteFail.Inc(1)
return nil, err
}
return newKey, nil
}
// GetDirectoryTar fetches a requested directory as a tarstream.
// It returns an io.ReadCloser and an error. Do not forget to Close() the returned ReadCloser.
func (a *API) GetDirectoryTar(ctx context.Context, decrypt DecryptFunc, uri *URI) (io.ReadCloser, error) {
apiGetTarCount.Inc(1)
addr, err := a.Resolve(ctx, uri.Addr)
if err != nil {
return nil, err
}
walker, err := a.NewManifestWalker(ctx, addr, decrypt, nil)
if err != nil {
apiGetTarFail.Inc(1)
return nil, err
}
piper, pipew := io.Pipe()
tw := tar.NewWriter(pipew)
go func() {
err := walker.Walk(func(entry *ManifestEntry) error {
// ignore manifests (walk will recurse into them)
if entry.ContentType == ManifestType {
return nil
}
// retrieve the entry's key and size
reader, _ := a.Retrieve(ctx, storage.Address(common.Hex2Bytes(entry.Hash)))
size, err := reader.Size(ctx, nil)
if err != nil {
return err
}
// write a tar header for the entry
hdr := &tar.Header{
Name: entry.Path,
Mode: entry.Mode,
Size: size,
ModTime: entry.ModTime,
Xattrs: map[string]string{
"user.swarm.content-type": entry.ContentType,
},
}
if err := tw.WriteHeader(hdr); err != nil {
return err
}
// copy the file into the tar stream
n, err := io.Copy(tw, io.LimitReader(reader, hdr.Size))
if err != nil {
return err
} else if n != size {
return fmt.Errorf("error writing %s: expected %d bytes but sent %d", entry.Path, size, n)
}
return nil
})
// close tar writer before closing pipew
// to flush remaining data to pipew
// regardless of error value
tw.Close()
if err != nil {
apiGetTarFail.Inc(1)
pipew.CloseWithError(err)
} else {
pipew.Close()
}
}()
return piper, nil
}
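// Editor's sketch (not part of the original source): consuming the tar stream
// returned by GetDirectoryTar. `a`, `ctx` and `uri` stand for an initialized
// *API, a context and a parsed bzz URI; error handling is elided.
//
//   rc, err := a.GetDirectoryTar(ctx, NOOPDecrypt, uri)
//   if err != nil { /* handle */ }
//   defer rc.Close()
//   tr := tar.NewReader(rc)
//   for {
//       hdr, err := tr.Next()
//       if err == io.EOF {
//           break
//       } else if err != nil { /* handle */ }
//       log.Debug("tar entry", "name", hdr.Name, "size", hdr.Size)
//   }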
// GetManifestList lists the manifest entries for the specified address and prefix
// and returns them as a ManifestList
func (a *API) GetManifestList(ctx context.Context, decryptor DecryptFunc, addr storage.Address, prefix string) (list ManifestList, err error) {
apiManifestListCount.Inc(1)
walker, err := a.NewManifestWalker(ctx, addr, decryptor, nil)
if err != nil {
apiManifestListFail.Inc(1)
return ManifestList{}, err
}
err = walker.Walk(func(entry *ManifestEntry) error {
// handle non-manifest files
if entry.ContentType != ManifestType {
// ignore the file if it doesn't have the specified prefix
if !strings.HasPrefix(entry.Path, prefix) {
return nil
}
// if the path after the prefix contains a slash, add a
// common prefix to the list, otherwise add the entry
suffix := strings.TrimPrefix(entry.Path, prefix)
if index := strings.Index(suffix, "/"); index > -1 {
list.CommonPrefixes = append(list.CommonPrefixes, prefix+suffix[:index+1])
return nil
}
if entry.Path == "" {
entry.Path = "/"
}
list.Entries = append(list.Entries, entry)
return nil
}
// if the manifest's path is a prefix of the specified prefix
// then just recurse into the manifest by returning nil and
// continuing the walk
if strings.HasPrefix(prefix, entry.Path) {
return nil
}
// if the manifest's path has the specified prefix, then if the
// path after the prefix contains a slash, add a common prefix
// to the list and skip the manifest, otherwise recurse into
// the manifest by returning nil and continuing the walk
if strings.HasPrefix(entry.Path, prefix) {
suffix := strings.TrimPrefix(entry.Path, prefix)
if index := strings.Index(suffix, "/"); index > -1 {
list.CommonPrefixes = append(list.CommonPrefixes, prefix+suffix[:index+1])
return ErrSkipManifest
}
return nil
}
// the manifest neither has the prefix or needs recursing in to
// so just skip it
return ErrSkipManifest
})
if err != nil {
apiManifestListFail.Inc(1)
return ManifestList{}, err
}
return list, nil
}
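// Editor's sketch (not part of the original source): listing one directory
// level of a manifest, mirroring the prefix/common-prefix semantics above.
// `a`, `ctx` and `addr` are assumed; error handling is elided.
//
//   list, err := a.GetManifestList(ctx, NOOPDecrypt, addr, "dir1/")
//   if err != nil { /* handle */ }
//   for _, p := range list.CommonPrefixes {
//       log.Debug("subdirectory", "prefix", p)
//   }
//   for _, e := range list.Entries {
//       log.Debug("file", "path", e.Path, "size", e.Size)
//   }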
func (a *API) UpdateManifest(ctx context.Context, addr storage.Address, update func(mw *ManifestWriter) error) (storage.Address, error) {
apiManifestUpdateCount.Inc(1)
mw, err := a.NewManifestWriter(ctx, addr, nil)
if err != nil {
apiManifestUpdateFail.Inc(1)
return nil, err
}
if err := update(mw); err != nil {
apiManifestUpdateFail.Inc(1)
return nil, err
}
addr, err = mw.Store()
if err != nil {
apiManifestUpdateFail.Inc(1)
return nil, err
}
log.Debug(fmt.Sprintf("generated manifest %s", addr))
return addr, nil
}
// Modify loads manifest and checks the content hash before recalculating and storing the manifest.
func (a *API) Modify(ctx context.Context, addr storage.Address, path, contentHash, contentType string) (storage.Address, error) {
apiModifyCount.Inc(1)
quitC := make(chan bool)
trie, err := loadManifest(ctx, a.fileStore, addr, quitC, NOOPDecrypt)
if err != nil {
apiModifyFail.Inc(1)
return nil, err
}
if contentHash != "" {
entry := newManifestTrieEntry(&ManifestEntry{
Path: path,
ContentType: contentType,
}, nil)
entry.Hash = contentHash
trie.addEntry(entry, quitC)
} else {
trie.deleteEntry(path, quitC)
}
if err := trie.recalcAndStore(); err != nil {
apiModifyFail.Inc(1)
return nil, err
}
return trie.ref, nil
}
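// Editor's sketch (not part of the original source): a non-empty contentHash
// adds or replaces the entry at path, an empty one deletes it. The hash and
// paths below are illustrative only.
//
//   // point "index.html" at an already stored content hash
//   newAddr, err := a.Modify(ctx, addr, "index.html", contentHash, "text/html")
//
//   // remove "old.txt" from the manifest
//   newAddr, err = a.Modify(ctx, newAddr, "old.txt", "", "")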
// AddFile creates a new manifest entry, adds it to swarm, then adds a file to swarm.
func (a *API) AddFile(ctx context.Context, mhash, path, fname string, content []byte, nameresolver bool) (storage.Address, string, error) {
apiAddFileCount.Inc(1)
uri, err := Parse("bzz:/" + mhash)
if err != nil {
apiAddFileFail.Inc(1)
return nil, "", err
}
mkey, err := a.ResolveURI(ctx, uri, EmptyCredentials)
if err != nil {
apiAddFileFail.Inc(1)
return nil, "", err
}
// trim the root dir we added
if path[:1] == "/" {
path = path[1:]
}
entry := &ManifestEntry{
Path: filepath.Join(path, fname),
ContentType: mime.TypeByExtension(filepath.Ext(fname)),
Mode: 0700,
Size: int64(len(content)),
ModTime: time.Now(),
}
mw, err := a.NewManifestWriter(ctx, mkey, nil)
if err != nil {
apiAddFileFail.Inc(1)
return nil, "", err
}
fkey, err := mw.AddEntry(ctx, bytes.NewReader(content), entry)
if err != nil {
apiAddFileFail.Inc(1)
return nil, "", err
}
newMkey, err := mw.Store()
if err != nil {
apiAddFileFail.Inc(1)
return nil, "", err
}
return fkey, newMkey.String(), nil
}
func (a *API) UploadTar(ctx context.Context, bodyReader io.ReadCloser, manifestPath, defaultPath string, mw *ManifestWriter) (storage.Address, error) {
apiUploadTarCount.Inc(1)
var contentKey storage.Address
tr := tar.NewReader(bodyReader)
defer bodyReader.Close()
var defaultPathFound bool
for {
hdr, err := tr.Next()
if err == io.EOF {
break
} else if err != nil {
apiUploadTarFail.Inc(1)
return nil, fmt.Errorf("error reading tar stream: %s", err)
}
// only store regular files
if !hdr.FileInfo().Mode().IsRegular() {
continue
}
// add the entry under the path from the request
manifestPath := path.Join(manifestPath, hdr.Name)
contentType := hdr.Xattrs["user.swarm.content-type"]
if contentType == "" {
contentType = mime.TypeByExtension(filepath.Ext(hdr.Name))
}
entry := &ManifestEntry{
Path: manifestPath,
ContentType: contentType,
Mode: hdr.Mode,
Size: hdr.Size,
ModTime: hdr.ModTime,
}
contentKey, err = mw.AddEntry(ctx, tr, entry)
if err != nil {
apiUploadTarFail.Inc(1)
return nil, fmt.Errorf("error adding manifest entry from tar stream: %s", err)
}
if hdr.Name == defaultPath {
contentType := hdr.Xattrs["user.swarm.content-type"]
if contentType == "" {
contentType = mime.TypeByExtension(filepath.Ext(hdr.Name))
}
entry := &ManifestEntry{
Hash: contentKey.Hex(),
Path: "", // default entry
ContentType: contentType,
Mode: hdr.Mode,
Size: hdr.Size,
ModTime: hdr.ModTime,
}
contentKey, err = mw.AddEntry(ctx, nil, entry)
if err != nil {
apiUploadTarFail.Inc(1)
return nil, fmt.Errorf("error adding default manifest entry from tar stream: %s", err)
}
defaultPathFound = true
}
}
if defaultPath != "" && !defaultPathFound {
return contentKey, fmt.Errorf("default path %q not found", defaultPath)
}
return contentKey, nil
}
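// Editor's sketch (not part of the original source): producing a tar stream
// that UploadTar accepts, including the content-type xattr it inspects.
// `a`, `ctx` and the *ManifestWriter `mw` are assumed; errors are elided.
//
//   pr, pw := io.Pipe()
//   go func() {
//       tw := tar.NewWriter(pw)
//       tw.WriteHeader(&tar.Header{
//           Name:   "hello.txt",
//           Mode:   0644,
//           Size:   5, // len("hello")
//           Xattrs: map[string]string{"user.swarm.content-type": "text/plain"},
//       })
//       tw.Write([]byte("hello"))
//       pw.CloseWithError(tw.Close())
//   }()
//   addr, err := a.UploadTar(ctx, pr, "", "", mw)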
// RemoveFile removes a file entry in a manifest.
func (a *API) RemoveFile(ctx context.Context, mhash string, path string, fname string, nameresolver bool) (string, error) {
apiRmFileCount.Inc(1)
uri, err := Parse("bzz:/" + mhash)
if err != nil {
apiRmFileFail.Inc(1)
return "", err
}
mkey, err := a.ResolveURI(ctx, uri, EmptyCredentials)
if err != nil {
apiRmFileFail.Inc(1)
return "", err
}
// trim the root dir we added
if path[:1] == "/" {
path = path[1:]
}
mw, err := a.NewManifestWriter(ctx, mkey, nil)
if err != nil {
apiRmFileFail.Inc(1)
return "", err
}
err = mw.RemoveEntry(filepath.Join(path, fname))
if err != nil {
apiRmFileFail.Inc(1)
return "", err
}
newMkey, err := mw.Store()
if err != nil {
apiRmFileFail.Inc(1)
return "", err
}
return newMkey.String(), nil
}
// AppendFile removes old manifest, appends file entry to new manifest and adds it to Swarm.
func (a *API) AppendFile(ctx context.Context, mhash, path, fname string, existingSize int64, content []byte, oldAddr storage.Address, offset int64, addSize int64, nameresolver bool) (storage.Address, string, error) {
apiAppendFileCount.Inc(1)
buffSize := offset + addSize
if buffSize < existingSize {
buffSize = existingSize
}
buf := make([]byte, buffSize)
oldReader, _ := a.Retrieve(ctx, oldAddr)
io.ReadAtLeast(oldReader, buf, int(offset))
newReader := bytes.NewReader(content)
io.ReadAtLeast(newReader, buf[offset:], int(addSize))
// if the old file extends past the appended region, seek past the
// overwritten bytes and copy the remainder of the old content
if existingSize > offset+addSize {
oldReader.Seek(offset+addSize, 0)
io.ReadAtLeast(oldReader, buf[offset+addSize:], int(existingSize-(offset+addSize)))
}
combinedReader := bytes.NewReader(buf)
totalSize := int64(len(buf))
// TODO(jmozah): to append using pyramid chunker when it is ready
//oldReader := a.Retrieve(oldKey)
//newReader := bytes.NewReader(content)
//combinedReader := io.MultiReader(oldReader, newReader)
uri, err := Parse("bzz:/" + mhash)
if err != nil {
apiAppendFileFail.Inc(1)
return nil, "", err
}
mkey, err := a.ResolveURI(ctx, uri, EmptyCredentials)
if err != nil {
apiAppendFileFail.Inc(1)
return nil, "", err
}
// trim the root dir we added
if path[:1] == "/" {
path = path[1:]
}
mw, err := a.NewManifestWriter(ctx, mkey, nil)
if err != nil {
apiAppendFileFail.Inc(1)
return nil, "", err
}
err = mw.RemoveEntry(filepath.Join(path, fname))
if err != nil {
apiAppendFileFail.Inc(1)
return nil, "", err
}
entry := &ManifestEntry{
Path: filepath.Join(path, fname),
ContentType: mime.TypeByExtension(filepath.Ext(fname)),
Mode: 0700,
Size: totalSize,
ModTime: time.Now(),
}
fkey, err := mw.AddEntry(ctx, io.Reader(combinedReader), entry)
if err != nil {
apiAppendFileFail.Inc(1)
return nil, "", err
}
newMkey, err := mw.Store()
if err != nil {
apiAppendFileFail.Inc(1)
return nil, "", err
}
return fkey, newMkey.String(), nil
}
// BuildDirectoryTree is used by swarmfs_unix
func (a *API) BuildDirectoryTree(ctx context.Context, mhash string, nameresolver bool) (addr storage.Address, manifestEntryMap map[string]*manifestTrieEntry, err error) {
uri, err := Parse("bzz:/" + mhash)
if err != nil {
return nil, nil, err
}
addr, err = a.Resolve(ctx, uri.Addr)
if err != nil {
return nil, nil, err
}
quitC := make(chan bool)
rootTrie, err := loadManifest(ctx, a.fileStore, addr, quitC, NOOPDecrypt)
if err != nil {
return nil, nil, fmt.Errorf("can't load manifest %v: %v", addr.String(), err)
}
manifestEntryMap = map[string]*manifestTrieEntry{}
err = rootTrie.listWithPrefix(uri.Path, quitC, func(entry *manifestTrieEntry, suffix string) {
manifestEntryMap[suffix] = entry
})
if err != nil {
return nil, nil, fmt.Errorf("list with prefix failed %v: %v", addr.String(), err)
}
return addr, manifestEntryMap, nil
}
// FeedsLookup finds Swarm feeds updates at specific points in time, or the latest update
func (a *API) FeedsLookup(ctx context.Context, query *feed.Query) ([]byte, error) {
_, err := a.feed.Lookup(ctx, query)
if err != nil {
return nil, err
}
var data []byte
_, data, err = a.feed.GetContent(&query.Feed)
if err != nil {
return nil, err
}
return data, nil
}
// FeedsNewRequest creates a Request object to update a specific feed
func (a *API) FeedsNewRequest(ctx context.Context, feed *feed.Feed) (*feed.Request, error) {
return a.feed.NewRequest(ctx, feed)
}
// FeedsUpdate publishes a new update on the given feed
func (a *API) FeedsUpdate(ctx context.Context, request *feed.Request) (storage.Address, error) {
return a.feed.Update(ctx, request)
}
// ErrCannotLoadFeedManifest is returned when looking up a feed's manifest fails
var ErrCannotLoadFeedManifest = errors.New("Cannot load feed manifest")
// ErrNotAFeedManifest is returned when the address provided returned something other than a valid manifest
var ErrNotAFeedManifest = errors.New("Not a feed manifest")
// ResolveFeedManifest retrieves the Swarm feed manifest for the given address, and returns the referenced Feed.
func (a *API) ResolveFeedManifest(ctx context.Context, addr storage.Address) (*feed.Feed, error) {
trie, err := loadManifest(ctx, a.fileStore, addr, nil, NOOPDecrypt)
if err != nil {
return nil, ErrCannotLoadFeedManifest
}
entry, _ := trie.getEntry("")
// guard against a missing root entry before checking its content type
if entry == nil || entry.ContentType != FeedContentType {
return nil, ErrNotAFeedManifest
}
return entry.Feed, nil
}
// ErrCannotResolveFeedURI is returned when the ENS resolver is not able to translate a name to a Swarm feed
var ErrCannotResolveFeedURI = errors.New("Cannot resolve Feed URI")
// ErrCannotResolveFeed is returned when values provided are not enough or invalid to recreate a
// feed out of them.
var ErrCannotResolveFeed = errors.New("Cannot resolve Feed")
// ResolveFeed attempts to extract feed information out of the manifest, if provided
// If not, it attempts to extract the feed out of a set of key-value pairs
func (a *API) ResolveFeed(ctx context.Context, uri *URI, values feed.Values) (*feed.Feed, error) {
var fd *feed.Feed
var err error
if uri.Addr != "" {
// resolve the content key.
manifestAddr := uri.Address()
if manifestAddr == nil {
manifestAddr, err = a.Resolve(ctx, uri.Addr)
if err != nil {
return nil, ErrCannotResolveFeedURI
}
}
// get the Swarm feed from the manifest
fd, err = a.ResolveFeedManifest(ctx, manifestAddr)
if err != nil {
return nil, err
}
log.Debug("handle.get.feed: resolved", "manifestkey", manifestAddr, "feed", fd.Hex())
} else {
var f feed.Feed
if err := f.FromValues(values); err != nil {
return nil, ErrCannotResolveFeed
}
fd = &f
}
return fd, nil
}
// MimeOctetStream is the default value of the http Content-Type header
const MimeOctetStream = "application/octet-stream"
// DetectContentType detects the content type by file extension, or falls back to sniffing the content
func DetectContentType(fileName string, f io.ReadSeeker) (string, error) {
ctype := mime.TypeByExtension(filepath.Ext(fileName))
if ctype != "" {
return ctype, nil
}
// remember the current position so it can be restored after sniffing the content
currentPosition, err := f.Seek(0, io.SeekCurrent)
if err != nil {
return MimeOctetStream, fmt.Errorf("seeker can't seek, %s", err)
}
// read a chunk to decide between utf-8 text and binary
var buf [512]byte
n, _ := f.Read(buf[:])
ctype = http.DetectContentType(buf[:n])
_, err = f.Seek(currentPosition, io.SeekStart) // rewind to output whole file
if err != nil {
return MimeOctetStream, fmt.Errorf("seeker can't seek, %s", err)
}
return ctype, nil
}
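// Editor's sketch (not part of the original source): DetectContentType
// prefers the file extension and only sniffs content when the extension is
// unknown; the reader's position is restored afterwards. Illustrative only.
//
//   f, _ := os.Open("notes") // no extension, so the first 512 bytes are sniffed
//   defer f.Close()
//   ctype, err := DetectContentType(f.Name(), f)
//   // plain text content yields e.g. "text/plain; charset=utf-8"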

api/api_test.go
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package api
import (
"bytes"
"context"
crand "crypto/rand"
"errors"
"flag"
"fmt"
"io"
"io/ioutil"
"math/big"
"os"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/sctx"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/testutil"
)
func init() {
loglevel := flag.Int("loglevel", 2, "loglevel")
flag.Parse()
log.Root().SetHandler(log.CallerFileHandler(log.LvlFilterHandler(log.Lvl(*loglevel), log.StreamHandler(os.Stderr, log.TerminalFormat(true)))))
}
func testAPI(t *testing.T, f func(*API, *chunk.Tags, bool)) {
for _, v := range []bool{true, false} {
datadir, err := ioutil.TempDir("", "bzz-test")
if err != nil {
t.Fatalf("unable to create temp dir: %v", err)
}
defer os.RemoveAll(datadir)
tags := chunk.NewTags()
fileStore, cleanup, err := storage.NewLocalFileStore(datadir, make([]byte, 32), tags)
if err != nil {
t.Fatalf("unable to create file store: %v", err)
}
defer cleanup()
api := NewAPI(fileStore, nil, nil, nil, tags)
f(api, tags, v)
}
}
type testResponse struct {
reader storage.LazySectionReader
*Response
}
type Response struct {
MimeType string
Status int
Size int64
Content string
}
func checkResponse(t *testing.T, resp *testResponse, exp *Response) {
if resp.MimeType != exp.MimeType {
t.Errorf("incorrect mimeType. expected '%s', got '%s'", exp.MimeType, resp.MimeType)
}
if resp.Status != exp.Status {
t.Errorf("incorrect status. expected '%d', got '%d'", exp.Status, resp.Status)
}
if resp.Size != exp.Size {
t.Errorf("incorrect size. expected '%d', got '%d'", exp.Size, resp.Size)
}
if resp.reader != nil {
content := make([]byte, resp.Size)
read, _ := resp.reader.Read(content)
if int64(read) != exp.Size {
t.Errorf("incorrect content length. expected '%d...', got '%d...'", read, exp.Size)
}
resp.Content = string(content)
}
if resp.Content != exp.Content {
// if !bytes.Equal(resp.Content, exp.Content)
t.Errorf("incorrect content. expected '%s...', got '%s...'", string(exp.Content), string(resp.Content))
}
}
// func expResponse(content []byte, mimeType string, status int) *Response {
func expResponse(content string, mimeType string, status int) *Response {
log.Trace(fmt.Sprintf("expected content (%v): %v ", len(content), content))
return &Response{mimeType, status, int64(len(content)), content}
}
func testGet(t *testing.T, api *API, bzzhash, path string) *testResponse {
addr := storage.Address(common.Hex2Bytes(bzzhash))
reader, mimeType, status, _, err := api.Get(context.TODO(), NOOPDecrypt, addr, path)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
quitC := make(chan bool)
size, err := reader.Size(context.TODO(), quitC)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
log.Trace(fmt.Sprintf("reader size: %v ", size))
s := make([]byte, size)
_, err = reader.Read(s)
if err != io.EOF {
t.Fatalf("unexpected error: %v", err)
}
reader.Seek(0, 0)
return &testResponse{reader, &Response{mimeType, status, size, string(s)}}
}
func TestApiPut(t *testing.T) {
testAPI(t, func(api *API, tags *chunk.Tags, toEncrypt bool) {
content := "hello"
exp := expResponse(content, "text/plain", 0)
ctx := context.TODO()
addr, wait, err := putString(ctx, api, content, exp.MimeType, toEncrypt)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
err = wait(ctx)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
resp := testGet(t, api, addr.Hex(), "")
checkResponse(t, resp, exp)
tag := tags.All()[0]
testutil.CheckTag(t, tag, 2, 2, 0, 2) //1 chunk data, 1 chunk manifest
})
}
// TestApiTagLarge tests that the number of chunks counted is larger for a larger input
func TestApiTagLarge(t *testing.T) {
const contentLength = 4096 * 4095
testAPI(t, func(api *API, tags *chunk.Tags, toEncrypt bool) {
randomContentReader := io.LimitReader(crand.Reader, int64(contentLength))
tag, err := api.Tags.New("unnamed-tag", 0)
if err != nil {
t.Fatal(err)
}
ctx := sctx.SetTag(context.Background(), tag.Uid)
key, waitContent, err := api.Store(ctx, randomContentReader, int64(contentLength), toEncrypt)
if err != nil {
t.Fatal(err)
}
err = waitContent(ctx)
if err != nil {
t.Fatal(err)
}
tag.DoneSplit(key)
if toEncrypt {
tag := tags.All()[0]
expect := int64(4095 + 64 + 1)
testutil.CheckTag(t, tag, expect, expect, 0, expect)
} else {
tag := tags.All()[0]
expect := int64(4095 + 32 + 1)
testutil.CheckTag(t, tag, expect, expect, 0, expect)
}
})
}
// testResolveValidator implements the Resolver interface and either returns the given
// hash if it is set, or returns a "name not found" error
type testResolveValidator struct {
hash *common.Hash
}
func newTestResolveValidator(addr string) *testResolveValidator {
r := &testResolveValidator{}
if addr != "" {
hash := common.HexToHash(addr)
r.hash = &hash
}
return r
}
func (t *testResolveValidator) Resolve(addr string) (common.Hash, error) {
if t.hash == nil {
return common.Hash{}, fmt.Errorf("DNS name not found: %q", addr)
}
return *t.hash, nil
}
func (t *testResolveValidator) Owner(node [32]byte) (addr common.Address, err error) {
return
}
func (t *testResolveValidator) HeaderByNumber(context.Context, *big.Int) (header *types.Header, err error) {
return
}
// TestAPIResolve tests resolving URIs which can either contain content hashes
// or ENS names
func TestAPIResolve(t *testing.T) {
ensAddr := "swarm.eth"
hashAddr := "1111111111111111111111111111111111111111111111111111111111111111"
resolvedAddr := "2222222222222222222222222222222222222222222222222222222222222222"
doesResolve := newTestResolveValidator(resolvedAddr)
doesntResolve := newTestResolveValidator("")
type test struct {
desc string
dns Resolver
addr string
immutable bool
result string
expectErr error
}
tests := []*test{
{
desc: "DNS not configured, hash address, returns hash address",
dns: nil,
addr: hashAddr,
result: hashAddr,
},
{
desc: "DNS not configured, ENS address, returns error",
dns: nil,
addr: ensAddr,
expectErr: errors.New(`no DNS to resolve name: "swarm.eth"`),
},
{
desc: "DNS configured, hash address, hash resolves, returns resolved address",
dns: doesResolve,
addr: hashAddr,
result: resolvedAddr,
},
{
desc: "DNS configured, immutable hash address, hash resolves, returns hash address",
dns: doesResolve,
addr: hashAddr,
immutable: true,
result: hashAddr,
},
{
desc: "DNS configured, hash address, hash doesn't resolve, returns hash address",
dns: doesntResolve,
addr: hashAddr,
result: hashAddr,
},
{
desc: "DNS configured, ENS address, name resolves, returns resolved address",
dns: doesResolve,
addr: ensAddr,
result: resolvedAddr,
},
{
desc: "DNS configured, immutable ENS address, name resolves, returns error",
dns: doesResolve,
addr: ensAddr,
immutable: true,
expectErr: errors.New(`immutable address not a content hash: "swarm.eth"`),
},
{
desc: "DNS configured, ENS address, name doesn't resolve, returns error",
dns: doesntResolve,
addr: ensAddr,
expectErr: errors.New(`DNS name not found: "swarm.eth"`),
},
}
for _, x := range tests {
t.Run(x.desc, func(t *testing.T) {
api := &API{dns: x.dns}
uri := &URI{Addr: x.addr, Scheme: "bzz"}
if x.immutable {
uri.Scheme = "bzz-immutable"
}
res, err := api.ResolveURI(context.TODO(), uri, "")
if err == nil {
if x.expectErr != nil {
t.Fatalf("expected error %q, got result %q", x.expectErr, res)
}
if res.String() != x.result {
t.Fatalf("expected result %q, got %q", x.result, res)
}
} else {
if x.expectErr == nil {
t.Fatalf("expected no error, got %q", err)
}
if err.Error() != x.expectErr.Error() {
t.Fatalf("expected error %q, got %q", x.expectErr, err)
}
}
})
}
}
func TestMultiResolver(t *testing.T) {
doesntResolve := newTestResolveValidator("")
ethAddr := "swarm.eth"
ethHash := "0x2222222222222222222222222222222222222222222222222222222222222222"
ethResolve := newTestResolveValidator(ethHash)
testAddr := "swarm.test"
testHash := "0x1111111111111111111111111111111111111111111111111111111111111111"
testResolve := newTestResolveValidator(testHash)
tests := []struct {
desc string
r Resolver
addr string
result string
err error
}{
{
desc: "No resolvers, returns error",
r: NewMultiResolver(),
err: NewNoResolverError(""),
},
{
desc: "One default resolver, returns resolved address",
r: NewMultiResolver(MultiResolverOptionWithResolver(ethResolve, "")),
addr: ethAddr,
result: ethHash,
},
{
desc: "Two default resolvers, returns resolved address",
r: NewMultiResolver(
MultiResolverOptionWithResolver(ethResolve, ""),
MultiResolverOptionWithResolver(ethResolve, ""),
),
addr: ethAddr,
result: ethHash,
},
{
desc: "Two default resolvers, first doesn't resolve, returns resolved address",
r: NewMultiResolver(
MultiResolverOptionWithResolver(doesntResolve, ""),
MultiResolverOptionWithResolver(ethResolve, ""),
),
addr: ethAddr,
result: ethHash,
},
{
desc: "Default resolver doesn't resolve, tld resolver resolve, returns resolved address",
r: NewMultiResolver(
MultiResolverOptionWithResolver(doesntResolve, ""),
MultiResolverOptionWithResolver(ethResolve, "eth"),
),
addr: ethAddr,
result: ethHash,
},
{
desc: "Three TLD resolvers, third resolves, returns resolved address",
r: NewMultiResolver(
MultiResolverOptionWithResolver(doesntResolve, "eth"),
MultiResolverOptionWithResolver(doesntResolve, "eth"),
MultiResolverOptionWithResolver(ethResolve, "eth"),
),
addr: ethAddr,
result: ethHash,
},
{
desc: "One TLD resolver doesn't resolve, returns error",
r: NewMultiResolver(
MultiResolverOptionWithResolver(doesntResolve, ""),
MultiResolverOptionWithResolver(ethResolve, "eth"),
),
addr: ethAddr,
result: ethHash,
},
{
desc: "One defautl and one TLD resolver, all doesn't resolve, returns error",
r: NewMultiResolver(
MultiResolverOptionWithResolver(doesntResolve, ""),
MultiResolverOptionWithResolver(doesntResolve, "eth"),
),
addr: ethAddr,
result: ethHash,
err: errors.New(`DNS name not found: "swarm.eth"`),
},
{
desc: "Two TLD resolvers, both resolve, returns resolved address",
r: NewMultiResolver(
MultiResolverOptionWithResolver(ethResolve, "eth"),
MultiResolverOptionWithResolver(testResolve, "test"),
),
addr: testAddr,
result: testHash,
},
{
desc: "One TLD resolver, no default resolver, returns error for different TLD",
r: NewMultiResolver(
MultiResolverOptionWithResolver(ethResolve, "eth"),
),
addr: testAddr,
err: NewNoResolverError("test"),
},
}
for _, x := range tests {
t.Run(x.desc, func(t *testing.T) {
res, err := x.r.Resolve(x.addr)
if err == nil {
if x.err != nil {
t.Fatalf("expected error %q, got result %q", x.err, res.Hex())
}
if res.Hex() != x.result {
t.Fatalf("expected result %q, got %q", x.result, res.Hex())
}
} else {
if x.err == nil {
t.Fatalf("expected no error, got %q", err)
}
if err.Error() != x.err.Error() {
t.Fatalf("expected error %q, got %q", x.err, err)
}
}
})
}
}
func TestDecryptOriginForbidden(t *testing.T) {
ctx := context.TODO()
ctx = sctx.SetHost(ctx, "swarm-gateways.net")
me := &ManifestEntry{
Access: &AccessEntry{Type: AccessTypePass},
}
api := NewAPI(nil, nil, nil, nil, chunk.NewTags())
f := api.Decryptor(ctx, "")
err := f(me)
if err != ErrDecryptDomainForbidden {
t.Fatalf("should fail with ErrDecryptDomainForbidden, got %v", err)
}
}
func TestDecryptOrigin(t *testing.T) {
for _, v := range []struct {
host string
expectError error
}{
{
host: "localhost",
expectError: ErrDecrypt,
},
{
host: "127.0.0.1",
expectError: ErrDecrypt,
},
{
host: "swarm-gateways.net",
expectError: ErrDecryptDomainForbidden,
},
} {
ctx := context.TODO()
ctx = sctx.SetHost(ctx, v.host)
me := &ManifestEntry{
Access: &AccessEntry{Type: AccessTypePass},
}
api := NewAPI(nil, nil, nil, nil, chunk.NewTags())
f := api.Decryptor(ctx, "")
err := f(me)
if err != v.expectError {
t.Fatalf("should fail with %v, got %v", v.expectError, err)
}
}
}
func TestDetectContentType(t *testing.T) {
for _, tc := range []struct {
file string
content string
expectedContentType string
}{
{
file: "file-with-correct-css.css",
content: "body {background-color: orange}",
expectedContentType: "text/css; charset=utf-8",
},
{
file: "empty-file.css",
content: "",
expectedContentType: "text/css; charset=utf-8",
},
{
file: "empty-file.pdf",
content: "",
expectedContentType: "application/pdf",
},
{
file: "empty-file.md",
content: "",
expectedContentType: "text/markdown; charset=utf-8",
},
{
file: "empty-file-with-unknown-content.strangeext",
content: "",
expectedContentType: "text/plain; charset=utf-8",
},
{
file: "file-with-unknown-extension-and-content.strangeext",
content: "Lorem Ipsum",
expectedContentType: "text/plain; charset=utf-8",
},
{
file: "file-no-extension",
content: "Lorem Ipsum",
expectedContentType: "text/plain; charset=utf-8",
},
{
file: "file-no-extension-no-content",
content: "",
expectedContentType: "text/plain; charset=utf-8",
},
{
file: "css-file-with-html-inside.css",
content: "<!doctype html><html><head></head><body></body></html>",
expectedContentType: "text/css; charset=utf-8",
},
} {
t.Run(tc.file, func(t *testing.T) {
detected, err := DetectContentType(tc.file, bytes.NewReader([]byte(tc.content)))
if err != nil {
t.Fatal(err)
}
if detected != tc.expectedContentType {
t.Fatalf("File: %s, Expected mime type %s, got %s", tc.file, tc.expectedContentType, detected)
}
})
}
}
// putString provides singleton manifest creation on top of api.API
func putString(ctx context.Context, a *API, content string, contentType string, toEncrypt bool) (k storage.Address, wait func(context.Context) error, err error) {
r := strings.NewReader(content)
tag, err := a.Tags.New("unnamed-tag", 0)
if err != nil {
return nil, nil, err
}
log.Trace("created new tag", "uid", tag.Uid)
cCtx := sctx.SetTag(ctx, tag.Uid)
key, waitContent, err := a.Store(cCtx, r, int64(len(content)), toEncrypt)
if err != nil {
return nil, nil, err
}
manifest := fmt.Sprintf(`{"entries":[{"hash":"%v","contentType":"%s"}]}`, key, contentType)
r = strings.NewReader(manifest)
key, waitManifest, err := a.Store(cCtx, r, int64(len(manifest)), toEncrypt)
if err != nil {
return nil, nil, err
}
tag.DoneSplit(key)
return key, func(ctx context.Context) error {
err := waitContent(ctx)
if err != nil {
return err
}
return waitManifest(ctx)
}, nil
}

api/client/client.go
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package client
import (
"archive/tar"
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"mime/multipart"
"net/http"
"net/http/httptrace"
"net/textproto"
"net/url"
"os"
"path/filepath"
"regexp"
"strconv"
"strings"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethersphere/swarm/api"
swarmhttp "github.com/ethersphere/swarm/api/http"
"github.com/ethersphere/swarm/spancontext"
"github.com/ethersphere/swarm/storage/feed"
"github.com/pborman/uuid"
)
var (
ErrUnauthorized = errors.New("unauthorized")
)
func NewClient(gateway string) *Client {
return &Client{
Gateway: gateway,
}
}
// Client wraps interaction with a swarm HTTP gateway.
type Client struct {
Gateway string
}
// UploadRaw uploads raw data to swarm and returns the resulting hash. If toEncrypt is true it
// uploads encrypted data
func (c *Client) UploadRaw(r io.Reader, size int64, toEncrypt bool) (string, error) {
if size <= 0 {
return "", errors.New("data size must be greater than zero")
}
addr := ""
if toEncrypt {
addr = "encrypt"
}
req, err := http.NewRequest("POST", c.Gateway+"/bzz-raw:/"+addr, r)
if err != nil {
return "", err
}
req.ContentLength = size
req.Header.Set(swarmhttp.SwarmTagHeaderName, fmt.Sprintf("raw_upload_%d", time.Now().Unix()))
res, err := http.DefaultClient.Do(req)
if err != nil {
return "", err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return "", fmt.Errorf("unexpected HTTP status: %s", res.Status)
}
data, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", err
}
return string(data), nil
}
// DownloadRaw downloads raw data from swarm, returning a ReadCloser and a bool
// indicating whether the content was encrypted
func (c *Client) DownloadRaw(hash string) (io.ReadCloser, bool, error) {
uri := c.Gateway + "/bzz-raw:/" + hash
res, err := http.DefaultClient.Get(uri)
if err != nil {
return nil, false, err
}
if res.StatusCode != http.StatusOK {
res.Body.Close()
return nil, false, fmt.Errorf("unexpected HTTP status: %s", res.Status)
}
isEncrypted := (res.Header.Get("X-Decrypted") == "true")
return res.Body, isEncrypted, nil
}
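// Editor's sketch (not part of the original source): a raw upload/download
// round trip. The gateway URL is an assumption; error handling is elided.
//
//   c := NewClient("http://localhost:8500")
//   data := []byte("hello swarm")
//   hash, err := c.UploadRaw(bytes.NewReader(data), int64(len(data)), false)
//   rc, encrypted, err := c.DownloadRaw(hash)
//   defer rc.Close()
//   got, err := ioutil.ReadAll(rc)
//   // got now equals data and encrypted is false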
// File represents a file in a swarm manifest and is used for uploading and
// downloading content to and from swarm
type File struct {
io.ReadCloser
api.ManifestEntry
Tag string
}
// Open opens a local file which can then be passed to client.Upload to upload
// it to swarm
func Open(path string) (*File, error) {
f, err := os.Open(path)
if err != nil {
return nil, err
}
stat, err := f.Stat()
if err != nil {
f.Close()
return nil, err
}
contentType, err := api.DetectContentType(f.Name(), f)
if err != nil {
return nil, err
}
return &File{
ReadCloser: f,
ManifestEntry: api.ManifestEntry{
ContentType: contentType,
Mode: int64(stat.Mode()),
Size: stat.Size(),
ModTime: stat.ModTime(),
},
Tag: filepath.Base(path),
}, nil
}
// Upload uploads a file to swarm and either adds it to an existing manifest
// (if the manifest argument is non-empty) or creates a new manifest containing
// the file, returning the resulting manifest hash (the file will then be
// available at bzz:/<hash>/<path>)
func (c *Client) Upload(file *File, manifest string, toEncrypt bool) (string, error) {
if file.Size <= 0 {
return "", errors.New("file size must be greater than zero")
}
return c.TarUpload(manifest, &FileUploader{file}, "", toEncrypt)
}
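// Editor's sketch (not part of the original source): uploading a single local
// file into a fresh manifest. The path is illustrative; errors are elided.
//
//   file, err := Open("/tmp/hello.txt")
//   defer file.Close()
//   file.Path = "hello.txt" // path of the entry inside the manifest
//   hash, err := c.Upload(file, "", false)
//   // the file is now served at bzz:/<hash>/hello.txt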
// Download downloads a file with the given path from the swarm manifest with
// the given hash (i.e. it gets bzz:/<hash>/<path>)
func (c *Client) Download(hash, path string) (*File, error) {
uri := c.Gateway + "/bzz:/" + hash + "/" + path
res, err := http.DefaultClient.Get(uri)
if err != nil {
return nil, err
}
if res.StatusCode != http.StatusOK {
res.Body.Close()
return nil, fmt.Errorf("unexpected HTTP status: %s", res.Status)
}
return &File{
ReadCloser: res.Body,
ManifestEntry: api.ManifestEntry{
ContentType: res.Header.Get("Content-Type"),
Size: res.ContentLength,
},
}, nil
}
// UploadDirectory uploads a directory tree to swarm and either adds the files
// to an existing manifest (if the manifest argument is non-empty) or creates a
// new manifest, returning the resulting manifest hash (files from the
// directory will then be available at bzz:/<hash>/path/to/file), with
// the file specified in defaultPath being uploaded to the root of the manifest
// (i.e. bzz:/<hash>/)
func (c *Client) UploadDirectory(dir, defaultPath, manifest string, toEncrypt bool) (string, error) {
stat, err := os.Stat(dir)
if err != nil {
return "", err
} else if !stat.IsDir() {
return "", fmt.Errorf("not a directory: %s", dir)
}
if defaultPath != "" {
if _, err := os.Stat(filepath.Join(dir, defaultPath)); err != nil {
if os.IsNotExist(err) {
return "", fmt.Errorf("the default path %q was not found in the upload directory %q", defaultPath, dir)
}
return "", fmt.Errorf("default path: %v", err)
}
}
return c.TarUpload(manifest, &DirectoryUploader{dir}, defaultPath, toEncrypt)
}
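// Editor's sketch (not part of the original source): uploading a directory
// with index.html served at the manifest root. Paths are illustrative.
//
//   hash, err := c.UploadDirectory("./site", "index.html", "", false)
//   // each file is at bzz:/<hash>/<relative path>; bzz:/<hash>/ serves index.html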
// DownloadDirectory downloads the files contained in a swarm manifest under
// the given path into a local directory (existing files will be overwritten)
func (c *Client) DownloadDirectory(hash, path, destDir, credentials string) error {
stat, err := os.Stat(destDir)
if err != nil {
return err
} else if !stat.IsDir() {
return fmt.Errorf("not a directory: %s", destDir)
}
uri := c.Gateway + "/bzz:/" + hash + "/" + path
req, err := http.NewRequest("GET", uri, nil)
if err != nil {
return err
}
if credentials != "" {
req.SetBasicAuth("", credentials)
}
req.Header.Set("Accept", "application/x-tar")
res, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusOK:
case http.StatusUnauthorized:
return ErrUnauthorized
default:
return fmt.Errorf("unexpected HTTP status: %s", res.Status)
}
tr := tar.NewReader(res.Body)
for {
hdr, err := tr.Next()
if err == io.EOF {
return nil
} else if err != nil {
return err
}
// ignore the default path file
if hdr.Name == "" {
continue
}
dstPath := filepath.Join(destDir, filepath.Clean(strings.TrimPrefix(hdr.Name, path)))
if err := os.MkdirAll(filepath.Dir(dstPath), 0755); err != nil {
return err
}
var mode os.FileMode = 0644
if hdr.Mode > 0 {
mode = os.FileMode(hdr.Mode)
}
dst, err := os.OpenFile(dstPath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, mode)
if err != nil {
return err
}
n, err := io.Copy(dst, tr)
dst.Close()
if err != nil {
return err
} else if n != hdr.Size {
return fmt.Errorf("expected %s to be %d bytes but got %d", hdr.Name, hdr.Size, n)
}
}
}
// DownloadFile downloads a single file into the destination directory.
// If the manifest entry does not specify a file name, it falls back
// to the hash of the file as the filename
func (c *Client) DownloadFile(hash, path, dest, credentials string) error {
hasDestinationFilename := false
if stat, err := os.Stat(dest); err == nil {
hasDestinationFilename = !stat.IsDir()
} else {
if os.IsNotExist(err) {
// does not exist - should be created
hasDestinationFilename = true
} else {
return fmt.Errorf("could not stat path: %v", err)
}
}
manifestList, err := c.List(hash, path, credentials)
if err != nil {
return err
}
switch len(manifestList.Entries) {
case 0:
return fmt.Errorf("could not find path requested at manifest address. make sure the path you've specified is correct")
case 1:
//continue
default:
return fmt.Errorf("got too many matches for this path")
}
uri := c.Gateway + "/bzz:/" + hash + "/" + path
req, err := http.NewRequest("GET", uri, nil)
if err != nil {
return err
}
if credentials != "" {
req.SetBasicAuth("", credentials)
}
res, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusOK:
case http.StatusUnauthorized:
return ErrUnauthorized
default:
return fmt.Errorf("unexpected HTTP status: expected 200 OK, got %d", res.StatusCode)
}
filename := ""
if hasDestinationFilename {
filename = dest
} else {
// try to infer the filename from the requested path
re := regexp.MustCompile("[^/]+$") //everything after last slash
if results := re.FindAllString(path, -1); len(results) > 0 {
filename = results[len(results)-1]
} else {
if entry := manifestList.Entries[0]; entry.Path != "" && entry.Path != "/" {
filename = entry.Path
} else {
// assume hash as name if there's nothing from the command line
filename = hash
}
}
filename = filepath.Join(dest, filename)
}
filePath, err := filepath.Abs(filename)
if err != nil {
return err
}
if err := os.MkdirAll(filepath.Dir(filePath), 0777); err != nil {
return err
}
dst, err := os.Create(filename)
if err != nil {
return err
}
defer dst.Close()
_, err = io.Copy(dst, res.Body)
return err
}
// UploadManifest uploads the given manifest to swarm
func (c *Client) UploadManifest(m *api.Manifest, toEncrypt bool) (string, error) {
data, err := json.Marshal(m)
if err != nil {
return "", err
}
return c.UploadRaw(bytes.NewReader(data), int64(len(data)), toEncrypt)
}
// DownloadManifest downloads a swarm manifest
func (c *Client) DownloadManifest(hash string) (*api.Manifest, bool, error) {
res, isEncrypted, err := c.DownloadRaw(hash)
if err != nil {
return nil, isEncrypted, err
}
defer res.Close()
var manifest api.Manifest
if err := json.NewDecoder(res).Decode(&manifest); err != nil {
return nil, isEncrypted, err
}
return &manifest, isEncrypted, nil
}
// List lists files in a swarm manifest which have the given prefix, grouping
// common prefixes using "/" as a delimiter.
//
// For example, if the manifest represents the following directory structure:
//
// file1.txt
// file2.txt
// dir1/file3.txt
// dir1/dir2/file4.txt
//
// Then:
//
// - a prefix of "" would return [dir1/, file1.txt, file2.txt]
// - a prefix of "file" would return [file1.txt, file2.txt]
// - a prefix of "dir1/" would return [dir1/dir2/, dir1/file3.txt]
//
// where entries ending with "/" are common prefixes.
func (c *Client) List(hash, prefix, credentials string) (*api.ManifestList, error) {
req, err := http.NewRequest(http.MethodGet, c.Gateway+"/bzz-list:/"+hash+"/"+prefix, nil)
if err != nil {
return nil, err
}
if credentials != "" {
req.SetBasicAuth("", credentials)
}
res, err := http.DefaultClient.Do(req)
if err != nil {
return nil, err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusOK:
case http.StatusUnauthorized:
return nil, ErrUnauthorized
default:
return nil, fmt.Errorf("unexpected HTTP status: %s", res.Status)
}
var list api.ManifestList
if err := json.NewDecoder(res.Body).Decode(&list); err != nil {
return nil, err
}
return &list, nil
}
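// Editor's sketch (not part of the original source): walking a manifest one
// level at a time via the common prefixes returned by List; errors elided.
//
//   list, err := c.List(hash, "", "")
//   for _, prefix := range list.CommonPrefixes {
//       sub, err := c.List(hash, prefix, "") // descend one level
//       _ = sub
//   }
//   for _, entry := range list.Entries {
//       fmt.Println(entry.Path, entry.Size)
//   }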
// Uploader uploads files to swarm using a provided UploadFn
type Uploader interface {
Upload(UploadFn) error
Tag() string
}
type UploaderFunc func(UploadFn) error
func (u UploaderFunc) Upload(upload UploadFn) error {
return u(upload)
}
func (u UploaderFunc) Tag() string {
return fmt.Sprintf("multipart_upload_%d", time.Now().Unix())
}
// DirectoryUploader implements Uploader
var _ Uploader = &DirectoryUploader{}
// DirectoryUploader uploads all files in a directory, optionally uploading
// a file to the default path
type DirectoryUploader struct {
Dir string
}
func (d *DirectoryUploader) Tag() string {
return filepath.Base(d.Dir)
}
// Upload performs the upload of the directory and default path
func (d *DirectoryUploader) Upload(upload UploadFn) error {
return filepath.Walk(d.Dir, func(path string, f os.FileInfo, err error) error {
if err != nil {
return err
}
if f.IsDir() {
return nil
}
file, err := Open(path)
if err != nil {
return err
}
relPath, err := filepath.Rel(d.Dir, path)
if err != nil {
return err
}
file.Path = filepath.ToSlash(relPath)
return upload(file)
})
}
var _ Uploader = &FileUploader{}
// FileUploader uploads a single file
type FileUploader struct {
File *File
}
func (f *FileUploader) Tag() string {
return f.File.Tag
}
// Upload performs the upload of the file
func (f *FileUploader) Upload(upload UploadFn) error {
return upload(f.File)
}
// UploadFn is the type of function passed to an Uploader to perform the upload
// of a single file (for example, a directory uploader would call a provided
// UploadFn for each file in the directory tree)
type UploadFn func(file *File) error
// TarUpload uses the given Uploader to upload files to swarm as a tar stream,
// returning the resulting manifest hash
func (c *Client) TarUpload(hash string, uploader Uploader, defaultPath string, toEncrypt bool) (string, error) {
ctx, sp := spancontext.StartSpan(context.Background(), "api.client.tarupload")
defer sp.Finish()
var tn time.Time
reqR, reqW := io.Pipe()
defer reqR.Close()
addr := hash
// If there is a hash already (a manifest), then that manifest will determine if the upload has
// to be encrypted or not. If there is no manifest then the toEncrypt parameter decides if
// there is encryption or not.
if hash == "" && toEncrypt {
// This is the built-in address for the encrypted upload endpoint
addr = "encrypt"
}
req, err := http.NewRequest("POST", c.Gateway+"/bzz:/"+addr, reqR)
if err != nil {
return "", err
}
trace := GetClientTrace("swarm api client - upload tar", "api.client.uploadtar", uuid.New()[:8], &tn)
req = req.WithContext(httptrace.WithClientTrace(ctx, trace))
transport := http.DefaultTransport
req.Header.Set("Content-Type", "application/x-tar")
if defaultPath != "" {
q := req.URL.Query()
q.Set("defaultpath", defaultPath)
req.URL.RawQuery = q.Encode()
}
tag := uploader.Tag()
if tag == "" {
tag = "unnamed_tag_" + fmt.Sprintf("%d", time.Now().Unix())
}
log.Trace("setting upload tag", "tag", tag)
req.Header.Set(swarmhttp.SwarmTagHeaderName, tag)
// use 'Expect: 100-continue' so we don't send the request body if
// the server refuses the request
req.Header.Set("Expect", "100-continue")
tw := tar.NewWriter(reqW)
// define an UploadFn which adds files to the tar stream
uploadFn := func(file *File) error {
hdr := &tar.Header{
Name: file.Path,
Mode: file.Mode,
Size: file.Size,
ModTime: file.ModTime,
Xattrs: map[string]string{
"user.swarm.content-type": file.ContentType,
},
}
if err := tw.WriteHeader(hdr); err != nil {
return err
}
_, err = io.Copy(tw, file)
return err
}
// run the upload in a goroutine so we can send the request headers and
// wait for a '100 Continue' response before sending the tar stream
go func() {
err := uploader.Upload(uploadFn)
if err == nil {
err = tw.Close()
}
reqW.CloseWithError(err)
}()
tn = time.Now()
res, err := transport.RoundTrip(req)
if err != nil {
return "", err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return "", fmt.Errorf("unexpected HTTP status: %s", res.Status)
}
data, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", err
}
return string(data), nil
}
// MultipartUpload uses the given Uploader to upload files to swarm as a
// multipart form, returning the resulting manifest hash
func (c *Client) MultipartUpload(hash string, uploader Uploader) (string, error) {
reqR, reqW := io.Pipe()
defer reqR.Close()
req, err := http.NewRequest("POST", c.Gateway+"/bzz:/"+hash, reqR)
if err != nil {
return "", err
}
// use 'Expect: 100-continue' so we don't send the request body if
// the server refuses the request
req.Header.Set("Expect", "100-continue")
mw := multipart.NewWriter(reqW)
req.Header.Set("Content-Type", fmt.Sprintf("multipart/form-data; boundary=%q", mw.Boundary()))
req.Header.Set(swarmhttp.SwarmTagHeaderName, fmt.Sprintf("multipart_upload_%d", time.Now().Unix()))
// define an UploadFn which adds files to the multipart form
uploadFn := func(file *File) error {
hdr := make(textproto.MIMEHeader)
hdr.Set("Content-Disposition", fmt.Sprintf("form-data; name=%q", file.Path))
hdr.Set("Content-Type", file.ContentType)
hdr.Set("Content-Length", strconv.FormatInt(file.Size, 10))
w, err := mw.CreatePart(hdr)
if err != nil {
return err
}
_, err = io.Copy(w, file)
return err
}
// run the upload in a goroutine so we can send the request headers and
// wait for a '100 Continue' response before sending the multipart form
go func() {
err := uploader.Upload(uploadFn)
if err == nil {
err = mw.Close()
}
reqW.CloseWithError(err)
}()
res, err := http.DefaultClient.Do(req)
if err != nil {
return "", err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return "", fmt.Errorf("unexpected HTTP status: %s", res.Status)
}
data, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", err
}
return string(data), nil
}
// ErrNoFeedUpdatesFound is returned when Swarm cannot find updates of the given feed
var ErrNoFeedUpdatesFound = errors.New("No updates found for this feed")
// CreateFeedWithManifest creates a feed manifest, initializing it with the provided
// data
// Returns the resulting feed manifest address, which you can include in an ENS Resolver (setContent)
// or reference future updates (Client.UpdateFeed)
func (c *Client) CreateFeedWithManifest(request *feed.Request) (string, error) {
responseStream, err := c.updateFeed(request, true)
if err != nil {
return "", err
}
defer responseStream.Close()
body, err := ioutil.ReadAll(responseStream)
if err != nil {
return "", err
}
var manifestAddress string
if err = json.Unmarshal(body, &manifestAddress); err != nil {
return "", err
}
return manifestAddress, nil
}
// UpdateFeed allows you to set a new version of your content
func (c *Client) UpdateFeed(request *feed.Request) error {
_, err := c.updateFeed(request, false)
return err
}
func (c *Client) updateFeed(request *feed.Request, createManifest bool) (io.ReadCloser, error) {
URL, err := url.Parse(c.Gateway)
if err != nil {
return nil, err
}
URL.Path = "/bzz-feed:/"
values := URL.Query()
body := request.AppendValues(values)
if createManifest {
values.Set("manifest", "1")
}
URL.RawQuery = values.Encode()
req, err := http.NewRequest("POST", URL.String(), bytes.NewBuffer(body))
if err != nil {
return nil, err
}
res, err := http.DefaultClient.Do(req)
if err != nil {
return nil, err
}
return res.Body, nil
}
// QueryFeed returns a byte stream with the raw content of the feed update
// manifestAddressOrDomain is the address you obtained in CreateFeedWithManifest or an ENS domain whose Resolver
// points to that address
func (c *Client) QueryFeed(query *feed.Query, manifestAddressOrDomain string) (io.ReadCloser, error) {
return c.queryFeed(query, manifestAddressOrDomain, false)
}
// queryFeed returns a byte stream with the raw content of the feed update
// manifestAddressOrDomain is the address you obtained in CreateFeedWithManifest or an ENS domain whose Resolver
// points to that address
// meta set to true instructs the node to return feed metainformation instead
func (c *Client) queryFeed(query *feed.Query, manifestAddressOrDomain string, meta bool) (io.ReadCloser, error) {
URL, err := url.Parse(c.Gateway)
if err != nil {
return nil, err
}
URL.Path = "/bzz-feed:/" + manifestAddressOrDomain
values := URL.Query()
if query != nil {
query.AppendValues(values) //adds query parameters
}
if meta {
values.Set("meta", "1")
}
URL.RawQuery = values.Encode()
res, err := http.Get(URL.String())
if err != nil {
return nil, err
}
if res.StatusCode != http.StatusOK {
if res.StatusCode == http.StatusNotFound {
return nil, ErrNoFeedUpdatesFound
}
errorMessageBytes, err := ioutil.ReadAll(res.Body)
var errorMessage string
if err != nil {
errorMessage = "cannot retrieve error message: " + err.Error()
} else {
errorMessage = string(errorMessageBytes)
}
return nil, fmt.Errorf("Error retrieving feed updates: %s", errorMessage)
}
return res.Body, nil
}
// GetFeedRequest returns a structure that describes the referenced feed status
// manifestAddressOrDomain is the address you obtained in CreateFeedWithManifest or an ENS domain whose Resolver
// points to that address
func (c *Client) GetFeedRequest(query *feed.Query, manifestAddressOrDomain string) (*feed.Request, error) {
responseStream, err := c.queryFeed(query, manifestAddressOrDomain, true)
if err != nil {
return nil, err
}
defer responseStream.Close()
body, err := ioutil.ReadAll(responseStream)
if err != nil {
return nil, err
}
var metadata feed.Request
if err := metadata.UnmarshalJSON(body); err != nil {
return nil, err
}
return &metadata, nil
}
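// Editor's sketch (not part of the original source): the typical feed flow of
// this client API - create a feed manifest with a first signed update, then
// publish and read further updates. `signer` and `topic` are assumptions and
// signing and error handling are elided.
//
//   req := feed.NewFirstRequest(topic) // first update of a new feed
//   req.SetData([]byte("update 1"))
//   req.Sign(signer)
//   manifestAddr, err := c.CreateFeedWithManifest(req)
//
//   // later: fetch the current template, attach new data, sign and publish
//   req, err = c.GetFeedRequest(nil, manifestAddr)
//   req.SetData([]byte("update 2"))
//   req.Sign(signer)
//   err = c.UpdateFeed(req)
//
//   // read the latest update
//   rc, err := c.QueryFeed(nil, manifestAddr)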
func GetClientTrace(traceMsg, metricPrefix, ruid string, tn *time.Time) *httptrace.ClientTrace {
trace := &httptrace.ClientTrace{
GetConn: func(_ string) {
log.Trace(traceMsg+" - http get", "event", "GetConn", "ruid", ruid)
metrics.GetOrRegisterResettingTimer(metricPrefix+".getconn", nil).Update(time.Since(*tn))
},
GotConn: func(_ httptrace.GotConnInfo) {
log.Trace(traceMsg+" - http get", "event", "GotConn", "ruid", ruid)
metrics.GetOrRegisterResettingTimer(metricPrefix+".gotconn", nil).Update(time.Since(*tn))
},
PutIdleConn: func(err error) {
log.Trace(traceMsg+" - http get", "event", "PutIdleConn", "ruid", ruid, "err", err)
metrics.GetOrRegisterResettingTimer(metricPrefix+".putidle", nil).Update(time.Since(*tn))
},
GotFirstResponseByte: func() {
log.Trace(traceMsg+" - http get", "event", "GotFirstResponseByte", "ruid", ruid)
metrics.GetOrRegisterResettingTimer(metricPrefix+".firstbyte", nil).Update(time.Since(*tn))
},
Got100Continue: func() {
log.Trace(traceMsg, "event", "Got100Continue", "ruid", ruid)
metrics.GetOrRegisterResettingTimer(metricPrefix+".got100continue", nil).Update(time.Since(*tn))
},
DNSStart: func(_ httptrace.DNSStartInfo) {
log.Trace(traceMsg, "event", "DNSStart", "ruid", ruid)
metrics.GetOrRegisterResettingTimer(metricPrefix+".dnsstart", nil).Update(time.Since(*tn))
},
DNSDone: func(_ httptrace.DNSDoneInfo) {
log.Trace(traceMsg, "event", "DNSDone", "ruid", ruid)
metrics.GetOrRegisterResettingTimer(metricPrefix+".dnsdone", nil).Update(time.Since(*tn))
},
ConnectStart: func(network, addr string) {
log.Trace(traceMsg, "event", "ConnectStart", "ruid", ruid, "network", network, "addr", addr)
metrics.GetOrRegisterResettingTimer(metricPrefix+".connectstart", nil).Update(time.Since(*tn))
},
ConnectDone: func(network, addr string, err error) {
log.Trace(traceMsg, "event", "ConnectDone", "ruid", ruid, "network", network, "addr", addr, "err", err)
metrics.GetOrRegisterResettingTimer(metricPrefix+".connectdone", nil).Update(time.Since(*tn))
},
WroteHeaders: func() {
log.Trace(traceMsg, "event", "WroteHeaders(request)", "ruid", ruid)
metrics.GetOrRegisterResettingTimer(metricPrefix+".wroteheaders", nil).Update(time.Since(*tn))
},
Wait100Continue: func() {
log.Trace(traceMsg, "event", "Wait100Continue", "ruid", ruid)
metrics.GetOrRegisterResettingTimer(metricPrefix+".wait100continue", nil).Update(time.Since(*tn))
},
WroteRequest: func(_ httptrace.WroteRequestInfo) {
log.Trace(traceMsg, "event", "WroteRequest", "ruid", ruid)
metrics.GetOrRegisterResettingTimer(metricPrefix+".wroterequest", nil).Update(time.Since(*tn))
},
}
return trace
}

api/client/client_test.go
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package client
import (
"bytes"
"io/ioutil"
"os"
"path/filepath"
"reflect"
"sort"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethersphere/swarm/api"
swarmhttp "github.com/ethersphere/swarm/api/http"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/storage/feed"
"github.com/ethersphere/swarm/storage/feed/lookup"
"github.com/ethersphere/swarm/testutil"
)
func serverFunc(api *api.API) swarmhttp.TestServer {
return swarmhttp.NewServer(api, "")
}
// TestClientUploadDownloadRaw tests uploading and downloading raw data to swarm
func TestClientUploadDownloadRaw(t *testing.T) {
testClientUploadDownloadRaw(false, t)
}
func TestClientUploadDownloadRawEncrypted(t *testing.T) {
if testutil.RaceEnabled {
t.Skip("flaky with -race on Travis")
// See: https://github.com/ethersphere/go-ethereum/issues/1254
}
testClientUploadDownloadRaw(true, t)
}
func testClientUploadDownloadRaw(toEncrypt bool, t *testing.T) {
srv := swarmhttp.NewTestSwarmServer(t, serverFunc, nil)
defer srv.Close()
client := NewClient(srv.URL)
// upload some raw data
data := []byte("foo123")
hash, err := client.UploadRaw(bytes.NewReader(data), int64(len(data)), toEncrypt)
if err != nil {
t.Fatal(err)
}
// check the tag was created successfully
tag := srv.Tags.All()[0]
testutil.CheckTag(t, tag, 1, 1, 0, 1)
// check we can download the same data
res, isEncrypted, err := client.DownloadRaw(hash)
if err != nil {
t.Fatal(err)
}
if isEncrypted != toEncrypt {
t.Fatalf("Expected encyption status %v got %v", toEncrypt, isEncrypted)
}
defer res.Close()
gotData, err := ioutil.ReadAll(res)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(gotData, data) {
t.Fatalf("expected downloaded data to be %q, got %q", data, gotData)
}
}
// TestClientUploadDownloadFiles tests uploading and downloading files to swarm
// manifests
func TestClientUploadDownloadFiles(t *testing.T) {
testClientUploadDownloadFiles(false, t)
}
func TestClientUploadDownloadFilesEncrypted(t *testing.T) {
testClientUploadDownloadFiles(true, t)
}
func testClientUploadDownloadFiles(toEncrypt bool, t *testing.T) {
srv := swarmhttp.NewTestSwarmServer(t, serverFunc, nil)
defer srv.Close()
client := NewClient(srv.URL)
upload := func(manifest, path string, data []byte) string {
file := &File{
ReadCloser: ioutil.NopCloser(bytes.NewReader(data)),
ManifestEntry: api.ManifestEntry{
Path: path,
ContentType: "text/plain",
Size: int64(len(data)),
},
}
hash, err := client.Upload(file, manifest, toEncrypt)
if err != nil {
t.Fatal(err)
}
return hash
}
checkDownload := func(manifest, path string, expected []byte) {
file, err := client.Download(manifest, path)
if err != nil {
t.Fatal(err)
}
defer file.Close()
if file.Size != int64(len(expected)) {
t.Fatalf("expected downloaded file to be %d bytes, got %d", len(expected), file.Size)
}
if file.ContentType != "text/plain" {
t.Fatalf("expected downloaded file to have type %q, got %q", "text/plain", file.ContentType)
}
data, err := ioutil.ReadAll(file)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(data, expected) {
t.Fatalf("expected downloaded data to be %q, got %q", expected, data)
}
}
// upload a file to the root of a manifest
rootData := []byte("some-data")
rootHash := upload("", "", rootData)
// check we can download the root file
checkDownload(rootHash, "", rootData)
// upload another file to the same manifest
otherData := []byte("some-other-data")
newHash := upload(rootHash, "some/other/path", otherData)
// check we can download both files from the new manifest
checkDownload(newHash, "", rootData)
checkDownload(newHash, "some/other/path", otherData)
// replace the root file with different data
newHash = upload(newHash, "", otherData)
// check both files have the other data
checkDownload(newHash, "", otherData)
checkDownload(newHash, "some/other/path", otherData)
}
var testDirFiles = []string{
"file1.txt",
"file2.txt",
"dir1/file3.txt",
"dir1/file4.txt",
"dir2/file5.txt",
"dir2/dir3/file6.txt",
"dir2/dir4/file7.txt",
"dir2/dir4/file8.txt",
}
func newTestDirectory(t *testing.T) string {
dir, err := ioutil.TempDir("", "swarm-client-test")
if err != nil {
t.Fatal(err)
}
for _, file := range testDirFiles {
path := filepath.Join(dir, file)
if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil {
os.RemoveAll(dir)
t.Fatalf("error creating dir for %s: %s", path, err)
}
if err := ioutil.WriteFile(path, []byte(file), 0644); err != nil {
os.RemoveAll(dir)
t.Fatalf("error writing file %s: %s", path, err)
}
}
return dir
}
// TestClientUploadDownloadDirectory tests uploading and downloading a
// directory of files to a swarm manifest
func TestClientUploadDownloadDirectory(t *testing.T) {
srv := swarmhttp.NewTestSwarmServer(t, serverFunc, nil)
defer srv.Close()
dir := newTestDirectory(t)
defer os.RemoveAll(dir)
// upload the directory
client := NewClient(srv.URL)
defaultPath := testDirFiles[0]
hash, err := client.UploadDirectory(dir, defaultPath, "", false)
if err != nil {
t.Fatalf("error uploading directory: %s", err)
}
// check the tag was created successfully
tag := srv.Tags.All()[0]
testutil.CheckTag(t, tag, 9, 9, 0, 9)
// check we can download the individual files
checkDownloadFile := func(path string, expected []byte) {
file, err := client.Download(hash, path)
if err != nil {
t.Fatal(err)
}
defer file.Close()
data, err := ioutil.ReadAll(file)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(data, expected) {
t.Fatalf("expected data to be %q, got %q", expected, data)
}
}
for _, file := range testDirFiles {
checkDownloadFile(file, []byte(file))
}
// check we can download the default path
checkDownloadFile("", []byte(testDirFiles[0]))
// check we can download the directory
tmp, err := ioutil.TempDir("", "swarm-client-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmp)
if err := client.DownloadDirectory(hash, "", tmp, ""); err != nil {
t.Fatal(err)
}
for _, file := range testDirFiles {
data, err := ioutil.ReadFile(filepath.Join(tmp, file))
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(data, []byte(file)) {
t.Fatalf("expected data to be %q, got %q", file, data)
}
}
}
// TestClientFileList tests listing files in a swarm manifest
func TestClientFileList(t *testing.T) {
testClientFileList(false, t)
}
func TestClientFileListEncrypted(t *testing.T) {
testClientFileList(true, t)
}
func testClientFileList(toEncrypt bool, t *testing.T) {
srv := swarmhttp.NewTestSwarmServer(t, serverFunc, nil)
defer srv.Close()
dir := newTestDirectory(t)
defer os.RemoveAll(dir)
client := NewClient(srv.URL)
hash, err := client.UploadDirectory(dir, "", "", toEncrypt)
if err != nil {
t.Fatalf("error uploading directory: %s", err)
}
ls := func(prefix string) []string {
list, err := client.List(hash, prefix, "")
if err != nil {
t.Fatal(err)
}
paths := make([]string, 0, len(list.CommonPrefixes)+len(list.Entries))
paths = append(paths, list.CommonPrefixes...)
for _, entry := range list.Entries {
paths = append(paths, entry.Path)
}
sort.Strings(paths)
return paths
}
tests := map[string][]string{
"": {"dir1/", "dir2/", "file1.txt", "file2.txt"},
"file": {"file1.txt", "file2.txt"},
"file1": {"file1.txt"},
"file2.txt": {"file2.txt"},
"file12": {},
"dir": {"dir1/", "dir2/"},
"dir1": {"dir1/"},
"dir1/": {"dir1/file3.txt", "dir1/file4.txt"},
"dir1/file": {"dir1/file3.txt", "dir1/file4.txt"},
"dir1/file3.txt": {"dir1/file3.txt"},
"dir1/file34": {},
"dir2/": {"dir2/dir3/", "dir2/dir4/", "dir2/file5.txt"},
"dir2/file": {"dir2/file5.txt"},
"dir2/dir": {"dir2/dir3/", "dir2/dir4/"},
"dir2/dir3/": {"dir2/dir3/file6.txt"},
"dir2/dir4/": {"dir2/dir4/file7.txt", "dir2/dir4/file8.txt"},
"dir2/dir4/file": {"dir2/dir4/file7.txt", "dir2/dir4/file8.txt"},
"dir2/dir4/file7.txt": {"dir2/dir4/file7.txt"},
"dir2/dir4/file78": {},
}
for prefix, expected := range tests {
actual := ls(prefix)
if !reflect.DeepEqual(actual, expected) {
t.Fatalf("expected prefix %q to return %v, got %v", prefix, expected, actual)
}
}
}
// TestClientMultipartUpload tests uploading files to swarm using a multipart
// upload
func TestClientMultipartUpload(t *testing.T) {
srv := swarmhttp.NewTestSwarmServer(t, serverFunc, nil)
defer srv.Close()
// define an uploader which uploads testDirFiles with some data
// note: this test should result in SEEN chunks; assert accordingly
data := []byte("some-data")
uploader := UploaderFunc(func(upload UploadFn) error {
for _, name := range testDirFiles {
file := &File{
ReadCloser: ioutil.NopCloser(bytes.NewReader(data)),
ManifestEntry: api.ManifestEntry{
Path: name,
ContentType: "text/plain",
Size: int64(len(data)),
},
}
if err := upload(file); err != nil {
return err
}
}
return nil
})
// upload the files as a multipart upload
client := NewClient(srv.URL)
hash, err := client.MultipartUpload("", uploader)
if err != nil {
t.Fatal(err)
}
// check the tag was created successfully
tag := srv.Tags.All()[0]
testutil.CheckTag(t, tag, 9, 9, 7, 9)
// check we can download the individual files
checkDownloadFile := func(path string) {
file, err := client.Download(hash, path)
if err != nil {
t.Fatal(err)
}
defer file.Close()
gotData, err := ioutil.ReadAll(file)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(gotData, data) {
t.Fatalf("expected data to be %q, got %q", data, gotData)
}
}
for _, file := range testDirFiles {
checkDownloadFile(file)
}
}
func newTestSigner() (*feed.GenericSigner, error) {
privKey, err := crypto.HexToECDSA("deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef")
if err != nil {
return nil, err
}
return feed.NewGenericSigner(privKey), nil
}
// Test the transparent resolving of feed updates with the bzz:// scheme.
//
// First upload data to bzz: and store the Swarm hash of the resulting manifest in a feed update.
// This effectively uses a feed to store a pointer to content rather than the content itself.
// Retrieving the update with the Swarm hash should return the manifest pointing directly to the data,
// and a raw retrieve of that hash should return the data.
func TestClientBzzWithFeed(t *testing.T) {
signer, _ := newTestSigner()
// Initialize a Swarm test server
srv := swarmhttp.NewTestSwarmServer(t, serverFunc, nil)
swarmClient := NewClient(srv.URL)
defer srv.Close()
// put together some data for our test:
dataBytes := []byte(`
//
// Create some data our manifest will point to. Data that could be very big and wouldn't fit in a feed update.
// So what we are going to do is upload it to Swarm bzz:// and obtain a **manifest hash** pointing to it:
//
// MANIFEST HASH --> DATA
//
// Then, we store that **manifest hash** into a Swarm Feed update. Once we have done this,
// we can use the **feed manifest hash** in bzz:// instead, this way: bzz://feed-manifest-hash.
//
// FEED MANIFEST HASH --> MANIFEST HASH --> DATA
//
// Given that we can update the feed at any time with a new **manifest hash** but the **feed manifest hash**
// stays constant, we have effectively created a fixed address to changing content. (Applause)
//
// FEED MANIFEST HASH (the same) --> MANIFEST HASH(2) --> DATA(2)
//
`)
// Create a virtual File out of memory containing the above data
f := &File{
ReadCloser: ioutil.NopCloser(bytes.NewReader(dataBytes)),
ManifestEntry: api.ManifestEntry{
ContentType: "text/plain",
Mode: 0660,
Size: int64(len(dataBytes)),
},
}
// upload data to bzz:// and retrieve the content-addressed manifest hash, hex-encoded.
manifestAddressHex, err := swarmClient.Upload(f, "", false)
if err != nil {
t.Fatalf("Error creating manifest: %s", err)
}
// convert the hex-encoded manifest hash to a 32-byte slice
manifestAddress := common.FromHex(manifestAddressHex)
if len(manifestAddress) != storage.AddressLength {
t.Fatalf("Something went wrong. Got a hash of an unexpected length. Expected %d bytes. Got %d", storage.AddressLength, len(manifestAddress))
}
// Now create a **feed manifest**. For that, we need a topic:
topic, _ := feed.NewTopic("interesting topic indeed", nil)
// Build a feed request to update data
request := feed.NewFirstRequest(topic)
// Put the 32-byte address of the manifest into the feed update
request.SetData(manifestAddress)
// Sign the update
if err := request.Sign(signer); err != nil {
t.Fatalf("Error signing update: %s", err)
}
// Publish the update and at the same time request a **feed manifest** to be created
feedManifestAddressHex, err := swarmClient.CreateFeedWithManifest(request)
if err != nil {
t.Fatalf("Error creating feed manifest: %s", err)
}
// Check we have received the exact **feed manifest** to be expected
// given the topic and user signing the updates:
correctFeedManifestAddrHex := "747c402e5b9dc715a25a4393147512167bab018a007fad7cdcd9adc7fce1ced2"
if feedManifestAddressHex != correctFeedManifestAddrHex {
t.Fatalf("Response feed manifest mismatch, expected '%s', got '%s'", correctFeedManifestAddrHex, feedManifestAddressHex)
}
// Check we get a not found error when trying to get feed updates with a made-up manifest
_, err = swarmClient.QueryFeed(nil, "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb")
if err != ErrNoFeedUpdatesFound {
t.Fatalf("Expected to receive ErrNoFeedUpdatesFound error. Got: %s", err)
}
// If we query the feed directly we should get **manifest hash** back:
reader, err := swarmClient.QueryFeed(nil, correctFeedManifestAddrHex)
if err != nil {
t.Fatalf("Error retrieving feed updates: %s", err)
}
defer reader.Close()
gotData, err := ioutil.ReadAll(reader)
if err != nil {
t.Fatal(err)
}
// Check that indeed the **manifest hash** is retrieved
if !bytes.Equal(manifestAddress, gotData) {
t.Fatalf("Expected: %v, got %v", manifestAddress, gotData)
}
// Now the final test we were looking for: Use bzz://<feed-manifest> and that should resolve all manifests
// and return the original data directly:
f, err = swarmClient.Download(feedManifestAddressHex, "")
if err != nil {
t.Fatal(err)
}
gotData, err = ioutil.ReadAll(f)
if err != nil {
t.Fatal(err)
}
// Check that we get back the original data:
if !bytes.Equal(dataBytes, gotData) {
t.Fatalf("Expected: %v, got %v", manifestAddress, gotData)
}
}
// TestClientCreateUpdateFeed will check that feeds can be created and updated via the HTTP client.
func TestClientCreateUpdateFeed(t *testing.T) {
signer, _ := newTestSigner()
srv := swarmhttp.NewTestSwarmServer(t, serverFunc, nil)
client := NewClient(srv.URL)
defer srv.Close()
// set raw data for the feed update
databytes := []byte("En un lugar de La Mancha, de cuyo nombre no quiero acordarme...")
// our feed topic name
topic, _ := feed.NewTopic("El Quijote", nil)
createRequest := feed.NewFirstRequest(topic)
createRequest.SetData(databytes)
if err := createRequest.Sign(signer); err != nil {
t.Fatalf("Error signing update: %s", err)
}
feedManifestHash, err := client.CreateFeedWithManifest(createRequest)
if err != nil {
t.Fatal(err)
}
correctManifestAddrHex := "0e9b645ebc3da167b1d56399adc3276f7a08229301b72a03336be0e7d4b71882"
if feedManifestHash != correctManifestAddrHex {
t.Fatalf("Response feed manifest mismatch, expected '%s', got '%s'", correctManifestAddrHex, feedManifestHash)
}
reader, err := client.QueryFeed(nil, correctManifestAddrHex)
if err != nil {
t.Fatalf("Error retrieving feed updates: %s", err)
}
defer reader.Close()
gotData, err := ioutil.ReadAll(reader)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(databytes, gotData) {
t.Fatalf("Expected: %v, got %v", databytes, gotData)
}
// define different data
databytes = []byte("... no ha mucho tiempo que vivía un hidalgo de los de lanza en astillero ...")
updateRequest, err := client.GetFeedRequest(nil, correctManifestAddrHex)
if err != nil {
t.Fatalf("Error retrieving update request template: %s", err)
}
updateRequest.SetData(databytes)
if err := updateRequest.Sign(signer); err != nil {
t.Fatalf("Error signing update: %s", err)
}
if err = client.UpdateFeed(updateRequest); err != nil {
t.Fatalf("Error updating feed: %s", err)
}
reader, err = client.QueryFeed(nil, correctManifestAddrHex)
if err != nil {
t.Fatalf("Error retrieving feed updates: %s", err)
}
defer reader.Close()
gotData, err = ioutil.ReadAll(reader)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(databytes, gotData) {
t.Fatalf("Expected: %v, got %v", databytes, gotData)
}
// now try retrieving feed updates without a manifest
fd := &feed.Feed{
Topic: topic,
User: signer.Address(),
}
lookupParams := feed.NewQueryLatest(fd, lookup.NoClue)
reader, err = client.QueryFeed(lookupParams, "")
if err != nil {
t.Fatalf("Error retrieving feed updates: %s", err)
}
defer reader.Close()
gotData, err = ioutil.ReadAll(reader)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(databytes, gotData) {
t.Fatalf("Expected: %v, got %v", databytes, gotData)
}
}

api/config.go (new file, 174 lines)
@@ -0,0 +1,174 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package api
import (
"crypto/ecdsa"
"fmt"
"os"
"path/filepath"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethersphere/swarm/contracts/ens"
"github.com/ethersphere/swarm/network"
"github.com/ethersphere/swarm/pss"
"github.com/ethersphere/swarm/services/swap"
"github.com/ethersphere/swarm/storage"
)
const (
DefaultHTTPListenAddr = "127.0.0.1"
DefaultHTTPPort = "8500"
)
// separate bzz directories allow several bzz nodes to run in parallel
type Config struct {
// serialised/persisted fields
*storage.FileStoreParams
// LocalStore
ChunkDbPath string
DbCapacity uint64
CacheCapacity uint
BaseKey []byte
*network.HiveParams
Swap *swap.LocalProfile
Pss *pss.PssParams
Contract common.Address
EnsRoot common.Address
EnsAPIs []string
Path string
ListenAddr string
Port string
PublicKey string
BzzKey string
Enode *enode.Node `toml:"-"`
NetworkID uint64
SwapEnabled bool
SyncEnabled bool
SyncingSkipCheck bool
DeliverySkipCheck bool
MaxStreamPeerServers int
LightNodeEnabled bool
BootnodeMode bool
SyncUpdateDelay time.Duration
SwapAPI string
Cors string
BzzAccount string
GlobalStoreAPI string
privateKey *ecdsa.PrivateKey
}
// NewConfig creates a default config with all parameters set to defaults
func NewConfig() (c *Config) {
c = &Config{
FileStoreParams: storage.NewFileStoreParams(),
HiveParams: network.NewHiveParams(),
Swap: swap.NewDefaultSwapParams(),
Pss: pss.NewPssParams(),
ListenAddr: DefaultHTTPListenAddr,
Port: DefaultHTTPPort,
Path: node.DefaultDataDir(),
EnsAPIs: nil,
EnsRoot: ens.TestNetAddress,
NetworkID: network.DefaultNetworkID,
SwapEnabled: false,
SyncEnabled: true,
SyncingSkipCheck: false,
MaxStreamPeerServers: 10000,
DeliverySkipCheck: true,
SyncUpdateDelay: 15 * time.Second,
SwapAPI: "",
}
return
}
// Init initializes config params that can only be set after the complete
// config building phase (e.g. due to overriding flags)
func (c *Config) Init(prvKey *ecdsa.PrivateKey, nodeKey *ecdsa.PrivateKey) error {
// create swarm dir and record key
err := c.createAndSetPath(c.Path, prvKey)
if err != nil {
return fmt.Errorf("Error creating root swarm data directory: %v", err)
}
c.setKey(prvKey)
// create the new enode record
// signed with the ephemeral node key
enodeParams := &network.EnodeParams{
PrivateKey: prvKey,
EnodeKey: nodeKey,
Lightnode: c.LightNodeEnabled,
Bootnode: c.BootnodeMode,
}
c.Enode, err = network.NewEnode(enodeParams)
if err != nil {
return fmt.Errorf("Error creating enode: %v", err)
}
// initialize components that depend on the swarm instance's private key
if c.SwapEnabled {
c.Swap.Init(c.Contract, prvKey)
}
c.privateKey = prvKey
c.ChunkDbPath = filepath.Join(c.Path, "chunks")
c.BaseKey = common.FromHex(c.BzzKey)
c.Pss = c.Pss.WithPrivateKey(c.privateKey)
return nil
}
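// Illustrative usage sketch (hypothetical caller, not taken from this diff):
// build a default config, apply any flag overrides, then initialize with keys:
//
//	cfg := api.NewConfig()
//	cfg.Port = "8501" // e.g. overridden by a CLI flag
//	if err := cfg.Init(prvKey, nodeKey); err != nil {
//		log.Fatal(err)
//	}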
func (c *Config) ShiftPrivateKey() (privKey *ecdsa.PrivateKey) {
if c.privateKey != nil {
privKey = c.privateKey
c.privateKey = nil
}
return privKey
}
func (c *Config) setKey(prvKey *ecdsa.PrivateKey) {
bzzkeybytes := network.PrivateKeyToBzzKey(prvKey)
pubkey := crypto.FromECDSAPub(&prvKey.PublicKey)
pubkeyhex := hexutil.Encode(pubkey)
keyhex := hexutil.Encode(bzzkeybytes)
c.privateKey = prvKey
c.PublicKey = pubkeyhex
c.BzzKey = keyhex
}
func (c *Config) createAndSetPath(datadirPath string, prvKey *ecdsa.PrivateKey) error {
address := crypto.PubkeyToAddress(prvKey.PublicKey)
bzzdirPath := filepath.Join(datadirPath, "bzz-"+common.Bytes2Hex(address.Bytes()))
err := os.MkdirAll(bzzdirPath, os.ModePerm)
if err != nil {
return err
}
c.Path = bzzdirPath
return nil
}

api/config_test.go (new file, 66 lines)
@@ -0,0 +1,66 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package api
import (
"reflect"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
func TestConfig(t *testing.T) {
var hexprvkey = "65138b2aa745041b372153550584587da326ab440576b2a1191dd95cee30039c"
var hexnodekey = "75138b2aa745041b372153550584587da326ab440576b2a1191dd95cee30039c"
prvkey, err := crypto.HexToECDSA(hexprvkey)
if err != nil {
t.Fatalf("failed to load private key: %v", err)
}
nodekey, err := crypto.HexToECDSA(hexnodekey)
if err != nil {
t.Fatalf("failed to load private key: %v", err)
}
one := NewConfig()
two := NewConfig()
if equal := reflect.DeepEqual(one, two); !equal {
t.Fatal("Two default configs are not equal")
}
err = one.Init(prvkey, nodekey)
if err != nil {
t.Fatal(err)
}
// the Init function should set the following fields
if one.BzzKey == "" {
t.Fatal("Expected BzzKey to be set")
}
if one.PublicKey == "" {
t.Fatal("Expected PublicKey to be set")
}
if one.Swap.PayProfile.Beneficiary == (common.Address{}) && one.SwapEnabled {
t.Fatal("Failed to correctly initialize SwapParams")
}
if one.ChunkDbPath == one.Path {
t.Fatal("Failed to correctly initialize StoreParams")
}
}

(file header not shown)
@@ -20,8 +20,8 @@ import (
"encoding/binary"
"errors"
"github.com/ethereum/go-ethereum/crypto/sha3"
"github.com/ethereum/go-ethereum/swarm/storage/encryption"
"github.com/ethersphere/swarm/storage/encryption"
"golang.org/x/crypto/sha3"
)
type RefEncryption struct {
@@ -39,12 +39,12 @@ func NewRefEncryption(refSize int) *RefEncryption {
}
func (re *RefEncryption) Encrypt(ref []byte, key []byte) ([]byte, error) {
- spanEncryption := encryption.New(key, 0, uint32(re.refSize/32), sha3.NewKeccak256)
+ spanEncryption := encryption.New(key, 0, uint32(re.refSize/32), sha3.NewLegacyKeccak256)
encryptedSpan, err := spanEncryption.Encrypt(re.span)
if err != nil {
return nil, err
}
- dataEncryption := encryption.New(key, re.refSize, 0, sha3.NewKeccak256)
+ dataEncryption := encryption.New(key, re.refSize, 0, sha3.NewLegacyKeccak256)
encryptedData, err := dataEncryption.Encrypt(ref)
if err != nil {
return nil, err
@@ -57,7 +57,7 @@ func (re *RefEncryption) Encrypt(ref []byte, key []byte) ([]byte, error) {
}
func (re *RefEncryption) Decrypt(ref []byte, key []byte) ([]byte, error) {
- spanEncryption := encryption.New(key, 0, uint32(re.refSize/32), sha3.NewKeccak256)
+ spanEncryption := encryption.New(key, 0, uint32(re.refSize/32), sha3.NewLegacyKeccak256)
decryptedSpan, err := spanEncryption.Decrypt(ref[:8])
if err != nil {
return nil, err
@@ -68,7 +68,7 @@ func (re *RefEncryption) Decrypt(ref []byte, key []byte) ([]byte, error) {
return nil, errors.New("invalid span in encrypted reference")
}
- dataEncryption := encryption.New(key, re.refSize, 0, sha3.NewKeccak256)
+ dataEncryption := encryption.New(key, re.refSize, 0, sha3.NewLegacyKeccak256)
decryptedRef, err := dataEncryption.Decrypt(ref[8:])
if err != nil {
return nil, err

(file header not shown)
@@ -27,8 +27,8 @@ import (
"sync"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/swarm/log"
"github.com/ethereum/go-ethereum/swarm/storage"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/storage"
)
const maxParallelFiles = 5

(file header not shown)
@@ -25,13 +25,12 @@ import (
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/swarm/storage"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/storage"
)
- var testDownloadDir, _ = ioutil.TempDir(os.TempDir(), "bzz-test")
func testFileSystem(t *testing.T, f func(*FileSystem, bool)) {
- testAPI(t, func(api *API, toEncrypt bool) {
+ testAPI(t, func(api *API, _ *chunk.Tags, toEncrypt bool) {
f(NewFileSystem(api), toEncrypt)
})
}
@@ -69,8 +68,13 @@ func TestApiDirUpload0(t *testing.T) {
t.Fatalf("expected error: %v", err)
}
+ testDownloadDir, err := ioutil.TempDir(os.TempDir(), "bzz-test")
downloadDir := filepath.Join(testDownloadDir, "test0")
+ if err != nil {
+ t.Fatalf("unexpected error: %v", err)
+ }
+ defer os.RemoveAll(downloadDir)
err = fs.Download(bzzhash, downloadDir)
if err != nil {
t.Fatalf("unexpected error: %v", err)

api/http/middleware.go (new file, 162 lines)
@@ -0,0 +1,162 @@
package http
import (
"fmt"
"net/http"
"runtime/debug"
"strings"
"time"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethersphere/swarm/api"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/sctx"
"github.com/ethersphere/swarm/spancontext"
"github.com/pborman/uuid"
)
// Adapt chains h (the main request handler) to the given adapters (middleware handlers)
// Please note that the order of execution for `adapters` is FIFO (adapters[0] will be executed first)
func Adapt(h http.Handler, adapters ...Adapter) http.Handler {
for i := range adapters {
adapter := adapters[len(adapters)-1-i]
h = adapter(h)
}
return h
}
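// Illustrative example: with
//
//	handler := Adapt(mux, RecoverPanic, SetRequestID)
//
// a request passes through RecoverPanic first, then SetRequestID, and finally
// reaches mux, matching the FIFO order described above.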
type Adapter func(http.Handler) http.Handler
func SetRequestID(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
r = r.WithContext(SetRUID(r.Context(), uuid.New()[:8]))
metrics.GetOrRegisterCounter(fmt.Sprintf("http.request.%s", r.Method), nil).Inc(1)
log.Info("created ruid for request", "ruid", GetRUID(r.Context()), "method", r.Method, "url", r.RequestURI)
h.ServeHTTP(w, r)
})
}
func SetRequestHost(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
r = r.WithContext(sctx.SetHost(r.Context(), r.Host))
log.Info("setting request host", "ruid", GetRUID(r.Context()), "host", sctx.GetHost(r.Context()))
h.ServeHTTP(w, r)
})
}
func ParseURI(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
uri, err := api.Parse(strings.TrimLeft(r.URL.Path, "/"))
if err != nil {
w.WriteHeader(http.StatusBadRequest)
respondError(w, r, fmt.Sprintf("invalid URI %q", r.URL.Path), http.StatusBadRequest)
return
}
if uri.Addr != "" && strings.HasPrefix(uri.Addr, "0x") {
uri.Addr = strings.TrimPrefix(uri.Addr, "0x")
msg := fmt.Sprintf(`The requested hash seems to be prefixed with '0x'. You will be redirected to the correct URL within 5 seconds.<br/>
Please click <a href='%[1]s'>here</a> if your browser does not redirect you within 5 seconds.<script>setTimeout("location.href='%[1]s';",5000);</script>`, "/"+uri.String())
w.WriteHeader(http.StatusNotFound)
w.Write([]byte(msg))
return
}
ctx := r.Context()
r = r.WithContext(SetURI(ctx, uri))
log.Debug("parsed request path", "ruid", GetRUID(r.Context()), "method", r.Method, "uri.Addr", uri.Addr, "uri.Path", uri.Path, "uri.Scheme", uri.Scheme)
h.ServeHTTP(w, r)
})
}
func InitLoggingResponseWriter(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
tn := time.Now()
writer := newLoggingResponseWriter(w)
h.ServeHTTP(writer, r)
ts := time.Since(tn)
log.Info("request served", "ruid", GetRUID(r.Context()), "code", writer.statusCode, "time", ts)
metrics.GetOrRegisterResettingTimer(fmt.Sprintf("http.request.%s.time", r.Method), nil).Update(ts)
metrics.GetOrRegisterResettingTimer(fmt.Sprintf("http.request.%s.%d.time", r.Method, writer.statusCode), nil).Update(ts)
})
}
// InitUploadTag creates a new tag for an upload to the local HTTP proxy.
// If a tag is not named via the SwarmTagHeaderName header, a fallback name is used.
// When the Content-Length header is set, an ETA on chunking is available, since the
// number of chunks to be split is known in advance (not including enclosing manifest chunks).
// The tag can later be accessed using the appropriate identifier in the request context.
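// Illustrative example (assuming a local proxy on the default port):
//
//	curl -H "x-swarm-tag: my-upload" --data-binary @file.bin \
//	    http://localhost:8500/bzz-raw:/
//
// creates a tag named "my-upload" whose estimated total is derived from the
// request's Content-Length.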
func InitUploadTag(h http.Handler, tags *chunk.Tags) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
var (
tagName string
err error
estimatedTotal int64 = 0
contentType = r.Header.Get("Content-Type")
headerTag = r.Header.Get(SwarmTagHeaderName)
)
if headerTag != "" {
tagName = headerTag
log.Trace("got tag name from http header", "tagName", tagName)
} else {
tagName = fmt.Sprintf("unnamed_tag_%d", time.Now().Unix())
}
if !strings.Contains(contentType, "multipart") && r.ContentLength > 0 {
log.Trace("calculating tag size", "contentType", contentType, "contentLength", r.ContentLength)
uri := GetURI(r.Context())
if uri != nil {
log.Debug("got uri from context")
if uri.Addr == "encrypt" {
estimatedTotal = calculateNumberOfChunks(r.ContentLength, true)
} else {
estimatedTotal = calculateNumberOfChunks(r.ContentLength, false)
}
}
}
log.Trace("creating tag", "tagName", tagName, "estimatedTotal", estimatedTotal)
t, err := tags.New(tagName, estimatedTotal)
if err != nil {
log.Error("error creating tag", "err", err, "tagName", tagName)
}
log.Trace("setting tag id to context", "uid", t.Uid)
ctx := sctx.SetTag(r.Context(), t.Uid)
h.ServeHTTP(w, r.WithContext(ctx))
})
}
func InstrumentOpenTracing(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
uri := GetURI(r.Context())
if uri == nil || r.Method == "" || (uri != nil && uri.Scheme == "") {
h.ServeHTTP(w, r) // soft fail
return
}
spanName := fmt.Sprintf("http.%s.%s", r.Method, uri.Scheme)
ctx, sp := spancontext.StartSpan(r.Context(), spanName)
defer sp.Finish()
h.ServeHTTP(w, r.WithContext(ctx))
})
}
func RecoverPanic(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
defer func() {
if err := recover(); err != nil {
log.Error("panic recovery!", "stack trace", string(debug.Stack()), "url", r.URL.String(), "headers", r.Header)
}
}()
h.ServeHTTP(w, r)
})
}

(file header not shown)
@@ -26,7 +26,7 @@ import (
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/swarm/api"
"github.com/ethersphere/swarm/api"
)
var (
@@ -79,7 +79,7 @@ func respondTemplate(w http.ResponseWriter, r *http.Request, templateName, msg s
}
func respondError(w http.ResponseWriter, r *http.Request, msg string, code int) {
log.Info("respondError", "ruid", GetRUID(r.Context()), "uri", GetURI(r.Context()), "code", code)
log.Info("respondError", "ruid", GetRUID(r.Context()), "uri", GetURI(r.Context()), "code", code, "msg", msg)
respondTemplate(w, r, "error", msg, code)
}

(file header not shown)
@@ -20,7 +20,7 @@ import (
"fmt"
"net/http"
"github.com/ethereum/go-ethereum/swarm/log"
"github.com/ethersphere/swarm/log"
)
/*
@@ -30,7 +30,7 @@ Usage:
import (
"github.com/ethereum/go-ethereum/common/httpclient"
"github.com/ethereum/go-ethereum/swarm/api/http"
"github.com/ethersphere/swarm/api/http"
)
client := httpclient.New()
// for (private) swarm proxy running locally

api/http/sctx.go (new file, 34 lines)
@@ -0,0 +1,34 @@
package http
import (
"context"
"github.com/ethersphere/swarm/api"
"github.com/ethersphere/swarm/sctx"
)
type uriKey struct{}
func GetRUID(ctx context.Context) string {
v, ok := ctx.Value(sctx.HTTPRequestIDKey{}).(string)
if ok {
return v
}
return "xxxxxxxx"
}
func SetRUID(ctx context.Context, ruid string) context.Context {
return context.WithValue(ctx, sctx.HTTPRequestIDKey{}, ruid)
}
func GetURI(ctx context.Context) *api.URI {
v, ok := ctx.Value(uriKey{}).(*api.URI)
if ok {
return v
}
return nil
}
func SetURI(ctx context.Context, uri *api.URI) context.Context {
return context.WithValue(ctx, uriKey{}, uri)
}

api/http/server.go (new file, 937 lines)
@@ -0,0 +1,937 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
/*
A simple http server interface to Swarm
*/
package http
import (
"bufio"
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"math"
"mime"
"mime/multipart"
"net/http"
"os"
"path"
"strconv"
"strings"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethersphere/swarm/api"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/sctx"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/storage/feed"
"github.com/rs/cors"
)
var (
postRawCount = metrics.NewRegisteredCounter("api.http.post.raw.count", nil)
postRawFail = metrics.NewRegisteredCounter("api.http.post.raw.fail", nil)
postFilesCount = metrics.NewRegisteredCounter("api.http.post.files.count", nil)
postFilesFail = metrics.NewRegisteredCounter("api.http.post.files.fail", nil)
deleteCount = metrics.NewRegisteredCounter("api.http.delete.count", nil)
deleteFail = metrics.NewRegisteredCounter("api.http.delete.fail", nil)
getCount = metrics.NewRegisteredCounter("api.http.get.count", nil)
getFail = metrics.NewRegisteredCounter("api.http.get.fail", nil)
getFileCount = metrics.NewRegisteredCounter("api.http.get.file.count", nil)
getFileNotFound = metrics.NewRegisteredCounter("api.http.get.file.notfound", nil)
getFileFail = metrics.NewRegisteredCounter("api.http.get.file.fail", nil)
getListCount = metrics.NewRegisteredCounter("api.http.get.list.count", nil)
getListFail = metrics.NewRegisteredCounter("api.http.get.list.fail", nil)
)
const SwarmTagHeaderName = "x-swarm-tag"
type methodHandler map[string]http.Handler
func (m methodHandler) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
v, ok := m[r.Method]
if ok {
v.ServeHTTP(rw, r)
return
}
rw.WriteHeader(http.StatusMethodNotAllowed)
}
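// Illustrative example: methodHandler{"GET": getHandler, "POST": postHandler}
// (hypothetical handlers) serves only those verbs and responds with
// 405 Method Not Allowed to anything else.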
func NewServer(api *api.API, corsString string) *Server {
var allowedOrigins []string
for _, domain := range strings.Split(corsString, ",") {
allowedOrigins = append(allowedOrigins, strings.TrimSpace(domain))
}
c := cors.New(cors.Options{
AllowedOrigins: allowedOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet, http.MethodDelete, http.MethodPatch, http.MethodPut},
MaxAge: 600,
AllowedHeaders: []string{"*"},
})
server := &Server{api: api}
defaultMiddlewares := []Adapter{
RecoverPanic,
SetRequestID,
SetRequestHost,
InitLoggingResponseWriter,
ParseURI,
InstrumentOpenTracing,
}
tagAdapter := Adapter(func(h http.Handler) http.Handler {
return InitUploadTag(h, api.Tags)
})
defaultPostMiddlewares := append(defaultMiddlewares, tagAdapter)
mux := http.NewServeMux()
mux.Handle("/bzz:/", methodHandler{
"GET": Adapt(
http.HandlerFunc(server.HandleBzzGet),
defaultMiddlewares...,
),
"POST": Adapt(
http.HandlerFunc(server.HandlePostFiles),
defaultPostMiddlewares...,
),
"DELETE": Adapt(
http.HandlerFunc(server.HandleDelete),
defaultMiddlewares...,
),
})
mux.Handle("/bzz-raw:/", methodHandler{
"GET": Adapt(
http.HandlerFunc(server.HandleGet),
defaultMiddlewares...,
),
"POST": Adapt(
http.HandlerFunc(server.HandlePostRaw),
defaultPostMiddlewares...,
),
})
mux.Handle("/bzz-immutable:/", methodHandler{
"GET": Adapt(
http.HandlerFunc(server.HandleBzzGet),
defaultMiddlewares...,
),
})
mux.Handle("/bzz-hash:/", methodHandler{
"GET": Adapt(
http.HandlerFunc(server.HandleGet),
defaultMiddlewares...,
),
})
mux.Handle("/bzz-list:/", methodHandler{
"GET": Adapt(
http.HandlerFunc(server.HandleGetList),
defaultMiddlewares...,
),
})
mux.Handle("/bzz-feed:/", methodHandler{
"GET": Adapt(
http.HandlerFunc(server.HandleGetFeed),
defaultMiddlewares...,
),
"POST": Adapt(
http.HandlerFunc(server.HandlePostFeed),
defaultMiddlewares...,
),
})
mux.Handle("/", methodHandler{
"GET": Adapt(
http.HandlerFunc(server.HandleRootPaths),
SetRequestID,
InitLoggingResponseWriter,
),
})
server.Handler = c.Handler(mux)
return server
}
func (s *Server) ListenAndServe(addr string) error {
s.listenAddr = addr
return http.ListenAndServe(addr, s)
}
// browser API for registering bzz url scheme handlers:
// https://developer.mozilla.org/en/docs/Web-based_protocol_handlers
// electron (chromium) api for registering bzz url scheme handlers:
// https://github.com/atom/electron/blob/master/docs/api/protocol.md
type Server struct {
http.Handler
api *api.API
listenAddr string
}
func (s *Server) HandleBzzGet(w http.ResponseWriter, r *http.Request) {
log.Debug("handleBzzGet", "ruid", GetRUID(r.Context()), "uri", r.RequestURI)
if r.Header.Get("Accept") == "application/x-tar" {
uri := GetURI(r.Context())
_, credentials, _ := r.BasicAuth()
reader, err := s.api.GetDirectoryTar(r.Context(), s.api.Decryptor(r.Context(), credentials), uri)
if err != nil {
if isDecryptError(err) {
w.Header().Set("WWW-Authenticate", fmt.Sprintf("Basic realm=%q", uri.Address().String()))
respondError(w, r, err.Error(), http.StatusUnauthorized)
return
}
respondError(w, r, fmt.Sprintf("Had an error building the tarball: %v", err), http.StatusInternalServerError)
return
}
defer reader.Close()
w.Header().Set("Content-Type", "application/x-tar")
fileName := uri.Addr
if found := path.Base(uri.Path); found != "" && found != "." && found != "/" {
fileName = found
}
w.Header().Set("Content-Disposition", fmt.Sprintf("inline; filename=\"%s.tar\"", fileName))
w.WriteHeader(http.StatusOK)
io.Copy(w, reader)
return
}
s.HandleGetFile(w, r)
}
func (s *Server) HandleRootPaths(w http.ResponseWriter, r *http.Request) {
switch r.RequestURI {
case "/":
respondTemplate(w, r, "landing-page", "Swarm: Please request a valid ENS or swarm hash with the appropriate bzz scheme", 200)
return
case "/robots.txt":
w.Header().Set("Last-Modified", time.Now().Format(http.TimeFormat))
fmt.Fprintf(w, "User-agent: *\nDisallow: /")
case "/favicon.ico":
w.WriteHeader(http.StatusOK)
w.Write(faviconBytes)
default:
respondError(w, r, "Not Found", http.StatusNotFound)
}
}
// HandlePostRaw handles a POST request to a raw bzz-raw:/ URI, stores the request
// body in swarm and returns the resulting storage address as a text/plain response
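// Illustrative example (default local proxy assumed):
//
//	curl -X POST --data-binary "foo123" http://localhost:8500/bzz-raw:/
//
// prints the hex storage address of the stored body; POSTing to
// bzz-raw:/encrypt stores it encrypted instead.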
func (s *Server) HandlePostRaw(w http.ResponseWriter, r *http.Request) {
ruid := GetRUID(r.Context())
log.Debug("handle.post.raw", "ruid", ruid)
tagUid := sctx.GetTag(r.Context())
tag, err := s.api.Tags.Get(tagUid)
if err != nil {
log.Error("handle post raw got an error retrieving tag for DoneSplit", "tagUid", tagUid, "err", err)
}
postRawCount.Inc(1)
toEncrypt := false
uri := GetURI(r.Context())
if uri.Addr == "encrypt" {
toEncrypt = true
}
if uri.Path != "" {
postRawFail.Inc(1)
respondError(w, r, "raw POST request cannot contain a path", http.StatusBadRequest)
return
}
if uri.Addr != "" && uri.Addr != "encrypt" {
postRawFail.Inc(1)
respondError(w, r, "raw POST request addr can only be empty or \"encrypt\"", http.StatusBadRequest)
return
}
if r.Header.Get("Content-Length") == "" {
postRawFail.Inc(1)
respondError(w, r, "missing Content-Length header in request", http.StatusBadRequest)
return
}
addr, wait, err := s.api.Store(r.Context(), r.Body, r.ContentLength, toEncrypt)
if err != nil {
postRawFail.Inc(1)
respondError(w, r, err.Error(), http.StatusInternalServerError)
return
}
wait(r.Context())
tag.DoneSplit(addr)
log.Debug("stored content", "ruid", ruid, "key", addr)
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
fmt.Fprint(w, addr)
}
// HandlePostFiles handles a POST request to
// bzz:/<hash>/<path> which contains either a single file or multiple files
// (either a tar archive or multipart form), adds those files either to an
// existing manifest or to a new manifest under <path> and returns the
// resulting manifest hash as a text/plain response
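// Illustrative example: uploading a tar archive into a new manifest
// (default local proxy assumed):
//
//	curl -X POST -H "Content-Type: application/x-tar" \
//	    --data-binary @files.tar http://localhost:8500/bzz:/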
func (s *Server) HandlePostFiles(w http.ResponseWriter, r *http.Request) {
ruid := GetRUID(r.Context())
log.Debug("handle.post.files", "ruid", ruid)
postFilesCount.Inc(1)
contentType, params, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
postFilesFail.Inc(1)
respondError(w, r, err.Error(), http.StatusBadRequest)
return
}
toEncrypt := false
uri := GetURI(r.Context())
if uri.Addr == "encrypt" {
toEncrypt = true
}
var addr storage.Address
if uri.Addr != "" && uri.Addr != "encrypt" {
addr, err = s.api.Resolve(r.Context(), uri.Addr)
if err != nil {
postFilesFail.Inc(1)
respondError(w, r, fmt.Sprintf("cannot resolve %s: %s", uri.Addr, err), http.StatusInternalServerError)
return
}
log.Debug("resolved key", "ruid", ruid, "key", addr)
} else {
addr, err = s.api.NewManifest(r.Context(), toEncrypt)
if err != nil {
postFilesFail.Inc(1)
respondError(w, r, err.Error(), http.StatusInternalServerError)
return
}
log.Debug("new manifest", "ruid", ruid, "key", addr)
}
newAddr, err := s.api.UpdateManifest(r.Context(), addr, func(mw *api.ManifestWriter) error {
switch contentType {
case "application/x-tar":
_, err := s.handleTarUpload(r, mw)
if err != nil {
respondError(w, r, fmt.Sprintf("error uploading tarball: %v", err), http.StatusInternalServerError)
return err
}
return nil
case "multipart/form-data":
return s.handleMultipartUpload(r, params["boundary"], mw)
default:
return s.handleDirectUpload(r, mw)
}
})
if err != nil {
postFilesFail.Inc(1)
respondError(w, r, fmt.Sprintf("cannot create manifest: %s", err), http.StatusInternalServerError)
return
}
tagUid := sctx.GetTag(r.Context())
tag, err := s.api.Tags.Get(tagUid)
if err != nil {
log.Error("got an error retrieving tag for DoneSplit", "tagUid", tagUid, "err", err)
}
log.Debug("done splitting, setting tag total", "SPLIT", tag.Get(chunk.StateSplit), "TOTAL", tag.Total())
tag.DoneSplit(newAddr)
log.Debug("stored content", "ruid", ruid, "key", newAddr)
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
fmt.Fprint(w, newAddr)
}
func (s *Server) handleTarUpload(r *http.Request, mw *api.ManifestWriter) (storage.Address, error) {
log.Debug("handle.tar.upload", "ruid", GetRUID(r.Context()), "tag", sctx.GetTag(r.Context()))
defaultPath := r.URL.Query().Get("defaultpath")
key, err := s.api.UploadTar(r.Context(), r.Body, GetURI(r.Context()).Path, defaultPath, mw)
if err != nil {
return nil, err
}
return key, nil
}
func (s *Server) handleMultipartUpload(r *http.Request, boundary string, mw *api.ManifestWriter) error {
ruid := GetRUID(r.Context())
log.Debug("handle.multipart.upload", "ruid", ruid)
mr := multipart.NewReader(r.Body, boundary)
for {
part, err := mr.NextPart()
if err == io.EOF {
return nil
} else if err != nil {
return fmt.Errorf("error reading multipart form: %s", err)
}
var size int64
var reader io.Reader
if contentLength := part.Header.Get("Content-Length"); contentLength != "" {
size, err = strconv.ParseInt(contentLength, 10, 64)
if err != nil {
return fmt.Errorf("error parsing multipart content length: %s", err)
}
reader = part
} else {
// copy the part to a tmp file to get its size
tmp, err := ioutil.TempFile("", "swarm-multipart")
if err != nil {
return err
}
defer os.Remove(tmp.Name())
defer tmp.Close()
size, err = io.Copy(tmp, part)
if err != nil {
return fmt.Errorf("error copying multipart content: %s", err)
}
if _, err := tmp.Seek(0, io.SeekStart); err != nil {
return fmt.Errorf("error copying multipart content: %s", err)
}
reader = tmp
}
// add the entry under the path from the request
name := part.FileName()
if name == "" {
name = part.FormName()
}
uri := GetURI(r.Context())
path := path.Join(uri.Path, name)
entry := &api.ManifestEntry{
Path: path,
ContentType: part.Header.Get("Content-Type"),
Size: size,
}
log.Debug("adding path to new manifest", "ruid", ruid, "bytes", entry.Size, "path", entry.Path)
contentKey, err := mw.AddEntry(r.Context(), reader, entry)
if err != nil {
return fmt.Errorf("error adding manifest entry from multipart form: %s", err)
}
log.Debug("stored content", "ruid", ruid, "key", contentKey)
}
}
func (s *Server) handleDirectUpload(r *http.Request, mw *api.ManifestWriter) error {
ruid := GetRUID(r.Context())
log.Debug("handle.direct.upload", "ruid", ruid)
key, err := mw.AddEntry(r.Context(), r.Body, &api.ManifestEntry{
Path: GetURI(r.Context()).Path,
ContentType: r.Header.Get("Content-Type"),
Mode: 0644,
Size: r.ContentLength,
})
if err != nil {
return err
}
log.Debug("stored content", "ruid", ruid, "key", key)
return nil
}
// HandleDelete handles a DELETE request to bzz:/<manifest>/<path>, removes
// <path> from <manifest> and returns the resulting manifest hash as a
// text/plain response
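// Illustrative example (default local proxy assumed):
//
//	curl -X DELETE http://localhost:8500/bzz:/<manifest>/<path>
//
// prints the hash of the updated manifest with <path> removed.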
func (s *Server) HandleDelete(w http.ResponseWriter, r *http.Request) {
ruid := GetRUID(r.Context())
uri := GetURI(r.Context())
log.Debug("handle.delete", "ruid", ruid)
deleteCount.Inc(1)
newKey, err := s.api.Delete(r.Context(), uri.Addr, uri.Path)
if err != nil {
deleteFail.Inc(1)
respondError(w, r, fmt.Sprintf("could not delete from manifest: %v", err), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
fmt.Fprint(w, newKey)
}
// HandlePostFeed handles feed manifest creation and feed updates
// The POST request admits a JSON structure as defined in the feeds package: `feed.updateRequestJSON`
// The request can be to a) create a feed manifest, b) update a feed or c) both a and b: create a feed manifest and publish a first update
func (s *Server) HandlePostFeed(w http.ResponseWriter, r *http.Request) {
ruid := GetRUID(r.Context())
uri := GetURI(r.Context())
log.Debug("handle.post.feed", "ruid", ruid)
var err error
// Creation and update must send feed.updateRequestJSON JSON structure
body, err := ioutil.ReadAll(r.Body)
if err != nil {
respondError(w, r, err.Error(), http.StatusInternalServerError)
return
}
fd, err := s.api.ResolveFeed(r.Context(), uri, r.URL.Query())
if err != nil { // couldn't parse query string or retrieve manifest
getFail.Inc(1)
httpStatus := http.StatusBadRequest
if err == api.ErrCannotLoadFeedManifest || err == api.ErrCannotResolveFeedURI {
httpStatus = http.StatusNotFound
}
respondError(w, r, fmt.Sprintf("cannot retrieve feed from manifest: %s", err), httpStatus)
return
}
var updateRequest feed.Request
updateRequest.Feed = *fd
query := r.URL.Query()
if err := updateRequest.FromValues(query, body); err != nil { // decodes request from query parameters
respondError(w, r, err.Error(), http.StatusBadRequest)
return
}
switch {
case updateRequest.IsUpdate():
// Verify that the signature is intact and that the signer is authorized
// to update this feed
// Check this early, to avoid creating a feed and then not being able to set its first update.
if err = updateRequest.Verify(); err != nil {
respondError(w, r, err.Error(), http.StatusForbidden)
return
}
_, err = s.api.FeedsUpdate(r.Context(), &updateRequest)
if err != nil {
respondError(w, r, err.Error(), http.StatusInternalServerError)
return
}
fallthrough
case query.Get("manifest") == "1":
// we create a manifest so we can retrieve feed updates with bzz:// later
// this manifest has a special "feed type" manifest, and saves the
// feed identification used to retrieve feed updates later
m, err := s.api.NewFeedManifest(r.Context(), &updateRequest.Feed)
if err != nil {
respondError(w, r, fmt.Sprintf("failed to create feed manifest: %v", err), http.StatusInternalServerError)
return
}
// the key to the manifest will be passed back to the client
// the client can access the feed directly through its Feed member
// the manifest key can be set as content in the resolver of the ENS name
outdata, err := json.Marshal(m)
if err != nil {
respondError(w, r, fmt.Sprintf("failed to create json response: %s", err), http.StatusInternalServerError)
return
}
w.Header().Add("Content-Type", "application/json")
fmt.Fprint(w, string(outdata))
default:
respondError(w, r, "Missing signature in feed update request", http.StatusBadRequest)
}
}
// HandleGetFeed retrieves Swarm feeds updates:
// bzz-feed://<manifest address or ENS name> - get latest feed update, given a manifest address
// - or -
// specify user + topic (optional), subtopic name (optional) directly, without manifest:
// bzz-feed://?user=0x...&topic=0x...&name=subtopic name
// topic defaults to 0x000... if not specified.
// name defaults to empty string if not specified.
// thus, empty name and topic refer to the user's default feed.
//
// Optional parameters:
// time=xx - get the latest update before time (in epoch seconds)
// hint.time=xx - hint the lookup algorithm looking for updates at around that time
// hint.level=xx - hint the lookup algorithm looking for updates at around this frequency level
// meta=1 - get feed metadata and status information instead of performing a feed query
// NOTE: meta=1 will be deprecated in the near future
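// Illustrative examples (default local proxy assumed):
//
//	curl "http://localhost:8500/bzz-feed:/<feed manifest address>"                # latest update
//	curl "http://localhost:8500/bzz-feed:/?user=0x<user>&topic=0x<topic>&meta=1"  # metadata only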
func (s *Server) HandleGetFeed(w http.ResponseWriter, r *http.Request) {
ruid := GetRUID(r.Context())
uri := GetURI(r.Context())
log.Debug("handle.get.feed", "ruid", ruid)
var err error
fd, err := s.api.ResolveFeed(r.Context(), uri, r.URL.Query())
if err != nil { // couldn't parse query string or retrieve manifest
getFail.Inc(1)
httpStatus := http.StatusBadRequest
if err == api.ErrCannotLoadFeedManifest || err == api.ErrCannotResolveFeedURI {
httpStatus = http.StatusNotFound
}
respondError(w, r, fmt.Sprintf("cannot retrieve feed information from manifest: %s", err), httpStatus)
return
}
// determine if the query specifies period and version or it is a metadata query
if r.URL.Query().Get("meta") == "1" {
unsignedUpdateRequest, err := s.api.FeedsNewRequest(r.Context(), fd)
if err != nil {
getFail.Inc(1)
respondError(w, r, fmt.Sprintf("cannot retrieve feed metadata for feed=%s: %s", fd.Hex(), err), http.StatusNotFound)
return
}
rawResponse, err := unsignedUpdateRequest.MarshalJSON()
if err != nil {
respondError(w, r, fmt.Sprintf("cannot encode unsigned feed update request: %v", err), http.StatusInternalServerError)
return
}
w.Header().Add("Content-type", "application/json")
w.WriteHeader(http.StatusOK)
fmt.Fprint(w, string(rawResponse))
return
}
lookupParams := &feed.Query{Feed: *fd}
if err = lookupParams.FromValues(r.URL.Query()); err != nil { // parse period, version
respondError(w, r, fmt.Sprintf("invalid feed update request:%s", err), http.StatusBadRequest)
return
}
data, err := s.api.FeedsLookup(r.Context(), lookupParams)
// any error from the switch statement will end up here
if err != nil {
code, err2 := s.translateFeedError(w, r, "feed lookup fail", err)
respondError(w, r, err2.Error(), code)
return
}
// All ok, serve the retrieved update
log.Debug("Found update", "feed", fd.Hex(), "ruid", ruid)
w.Header().Set("Content-Type", api.MimeOctetStream)
http.ServeContent(w, r, "", time.Now(), bytes.NewReader(data))
}
func (s *Server) translateFeedError(w http.ResponseWriter, r *http.Request, supErr string, err error) (int, error) {
code := 0
defaultErr := fmt.Errorf("%s: %v", supErr, err)
rsrcErr, ok := err.(*feed.Error)
if ok && rsrcErr != nil {
code = rsrcErr.Code()
}
switch code {
case storage.ErrInvalidValue:
return http.StatusBadRequest, defaultErr
case storage.ErrNotFound, storage.ErrNotSynced, storage.ErrNothingToReturn, storage.ErrInit:
return http.StatusNotFound, defaultErr
case storage.ErrUnauthorized, storage.ErrInvalidSignature:
return http.StatusUnauthorized, defaultErr
case storage.ErrDataOverflow:
return http.StatusRequestEntityTooLarge, defaultErr
}
return http.StatusInternalServerError, defaultErr
}
// HandleGet handles a GET request to
// - bzz-raw://<key> and responds with the raw content stored at the
// given storage key
// - bzz-hash://<key> and responds with the hash of the content stored
// at the given storage key as a text/plain response
func (s *Server) HandleGet(w http.ResponseWriter, r *http.Request) {
ruid := GetRUID(r.Context())
uri := GetURI(r.Context())
log.Debug("handle.get", "ruid", ruid, "uri", uri)
getCount.Inc(1)
_, pass, _ := r.BasicAuth()
addr, err := s.api.ResolveURI(r.Context(), uri, pass)
if err != nil {
getFail.Inc(1)
respondError(w, r, fmt.Sprintf("cannot resolve %s: %s", uri.Addr, err), http.StatusNotFound)
return
}
w.Header().Set("Cache-Control", "max-age=2147483648, immutable") // url was of type bzz://<hex key>/path, so we are sure it is immutable.
log.Debug("handle.get: resolved", "ruid", ruid, "key", addr)
// if path is set, interpret <key> as a manifest and return the
// raw entry at the given path
etag := common.Bytes2Hex(addr)
noneMatchEtag := r.Header.Get("If-None-Match")
w.Header().Set("ETag", fmt.Sprintf("%q", etag)) // set etag to manifest key or raw entry key.
if noneMatchEtag != "" {
if bytes.Equal(storage.Address(common.Hex2Bytes(noneMatchEtag)), addr) {
w.WriteHeader(http.StatusNotModified)
return
}
}
switch {
case uri.Raw():
// check the root chunk exists by retrieving the file's size
reader, isEncrypted := s.api.Retrieve(r.Context(), addr)
if _, err := reader.Size(r.Context(), nil); err != nil {
getFail.Inc(1)
respondError(w, r, fmt.Sprintf("root chunk not found %s: %s", addr, err), http.StatusNotFound)
return
}
w.Header().Set("X-Decrypted", fmt.Sprintf("%v", isEncrypted))
// allow the request to overwrite the content type using a query
// parameter
if typ := r.URL.Query().Get("content_type"); typ != "" {
w.Header().Set("Content-Type", typ)
}
http.ServeContent(w, r, "", time.Now(), reader)
case uri.Hash():
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
fmt.Fprint(w, addr)
}
}
// HandleGetList handles a GET request to bzz-list:/<manifest>/<path> and returns
// a list of all files contained in <manifest> under <path> grouped into
// common prefixes using "/" as a delimiter
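// Illustrative example: listing the entries under dir1/ as JSON
// (default local proxy assumed):
//
//	curl http://localhost:8500/bzz-list:/<manifest>/dir1/
//
// clients sending Accept: text/html receive a rendered HTML index instead.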
func (s *Server) HandleGetList(w http.ResponseWriter, r *http.Request) {
ruid := GetRUID(r.Context())
uri := GetURI(r.Context())
_, credentials, _ := r.BasicAuth()
log.Debug("handle.get.list", "ruid", ruid, "uri", uri)
getListCount.Inc(1)
// ensure the root path has a trailing slash so that relative URLs work
if uri.Path == "" && !strings.HasSuffix(r.URL.Path, "/") {
http.Redirect(w, r, r.URL.Path+"/", http.StatusMovedPermanently)
return
}
addr, err := s.api.Resolve(r.Context(), uri.Addr)
if err != nil {
getListFail.Inc(1)
respondError(w, r, fmt.Sprintf("cannot resolve %s: %s", uri.Addr, err), http.StatusNotFound)
return
}
log.Debug("handle.get.list: resolved", "ruid", ruid, "key", addr)
list, err := s.api.GetManifestList(r.Context(), s.api.Decryptor(r.Context(), credentials), addr, uri.Path)
if err != nil {
getListFail.Inc(1)
if isDecryptError(err) {
w.Header().Set("WWW-Authenticate", fmt.Sprintf("Basic realm=%q", addr.String()))
respondError(w, r, err.Error(), http.StatusUnauthorized)
return
}
respondError(w, r, err.Error(), http.StatusInternalServerError)
return
}
// if the client wants HTML (e.g. a browser) then render the list as a
// HTML index with relative URLs
if strings.Contains(r.Header.Get("Accept"), "text/html") {
w.Header().Set("Content-Type", "text/html")
err := TemplatesMap["bzz-list"].Execute(w, &htmlListData{
URI: &api.URI{
Scheme: "bzz",
Addr: uri.Addr,
Path: uri.Path,
},
List: &list,
})
if err != nil {
getListFail.Inc(1)
log.Error(fmt.Sprintf("error rendering list HTML: %s", err))
}
return
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(&list)
}
// HandleGetFile handles a GET request to bzz://<manifest>/<path> and responds
// with the content of the file at <path> from the given <manifest>
func (s *Server) HandleGetFile(w http.ResponseWriter, r *http.Request) {
ruid := GetRUID(r.Context())
uri := GetURI(r.Context())
_, credentials, _ := r.BasicAuth()
log.Debug("handle.get.file", "ruid", ruid, "uri", r.RequestURI)
getFileCount.Inc(1)
// ensure the root path has a trailing slash so that relative URLs work
if uri.Path == "" && !strings.HasSuffix(r.URL.Path, "/") {
http.Redirect(w, r, r.URL.Path+"/", http.StatusMovedPermanently)
return
}
var err error
manifestAddr := uri.Address()
if manifestAddr == nil {
manifestAddr, err = s.api.Resolve(r.Context(), uri.Addr)
if err != nil {
getFileFail.Inc(1)
respondError(w, r, fmt.Sprintf("cannot resolve %s: %s", uri.Addr, err), http.StatusNotFound)
return
}
} else {
w.Header().Set("Cache-Control", "max-age=2147483648, immutable") // url was of type bzz://<hex key>/path, so we are sure it is immutable.
}
log.Debug("handle.get.file: resolved", "ruid", ruid, "key", manifestAddr)
reader, contentType, status, contentKey, err := s.api.Get(r.Context(), s.api.Decryptor(r.Context(), credentials), manifestAddr, uri.Path)
etag := common.Bytes2Hex(contentKey)
noneMatchEtag := r.Header.Get("If-None-Match")
w.Header().Set("ETag", fmt.Sprintf("%q", etag)) // set etag to actual content key.
if noneMatchEtag != "" {
if bytes.Equal(storage.Address(common.Hex2Bytes(noneMatchEtag)), contentKey) {
w.WriteHeader(http.StatusNotModified)
return
}
}
if err != nil {
if isDecryptError(err) {
w.Header().Set("WWW-Authenticate", fmt.Sprintf("Basic realm=%q", manifestAddr))
respondError(w, r, err.Error(), http.StatusUnauthorized)
return
}
switch status {
case http.StatusNotFound:
getFileNotFound.Inc(1)
respondError(w, r, err.Error(), http.StatusNotFound)
default:
getFileFail.Inc(1)
respondError(w, r, err.Error(), http.StatusInternalServerError)
}
return
}
//the request results in ambiguous files
//e.g. /read with readme.md and readinglist.txt available in manifest
if status == http.StatusMultipleChoices {
list, err := s.api.GetManifestList(r.Context(), s.api.Decryptor(r.Context(), credentials), manifestAddr, uri.Path)
if err != nil {
getFileFail.Inc(1)
if isDecryptError(err) {
w.Header().Set("WWW-Authenticate", fmt.Sprintf("Basic realm=%q", manifestAddr))
respondError(w, r, err.Error(), http.StatusUnauthorized)
return
}
respondError(w, r, err.Error(), http.StatusInternalServerError)
return
}
log.Debug(fmt.Sprintf("Multiple choices! --> %v", list), "ruid", ruid)
//show a nice page links to available entries
ShowMultipleChoices(w, r, list)
return
}
// check the root chunk exists by retrieving the file's size
if _, err := reader.Size(r.Context(), nil); err != nil {
getFileNotFound.Inc(1)
respondError(w, r, fmt.Sprintf("file not found %s: %s", uri, err), http.StatusNotFound)
return
}
if contentType != "" {
w.Header().Set("Content-Type", contentType)
}
fileName := uri.Addr
if found := path.Base(uri.Path); found != "" && found != "." && found != "/" {
fileName = found
}
w.Header().Set("Content-Disposition", fmt.Sprintf("inline; filename=\"%s\"", fileName))
http.ServeContent(w, r, fileName, time.Now(), newBufferedReadSeeker(reader, getFileBufferSize))
}
// calculateNumberOfChunks calculates the number of chunks for an arbitrary content length
func calculateNumberOfChunks(contentLength int64, isEncrypted bool) int64 {
if contentLength < 4096 {
return 1
}
branchingFactor := 128
if isEncrypted {
branchingFactor = 64
}
dataChunks := math.Ceil(float64(contentLength) / float64(4096))
totalChunks := dataChunks
intermediate := dataChunks / float64(branchingFactor)
for intermediate > 1 {
totalChunks += math.Ceil(intermediate)
intermediate = intermediate / float64(branchingFactor)
}
return int64(totalChunks) + 1
}
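// Worked example (following the arithmetic above): an unencrypted upload of
// 1 MiB yields ceil(1048576/4096) = 256 data chunks; the intermediate level
// adds ceil(256/128) = 2 chunks, and the final +1 accounts for the root
// chunk, so calculateNumberOfChunks(1<<20, false) returns 259.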
// The size of buffer used for bufio.Reader on LazyChunkReader passed to
// http.ServeContent in HandleGetFile.
// Warning: This value influences the number of chunk requests and chunker join goroutines
// per file request.
// Recommended value is 4 times the io.Copy default buffer value which is 32kB.
const getFileBufferSize = 4 * 32 * 1024
// bufferedReadSeeker wraps bufio.Reader to expose the Seek method
// of the io.ReadSeeker provided to newBufferedReadSeeker.
type bufferedReadSeeker struct {
r io.Reader
s io.Seeker
}
// newBufferedReadSeeker creates a new instance of bufferedReadSeeker
// from an io.ReadSeeker. Argument `size` is the size of the read buffer.
func newBufferedReadSeeker(readSeeker io.ReadSeeker, size int) bufferedReadSeeker {
return bufferedReadSeeker{
r: bufio.NewReaderSize(readSeeker, size),
s: readSeeker,
}
}
func (b bufferedReadSeeker) Read(p []byte) (n int, err error) {
return b.r.Read(p)
}
func (b bufferedReadSeeker) Seek(offset int64, whence int) (int64, error) {
return b.s.Seek(offset, whence)
}
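// Note: Seek is delegated to the underlying io.Seeker while Read goes through the
// bufio.Reader, so already-buffered bytes are not discarded on Seek. That is safe
// for http.ServeContent above, which performs its size-probing Seeks before any
// Read, but this type is not a general-purpose io.ReadSeeker.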
type loggingResponseWriter struct {
http.ResponseWriter
statusCode int
}
func newLoggingResponseWriter(w http.ResponseWriter) *loggingResponseWriter {
return &loggingResponseWriter{w, http.StatusOK}
}
func (lrw *loggingResponseWriter) WriteHeader(code int) {
lrw.statusCode = code
lrw.ResponseWriter.WriteHeader(code)
}
func isDecryptError(err error) bool {
return strings.Contains(err.Error(), api.ErrDecrypt.Error())
}

api/http/server_test.go (Normal file, 1409 lines)
File diff suppressed because it is too large.

File diff suppressed because one or more lines are too long.

api/http/test_server.go (Normal file, 100 lines)

@ -0,0 +1,100 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package http
import (
"io/ioutil"
"net/http"
"net/http/httptest"
"os"
"testing"
"github.com/ethersphere/swarm/api"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/storage/feed"
"github.com/ethersphere/swarm/storage/localstore"
)
type TestServer interface {
ServeHTTP(http.ResponseWriter, *http.Request)
}
func NewTestSwarmServer(t *testing.T, serverFunc func(*api.API) TestServer, resolver api.Resolver) *TestSwarmServer {
swarmDir, err := ioutil.TempDir("", "swarm-storage-test")
if err != nil {
t.Fatal(err)
}
localStore, err := localstore.New(swarmDir, make([]byte, 32), nil)
if err != nil {
os.RemoveAll(swarmDir)
t.Fatal(err)
}
tags := chunk.NewTags()
fileStore := storage.NewFileStore(localStore, storage.NewFileStoreParams(), tags)
// Swarm feeds test setup
feedsDir, err := ioutil.TempDir("", "swarm-feeds-test")
if err != nil {
t.Fatal(err)
}
feeds, err := feed.NewTestHandler(feedsDir, &feed.HandlerParams{})
if err != nil {
t.Fatal(err)
}
swarmApi := api.NewAPI(fileStore, resolver, feeds.Handler, nil, tags)
apiServer := httptest.NewServer(serverFunc(swarmApi))
tss := &TestSwarmServer{
Server: apiServer,
FileStore: fileStore,
Tags: tags,
dir: swarmDir,
Hasher: storage.MakeHashFunc(storage.DefaultHash)(),
cleanup: func() {
apiServer.Close()
fileStore.Close()
feeds.Close()
os.RemoveAll(swarmDir)
os.RemoveAll(feedsDir)
},
CurrentTime: 42,
}
feed.TimestampProvider = tss
return tss
}
type TestSwarmServer struct {
*httptest.Server
Hasher storage.SwarmHash
FileStore *storage.FileStore
Tags *chunk.Tags
dir string
cleanup func()
CurrentTime uint64
}
func (t *TestSwarmServer) Close() {
t.cleanup()
}
func (t *TestSwarmServer) Now() feed.Timestamp {
return feed.Timestamp{Time: t.CurrentTime}
}
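// Usage sketch (illustrative, not part of this file): a test would typically wrap
// the production handler and issue requests against the embedded httptest.Server:
//
//	srv := NewTestSwarmServer(t, func(a *api.API) TestServer { return NewServer(a, "") }, nil)
//	defer srv.Close()
//	resp, _ := http.Get(srv.URL + "/bzz-raw:/")
//
// The snippet assumes NewServer(api, corsString) from this package adapts an
// *api.API into a TestServer; only the shape of the call matters here.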

api/inspector.go (Normal file, 98 lines)

@ -0,0 +1,98 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package api
import (
"context"
"fmt"
"strings"
"time"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/network"
"github.com/ethersphere/swarm/storage"
)
type Inspector struct {
api *API
hive *network.Hive
netStore *storage.NetStore
}
func NewInspector(api *API, hive *network.Hive, netStore *storage.NetStore) *Inspector {
return &Inspector{api, hive, netStore}
}
// Hive prints the kademlia table
func (i *Inspector) Hive() string {
return i.hive.String()
}
// KademliaInfo returns structured output of the Kademlia state that we can check for equality
func (i *Inspector) KademliaInfo() network.KademliaInfo {
return i.hive.KademliaInfo()
}
func (i *Inspector) IsPullSyncing() bool {
lastReceivedChunksMsg := metrics.GetOrRegisterGauge("network.stream.received_chunks", nil)
// time of the last received chunks message
lrct := time.Unix(0, lastReceivedChunksMsg.Value())
// If the last received chunks message arrived within the last 15 seconds, we consider the node
// to still be syncing. Strictly speaking this is not correct, since the message might have been
// a retrieve request delivery, but it works for our purposes because we know we are not making
// retrieve requests on the node while checking this.
return lrct.After(time.Now().Add(-15 * time.Second))
}
// DeliveriesPerPeer returns the sum of chunks we received from a given peer
func (i *Inspector) DeliveriesPerPeer() map[string]int64 {
res := map[string]int64{}
// iterate over the connections in the kademlia table
i.hive.Kademlia.EachConn(nil, 255, func(p *network.Peer, po int) bool {
// get how many chunks we received in retrieve request deliveries from each peer
peermetric := fmt.Sprintf("chunk.delivery.%x", p.Over()[:16])
res[fmt.Sprintf("%x", p.Over()[:16])] = metrics.GetOrRegisterCounter(peermetric, nil).Count()
return true
})
return res
}
// Has checks whether each chunk address is present in the underlying datastore.
// The returned string contains one character per queried address: "1" if the
// datastore has the chunk stored under that address, "0" if it does not.
func (i *Inspector) Has(chunkAddresses []storage.Address) string {
hostChunks := []string{}
for _, addr := range chunkAddresses {
has, err := i.netStore.Has(context.Background(), addr)
if err != nil {
log.Error(err.Error())
}
if has {
hostChunks = append(hostChunks, "1")
} else {
hostChunks = append(hostChunks, "0")
}
}
return strings.Join(hostChunks, "")
}
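// Example (illustrative): querying three addresses of which only the second is
// stored locally yields "010"; the character at index i corresponds to
// chunkAddresses[i].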

api/manifest.go (Normal file, 584 lines)

@ -0,0 +1,584 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package api
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strings"
"time"
"github.com/ethersphere/swarm/storage/feed"
"github.com/ethereum/go-ethereum/common"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/storage"
)
const (
ManifestType = "application/bzz-manifest+json"
FeedContentType = "application/bzz-feed"
manifestSizeLimit = 5 * 1024 * 1024
)
// Manifest represents a swarm manifest
type Manifest struct {
Entries []ManifestEntry `json:"entries,omitempty"`
}
// ManifestEntry represents an entry in a swarm manifest
type ManifestEntry struct {
Hash string `json:"hash,omitempty"`
Path string `json:"path,omitempty"`
ContentType string `json:"contentType,omitempty"`
Mode int64 `json:"mode,omitempty"`
Size int64 `json:"size,omitempty"`
ModTime time.Time `json:"mod_time,omitempty"`
Status int `json:"status,omitempty"`
Access *AccessEntry `json:"access,omitempty"`
Feed *feed.Feed `json:"feed,omitempty"`
}
// ManifestList represents the result of listing files in a manifest
type ManifestList struct {
CommonPrefixes []string `json:"common_prefixes,omitempty"`
Entries []*ManifestEntry `json:"entries,omitempty"`
}
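// An example of the serialized manifest form (illustrative values only):
//
//	{
//	  "entries": [
//	    {"hash": "1a2b...", "path": "css/style.css", "contentType": "text/css", "size": 132},
//	    {"hash": "3c4d...", "path": "index.html", "contentType": "text/html", "size": 420}
//	  ]
//	}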
// NewManifest creates and stores a new, empty manifest
func (a *API) NewManifest(ctx context.Context, toEncrypt bool) (storage.Address, error) {
var manifest Manifest
data, err := json.Marshal(&manifest)
if err != nil {
return nil, err
}
addr, wait, err := a.Store(ctx, bytes.NewReader(data), int64(len(data)), toEncrypt)
if err != nil {
return nil, err
}
err = wait(ctx)
return addr, err
}
// Manifest hack for supporting Swarm feeds from the bzz: scheme
// see swarm/api/api.go:API.Get() for more information
func (a *API) NewFeedManifest(ctx context.Context, feed *feed.Feed) (storage.Address, error) {
var manifest Manifest
entry := ManifestEntry{
Feed: feed,
ContentType: FeedContentType,
}
manifest.Entries = append(manifest.Entries, entry)
data, err := json.Marshal(&manifest)
if err != nil {
return nil, err
}
addr, wait, err := a.Store(ctx, bytes.NewReader(data), int64(len(data)), false)
if err != nil {
return nil, err
}
err = wait(ctx)
return addr, err
}
// ManifestWriter is used to add and remove entries from an underlying manifest
type ManifestWriter struct {
api *API
trie *manifestTrie
quitC chan bool
}
func (a *API) NewManifestWriter(ctx context.Context, addr storage.Address, quitC chan bool) (*ManifestWriter, error) {
trie, err := loadManifest(ctx, a.fileStore, addr, quitC, NOOPDecrypt)
if err != nil {
return nil, fmt.Errorf("error loading manifest %s: %s", addr, err)
}
return &ManifestWriter{a, trie, quitC}, nil
}
// AddEntry stores the given data and adds the resulting address to the manifest
func (m *ManifestWriter) AddEntry(ctx context.Context, data io.Reader, e *ManifestEntry) (addr storage.Address, err error) {
entry := newManifestTrieEntry(e, nil)
if data != nil {
var wait func(context.Context) error
addr, wait, err = m.api.Store(ctx, data, e.Size, m.trie.encrypted)
if err != nil {
return nil, err
}
err = wait(ctx)
if err != nil {
return nil, err
}
entry.Hash = addr.Hex()
}
if entry.Hash == "" {
return addr, errors.New("missing entry hash")
}
m.trie.addEntry(entry, m.quitC)
return addr, nil
}
// RemoveEntry removes the given path from the manifest
func (m *ManifestWriter) RemoveEntry(path string) error {
m.trie.deleteEntry(path, m.quitC)
return nil
}
// Store stores the manifest, returning the resulting storage address
func (m *ManifestWriter) Store() (storage.Address, error) {
return m.trie.ref, m.trie.recalcAndStore()
}
// ManifestWalker is used to recursively walk the entries in the manifest and
// all of its submanifests
type ManifestWalker struct {
api *API
trie *manifestTrie
quitC chan bool
}
func (a *API) NewManifestWalker(ctx context.Context, addr storage.Address, decrypt DecryptFunc, quitC chan bool) (*ManifestWalker, error) {
trie, err := loadManifest(ctx, a.fileStore, addr, quitC, decrypt)
if err != nil {
return nil, fmt.Errorf("error loading manifest %s: %s", addr, err)
}
return &ManifestWalker{a, trie, quitC}, nil
}
// ErrSkipManifest is used as a return value from WalkFn to indicate that the
// manifest should be skipped
var ErrSkipManifest = errors.New("skip this manifest")
// WalkFn is the type of function called for each entry visited by a recursive
// manifest walk
type WalkFn func(entry *ManifestEntry) error
// Walk recursively walks the manifest calling walkFn for each entry in the
// manifest, including submanifests
func (m *ManifestWalker) Walk(walkFn WalkFn) error {
return m.walk(m.trie, "", walkFn)
}
func (m *ManifestWalker) walk(trie *manifestTrie, prefix string, walkFn WalkFn) error {
for _, entry := range &trie.entries {
if entry == nil {
continue
}
entry.Path = prefix + entry.Path
err := walkFn(&entry.ManifestEntry)
if err != nil {
if entry.ContentType == ManifestType && err == ErrSkipManifest {
continue
}
return err
}
if entry.ContentType != ManifestType {
continue
}
if err := trie.loadSubTrie(entry, nil); err != nil {
return err
}
if err := m.walk(entry.subtrie, entry.Path, walkFn); err != nil {
return err
}
}
return nil
}
type manifestTrie struct {
fileStore *storage.FileStore
entries [257]*manifestTrieEntry // indexed by first character of basePath, entries[256] is the empty basePath entry
ref storage.Address // if ref != nil, it is stored
encrypted bool
decrypt DecryptFunc
}
func newManifestTrieEntry(entry *ManifestEntry, subtrie *manifestTrie) *manifestTrieEntry {
return &manifestTrieEntry{
ManifestEntry: *entry,
subtrie: subtrie,
}
}
type manifestTrieEntry struct {
ManifestEntry
subtrie *manifestTrie
}
func loadManifest(ctx context.Context, fileStore *storage.FileStore, addr storage.Address, quitC chan bool, decrypt DecryptFunc) (trie *manifestTrie, err error) { // non-recursive, subtrees are downloaded on-demand
log.Trace("manifest lookup", "addr", addr)
// retrieve manifest via FileStore
manifestReader, isEncrypted := fileStore.Retrieve(ctx, addr)
log.Trace("reader retrieved", "addr", addr)
return readManifest(manifestReader, addr, fileStore, isEncrypted, quitC, decrypt)
}
func readManifest(mr storage.LazySectionReader, addr storage.Address, fileStore *storage.FileStore, isEncrypted bool, quitC chan bool, decrypt DecryptFunc) (trie *manifestTrie, err error) { // non-recursive, subtrees are downloaded on-demand
// TODO check size for oversized manifests
size, err := mr.Size(mr.Context(), quitC)
if err != nil { // size could not be determined
// not being able to determine the size means we don't have the root chunk
log.Trace("manifest not found", "addr", addr)
err = fmt.Errorf("manifest not found")
return
}
if size > manifestSizeLimit {
log.Warn("manifest exceeds size limit", "addr", addr, "size", size, "limit", manifestSizeLimit)
err = fmt.Errorf("Manifest size of %v bytes exceeds the %v byte limit", size, manifestSizeLimit)
return
}
manifestData := make([]byte, size)
read, err := mr.Read(manifestData)
if int64(read) < size {
log.Trace("manifest not found", "addr", addr)
if err == nil {
err = fmt.Errorf("Manifest retrieval cut short: read %v, expect %v", read, size)
}
return
}
log.Debug("manifest retrieved", "addr", addr)
var man struct {
Entries []*manifestTrieEntry `json:"entries"`
}
err = json.Unmarshal(manifestData, &man)
if err != nil {
err = fmt.Errorf("Manifest %v is malformed: %v", addr.Log(), err)
log.Trace("malformed manifest", "addr", addr)
return
}
log.Trace("manifest entries", "addr", addr, "len", len(man.Entries))
trie = &manifestTrie{
fileStore: fileStore,
encrypted: isEncrypted,
decrypt: decrypt,
}
for _, entry := range man.Entries {
err = trie.addEntry(entry, quitC)
if err != nil {
return
}
}
return
}
func (mt *manifestTrie) addEntry(entry *manifestTrieEntry, quitC chan bool) error {
mt.ref = nil // trie modified, hash needs to be re-calculated on demand
if entry.ManifestEntry.Access != nil {
if mt.decrypt == nil {
return errors.New("dont have decryptor")
}
err := mt.decrypt(&entry.ManifestEntry)
if err != nil {
return err
}
}
if len(entry.Path) == 0 {
mt.entries[256] = entry
return nil
}
b := entry.Path[0]
oldentry := mt.entries[b]
if (oldentry == nil) || (oldentry.Path == entry.Path && oldentry.ContentType != ManifestType) {
mt.entries[b] = entry
return nil
}
cpl := 0
for (len(entry.Path) > cpl) && (len(oldentry.Path) > cpl) && (entry.Path[cpl] == oldentry.Path[cpl]) {
cpl++
}
if (oldentry.ContentType == ManifestType) && (cpl == len(oldentry.Path)) {
if mt.loadSubTrie(oldentry, quitC) != nil {
return nil
}
entry.Path = entry.Path[cpl:]
oldentry.subtrie.addEntry(entry, quitC)
oldentry.Hash = ""
return nil
}
commonPrefix := entry.Path[:cpl]
subtrie := &manifestTrie{
fileStore: mt.fileStore,
encrypted: mt.encrypted,
}
entry.Path = entry.Path[cpl:]
oldentry.Path = oldentry.Path[cpl:]
subtrie.addEntry(entry, quitC)
subtrie.addEntry(oldentry, quitC)
mt.entries[b] = newManifestTrieEntry(&ManifestEntry{
Path: commonPrefix,
ContentType: ManifestType,
}, subtrie)
return nil
}
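// Worked example (illustrative): adding "readme.md" and then "readinglist.txt"
// forks at the common prefix "read": entries['r'] becomes a manifest-typed entry
// with Path "read" whose subtrie holds "me.md" and "inglist.txt". Requesting
// "read" then resolves with http.StatusMultipleChoices, which HandleGetFile
// renders via ShowMultipleChoices.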
func (mt *manifestTrie) getCountLast() (cnt int, entry *manifestTrieEntry) {
for _, e := range &mt.entries {
if e != nil {
cnt++
entry = e
}
}
return
}
func (mt *manifestTrie) deleteEntry(path string, quitC chan bool) {
mt.ref = nil // trie modified, hash needs to be re-calculated on demand
if len(path) == 0 {
mt.entries[256] = nil
return
}
b := path[0]
entry := mt.entries[b]
if entry == nil {
return
}
if entry.Path == path {
mt.entries[b] = nil
return
}
epl := len(entry.Path)
if (entry.ContentType == ManifestType) && (len(path) >= epl) && (path[:epl] == entry.Path) {
if mt.loadSubTrie(entry, quitC) != nil {
return
}
entry.subtrie.deleteEntry(path[epl:], quitC)
entry.Hash = ""
// remove subtree if it has less than 2 elements
cnt, lastentry := entry.subtrie.getCountLast()
if cnt < 2 {
if lastentry != nil {
lastentry.Path = entry.Path + lastentry.Path
}
mt.entries[b] = lastentry
}
}
}
func (mt *manifestTrie) recalcAndStore() error {
if mt.ref != nil {
return nil
}
var buffer bytes.Buffer
buffer.WriteString(`{"entries":[`)
list := &Manifest{}
for _, entry := range &mt.entries {
if entry != nil {
if entry.Hash == "" { // TODO: paralellize
err := entry.subtrie.recalcAndStore()
if err != nil {
return err
}
entry.Hash = entry.subtrie.ref.Hex()
}
list.Entries = append(list.Entries, entry.ManifestEntry)
}
}
manifest, err := json.Marshal(list)
if err != nil {
return err
}
sr := bytes.NewReader(manifest)
ctx := context.TODO()
addr, wait, err2 := mt.fileStore.Store(ctx, sr, int64(len(manifest)), mt.encrypted)
if err2 != nil {
return err2
}
err2 = wait(ctx)
mt.ref = addr
return err2
}
func (mt *manifestTrie) loadSubTrie(entry *manifestTrieEntry, quitC chan bool) (err error) {
if entry.ManifestEntry.Access != nil {
if mt.decrypt == nil {
return errors.New("dont have decryptor")
}
err := mt.decrypt(&entry.ManifestEntry)
if err != nil {
return err
}
}
if entry.subtrie == nil {
hash := common.Hex2Bytes(entry.Hash)
entry.subtrie, err = loadManifest(context.TODO(), mt.fileStore, hash, quitC, mt.decrypt)
entry.Hash = "" // might not match, should be recalculated
}
return
}
func (mt *manifestTrie) listWithPrefixInt(prefix, rp string, quitC chan bool, cb func(entry *manifestTrieEntry, suffix string)) error {
plen := len(prefix)
var start, stop int
if plen == 0 {
start = 0
stop = 256
} else {
start = int(prefix[0])
stop = start
}
for i := start; i <= stop; i++ {
select {
case <-quitC:
return fmt.Errorf("aborted")
default:
}
entry := mt.entries[i]
if entry != nil {
epl := len(entry.Path)
if entry.ContentType == ManifestType {
l := plen
if epl < l {
l = epl
}
if prefix[:l] == entry.Path[:l] {
err := mt.loadSubTrie(entry, quitC)
if err != nil {
return err
}
err = entry.subtrie.listWithPrefixInt(prefix[l:], rp+entry.Path[l:], quitC, cb)
if err != nil {
return err
}
}
} else {
if (epl >= plen) && (prefix == entry.Path[:plen]) {
cb(entry, rp+entry.Path[plen:])
}
}
}
}
return nil
}
func (mt *manifestTrie) listWithPrefix(prefix string, quitC chan bool, cb func(entry *manifestTrieEntry, suffix string)) (err error) {
return mt.listWithPrefixInt(prefix, "", quitC, cb)
}
func (mt *manifestTrie) findPrefixOf(path string, quitC chan bool) (entry *manifestTrieEntry, pos int) {
log.Trace(fmt.Sprintf("findPrefixOf(%s)", path))
if len(path) == 0 {
return mt.entries[256], 0
}
// see if the first char is in the manifest entries
b := path[0]
entry = mt.entries[b]
if entry == nil {
return mt.entries[256], 0
}
epl := len(entry.Path)
log.Trace(fmt.Sprintf("path = %v entry.Path = %v epl = %v", path, entry.Path, epl))
if len(path) <= epl {
if entry.Path[:len(path)] == path {
if entry.ContentType == ManifestType {
err := mt.loadSubTrie(entry, quitC)
if err == nil && entry.subtrie != nil {
subentries := entry.subtrie.entries
for i := 0; i < len(subentries); i++ {
sub := subentries[i]
if sub != nil && sub.Path == "" {
return sub, len(path)
}
}
}
entry.Status = http.StatusMultipleChoices
}
pos = len(path)
return
}
return nil, 0
}
if path[:epl] == entry.Path {
log.Trace(fmt.Sprintf("entry.ContentType = %v", entry.ContentType))
// the subentry is a manifest, load the subtrie
if entry.ContentType == ManifestType && (strings.Contains(entry.Path, path) || strings.Contains(path, entry.Path)) {
err := mt.loadSubTrie(entry, quitC)
if err != nil {
return nil, 0
}
sub, pos := entry.subtrie.findPrefixOf(path[epl:], quitC)
if sub != nil {
entry = sub
pos += epl
return sub, pos
} else if path == entry.Path {
entry.Status = http.StatusMultipleChoices
}
} else {
// entry is not a manifest, return it
if path != entry.Path {
return nil, 0
}
}
}
return nil, 0
}
// A file system manifest always contains regularized paths:
// no leading or trailing slashes, and only single slashes inside.
func RegularSlashes(path string) (res string) {
for i := 0; i < len(path); i++ {
if (path[i] != '/') || ((i > 0) && (path[i-1] != '/')) {
res = res + path[i:i+1]
}
}
if (len(res) > 0) && (res[len(res)-1] == '/') {
res = res[:len(res)-1]
}
return
}
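// Example (illustrative): RegularSlashes("//read//me.md/") returns "read/me.md".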
func (mt *manifestTrie) getEntry(spath string) (entry *manifestTrieEntry, fullpath string) {
path := RegularSlashes(spath)
var pos int
quitC := make(chan bool)
entry, pos = mt.findPrefixOf(path, quitC)
return entry, path[:pos]
}


@ -25,7 +25,8 @@ import (
"strings"
"testing"
"github.com/ethereum/go-ethereum/swarm/storage"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/storage"
)
func manifest(paths ...string) (manifestReader storage.LazySectionReader) {
@ -42,7 +43,7 @@ func manifest(paths ...string) (manifestReader storage.LazySectionReader) {
func testGetEntry(t *testing.T, path, match string, multiple bool, paths ...string) *manifestTrie {
quitC := make(chan bool)
fileStore := storage.NewFileStore(nil, storage.NewFileStoreParams())
fileStore := storage.NewFileStore(nil, storage.NewFileStoreParams(), chunk.NewTags())
ref := make([]byte, fileStore.HashSize())
trie, err := readManifest(manifest(paths...), ref, fileStore, false, quitC, NOOPDecrypt)
if err != nil {
@ -99,7 +100,7 @@ func TestGetEntry(t *testing.T) {
func TestExactMatch(t *testing.T) {
quitC := make(chan bool)
mf := manifest("shouldBeExactMatch.css", "shouldBeExactMatch.css.map")
fileStore := storage.NewFileStore(nil, storage.NewFileStoreParams())
fileStore := storage.NewFileStore(nil, storage.NewFileStoreParams(), chunk.NewTags())
ref := make([]byte, fileStore.HashSize())
trie, err := readManifest(mf, ref, fileStore, false, quitC, nil)
if err != nil {
@ -132,7 +133,7 @@ func TestAddFileWithManifestPath(t *testing.T) {
reader := &storage.LazyTestSectionReader{
SectionReader: io.NewSectionReader(bytes.NewReader(manifest), 0, int64(len(manifest))),
}
fileStore := storage.NewFileStore(nil, storage.NewFileStoreParams())
fileStore := storage.NewFileStore(nil, storage.NewFileStoreParams(), chunk.NewTags())
ref := make([]byte, fileStore.HashSize())
trie, err := readManifest(reader, ref, fileStore, false, nil, NOOPDecrypt)
if err != nil {

(image file changed: 4.0 KiB before, 4.0 KiB after)


@ -23,7 +23,7 @@ import (
"strings"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/swarm/storage"
"github.com/ethersphere/swarm/storage"
)
//matches hex swarm hashes


@ -21,22 +21,20 @@ import (
"reflect"
"testing"
"github.com/ethereum/go-ethereum/swarm/storage"
"github.com/ethersphere/swarm/storage"
)
func TestParseURI(t *testing.T) {
type test struct {
uri string
expectURI *URI
expectErr bool
expectRaw bool
expectImmutable bool
expectList bool
expectHash bool
expectDeprecatedRaw bool
expectDeprecatedImmutable bool
expectValidKey bool
expectAddr storage.Address
uri string
expectURI *URI
expectErr bool
expectRaw bool
expectImmutable bool
expectList bool
expectHash bool
expectValidKey bool
expectAddr storage.Address
}
tests := []test{
{


@ -1,7 +1,11 @@
os: Visual Studio 2015
branches:
only:
- master
# Clone directly into GOPATH.
clone_folder: C:\gopath\src\github.com\ethereum\go-ethereum
clone_folder: C:\gopath\src\github.com\ethersphere\swarm
clone_depth: 5
version: "{branch}.{build}"
environment:
@ -9,12 +13,12 @@ environment:
GOPATH: C:\gopath
CC: gcc.exe
matrix:
- GETH_ARCH: amd64
- APP_ARCH: amd64
MSYS2_ARCH: x86_64
MSYS2_BITS: 64
MSYSTEM: MINGW64
PATH: C:\msys64\mingw64\bin\;C:\Program Files (x86)\NSIS\;%PATH%
- GETH_ARCH: 386
- APP_ARCH: 386
MSYS2_ARCH: i686
MSYS2_BITS: 32
MSYSTEM: MINGW32
@ -23,8 +27,8 @@ environment:
install:
- git submodule update --init
- rmdir C:\go /s /q
- appveyor DownloadFile https://storage.googleapis.com/golang/go1.11.2.windows-%GETH_ARCH%.zip
- 7z x go1.11.2.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
- appveyor DownloadFile https://dl.google.com/go/go1.12.windows-%APP_ARCH%.zip
- 7z x go1.12.windows-%APP_ARCH%.zip -y -oC:\ > NUL
- go version
- gcc --version
@ -32,8 +36,7 @@ build_script:
- go run build\ci.go install
after_build:
- go run build\ci.go archive -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
- go run build\ci.go nsis -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
- go run build\ci.go archive -type zip -signer WINDOWS_SIGNING_KEY -upload ethswarm/builds
test_script:
- set CGO_ENABLED=1


@ -61,7 +61,7 @@ const (
)
// BaseHasherFunc is a hash.Hash constructor function used for the base hash of the BMT.
// implemented by Keccak256 SHA3 sha3.NewKeccak256
// implemented by Keccak256 SHA3 sha3.NewLegacyKeccak256
type BaseHasherFunc func() hash.Hash
// Hasher a reusable hasher for fixed maximum size chunks representing a BMT


@ -26,8 +26,8 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/crypto/sha3"
"github.com/ethereum/go-ethereum/swarm/testutil"
"github.com/ethersphere/swarm/testutil"
"golang.org/x/crypto/sha3"
)
// the actual data length generated (could be longer than max datalength of the BMT)
@ -44,7 +44,7 @@ var counts = []int{1, 2, 3, 4, 5, 8, 9, 15, 16, 17, 32, 37, 42, 53, 63, 64, 65,
// calculates the Keccak256 SHA3 hash of the data
func sha3hash(data ...[]byte) []byte {
h := sha3.NewKeccak256()
h := sha3.NewLegacyKeccak256()
return doSum(h, nil, data...)
}
@ -121,7 +121,7 @@ func TestRefHasher(t *testing.T) {
t.Run(fmt.Sprintf("%d_segments_%d_bytes", segmentCount, length), func(t *testing.T) {
data := testutil.RandomBytes(i, length)
expected := x.expected(data)
actual := NewRefHasher(sha3.NewKeccak256, segmentCount).Hash(data)
actual := NewRefHasher(sha3.NewLegacyKeccak256, segmentCount).Hash(data)
if !bytes.Equal(actual, expected) {
t.Fatalf("expected %x, got %x", expected, actual)
}
@ -133,7 +133,7 @@ func TestRefHasher(t *testing.T) {
// tests if hasher responds with correct hash comparing the reference implementation return value
func TestHasherEmptyData(t *testing.T) {
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
var data []byte
for _, count := range counts {
t.Run(fmt.Sprintf("%d_segments", count), func(t *testing.T) {
@ -153,7 +153,7 @@ func TestHasherEmptyData(t *testing.T) {
// tests sequential write with entire max size written in one go
func TestSyncHasherCorrectness(t *testing.T) {
data := testutil.RandomBytes(1, BufferSize)
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
size := hasher().Size()
var err error
@ -179,7 +179,7 @@ func TestSyncHasherCorrectness(t *testing.T) {
// tests order-neutral concurrent writes with entire max size written in one go
func TestAsyncCorrectness(t *testing.T) {
data := testutil.RandomBytes(1, BufferSize)
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
size := hasher().Size()
whs := []whenHash{first, last, random}
@ -226,7 +226,7 @@ func TestHasherReuse(t *testing.T) {
// tests if bmt reuse is not corrupting result
func testHasherReuse(poolsize int, t *testing.T) {
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
pool := NewTreePool(hasher, segmentCount, poolsize)
defer pool.Drain(0)
bmt := New(pool)
@ -243,7 +243,7 @@ func testHasherReuse(poolsize int, t *testing.T) {
// Tests if pool can be cleanly reused even in concurrent use by several hasher
func TestBMTConcurrentUse(t *testing.T) {
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
pool := NewTreePool(hasher, segmentCount, PoolSize)
defer pool.Drain(0)
cycles := 100
@ -277,7 +277,7 @@ LOOP:
// Tests BMT Hasher io.Writer interface is working correctly
// even multiple short random write buffers
func TestBMTWriterBuffers(t *testing.T) {
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
for _, count := range counts {
t.Run(fmt.Sprintf("%d_segments", count), func(t *testing.T) {
@ -410,7 +410,7 @@ func BenchmarkPool(t *testing.B) {
// benchmarks simple sha3 hash on chunks
func benchmarkSHA3(t *testing.B, n int) {
data := testutil.RandomBytes(1, n)
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
h := hasher()
t.ReportAllocs()
@ -426,7 +426,7 @@ func benchmarkSHA3(t *testing.B, n int) {
// the premise is that this is the minimum computation needed for a BMT
// therefore this serves as a theoretical optimum for concurrent implementations
func benchmarkBMTBaseline(t *testing.B, n int) {
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
hashSize := hasher().Size()
data := testutil.RandomBytes(1, hashSize)
@ -453,7 +453,7 @@ func benchmarkBMTBaseline(t *testing.B, n int) {
// benchmarks BMT Hasher
func benchmarkBMT(t *testing.B, n int) {
data := testutil.RandomBytes(1, n)
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
pool := NewTreePool(hasher, segmentCount, PoolSize)
bmt := New(pool)
@ -467,7 +467,7 @@ func benchmarkBMT(t *testing.B, n int) {
// benchmarks BMT hasher with asynchronous concurrent segment/section writes
func benchmarkBMTAsync(t *testing.B, n int, wh whenHash, double bool) {
data := testutil.RandomBytes(1, n)
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
pool := NewTreePool(hasher, segmentCount, PoolSize)
bmt := New(pool).NewAsyncWriter(double)
idxs, segments := splitAndShuffle(bmt.SectionSize(), data)
@ -485,7 +485,7 @@ func benchmarkBMTAsync(t *testing.B, n int, wh whenHash, double bool) {
// benchmarks 100 concurrent bmt hashes with pool capacity
func benchmarkPool(t *testing.B, poolsize, n int) {
data := testutil.RandomBytes(1, n)
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
pool := NewTreePool(hasher, segmentCount, poolsize)
cycles := 100
@ -508,7 +508,7 @@ func benchmarkPool(t *testing.B, poolsize, n int) {
// benchmarks the reference hasher
func benchmarkRefHasher(t *testing.B, n int) {
data := testutil.RandomBytes(1, n)
hasher := sha3.NewKeccak256
hasher := sha3.NewLegacyKeccak256
rbmt := NewRefHasher(hasher, 128)
t.ReportAllocs()


@ -7,27 +7,34 @@ Canonical.
Packages of develop branch commits have suffix -unstable and cannot be installed alongside
the stable version. Switching between release streams requires user intervention.
## Launchpad
The packages are built and served by launchpad.net. We generate a Debian source package
for each distribution and upload it. Their builder picks up the source package, builds it
and installs the new version into the PPA repository. Launchpad requires a valid signature
by a team member for source package uploads. The signing key is stored in an environment
variable which Travis CI makes available to certain builds.
by a team member for source package uploads.
The signing key is stored in an environment variable which Travis CI makes available to
certain builds. Since Travis CI doesn't support FTP, SFTP is used to transfer the
packages. To set this up yourself, you need to create a Launchpad user and add a GPG key
and SSH key to it. Then encode both keys as base64 and configure 'secret' environment
variables `PPA_SIGNING_KEY` and `PPA_SSH_KEY` on Travis.
We want to build go-ethereum with the most recent version of Go, irrespective of the Go
version that is available in the main Ubuntu repository. In order to make this possible,
our PPA depends on the ~gophers/ubuntu/archive PPA. Our source package build-depends on
golang-1.10, which is co-installable alongside the regular golang package. PPA dependencies
golang-1.11, which is co-installable alongside the regular golang package. PPA dependencies
can be edited at https://launchpad.net/%7Eethereum/+archive/ubuntu/ethereum/+edit-dependencies
## Building Packages Locally (for testing)
You need to run Ubuntu to do test packaging.
Add the gophers PPA and install Go 1.10 and Debian packaging tools:
Add the gophers PPA and install Go 1.11 and Debian packaging tools:
$ sudo apt-add-repository ppa:gophers/ubuntu/archive
$ sudo apt-get update
$ sudo apt-get install build-essential golang-1.10 devscripts debhelper
$ sudo apt-get install build-essential golang-1.11 devscripts debhelper python-bzrlib python-paramiko
Create the source packages:


@ -29,9 +29,6 @@ Available commands are:
archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -upload dest ] -- archives build artifacts
importkeys -- imports signing keys from env
debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package
nsis -- creates a Windows NSIS installer
aar [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an Android archive
xcode [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an iOS XCode framework
xgo [ -alltools ] [ options ] -- cross builds according to options
purge [ -store blobstore ] [ -days threshold ] -- purges old archives from the blobstore
@ -41,7 +38,6 @@ For all commands, -n prevents execution of external programs (dry run mode).
package main
import (
"bufio"
"bytes"
"encoding/base64"
"flag"
@ -58,68 +54,17 @@ import (
"strings"
"time"
"github.com/ethereum/go-ethereum/internal/build"
"github.com/ethereum/go-ethereum/params"
sv "github.com/ethereum/go-ethereum/swarm/version"
"github.com/ethersphere/swarm/internal/build"
sv "github.com/ethersphere/swarm/version"
)
var (
// Files that end up in the geth*.zip archive.
gethArchiveFiles = []string{
"COPYING",
executablePath("geth"),
}
// Files that end up in the geth-alltools*.zip archive.
allToolsArchiveFiles = []string{
"COPYING",
executablePath("abigen"),
executablePath("bootnode"),
executablePath("evm"),
executablePath("geth"),
executablePath("puppeth"),
executablePath("rlpdump"),
executablePath("wnode"),
}
// Files that end up in the swarm*.zip archive.
swarmArchiveFiles = []string{
"COPYING",
executablePath("swarm"),
}
// A debian package is created for all executables listed here.
debExecutables = []debExecutable{
{
BinaryName: "abigen",
Description: "Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages.",
},
{
BinaryName: "bootnode",
Description: "Ethereum bootnode.",
},
{
BinaryName: "evm",
Description: "Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode.",
},
{
BinaryName: "geth",
Description: "Ethereum CLI client.",
},
{
BinaryName: "puppeth",
Description: "Ethereum private network manager.",
},
{
BinaryName: "rlpdump",
Description: "Developer utility tool that prints RLP structures.",
},
{
BinaryName: "wnode",
Description: "Ethereum Whisper diagnostic tool",
},
}
// A debian package is created for all executables listed here.
debSwarmExecutables = []debExecutable{
{
@ -129,12 +74,6 @@ var (
},
}
debEthereum = debPackage{
Name: "ethereum",
Version: params.Version,
Executables: debExecutables,
}
debSwarm = debPackage{
Name: "ethereum-swarm",
Version: sv.Version,
@ -144,11 +83,10 @@ var (
// Debian meta packages to build and push to Ubuntu PPA
debPackages = []debPackage{
debSwarm,
debEthereum,
}
// Packages to be cross-compiled by the xgo command
allCrossCompiledArchiveFiles = append(allToolsArchiveFiles, swarmArchiveFiles...)
allCrossCompiledArchiveFiles = swarmArchiveFiles
// Distros for which packages are created.
// Note: vivid is unsupported because there is no golang-1.6 package for it.
@ -156,7 +94,7 @@ var (
// Note: yakkety is unsupported because it was officially deprecated on lanchpad.
// Note: zesty is unsupported because it was officially deprecated on lanchpad.
// Note: artful is unsupported because it was officially deprecated on lanchpad.
debDistros = []string{"trusty", "xenial", "bionic", "cosmic"}
debDistros = []string{"trusty", "xenial", "bionic", "cosmic", "disco"}
)
var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin"))
@ -188,12 +126,6 @@ func main() {
doArchive(os.Args[2:])
case "debsrc":
doDebianSource(os.Args[2:])
case "nsis":
doWindowsInstaller(os.Args[2:])
case "aar":
doAndroidArchive(os.Args[2:])
case "xcode":
doXCodeFramework(os.Args[2:])
case "xgo":
doXgo(os.Args[2:])
case "purge":
@ -384,7 +316,7 @@ func doArchive(cmdline []string) {
arch = flag.String("arch", runtime.GOARCH, "Architecture cross packaging")
atype = flag.String("type", "zip", "Type of archive to write (zip|tar)")
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. LINUX_SIGNING_KEY)`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "ethswarm/builds")`)
ext string
)
flag.CommandLine.Parse(cmdline)
@ -400,24 +332,15 @@ func doArchive(cmdline []string) {
var (
env = build.Env()
basegeth = archiveBasename(*arch, params.ArchiveVersion(env.Commit))
geth = "geth-" + basegeth + ext
alltools = "geth-alltools-" + basegeth + ext
baseswarm = archiveBasename(*arch, sv.ArchiveVersion(env.Commit))
swarm = "swarm-" + baseswarm + ext
)
maybeSkipArchive(env)
if err := build.WriteArchive(geth, gethArchiveFiles); err != nil {
log.Fatal(err)
}
if err := build.WriteArchive(alltools, allToolsArchiveFiles); err != nil {
log.Fatal(err)
}
if err := build.WriteArchive(swarm, swarmArchiveFiles); err != nil {
log.Fatal(err)
}
for _, archive := range []string{geth, alltools, swarm} {
for _, archive := range []string{swarm} {
if err := archiveUpload(archive, *upload, *signer); err != nil {
log.Fatal(err)
}
@ -429,23 +352,14 @@ func archiveBasename(arch string, archiveVersion string) string {
if arch == "arm" {
platform += os.Getenv("GOARM")
}
if arch == "android" {
platform = "android-all"
}
if arch == "ios" {
platform = "ios-all"
}
return platform + "-" + archiveVersion
}
func archiveUpload(archive string, blobstore string, signer string) error {
// If signing was requested, generate the signature files
if signer != "" {
pgpkey, err := base64.StdEncoding.DecodeString(os.Getenv(signer))
if err != nil {
return fmt.Errorf("invalid base64 %s", signer)
}
if err := build.PGPSignFile(archive, archive+".asc", string(pgpkey)); err != nil {
key := getenvBase64(signer)
if err := build.PGPSignFile(archive, archive+".asc", string(key)); err != nil {
return err
}
}
@ -478,7 +392,9 @@ func maybeSkipArchive(env build.Environment) {
log.Printf("skipping because this is a cron job")
os.Exit(0)
}
if env.Branch != "master" && !strings.HasPrefix(env.Tag, "v1.") {
validArchiveTag, _ := regexp.MatchString("v([0-9]+).([0-9]+).([0-9]+)", env.Tag)
if env.Branch != "master" && !validArchiveTag {
log.Printf("skipping because branch %q, tag %q is not on the whitelist", env.Branch, env.Tag)
os.Exit(0)
}
@ -488,7 +404,8 @@ func maybeSkipArchive(env build.Environment) {
func doDebianSource(cmdline []string) {
var (
signer = flag.String("signer", "", `Signing key name, also used as package author`)
upload = flag.String("upload", "", `Where to upload the source package (usually "ppa:ethereum/ethereum")`)
upload = flag.String("upload", "", `Where to upload the source package (usually "ethereum/ethereum")`)
sshUser = flag.String("sftp-user", "", `Username for SFTP upload (usually "geth-ci")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
now = time.Now()
)
@ -498,11 +415,7 @@ func doDebianSource(cmdline []string) {
maybeSkipArchive(env)
// Import the signing key.
if b64key := os.Getenv("PPA_SIGNING_KEY"); b64key != "" {
key, err := base64.StdEncoding.DecodeString(b64key)
if err != nil {
log.Fatal("invalid base64 PPA_SIGNING_KEY")
}
if key := getenvBase64("PPA_SIGNING_KEY"); len(key) > 0 {
gpg := exec.Command("gpg", "--import")
gpg.Stdin = bytes.NewReader(key)
build.MustRun(gpg)
@ -513,22 +426,58 @@ func doDebianSource(cmdline []string) {
for _, distro := range debDistros {
meta := newDebMetadata(distro, *signer, env, now, pkg.Name, pkg.Version, pkg.Executables)
pkgdir := stageDebianSource(*workdir, meta)
debuild := exec.Command("debuild", "-S", "-sa", "-us", "-uc")
debuild := exec.Command("debuild", "-S", "-sa", "-us", "-uc", "-d", "-Zxz")
debuild.Dir = pkgdir
build.MustRun(debuild)
changes := fmt.Sprintf("%s_%s_source.changes", meta.Name(), meta.VersionString())
changes = filepath.Join(*workdir, changes)
var (
basename = fmt.Sprintf("%s_%s", meta.Name(), meta.VersionString())
source = filepath.Join(*workdir, basename+".tar.xz")
dsc = filepath.Join(*workdir, basename+".dsc")
changes = filepath.Join(*workdir, basename+"_source.changes")
)
if *signer != "" {
build.MustRunCommand("debsign", changes)
}
if *upload != "" {
build.MustRunCommand("dput", *upload, changes)
ppaUpload(*workdir, *upload, *sshUser, []string{source, dsc, changes})
}
}
}
}
func ppaUpload(workdir, ppa, sshUser string, files []string) {
p := strings.Split(ppa, "/")
if len(p) != 2 {
log.Fatal("-upload PPA name must contain single /")
}
if sshUser == "" {
sshUser = p[0]
}
incomingDir := fmt.Sprintf("~%s/ubuntu/%s", p[0], p[1])
// Create the SSH identity file if it doesn't exist.
var idfile string
if sshkey := getenvBase64("PPA_SSH_KEY"); len(sshkey) > 0 {
idfile = filepath.Join(workdir, "sshkey")
if _, err := os.Stat(idfile); os.IsNotExist(err) {
ioutil.WriteFile(idfile, sshkey, 0600)
}
}
// Upload
dest := sshUser + "@ppa.launchpad.net"
if err := build.UploadSFTP(idfile, dest, incomingDir, files); err != nil {
log.Fatal(err)
}
}
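// Example (illustrative): ppaUpload(workdir, "ethereum/ethereum", "", files)
// derives sshUser "ethereum" from the PPA owner and uploads the files to
// ~ethereum/ubuntu/ethereum on ppa.launchpad.net over SFTP.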
func getenvBase64(variable string) []byte {
dec, err := base64.StdEncoding.DecodeString(os.Getenv(variable))
if err != nil {
log.Fatal("invalid base64 " + variable)
}
return []byte(dec)
}
func makeWorkdir(wdflag string) string {
var err error
if wdflag != "" {
@ -684,300 +633,6 @@ func stageDebianSource(tmpdir string, meta debMetadata) (pkgdir string) {
return pkgdir
}
// Windows installer
func doWindowsInstaller(cmdline []string) {
// Parse the flags and make skip installer generation on PRs
var (
arch = flag.String("arch", runtime.GOARCH, "Architecture for cross build packaging")
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. WINDOWS_SIGNING_KEY)`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
workdir = flag.String("workdir", "", `Output directory for packages (uses temp dir if unset)`)
)
flag.CommandLine.Parse(cmdline)
*workdir = makeWorkdir(*workdir)
env := build.Env()
maybeSkipArchive(env)
// Aggregate binaries that are included in the installer
var (
devTools []string
allTools []string
gethTool string
)
for _, file := range allToolsArchiveFiles {
if file == "COPYING" { // license, copied later
continue
}
allTools = append(allTools, filepath.Base(file))
if filepath.Base(file) == "geth.exe" {
gethTool = file
} else {
devTools = append(devTools, file)
}
}
// Render NSIS scripts: Installer NSIS contains two installer sections,
// first section contains the geth binary, second section holds the dev tools.
templateData := map[string]interface{}{
"License": "COPYING",
"Geth": gethTool,
"DevTools": devTools,
}
build.Render("build/nsis.geth.nsi", filepath.Join(*workdir, "geth.nsi"), 0644, nil)
build.Render("build/nsis.install.nsh", filepath.Join(*workdir, "install.nsh"), 0644, templateData)
build.Render("build/nsis.uninstall.nsh", filepath.Join(*workdir, "uninstall.nsh"), 0644, allTools)
build.Render("build/nsis.pathupdate.nsh", filepath.Join(*workdir, "PathUpdate.nsh"), 0644, nil)
build.Render("build/nsis.envvarupdate.nsh", filepath.Join(*workdir, "EnvVarUpdate.nsh"), 0644, nil)
build.CopyFile(filepath.Join(*workdir, "SimpleFC.dll"), "build/nsis.simplefc.dll", 0755)
build.CopyFile(filepath.Join(*workdir, "COPYING"), "COPYING", 0755)
// Build the installer. This assumes that all the needed files have been previously
// built (don't mix building and packaging to keep cross compilation complexity to a
// minimum).
version := strings.Split(params.Version, ".")
if env.Commit != "" {
version[2] += "-" + env.Commit[:8]
}
installer, _ := filepath.Abs("geth-" + archiveBasename(*arch, params.ArchiveVersion(env.Commit)) + ".exe")
build.MustRunCommand("makensis.exe",
"/DOUTPUTFILE="+installer,
"/DMAJORVERSION="+version[0],
"/DMINORVERSION="+version[1],
"/DBUILDVERSION="+version[2],
"/DARCH="+*arch,
filepath.Join(*workdir, "geth.nsi"),
)
// Sign and publish installer.
if err := archiveUpload(installer, *upload, *signer); err != nil {
log.Fatal(err)
}
}
// Android archives
func doAndroidArchive(cmdline []string) {
var (
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. ANDROID_SIGNING_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "https://oss.sonatype.org")`)
upload = flag.String("upload", "", `Destination to upload the archive (usually "gethstore/builds")`)
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
// Sanity check that the SDK and NDK are installed and set
if os.Getenv("ANDROID_HOME") == "" {
log.Fatal("Please ensure ANDROID_HOME points to your Android SDK")
}
if os.Getenv("ANDROID_NDK") == "" {
log.Fatal("Please ensure ANDROID_NDK points to your Android NDK")
}
// Build the Android archive and Maven resources
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile", "golang.org/x/mobile/cmd/gobind"))
build.MustRun(gomobileTool("init", "--ndk", os.Getenv("ANDROID_NDK")))
build.MustRun(gomobileTool("bind", "-ldflags", "-s -w", "--target", "android", "--javapkg", "org.ethereum", "-v", "github.com/ethereum/go-ethereum/mobile"))
if *local {
// If we're building locally, copy bundle to build dir and skip Maven
os.Rename("geth.aar", filepath.Join(GOBIN, "geth.aar"))
return
}
meta := newMavenMetadata(env)
build.Render("build/mvn.pom", meta.Package+".pom", 0755, meta)
// Skip Maven deploy and Azure upload for PR builds
maybeSkipArchive(env)
// Sign and upload the archive to Azure
archive := "geth-" + archiveBasename("android", params.ArchiveVersion(env.Commit)) + ".aar"
os.Rename("geth.aar", archive)
if err := archiveUpload(archive, *upload, *signer); err != nil {
log.Fatal(err)
}
// Sign and upload all the artifacts to Maven Central
os.Rename(archive, meta.Package+".aar")
if *signer != "" && *deploy != "" {
// Import the signing key into the local GPG instance
b64key := os.Getenv(*signer)
key, err := base64.StdEncoding.DecodeString(b64key)
if err != nil {
log.Fatalf("invalid base64 %s", *signer)
}
gpg := exec.Command("gpg", "--import")
gpg.Stdin = bytes.NewReader(key)
build.MustRun(gpg)
keyID, err := build.PGPKeyID(string(key))
if err != nil {
log.Fatal(err)
}
// Upload the artifacts to Sonatype and/or Maven Central
repo := *deploy + "/service/local/staging/deploy/maven2"
if meta.Develop {
repo = *deploy + "/content/repositories/snapshots"
}
build.MustRunCommand("mvn", "gpg:sign-and-deploy-file", "-e", "-X",
"-settings=build/mvn.settings", "-Durl="+repo, "-DrepositoryId=ossrh",
"-Dgpg.keyname="+keyID,
"-DpomFile="+meta.Package+".pom", "-Dfile="+meta.Package+".aar")
}
}
func gomobileTool(subcmd string, args ...string) *exec.Cmd {
cmd := exec.Command(filepath.Join(GOBIN, "gomobile"), subcmd)
cmd.Args = append(cmd.Args, args...)
cmd.Env = []string{
"GOPATH=" + build.GOPATH(),
"PATH=" + GOBIN + string(os.PathListSeparator) + os.Getenv("PATH"),
}
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "PATH=") {
continue
}
cmd.Env = append(cmd.Env, e)
}
return cmd
}
type mavenMetadata struct {
Version string
Package string
Develop bool
Contributors []mavenContributor
}
type mavenContributor struct {
Name string
Email string
}
func newMavenMetadata(env build.Environment) mavenMetadata {
// Collect the list of authors from the repo root
contribs := []mavenContributor{}
if authors, err := os.Open("AUTHORS"); err == nil {
defer authors.Close()
scanner := bufio.NewScanner(authors)
for scanner.Scan() {
// Skip any whitespace from the authors list
line := strings.TrimSpace(scanner.Text())
if line == "" || line[0] == '#' {
continue
}
// Split the author and insert as a contributor
re := regexp.MustCompile("([^<]+) <(.+)>")
parts := re.FindStringSubmatch(line)
if len(parts) == 3 {
contribs = append(contribs, mavenContributor{Name: parts[1], Email: parts[2]})
}
}
}
// Render the version and package strings
version := params.Version
if isUnstableBuild(env) {
version += "-SNAPSHOT"
}
return mavenMetadata{
Version: version,
Package: "geth-" + version,
Develop: isUnstableBuild(env),
Contributors: contribs,
}
}
// XCode frameworks
func doXCodeFramework(cmdline []string) {
var (
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. IOS_SIGNING_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "trunk")`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
// Build the iOS XCode framework
build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile", "golang.org/x/mobile/cmd/gobind"))
build.MustRun(gomobileTool("init"))
bind := gomobileTool("bind", "-ldflags", "-s -w", "--target", "ios", "--tags", "ios", "-v", "github.com/ethereum/go-ethereum/mobile")
if *local {
// If we're building locally, use the build folder and stop afterwards
bind.Dir, _ = filepath.Abs(GOBIN)
build.MustRun(bind)
return
}
archive := "geth-" + archiveBasename("ios", params.ArchiveVersion(env.Commit))
if err := os.Mkdir(archive, os.ModePerm); err != nil {
log.Fatal(err)
}
bind.Dir, _ = filepath.Abs(archive)
build.MustRun(bind)
build.MustRunCommand("tar", "-zcvf", archive+".tar.gz", archive)
// Skip CocoaPods deploy and Azure upload for PR builds
maybeSkipArchive(env)
// Sign and upload the framework to Azure
if err := archiveUpload(archive+".tar.gz", *upload, *signer); err != nil {
log.Fatal(err)
}
// Prepare and upload a PodSpec to CocoaPods
if *deploy != "" {
meta := newPodMetadata(env, archive)
build.Render("build/pod.podspec", "Geth.podspec", 0755, meta)
build.MustRunCommand("pod", *deploy, "push", "Geth.podspec", "--allow-warnings", "--verbose")
}
}
type podMetadata struct {
Version string
Commit string
Archive string
Contributors []podContributor
}
type podContributor struct {
Name string
Email string
}
func newPodMetadata(env build.Environment, archive string) podMetadata {
// Collect the list of authors from the repo root
contribs := []podContributor{}
if authors, err := os.Open("AUTHORS"); err == nil {
defer authors.Close()
scanner := bufio.NewScanner(authors)
for scanner.Scan() {
// Skip any whitespace from the authors list
line := strings.TrimSpace(scanner.Text())
if line == "" || line[0] == '#' {
continue
}
// Split the author and insert as a contributor
re := regexp.MustCompile("([^<]+) <(.+)>")
parts := re.FindStringSubmatch(line)
if len(parts) == 3 {
contribs = append(contribs, podContributor{Name: parts[1], Email: parts[2]})
}
}
}
version := params.Version
if isUnstableBuild(env) {
version += "-unstable." + env.Buildnum
}
return podMetadata{
Archive: archive,
Version: version,
Commit: env.Commit,
Contributors: contribs,
}
}
// Cross compilation
func doXgo(cmdline []string) {
@ -1034,7 +689,7 @@ func xgoTool(args []string) *exec.Cmd {
func doPurge(cmdline []string) {
var (
store = flag.String("store", "", `Destination from where to purge archives (usually "gethstore/builds")`)
store = flag.String("store", "", `Destination from where to purge archives (usually "ethswarm/builds")`)
limit = flag.Int("days", 30, `Age threshold above which to delete unstable archives`)
)
flag.CommandLine.Parse(cmdline)


@ -2,11 +2,11 @@ Source: {{.Name}}
Section: science
Priority: extra
Maintainer: {{.Author}}
Build-Depends: debhelper (>= 8.0.0), golang-1.10
Build-Depends: debhelper (>= 8.0.0), golang-1.11
Standards-Version: 3.9.5
Homepage: https://ethereum.org
Vcs-Git: git://github.com/ethereum/go-ethereum.git
Vcs-Browser: https://github.com/ethereum/go-ethereum
Homepage: https://swarm.ethereum.org
Vcs-Git: git://github.com/ethersphere/swarm.git
Vcs-Browser: https://github.com/ethersphere/swarm
{{range .Executables}}
Package: {{$.ExeName .}}


@ -4,8 +4,11 @@
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
# Launchpad rejects Go's access to $HOME/.cache, use custom folder
export GOCACHE=/tmp/go-build
override_dh_auto_build:
build/env.sh /usr/lib/go-1.10/bin/go run build/ci.go install -git-commit={{.Env.Commit}} -git-branch={{.Env.Branch}} -git-tag={{.Env.Tag}} -buildnum={{.Env.Buildnum}} -pull-request={{.Env.IsPullRequest}}
build/env.sh /usr/lib/go-1.11/bin/go run build/ci.go install -git-commit={{.Env.Commit}} -git-branch={{.Env.Branch}} -git-tag={{.Env.Tag}} -buildnum={{.Env.Buildnum}} -pull-request={{.Env.IsPullRequest}}
override_dh_auto_test:


@ -1,5 +0,0 @@
{{.Name}} ({{.VersionString}}) {{.Distro}}; urgency=low
* git build of {{.Env.Commit}}
-- {{.Author}} {{.Time}}

Some files were not shown because too many files have changed in this diff.