Compare commits


315 Commits

Author SHA1 Message Date
4044a8cea4 Merge pull request #2258 from obscuren/release/1.3.4
Homestead Release Candidate
2016-02-29 15:05:37 +01:00
61be63bb9b [release/1.3.4] cmd/utils, params: homestead block
(cherry picked from commit e22fd22c97b4f5d4af118dca3fb2cb6292a520a6)

Conflicts:
	cmd/utils/flags.go
2016-02-29 14:59:30 +01:00
5f7e74d5c8 [release/1.3.4] cmd/utils: lower the min accepted gas price for relay and GPO to 20 shannon
(cherry picked from commit ab92678fb3)
2016-02-29 14:59:20 +01:00
2be2842758 [release/1.3.4] eth/downloader: fix premature exit before notifying all part fetchers
(cherry picked from commit 64ee5763ee)
2016-02-29 13:44:34 +01:00
c2df9d356a [release/1.3.4] crypto/secp256k1: remove dependency on libgmp
Turns out we actually don't need it, USE_NUM_NONE works
because we also set USE_FIELD_INV_BUILTIN.

Conflicts:
	Makefile
	crypto/secp256k1/secp256.go
2016-02-29 13:32:26 +01:00
a4f4846fff [release/1.3.4] eth/downloader: fix header download limiting
Fixes #2201
(cherry picked from commit 26e72b2ccd)
2016-02-29 13:24:33 +01:00
7f83e68b13 [release/1.3.4] eth: fixed homestead tx check
When a block is queried for retrieval we should check whether the
block falls within the frontier rules. If we always used `From`,
retrieving transactions might fail. This PR temporarily changes
everything to `FromFrontier` (safe!).

This is a backport of c616391df2
2016-02-29 13:24:33 +01:00
5570b11398 [release/1.3.4] params: settle the Pi vs Tau dispute
This commit increases the artificial gas floor to 4712388
(cherry picked from commit f954a8b666)
2016-02-29 13:24:33 +01:00
e7fb300053 [release/1.3.4] cmd/geth: bump version v1.3.4 2016-02-29 13:24:32 +01:00
4e0fe48e20 [release/1.3.4] xeth: backward fix for messages 2016-02-24 13:46:37 +01:00
bcf565730b [release/1.3.4] core: Added new TD strategy which mitigate the risk for selfish mining
Assume a miner has 15% of all hashing power and enough control over the
network that, while it cannot stop a message A from propagating, it
**can** send a message B and ensure that most nodes see B before A. The
attacker can then selfish-mine and augment that strategy by giving its
own blocks an advantage.

This change makes the time at which a block is received less relevant
and so the level of control an attacker has over the network no longer
makes a difference.

This replaces the current TD algorithm `B_td > C_td` with the new
algorithm `B_td > C_td || B_td == C_td && rnd < 0.5`.
2016-02-24 13:46:33 +01:00
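
A minimal Go sketch of the tie-breaking rule described above; the function and variable names are illustrative, not the actual go-ethereum identifiers:

```go
package sketch

import (
	"math/big"
	"math/rand"
)

// shouldReorg reports whether a newly imported block B should replace the
// current head C. Ties in total difficulty are broken by a coin flip, so
// controlling propagation order no longer gives an attacker an edge.
func shouldReorg(newTD, currentTD *big.Int) bool {
	cmp := newTD.Cmp(currentTD)
	return cmp > 0 || (cmp == 0 && rand.Float64() < 0.5)
}
```
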
587bafaa9f [release/1.3.4] core, core/vm, crypto: fixes for homestead
* Removed some strange code that didn't apply state reverting properly
* Refactored code setting from vm & state transition to the executioner
* Updated tests

Conflicts:
	common/registrar/ethreg/api.go
	core/tx_pool.go
	core/vm/jit_test.go
2016-02-24 13:46:30 +01:00
7bb496f737 [release/1.3.4] tests: updated homestead tests 2016-02-24 13:46:23 +01:00
61404979ed [release/1.3.4] params, crypto, core, core/vm: homestead consensus protocol changes
* change gas cost for contract creating txs
* invalidate signature with s value greater than secp256k1 N / 2
* OOG contract creation if not enough gas to store code
* new difficulty adjustment algorithm
* new DELEGATECALL op code

Conflicts:
	core/vm/environment.go
	crypto/crypto.go
	crypto/secp256k1/secp256.go
	eth/api.go
2016-02-24 13:46:11 +01:00
300f1e2abf [release/1.3.4] core, core/types, miner: fix transaction nonce-price combo sort 2016-02-24 13:46:06 +01:00
4ce7970340 [release/1.3.4] p2p/discover: fix Windows-specific issue for larger-than-buffer packets
On Windows, UDPConn.ReadFrom returns an error for packets larger
than the receive buffer. The error is not marked temporary, causing
our loop to exit when the first oversized packet arrived. The fix
is to treat this particular error as temporary.

Fixes: #1579, #2087
Updates: #2082

Conflicts:
	p2p/discover/udp_test.go
2016-02-24 13:46:03 +01:00
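
A hedged Go sketch of the read-loop behaviour described above; `isPacketTooBig` and `handlePacket` are hypothetical stand-ins, since the real check is platform specific:

```go
package sketch

import "net"

// isPacketTooBig would detect the Windows-specific "datagram larger than the
// receive buffer" error; this portable placeholder never matches.
func isPacketTooBig(err error) bool { return false }

func handlePacket(from *net.UDPAddr, data []byte) {}

// readLoop keeps listening even when an oversized packet produces an error
// that is not marked temporary, as described in the commit above.
func readLoop(conn *net.UDPConn) {
	buf := make([]byte, 1280)
	for {
		n, from, err := conn.ReadFromUDP(buf)
		if isPacketTooBig(err) {
			continue // treat the oversized-packet error as temporary
		}
		if netErr, ok := err.(net.Error); ok && netErr.Temporary() {
			continue // genuinely temporary error: retry
		}
		if err != nil {
			return // permanent error: stop the loop
		}
		handlePacket(from, buf[:n])
	}
}
```
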
0c17be92fb [release/1.3.4] p2p: backward fix for S256 curve 2016-02-24 13:45:59 +01:00
b5a0cf488c [release/1.3.4] p2p: EIP-8 changes
Conflicts:
	p2p/rlpx.go
2016-02-24 13:45:56 +01:00
b25da7c3f4 [release/1.3.4] p2p/discover: EIP-8 changes
Conflicts:
	p2p/discover/udp_test.go
2016-02-24 13:45:53 +01:00
93077941d3 [release/1.3.4] rlp: add "tail" struct tag 2016-02-24 13:44:56 +01:00
8cb69b9e9b [release/1.3.4] crypto/ecies: make authenticated shared data work
The s2 parameter was not actually written to the MAC.
2016-02-24 13:44:37 +01:00
c541b38fb3 VERSION, cmd/geth: bumped version number 1.3.3 2016-01-05 13:02:07 +01:00
336a4d7b8d core: fix transaction reorg issues within the tx pool 2016-01-05 13:02:07 +01:00
8938768f75 core, eth/downloader: ensure state presence in ancestor lookup 2016-01-05 12:31:45 +01:00
5490437942 Merge branch 'develop' into release/1.3.2
Conflicts:
	VERSION
	cmd/geth/main.go
2015-11-24 13:48:47 +01:00
b0fb48c389 Merge pull request #1988 from bas-vk/issue1971
miner: bugfix where blockhash in receipts and logs is left empty
2015-11-24 10:55:07 +01:00
ae9e9efa31 Merge pull request #1991 from Gustav-Simonsson/common_tests
Update common test files
2015-11-23 10:26:21 +01:00
6bb29aebee Merge pull request #1666 from obscuren/create-transaction
rpc/api, xeth: added signTransaction method
2015-11-20 21:36:56 +01:00
314c031ff2 Merge pull request #1995 from karalabe/parametrize-crosscompile-go
Makefile: individual platforms, configurable Go runtime
2015-11-20 21:10:21 +01:00
fea819f74f Makefile: individual platforms, configurable Go runtime 2015-11-20 16:06:35 +02:00
220b0bf6e5 Update common test files 2015-11-20 12:53:36 +01:00
f16fab91c8 Merge pull request #1953 from karalabe/switch-to-fast-peers
eth/downloader: fetch data proportionally to peer capacity
2015-11-19 18:48:53 +01:00
98cbe1356e miner: bugfix where blockhash in receipts and logs is left empty 2015-11-19 16:02:49 +01:00
b6f5523bdc eth/downloader: fetch data proportionally to peer capacity 2015-11-19 17:01:39 +02:00
4c2933ad82 Merge pull request #1993 from obscuren/remove-legalese
cmd/geth, cmd/utils: removed legalese
2015-11-19 15:29:49 +01:00
7399b138a8 Merge pull request #1923 from karalabe/cleanup-receipt-data-access
core, eth, miner, xeth: clean up tx/receipt db accessors
2015-11-19 15:28:15 +01:00
65bb07fb4e Merge pull request #1980 from fjl/downloader-deliver-hang
eth/downloader: don't hang for spurious deliveries
2015-11-19 15:19:21 +01:00
e86e0ecdc8 core, eth, miner, xeth: clean up tx/receipt db accessors 2015-11-19 16:03:32 +02:00
dd09af27af eth/downloader: run tests in parallel 2015-11-19 14:18:35 +01:00
b7b62d4b3c eth/downloader: also drain stateCh, receiptCh in eth/61 mode
State and receipt deliveries from a previous eth/62+ sync can hang if
the downloader has moved on to syncing with eth/61. Fix this by also
draining the eth/63 channels while waiting for eth/61 data.

A nicer solution would be to take care of the channels in a central
place, but that would involve a major rewrite.
2015-11-19 14:18:35 +01:00
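
A simplified Go sketch of the draining select described above; only the channel names follow the commit message, everything else is invented for illustration:

```go
package sketch

type downloader struct{}

func (d *downloader) process(blocks []byte) {}

// fetchBlocks61 waits for eth/61 block data but also drains the state and
// receipt channels, so senders from an earlier eth/62+ sync never block.
func (d *downloader) fetchBlocks61(blockCh, stateCh, receiptCh <-chan []byte, cancel <-chan struct{}) {
	for {
		select {
		case blocks := <-blockCh:
			d.process(blocks) // the data this sync actually wants
		case <-stateCh:
			// stale state delivery from a previous eth/62+ sync: discard
		case <-receiptCh:
			// stale receipt delivery: discard as well
		case <-cancel:
			return
		}
	}
}
```
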
db52a6a0ff eth: remove workaround for asynchronous processing in the downloader 2015-11-19 14:18:34 +01:00
900da3d800 eth/downloader: don't hang for spurious deliveries
Unexpected deliveries could block indefinitely if they arrived at the
right time. The fix is to ensure that the cancellation channel is
always closed when the sync ends, unblocking any deliveries. Also remove
the atomic check for whether a sync is currently running because it
doesn't help and can be misleading.

Cancelling always seems to break the tests though. The downloader
spawned d.process whenever new data arrived, making it somewhat hard to
track when block processing was actually done. Fix this by running
d.process in a dedicated goroutine that is tied to the lifecycle of the
sync. d.process gets notified of new work by the queue instead of being
invoked all the time. This removes a ton of weird workaround code,
including a hairy use of atomic CAS.
2015-11-19 14:18:34 +01:00
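
A hedged Go sketch of the cancellation pattern described above, with invented names; it only illustrates closing the cancel channel exactly once so blocked deliveries can unblock:

```go
package sketch

import (
	"errors"
	"sync"
)

var errNoSyncActive = errors.New("no sync active")

// syncSession ties the cancel channel to the lifecycle of one sync.
type syncSession struct {
	cancelOnce sync.Once
	cancelCh   chan struct{}
}

func newSyncSession() *syncSession {
	return &syncSession{cancelCh: make(chan struct{})}
}

// finish is called when the sync terminates, successfully or not.
func (s *syncSession) finish() {
	s.cancelOnce.Do(func() { close(s.cancelCh) })
}

// deliver hands data to the sync without ever blocking past cancellation.
func (s *syncSession) deliver(dataCh chan<- []byte, data []byte) error {
	select {
	case dataCh <- data:
		return nil
	case <-s.cancelCh:
		return errNoSyncActive
	}
}
```
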
1c63d08ed1 cmd/geth, cmd/utils: removed legalese
Removed the legalese confirmation dialog. This closes #1992
2015-11-19 12:03:33 +01:00
ae37a8013d Merge pull request #1917 from obscuren/validator-interface
core, eth, rpc: split out block validator and state processor
2015-11-19 10:57:00 +01:00
23f42d9463 Merge pull request #1964 from obscuren/evm-runtime
core/vm/runtime: added simple execution runtime
2015-11-18 17:39:27 +01:00
1372b991c3 core/vm/runtime: added simple execution runtime
The runtime environment can be used for simple execution of contract
code without the requirement of setting up a full stack; it operates
fully in memory.
2015-11-18 16:50:20 +01:00
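
A usage sketch of the runtime, assuming the present-day `runtime.Execute(code, input, cfg)` signature with three return values; the 1.3-era version of the package may have differed:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/vm/runtime"
)

func main() {
	// Run a single STOP opcode fully in memory, without a node stack.
	// A nil config makes the runtime fall back to its defaults.
	ret, _, err := runtime.Execute([]byte{0x00}, nil, nil)
	if err != nil {
		fmt.Println("execution failed:", err)
		return
	}
	fmt.Printf("return data: %x\n", ret)
}
```
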
a1d9ef48c5 core, eth, rpc: split out block validator and state processor
This removes the burden on a single object to take care of all
validation and state processing. Now instead the validation is done by
the `core.BlockValidator` (`types.Validator`) that takes care of both
header and uncle validation through the `ValidateBlock` method and state
validation through the `ValidateState` method. The state processing is
done by a new object, `core.StateProcessor` (`types.Processor`), which
accepts a new state as input and uses it to process the given block's
transactions (and uncles for rewards) to calculate the state root for
the next block (P_n + 1).
2015-11-18 14:24:42 +01:00
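
An illustrative reduction of the split described above; these are not the exact go-ethereum interfaces, only stand-ins showing how validation and processing are separated:

```go
package sketch

// Minimal stand-in types so the sketch is self-contained.
type (
	Block   struct{}
	StateDB struct{}
	Receipt struct{}
)

// Validator checks a block's header and uncles, and the resulting state.
type Validator interface {
	ValidateBlock(block *Block) error
	ValidateState(block, parent *Block, statedb *StateDB, receipts []*Receipt, usedGas uint64) error
}

// Processor applies a block's transactions (plus uncle rewards) to a state,
// producing the receipts and gas usage from which the next state root
// (P_n + 1) is derived.
type Processor interface {
	Process(block *Block, statedb *StateDB) ([]*Receipt, uint64, error)
}
```
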
4a938406dc Merge pull request #1990 from karalabe/fix-whisper-autocomplete
rpc/api: fix #1986, newIdentity autocomplete
2015-11-18 12:32:39 +01:00
53f28e71dc rpc/api: fix #1986, newIdentity autocomplete 2015-11-18 13:03:20 +02:00
10475f444c Merge pull request #1984 from fjl/secp256k1-recover-id-verify
crypto/secp256k1: verify recovery ID before calling libsecp256k1
2015-11-17 19:39:42 +01:00
6ea05f5a54 rpc/api, xeth: added signTransaction method
SignTransaction creates a transaction but does not submit it to the
network. SignTransaction returns a structure which includes the
transaction object details as well as the RLP-encoded transaction, which
can later be submitted with the SendRawTransaction method.
2015-11-17 17:51:05 +01:00
e344e1d490 crypto/secp256k1: drop pkgsrc paths from CFLAGS
They cause compiler warnings for people who don't have these
directories. People with pkgsrc can add the directory through CGO_CFLAGS
instead.
2015-11-17 09:53:10 +01:00
5159f8f649 crypto/secp256k1: raise internal errors as recoverable Go panic 2015-11-17 09:53:10 +01:00
1b29aed128 crypto/secp256k1: verify recovery ID before calling libsecp256k1
The C library treats the recovery ID as trusted input and crashes
the process for invalid values, so it needs to be verified before
calling into C. This will inhibit the crash in #1983.

Also remove VerifySignature because we don't use it.
2015-11-17 09:51:59 +01:00
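
A minimal Go sketch of the pre-check described above, with invented names; it rejects the recovery id in Go before the value ever reaches the C library:

```go
package sketch

import "errors"

var (
	errBadSignatureLen = errors.New("signature must be 65 bytes [R || S || V]")
	errBadRecoveryID   = errors.New("invalid signature recovery id")
)

// checkRecoverable validates a signature in Go before the recovery id is
// handed to libsecp256k1, which would otherwise abort the process.
func checkRecoverable(sig []byte) error {
	if len(sig) != 65 {
		return errBadSignatureLen
	}
	if sig[64] >= 4 {
		return errBadRecoveryID // only recovery ids 0..3 are valid
	}
	return nil
}
```
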
9422eec554 Merge pull request #1976 from karalabe/enable-light-kdf
cmd/geth, cmd/utils: surface the light KDF flag to the CLI
2015-11-11 12:45:30 +01:00
9aa77a3769 cmd/geth, cmd/utils: surface the light KDF flag to the CLI 2015-11-10 15:47:19 +02:00
da6696862e Merge pull request #1974 from karalabe/fix-rpc-regression
rpc/api: fix #1972 api regression (nil eth panic) in attach
2015-11-06 13:10:28 +01:00
6e5349880e rpc/api: fix #1972 api regression (nil eth panic) in attach 2015-11-06 11:47:57 +02:00
6d09468cab Merge pull request #1967 from karalabe/fix-filter-test-datarace
event/filter: fix data race in the test
2015-11-05 20:31:11 +01:00
2334ee97d0 Merge pull request #1963 from karalabe/fix-database-regression
eth: fix error casting regression during database open
2015-11-05 20:27:11 +01:00
5d89bbdda1 eth: fix error casting regression during database open 2015-11-05 16:59:16 +02:00
8e2bf42c46 event/filter: fix data race in the test 2015-11-05 16:55:53 +02:00
76390ef892 Merge pull request #1966 from karalabe/fix-recover-noparam-panic
cmd/geth: fix recover command crash if no param is supplied
2015-11-05 13:15:23 +01:00
636f67f232 Merge pull request #1969 from karalabe/fix-whisper-tests-datarace
whisper: fix datarace in expiration test
2015-11-05 13:03:36 +01:00
eb11c0e597 Merge pull request #1965 from karalabe/fix-natto-test
jsre: fix #1876, sleep too short on a slow test server
2015-11-05 13:03:06 +01:00
ca4f6d0fdd Merge pull request #1968 from karalabe/fix-json-tests-datarace
tests: fix data race in bad-block-report disabling during tests
2015-11-05 12:59:07 +01:00
60e0abb595 whisper: fix datarace in expiration test 2015-11-05 13:36:25 +02:00
9dc5de51a2 tests: fix data race in bad-block-report disabling during tests 2015-11-05 13:29:50 +02:00
90655736ed cmd/geth: fix recover command crash if no param is supplied 2015-11-05 11:53:50 +02:00
bddb13d436 jsre: fix #1876, sleep too short on a slow test server 2015-11-05 11:36:10 +02:00
e3f36d9728 Merge pull request #1960 from karalabe/fix-peer-ignore-list
eth/downloader: fix dysfunctional ignore list hidden by generic set
2015-11-04 14:43:58 +01:00
b658a73ed5 eth/downloader: fix dysfunctional ignore list hidden by generic set 2015-11-04 13:11:52 +02:00
e165c2d23c Merge pull request #1934 from karalabe/polish-protocol-infos
eth, p2p, rpc/api: polish protocol info gathering
2015-11-04 11:59:31 +01:00
dda3bf3ce7 Merge pull request #1943 from obscuren/abi-fixes
accounts/abi: ABI fixes & added types
2015-11-03 15:22:37 +01:00
6dfbbc3e11 Merge pull request #1948 from bas-vk/rpcfix
Infinite loop in filters
2015-11-03 15:22:02 +01:00
5ff0814b1f VERSION, cmd/geth: bumped version 1.4.0 2015-11-03 11:48:18 +01:00
e5532154a5 Merge branch 'release/1.3.0'
Conflicts:
	VERSION
	cmd/geth/main.go
2015-11-03 11:47:07 +01:00
001684fb11 Merge pull request #1958 from fjl/secp256k1-pkgsrc
crypto/secp256k1: add C compiler flags for pkgsrc
2015-11-03 10:46:04 +01:00
16b0bc7c3b crypto/secp256k1: add C compiler flags for pkgsrc
pkgsrc is a cross-platform package manager that also
supports OS X.
2015-11-03 10:33:31 +01:00
f75becc264 cmd/geth, VERSION: bumped version 1.3.1 2015-10-31 20:00:58 +01:00
c841e39476 Merge pull request #1954 from obscuren/regression-miner
miner: synchronise start / stop
2015-10-31 20:00:34 +01:00
8c38f8d815 miner: synchronise start / stop
This PR fixes an issue where the remote worker was stopped twice and not
properly handled. This adds a synchronised running check to the start
and stop methods, preventing a channel from being closed more than once.
2015-10-31 02:18:41 +01:00
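
A hedged Go sketch of the synchronised start/stop guard described above; the type and field names are illustrative:

```go
package sketch

import "sync"

// remoteAgent serialises Start and Stop; the running flag ensures the quit
// channel can never be closed twice.
type remoteAgent struct {
	mu      sync.Mutex
	running bool
	quit    chan struct{}
}

func (a *remoteAgent) Start() {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.running {
		return // already mining
	}
	a.running = true
	a.quit = make(chan struct{})
}

func (a *remoteAgent) Stop() {
	a.mu.Lock()
	defer a.mu.Unlock()
	if !a.running {
		return // never started, or already stopped
	}
	a.running = false
	close(a.quit) // safe: only reachable once per Start
}
```
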
016ad3e962 Merge pull request #1952 from obscuren/testnet-peers
eth: added new testnet peers
2015-10-30 11:00:31 +01:00
98b036ddb6 Merge pull request #1949 from karalabe/update-command-usage
cmd/geth, cmd/utils, eth: group CLI flags by purpose
2015-10-30 10:59:36 +01:00
3c6e285d3b cmd/geth, cmd/utils, eth: group CLI flags by purpose 2015-10-30 11:33:12 +02:00
1bc789553a eth: added new testnet peers 2015-10-30 10:01:19 +01:00
1abbe05e93 Merge pull request #1951 from fjl/godeps-upgrade-goupnp
Godeps: upgrade github.com/huin/goupnp
2015-10-30 00:14:15 +01:00
f570b68ed1 p2p/nat: add docs for discover 2015-10-29 22:54:44 +01:00
bf11a47f22 Godeps: upgrade github.com/huin/goupnp to 90f71cb5 2015-10-29 22:53:59 +01:00
1f72952f04 accounts/abi: ABI fixes & added types
Changed field `input` to the new `inputs`. Added Hash and Address as input
types.

Added bytes[N] and N validation
2015-10-29 21:40:18 +01:00
fc46cf337a Merge pull request #1946 from fjl/xeth-oom
Fix for xeth OOM issue
2015-10-29 17:42:55 +01:00
76410df6a2 rpc: return an unsupported error when "pending" was used to create a filter 2015-10-29 17:35:43 +01:00
fbdb44dcc1 cmd/utils, rpc/comms: stop XEth when IPC connection ends
There are a bunch of changes required to make this work:

- in miner: allow unregistering agents, fix RemoteAgent.Stop
- in eth/filters: make FilterSystem.Stop not crash
- in rpc/comms: move listen loop to platform-independent code

Fixes #1930. I ran the shell loop there for a few minutes and didn't see
any changes in the memory profile.
2015-10-29 17:26:26 +01:00
fd27f074fe Merge pull request #1945 from bas-vk/rpcparsing
Argument parsing can lead to panic in rpc channel
2015-10-29 16:55:51 +01:00
8202bae070 Merge pull request #1939 from karalabe/fix-blocked-sync-goroutines
eth: don't block sync goroutines that short circuit
2015-10-29 16:54:43 +01:00
c3c5f8b654 rpc: fixed params parsing problem which could lead to a panic
* check argument type before parsing params
* recover from panic in ipc channel
2015-10-29 09:23:03 +01:00
56f8699a6c Merge pull request #1940 from wildfjre/lightkdfflag
cmd/utils, crypto: add --lightkdf flag for lighter KDF
2015-10-28 18:54:50 +01:00
05ea8926c3 cmd/utils, crypto: add --lightkdf flag for lighter KDF 2015-10-28 18:46:39 +01:00
667987e7d0 core: only reset head header/fastblock if stale 2015-10-28 17:40:24 +02:00
2019ed71b4 eth: don't block sync goroutines that short circuit 2015-10-28 16:41:01 +02:00
6b5a42a15c Merge pull request #1937 from karalabe/make-ldflags
makefile: fix evm ld flags, pass them to xgo too
2015-10-28 11:46:23 +01:00
e46ab3bdcd eth, p2p, rpc/api: polish protocol info gathering 2015-10-28 12:44:15 +02:00
e655626268 makefile: dump mist leftover, add phony targets 2015-10-28 12:34:40 +02:00
04f8d05bd4 makefile: fix evm ld flags, pass them to xgo too 2015-10-28 12:31:20 +02:00
05f74077fb Merge pull request #1919 from ethersphere/getnatspec
rpc api: eth_getNatSpec
2015-10-28 10:49:53 +01:00
2e4fdce743 Merge pull request #1932 from fjl/gpo-defootgunize
eth, xeth: fix GasPriceOracle goroutine leak
2015-10-28 10:32:35 +01:00
ae1b5b3ff2 eth, xeth: fix GasPriceOracle goroutine leak
XEth.gpo was being initialized as needed. WithState copies the XEth
struct including the gpo field. If gpo was nil at the time of the copy
and Call or Transact were invoked on it, an additional GPO listenLoop
would be spawned.

Move the lazy initialization to GasPriceOracle instead so the same GPO
instance is shared among all created XEths.

Fixes #1317
Might help with #1930
2015-10-27 18:43:47 +01:00
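
A hedged Go sketch of the fix described above: the oracle is created once via `sync.Once` and shared, instead of each copied XEth lazily creating its own (names are illustrative):

```go
package sketch

import "sync"

// GasPriceOracle stands in for the real oracle; the point is that exactly one
// instance (and one event loop) exists, no matter how many XEth copies ask.
type GasPriceOracle struct{}

func newGasPriceOracle() *GasPriceOracle {
	gpo := &GasPriceOracle{}
	// the real constructor would start the single listenLoop goroutine here
	return gpo
}

type Ethereum struct {
	gpoOnce sync.Once
	gpo     *GasPriceOracle
}

// GasPriceOracle lazily creates the shared oracle on first use.
func (e *Ethereum) GasPriceOracle() *GasPriceOracle {
	e.gpoOnce.Do(func() { e.gpo = newGasPriceOracle() })
	return e.gpo
}
```
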
57ab147388 update Dockerfile, remove supervisord and unattended-upgrades 2015-10-26 18:06:19 -04:00
4d005a2c1d rpc api: eth_getNatSpec
* xeth, rpc: implement eth_getNatSpec for tx confirmations
* rename silly docserver -> httpclient
* eth/backend: httpclient now accessible via eth.Ethereum init-d via config.DocRoot
* cmd: introduce separate CLI flag for DocRoot (defaults to homedir)
* common/path: delete unused assetpath func, separate HomeDir func
2015-10-26 22:24:09 +01:00
3b4ffacd0c bump VERSION to 1.3.0 2015-10-25 17:49:01 -04:00
491dd49419 Merge pull request #1928 from Gustav-Simonsson/common_tests
tests: update JSON files, add new wrappers
2015-10-25 06:17:19 -07:00
c43db8a2ee Merge pull request #1924 from fjl/eth-status-timeout
eth: time out status message exchange after 5s
2015-10-23 16:43:10 -07:00
0aeab5fd83 Merge pull request #1929 from ethersphere/develop
fix console history, lines with leading whitespace NOT included
2015-10-23 16:18:52 -07:00
6b5d077c09 fix console history, lines with leading whitespace NOT included 2015-10-23 20:37:12 +02:00
145366c07e tests: update JSON files, add new wrappers 2015-10-23 14:25:18 +02:00
3cf74336c9 eth: time out status message exchange after 5s 2015-10-22 22:22:04 +02:00
77878f76a9 Merge pull request #1922 from karalabe/fix-receipt-storage-regression
core: fix #1921, decode all receipt field, not just consensus
2015-10-22 10:17:15 -07:00
dce503779b Merge pull request #1840 from ethersphere/console
console, cli, api fixes
2015-10-22 09:27:05 -07:00
28c7b54d68 core: fix #1921, decode all receipt field, not just consensus 2015-10-22 13:09:30 +03:00
8b81ad1fc4 console:
* lines with leading space are omitted from history
* exit is processed even with surrounding whitespace
* all whitespace lines (not only empty ones) are ignored

add 7 missing commands to admin api autocomplete

registrar: methods now return proper error if reg addresses are not set. fixes #1457

rpc/console: fix personal.newAccount() regression. Now all comms accept interactive password

registrar: add registrar tests for errors

crypto: catch AES decryption error on presale wallet import + fix error msg format. fixes #1580

CLI: improve error message when starting a second instance of geth. fixes #1564

cli/accounts: unlock multiple accounts. fixes #1785
* make unlocking multiple accounts work with inline <() fd
* passwdfile now correctly read only once
* improve logs
* fix CLI help text for unlocking

fix regression with docRoot / admin API
* docRoot/jspath passed to rpc/api ParseApis, which passes onto adminApi
* docRoot field for JS console in order to pass when RPC is (re)started
* improve flag desc for jspath

common/docserver: catch http errors from response

fix rpc/api tests

common/natspec: fix end to end test (skipped because takes 8s)

registrar: fix major regression:
* deploy registrars on frontier
* register HashsReg and UrlHint in GlobalRegistrar.
* set all 3 contract addresses in code
* zero out addresses first in tests
2015-10-22 00:22:39 +02:00
58d0752fdd Merge pull request #1883 from obscuren/jit-vm-optimisations
core/vm: JIT segmentation
2015-10-21 12:34:32 -07:00
0467a6ceec Merge pull request #1889 from karalabe/fast-sync-rebase
eth/63 fast synchronization algorithm
2015-10-21 11:44:22 -07:00
5b0ee8ec30 core, eth, trie: fix data races and merge/review issues 2015-10-21 16:49:55 +03:00
dba15d9c36 Merge pull request #1918 from obscuren/get-hash-fix
core, tests: get_hash fix
2015-10-21 03:54:59 -07:00
80f26086ee core, tests: get_hash fix
Make sure that we're fetching the hash from the current chain and not
the canonical chain.
2015-10-21 02:31:46 +02:00
796952a49a Merge pull request #1758 from fjl/coinbase
core, core/state: move gas tracking out of core/state
2015-10-20 03:31:36 -07:00
aa0538db0b eth: clean out light node notions from eth 2015-10-19 10:03:10 +03:00
a9d8dfc8e7 core, eth: roll back uncertain headers in failed fast syncs 2015-10-19 10:03:10 +03:00
b97e34a8e4 eth/downloader: concurrent receipt and state processing 2015-10-19 10:03:10 +03:00
ab27bee25a core, eth, trie: direct state trie synchronization 2015-10-19 10:03:09 +03:00
832b37c822 core, eth: receipt chain reconstruction 2015-10-19 10:03:09 +03:00
42c8afd440 core: differentiate receipt consensus and storage decoding 2015-10-19 10:03:09 +03:00
b99fe27f8b core: fix block canonical mark / content write race 2015-10-19 10:03:09 +03:00
f186b39018 eth/downloader: add fast and light sync strategies 2015-10-19 10:03:09 +03:00
c33cc382b3 core: support inserting pure header chains 2015-10-19 10:03:09 +03:00
92f9a3e5fa cmd, eth: support switching client modes of operation 2015-10-19 10:03:09 +03:00
de8d5aaa92 core, core/state: move gas tracking out of core/state
The amount of gas available for tx execution was tracked in the
StateObject representing the coinbase account. This commit makes the gas
counter a separate type in package core, which avoids unintended
consequences of intertwining the counter with state logic.
2015-10-17 10:24:34 +02:00
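
A simplified sketch of a gas counter kept outside the state, as described above; the real `core.GasPool` may differ in detail:

```go
package sketch

import (
	"errors"
	"math/big"
)

// GasPool tracks the gas available for transactions in a block, independently
// of any state object.
type GasPool big.Int

var ErrGasLimitReached = errors.New("gas limit reached")

// AddGas makes gas available for transaction execution.
func (gp *GasPool) AddGas(amount *big.Int) *GasPool {
	(*big.Int)(gp).Add((*big.Int)(gp), amount)
	return gp
}

// SubGas deducts gas from the pool, failing if not enough is left.
func (gp *GasPool) SubGas(amount *big.Int) error {
	if (*big.Int)(gp).Cmp(amount) < 0 {
		return ErrGasLimitReached
	}
	(*big.Int)(gp).Sub((*big.Int)(gp), amount)
	return nil
}
```
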
8c85532412 core/vm: added parsing utilities 2015-10-16 22:30:42 +02:00
b196278044 core/vm: added JIT segmenting / optimisations
* multi-push segments
* static jumps segments
2015-10-16 22:30:42 +02:00
9d61d78de6 core/vm: abstracted instruction execution away from JIT
Moved the execution of instructions to the instruction itself. This
will allow specialised instructions (e.g. segments) to be executed
in the same manner as regular instructions.
2015-10-16 22:17:35 +02:00
10ed107ba2 Merge pull request #1899 from obscuren/mipmap-bloom
core, eth/filters, miner, xeth: Optimised log filtering
2015-10-16 12:35:24 -07:00
6dc14788a2 core, eth/filters, miner, xeth: Optimised log filtering
Log filtering now uses a MIPmap-like approach where the addresses of
logs are added to a mapped bloom bin. The current levels for the MIP are
ranges of 1,000,000, 500,000, 100,000, 50,000 and 1,000 blocks. Logs are
therefore filtered in batches of 1,000.
2015-10-16 21:28:59 +02:00
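
A small Go sketch of the MIPmap idea described above: block numbers are grouped into fixed-size bins per level, and filtering skips bins whose bloom cannot contain the address (names are illustrative):

```go
package sketch

// mipmapLevels lists the bin sizes, from coarsest to finest.
var mipmapLevels = []uint64{1000000, 500000, 100000, 50000, 1000}

// mipmapBin returns the first block number of the bin containing num at the
// given level; it can serve as part of the database key for that bloom.
func mipmapBin(num, level uint64) uint64 {
	return (num / level) * level
}
```
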
c5ef2afda5 Merge pull request #1907 from Gustav-Simonsson/ethash_godep
godeps: update ethash following GPU miner merge
2015-10-16 07:39:42 -07:00
d5f56ad5c5 godeps: update ethash following GPU miner merge 2015-10-16 16:27:51 +02:00
d5327ddc5f Merge pull request #1869 from Gustav-Simonsson/gpu_miner
all: Add GPU mining, disabled by default
2015-10-16 06:25:33 -07:00
b747754009 Merge pull request #1881 from Gustav-Simonsson/state_new_error
core/state, core, miner: handle missing root error from state.New
2015-10-16 06:18:41 -07:00
1b1f293082 core/state, core, miner: handle missing root error from state.New 2015-10-16 02:22:06 +02:00
f466243417 Merge pull request #1853 from Gustav-Simonsson/libsecp256k1_update
Update libsecp256k1, Go wrapper and tests
2015-10-15 10:46:57 -07:00
30f057aaf9 eth/filters: added benchmark 2015-10-15 19:45:44 +02:00
cefe5c80b1 Merge pull request #1898 from karalabe/eventmux-post-race
core, eth, event, miner, xeth: fix event post / subscription race
2015-10-15 10:44:30 -07:00
2f1f2e4811 Merge pull request #1887 from Gustav-Simonsson/icap
common, crypto: add ICAP functions
2015-10-15 10:32:05 -07:00
2db9798646 common, crypto: add ICAP functions 2015-10-13 17:44:14 +02:00
402fd6e8c6 core, eth, event, miner, xeth: fix event post / subscription race 2015-10-12 16:22:03 +03:00
0de9b16b11 Merge pull request #1896 from karalabe/fix-vm-stack-logs
core/vm: copy stack element to prevent overwrites
2015-10-12 05:32:45 -07:00
af9afb686b core/vm: copy stack element to prevent overwrites 2015-10-12 00:14:35 +03:00
f32fa075f1 core/secp256k1: update libsecp256k1 Go wrapper and tests 2015-10-09 14:47:55 +02:00
315a422ba7 Merge pull request #1888 from obscuren/testnet
cmd, core, eth: added official testnet
2015-10-09 01:31:37 -07:00
1de796f101 cmd, core, eth: added official testnet 2015-10-08 22:01:39 +02:00
9e91579105 Merge pull request #1885 from karalabe/olympic-fix
cmd: properly initialize Olympic for all subcommands
2015-10-08 11:33:28 -07:00
bba4dcb72f Merge pull request #1880 from Gustav-Simonsson/core_transfer
core, core/vm, cmd/evm: remove redundant balance check
2015-10-08 11:32:30 -07:00
37abbcb54b Merge pull request #1833 from Gustav-Simonsson/crypto_tests
crypto: correct sig validation, add missing unit tests of exported functions
2015-10-08 11:31:12 -07:00
2547c9c9b7 cmd: properly initialize Olympic for all subcommands 2015-10-07 18:25:33 +03:00
ec6a548ee3 all: Add GPU mining, disabled by default 2015-10-07 13:19:30 +02:00
27528ad3d2 Merge pull request #1851 from bas-vk/historyfile
console/history respect datadir
2015-10-07 01:51:13 -07:00
f8786defd0 Merge pull request #1850 from karalabe/genesis-block-receipts
core: fix #1848, block receipts db entry for the genesis too
2015-10-07 01:50:37 -07:00
e1616f77c7 core, core/vm, cmd/evm: remove redundant balance check 2015-10-06 12:42:34 +02:00
44fd395141 Merge pull request #1879 from obscuren/versioning
cmd/geth: dev version number
2015-10-05 12:35:42 -07:00
7b44b8aece cmd/geth: dev version number 2015-10-05 21:11:39 +02:00
13699e2dd9 Merge pull request #1877 from obscuren/head-write
core: fixed head write on block insertion
2015-10-05 11:00:57 -07:00
20ab29f885 core: fixed head write on block insertion
Due to a rebase this probably got overlooked / ignored. This fixes the
issue of a block insertion never writing the last block.
2015-10-05 17:00:59 +02:00
5b34fa538e Merge pull request #1756 from obscuren/core-refactor
core, core/vm: refactor
2015-10-05 07:14:01 -07:00
7c7692933c cmd/geth, cmd/utils, core, rpc: renamed to blockchain
* Renamed ChainManager to BlockChain
* Checkpointing is no longer required and never really properly worked
when the state was corrupted.
2015-10-04 01:13:56 +02:00
361082ec4b cmd/evm, core/vm, test: refactored VM and core
* Moved `vm.Transfer` to `core` package and changed execution to call
`env.Transfer` instead of `core.Transfer` directly.
* core/vm: byte code VM moved to jump table instead of switch
* Byte code VM now shares the same code as the JITVM
* Renamed Context to Contract
* Changed initialiser of state transition & unexported methods
* Removed the Execution object and refactored `Call`, `CallCode` &
  `Create` into their own functions instead of methods.
* Removed the hard dep on the state for the VM. The VM now
  depends on a Database interface returned by the environment. In the
  process the core now depends less on the statedb by using the env
* Moved `Log` from package `core/state` to package `core/vm`.
2015-10-04 01:13:54 +02:00
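
An illustrative Go sketch of jump-table dispatch as mentioned above, replacing a single big switch; the opcodes, gas costs and types are invented for the example:

```go
package sketch

type (
	OpCode byte
	Stack  struct{ data []uint64 }
)

// operation bundles everything the VM needs to run one opcode.
type operation struct {
	execute  func(stack *Stack)
	gas      uint64
	minStack int
}

const (
	ADD OpCode = 0x01
	MUL OpCode = 0x02
)

// jumpTable maps opcodes to operations; dispatch is a lookup, not a switch.
var jumpTable = map[OpCode]operation{
	ADD: {execute: opAdd, gas: 3, minStack: 2},
	MUL: {execute: opMul, gas: 5, minStack: 2},
}

func opAdd(stack *Stack) {
	n := len(stack.data)
	stack.data[n-2] += stack.data[n-1]
	stack.data = stack.data[:n-1]
}

func opMul(stack *Stack) {
	n := len(stack.data)
	stack.data[n-2] *= stack.data[n-1]
	stack.data = stack.data[:n-1]
}
```
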
f7a71996fb core, event/filter, xeth: refactored filter system
Moved the filtering system from the `event` package to `eth/filters` and
removed the `core.Filter` object. The `filters.Filter` object now
requires a `common.Database` rather than an `eth.Backend` and invokes
`core.GetBlockByX` directly rather than through a "manager".
2015-10-02 22:47:43 +02:00
8b865fa9bf Merge pull request #1866 from karalabe/honor-eth-capabilities
eth/downloader: match capabilities when querying idle peers
2015-10-02 03:49:56 -07:00
0d78f96205 Merge pull request #1865 from obscuren/deadlock-chainmanager-fix
core: deadlock in chainmanager after posting RemovedTransactionEvent
2015-10-02 03:39:43 -07:00
47f62a67aa eth/downloader: match capabilities when querying idle peers 2015-10-02 13:20:41 +03:00
a6cc02f68f core: deadlock in chainmanager after posting RemovedTransactionEvent
This PR solves an issue where, after the chain manager posts a
`RemovedTransactionEvent`, the tx pool tries to acquire the chain
manager lock, which was already taken prior to posting the event.
This results in a deadlock in the core.
2015-10-02 12:20:18 +02:00
49ae538506 Merge pull request #1405 from fjl/lean-trie
core, trie: new trie
2015-10-01 04:34:38 -07:00
581c0901af Merge pull request #1856 from karalabe/andorid-path-fix
common: fix #1818, secondary datadir paths to fall back to
2015-10-01 04:03:04 -07:00
74578ab22b common: fix #1818, secondary datadir paths to fall back to 2015-10-01 12:26:19 +03:00
9666db2a44 VERSION, cmd/geth: bumped version 1.2.1 2015-10-01 10:38:43 +02:00
e3ac56d502 Merge pull request #1859 from fjl/fix-discover-refresh-race
p2p/discover: fix race involving the seed node iterator
2015-09-30 08:21:40 -07:00
32dda97602 p2p/discover: ignore packet version numbers
The strict matching can get in the way of protocol upgrades.
2015-09-30 16:23:03 +02:00
631bf36102 p2p/discover: remove unused lastLookup field 2015-09-30 16:23:03 +02:00
b4374436f3 p2p/discover: fix race involving the seed node iterator
nodeDB.querySeeds was not safe for concurrent use but could be called
concurrently on multiple goroutines in the following case:

- the table was empty
- a timed refresh started
- a lookup was started and initiated refresh

These conditions are unlikely to coincide during normal use, but are
much more likely to occur all at once when the user's machine just woke
from sleep. The root cause of the issue is that querySeeds reused the
same leveldb iterator until it was exhausted.

This commit moves the refresh scheduling logic into its own goroutine
(so only one refresh is ever active) and changes querySeeds to not use
a persistent iterator. The seed node selection is now more random and
ignores nodes that have not been contacted in the last 5 days.
2015-09-30 16:23:03 +02:00
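
A hedged Go sketch of the scheduling change described above: all refreshes funnel through one goroutine, so the timer and lookup-triggered refreshes never run concurrently (names and intervals are illustrative):

```go
package sketch

import "time"

type table struct {
	refreshReq chan chan struct{} // lookups ask for a refresh and wait on the reply
	closeReq   chan struct{}
}

// refreshLoop is the only place a refresh ever runs, so two refreshes can
// never share a database iterator.
func (t *table) refreshLoop() {
	ticker := time.NewTicker(30 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			t.doRefresh()
		case done := <-t.refreshReq:
			t.doRefresh()
			close(done) // signal the waiting lookup
		case <-t.closeReq:
			return
		}
	}
}

func (t *table) doRefresh() {
	// re-seed the table; each call would open a fresh iterator over the node DB
}
```
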
46ad5a5f5b Merge pull request #1852 from obscuren/filter-nil-fix
xeth: fixed nil pointer of filter retrieval
2015-09-30 03:06:36 -07:00
9b94076717 Merge pull request #1854 from karalabe/badhasherror-formatting-loop
core: fix a formatting loop in BadHashError
2015-09-29 02:26:01 -07:00
b8b996be74 core: fix a formatting loop in BadHashError 2015-09-29 09:11:38 +03:00
1d20b0247c Update libsecp256k1 2015-09-28 17:46:38 +02:00
b9359981f4 xeth: fixed nil pointer of filter retrieval
This fix addresses an issue with filters that were (possibly) not yet
added to the filter queues but were expected. I've added additional nil
checks making sure it doesn't crash and swapped the installation of the
filter around so it's installed before use.

Closes #1665
2015-09-25 13:56:53 +02:00
7977e87ce1 Merge pull request #1843 from karalabe/cleanup-downloader-channel
eth/downloader: always send termination wakes, clean leftover
2015-09-25 04:34:59 -07:00
8636f0e1c3 console/history respect datadir 2015-09-25 13:08:48 +02:00
830ddcee60 core: fix #1848, block receipts db entry for the genesis too 2015-09-24 19:38:59 +03:00
69d86442a5 Merge pull request #1803 from Gustav-Simonsson/badhashes
core: Add BadHashErr and test for BadHashes handling
2015-09-23 11:10:25 -07:00
36f46a61a7 Merge pull request #1844 from obscuren/version-file
VERSION: added version
2015-09-23 05:48:00 -07:00
6e1dc321f4 VERSION: added version 2015-09-23 14:47:20 +02:00
7a2a918067 Merge pull request #1842 from fjl/rpc-fix-unknown-block
rpc/api: don't crash for unknown blocks
2015-09-23 12:57:33 +02:00
f459a3f0ae eth/downloader: always send termination wakes, clean leftover 2015-09-23 12:39:17 +03:00
e456f27795 Merge pull request #1827 from Gustav-Simonsson/common_tests
tests: add test for StateTests/stCallCodes.json
2015-09-23 02:12:59 -07:00
90cd8ae9f2 rpc/api: don't crash for unknown blocks
Most eth RPC calls that work with blocks crashed when the block was not
found because they called Hash on a nil block. This is a regression
introduced in cdc2662c40 (#1779).

While here, remove the insane conversions in get*CountBy*. There is no
need to construct a complete BlockRes and converting
int->int64->*big.Int->[]byte->hexnum->string to format the length of a
slice as hex.
2015-09-22 23:59:26 +02:00
70b6174748 cmd/geth, core: make "geth blocktest" work again
The test genesis block was not written properly, block insertion failed
immediately.

While here, fix the panic when shutting down "geth blocktest" with
Ctrl+C. The signal handler is now installed automatically, causing
ethereum.Stop to crash because everything is already stopped.
2015-09-22 23:55:31 +02:00
c1a352c108 trie: add merkle proof functions 2015-09-22 22:57:37 +02:00
a2d5a60418 core, core/state: batch-based state sync 2015-09-22 22:57:37 +02:00
565d9f2306 core, trie: new trie 2015-09-22 22:53:49 +02:00
6b91a4abe5 trie: improve benchmarks 2015-09-22 22:49:27 +02:00
bfde1a4305 core: Add BadHashErr and test for BadHashes handling 2015-09-22 18:02:26 +02:00
3340b56593 crypto: correct sig validation, add more unit tests 2015-09-22 17:33:39 +02:00
e56cbc225e Merge pull request #1835 from karalabe/make-cross
makefile: built in cross compilation targets
2015-09-21 11:47:10 -07:00
7bf8e949e7 Merge pull request #1669 from obscuren/tx-pool-auto-resend
core, xeth: chain reorg move missing transactions to transaction pool
2015-09-21 11:45:59 -07:00
6a05c569f2 makefile: built in cross compilation targets 2015-09-21 21:36:01 +03:00
eaa4473dbd core, core/types: readd transactions after chain re-org
Added a `Difference` method to `types.Transactions` which sets the
receiver to the difference of a to b (NOTE: not a **and** b).

Transaction pool subscribes to RemovedTransactionEvent adding back to
those potential missing from the chain.

When a chain re-org occurs remove any transactions that were removed
from the canonical chain during the re-org as well as the receipts that
were generated in the process.

Closes #1746
2015-09-21 20:33:28 +02:00
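
A hedged Go sketch of the `Difference` helper described above (not the real `types.Transactions` implementation): it keeps the transactions present in a but missing from b:

```go
package sketch

type Transaction struct{ hash [32]byte }

func (t *Transaction) Hash() [32]byte { return t.hash }

type Transactions []*Transaction

// Difference returns the transactions in a that do not appear in b,
// keyed by transaction hash; these are the candidates to re-inject
// into the pool after a reorg.
func Difference(a, b Transactions) Transactions {
	seen := make(map[[32]byte]struct{}, len(b))
	for _, tx := range b {
		seen[tx.Hash()] = struct{}{}
	}
	var diff Transactions
	for _, tx := range a {
		if _, dup := seen[tx.Hash()]; !dup {
			diff = append(diff, tx)
		}
	}
	return diff
}
```
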
be76a68aea cmd/geth: changed version number to 1.2.0
Changed the version number of geth to 1.2.0 so that dev builds are now properly built (instead of master). Note to self: increase the version number to 1.2.1 for our next actual release.
2015-09-21 16:13:07 +02:00
12c0afe4fe Merge pull request #1822 from karalabe/contain-pow
core: separate and contain POW verifier, extensive tests
2015-09-21 06:52:11 -07:00
5621308949 tests: add test for StateTests/stCallCodes.json 2015-09-21 11:34:02 +02:00
399c920380 core: separate and contain POW verifier, extensive tests 2015-09-21 10:24:49 +03:00
e40b447fea Merge pull request #1814 from Gustav-Simonsson/common_tests
tests: update common test wrappers and test files
2015-09-18 16:34:54 -07:00
b94b9b0158 Merge pull request #1817 from obscuren/nonce-fix
core: transaction nonce recovery
2015-09-18 15:56:10 -07:00
47ca6904b3 tests: use lastblockhash field to validate reorgs and block headers 2015-09-18 17:48:31 +02:00
075815e5ff tests: update common test wrappers and test files 2015-09-18 13:08:36 +02:00
b60a27627b core: transaction nonce recovery fix
When the transaction state recovery kicked in it assigned the last
(incorrect) nonce to the pending state which caused transactions with
the same nonce to occur.

Added test for nonce recovery
2015-09-18 11:59:21 +02:00
216c486a3a Merge pull request #1815 from karalabe/chain-maker-timer
core: allow modifying test-chain block times
2015-09-18 11:23:31 +02:00
ac6248ed7a Merge pull request #1793 from jeffallen/typo
common: Update README.md for the current package name
2015-09-17 19:26:49 +02:00
bdf4fd6091 Merge pull request #1813 from kobigurk/develop
cmd/geth: extradata is correctly initialized with console
2015-09-17 19:25:32 +02:00
69f48e4689 Merge pull request #1811 from bas-vk/timer-clearinterval
timer bugfix when clearInterval was called from within the callback
2015-09-17 19:21:49 +02:00
6f3cb12924 core: allow modifying test-chain block times 2015-09-17 13:43:52 +03:00
58fbcaa750 Merge pull request #1810 from karalabe/pure-header-verifications-2
core, eth, miner: use pure header validation
2015-09-16 14:21:12 -07:00
1a1a1ee4ff cmd/geth: extradata is correctly initialized with console 2015-09-16 21:01:21 +03:00
985b5f29ed Merge pull request #1801 from fjl/ethdb
all: move common.Database to ethdb and add NewBatch
2015-09-16 07:50:14 -07:00
2f65ddc501 jsre: timer bugfix when clearInterval was called from within the callback 2015-09-16 11:57:33 +02:00
1cc2f08041 Merge pull request #1784 from karalabe/standard-sync-stats
eth, rpc: standardize the chain sync progress counters
2015-09-16 02:31:58 -07:00
821619e1c3 core, eth, miner: use pure header validation 2015-09-16 10:46:28 +03:00
e9a80518c7 Merge pull request #1744 from kobigurk/develop
adds extradata flag
2015-09-15 13:56:10 -07:00
321733ab23 cmd/geth: adds extradata flag 2015-09-15 23:35:36 +03:00
d4d3fc6a70 jsre, rpc/api: pull in new web3 and use hex numbers 2015-09-15 17:05:12 +03:00
99b62f36b6 eth/downloader: header-chain order and ancestry check 2015-09-15 14:45:53 +03:00
0a7d059b6a eth, rpc: standardize the chain sync progress counters 2015-09-15 14:45:53 +03:00
55bdcfaeac Merge pull request #1806 from ethersphere/solc2
new solc api - late fixes
2015-09-15 01:08:30 -07:00
3a5e7ed9a6 new solc api:
* use legacy version matcher
* optimise just a boolean flag
* skipf for messages in tests
2015-09-15 00:35:22 +02:00
b252589960 ethdb: remove Flush 2015-09-14 23:36:30 +02:00
d581dfee5f ethdb: copy stored memdb values
Storing a value in LevelDB copies the bytes, so modifying the value
afterwards does not affect the content of the database. This commit
ensures that MemDatabase satisfies the same property.
2015-09-14 23:36:30 +02:00
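
A minimal Go sketch of the property described above: Put stores a private copy of the value so callers may reuse their slice, matching LevelDB's behaviour (simplified, not the real ethdb code):

```go
package sketch

import "sync"

type MemDatabase struct {
	mu sync.RWMutex
	db map[string][]byte
}

func NewMemDatabase() *MemDatabase {
	return &MemDatabase{db: make(map[string][]byte)}
}

func (m *MemDatabase) Put(key, value []byte) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	cp := make([]byte, len(value))
	copy(cp, value) // defensive copy: later writes to value don't leak in
	m.db[string(key)] = cp
	return nil
}

func (m *MemDatabase) Get(key []byte) ([]byte, bool) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	v, ok := m.db[string(key)]
	return v, ok
}
```
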
8b32f10f16 ethdb: add NewBatch 2015-09-14 23:36:30 +02:00
8c4dab77ba all: move common.Database to package ethdb 2015-09-14 23:36:30 +02:00
071e2cd08e Merge pull request #1786 from ethersphere/solc
common/compiler: new solc API
2015-09-14 23:32:40 +02:00
47b9c640f5 Merge pull request #1797 from karalabe/ensure-ipcpath-exists
rpc/comms: fix #1795, ensure IPC path exists before binding
2015-09-14 14:45:11 +02:00
a9c809b441 Merge pull request #1792 from jeffallen/uuid
Change go-uuid to use the current supported repository.
2015-09-14 12:06:59 +02:00
0d40727775 Change go-uuid to use the current supported repository. 2015-09-12 16:49:24 +06:00
17b729759b Solidity Compiler - solc new API
* adapt to new compiler versioning
* use compiler version as language version
* implement new solc API for versions >= 0.1.[2-9][0-9]* fixes #1770
* add optimize=1 to options
* backward compatibility (for now) for <= 0.1.1, and old versions (0.[2-9][0-9]*.[0-9]+)
* introduce compilerOptions to ContractInfo
* clean up flair, include full version string to version line and ContractInfo
2015-09-12 10:52:52 +02:00
55ed8d108d Merge pull request #1789 from Gustav-Simonsson/core_remove_unused_functions
core, core/vm, core/state: remove unused functions
2015-09-11 15:29:27 -07:00
f1a4b330dd Merge pull request #1796 from karalabe/ethash-android-support
godeps: pull in ethash android fix
2015-09-11 15:26:01 -07:00
0eac601b5b Merge pull request #1779 from karalabe/split-block-storage-3000
core: split the db blocks into components, move TD out top level
2015-09-11 08:10:37 -07:00
cdc2662c40 core: split out TD from database and all internals 2015-09-11 17:42:25 +03:00
2b339cbbd8 core, eth: split the db blocks into headers and bodies 2015-09-11 17:42:25 +03:00
3e6964b841 rpc/comms: fix #1795, ensure IPC path exists before binding 2015-09-11 17:03:31 +03:00
c6013725a8 godeps: pull in ethash android fix 2015-09-11 15:53:23 +03:00
4e075e4013 Merge pull request #1773 from obscuren/dev-mode
cmd/geth, cmd/utils, eth: added dev mode flag
2015-09-10 21:15:33 +02:00
b81a6e6ab8 core, core/vm, core/state: remove unused functions 2015-09-10 21:10:58 +02:00
62bbf8a09e Merge pull request #1778 from fjl/rlp-trie-changes
rlp: precursor changes for trie, p2p
2015-09-10 12:02:16 -07:00
4ce3dfe9c8 common: Update README.md for the current package name 2015-09-10 23:59:38 +06:00
fc8b246109 rlp: move ListSize to raw.go 2015-09-10 19:41:51 +02:00
24bb68e7cf rlp: add RawValue 2015-09-10 19:41:51 +02:00
bc17dba8fb rlp: add Split functions
These functions allow destructuring of raw rlp-encoded bytes
without the overhead of reflection or copying.
2015-09-10 19:41:51 +02:00
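
A usage sketch of the raw destructuring these functions enable, assuming `rlp.Split` returns `(Kind, content, rest []byte, error)` as in the current package:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	// 0xc8 opens an 8-byte list containing the strings "cat" and "dog".
	enc := []byte{0xc8, 0x83, 'c', 'a', 't', 0x83, 'd', 'o', 'g'}

	// Split peels off the first value without reflection or copying:
	// it reports the kind, the value's content and whatever bytes follow.
	kind, content, rest, err := rlp.Split(enc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("kind=%v content=%d bytes rest=%d bytes\n", kind, len(content), len(rest))
}
```
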
ac32f52ca6 rlp: fix encReader returning nil buffers to the pool
The bug can cause crashes if Read is called after EOF has been returned.
No code performs such calls right now, but hitting the bug gets more
likely as rlp.EncodeToReader gets used in more places.
2015-09-10 19:12:32 +02:00
90f1fe0ed2 Merge pull request #1781 from Gustav-Simonsson/state_object_copy
core/state: deleted field in StateObject Copy() and unit test
2015-09-09 18:42:36 +02:00
28b13a4d1e Merge pull request #1780 from bas-vk/miner-crash
agent/miner Prevent the CpuAgent from being started multiple times
2015-09-09 04:49:28 -07:00
f04b3a6f29 cmd/geth, cmd/utils, eth: added dev mode flag
Dev mode enables some debugging flags such as:

* VM debugging mode
* Simpler proof of work
* Whisper enabled by default
* Datadir to a tmp datadir
* Maxpeers set to 0
* Gas price of 0
* Random listen port
2015-09-09 08:53:05 +02:00
bf879ef230 core/state: test formatting adhering to Go convention 2015-09-09 00:26:18 +02:00
004ed786b4 core/state: deleted field in StateObject Copy() and unit test 2015-09-08 15:56:11 +02:00
652eea71fe put unlock after lock 2015-09-08 12:42:29 +02:00
618065895b agent/miner Prevent the CpuAgent from being started multiple times 2015-09-08 11:27:55 +02:00
edaea69817 Merge pull request #1777 from hectorchu/develop
rpc/comms: fix bug attaching the console over http
2015-09-08 11:02:09 +03:00
6fe46cc743 Merge pull request #1774 from bas-vk/console-crash
cmd/geth Autocompletion bugfix which let the console crash
2015-09-08 10:33:09 +03:00
4ea81f170a rpc/comms: fix bug attaching the console over http 2015-09-07 15:09:59 +01:00
f69121357d cmd/geth Autocompletion bugfix which let the console crash 2015-09-06 16:25:55 +02:00
e2d7c1a523 Merge pull request #1752 from karalabe/fix-eth61-test
eth/downloader: fix race causing occasional test failure
2015-09-03 15:52:18 +02:00
ebbe25ee71 Merge pull request #1764 from kobigurk/honor_ipc_datadir
honors datadir when attaching
2015-09-03 10:48:23 +03:00
1a86adc5a2 cmd/geth: honor datadir when attaching 2015-09-03 10:28:30 +03:00
e98854588b Merge pull request #1761 from CJentzsch/patch-3
fix block time issue
2015-09-02 15:13:14 -07:00
0fda4c4e15 fix block time issue
Currently, under normal circumstances, the timestamp is always set to previous.Time() + 1.
credits to https://www.reddit.com/r/ethereum/comments/3jcs5r/code_avg_block_time_vs_difficulty_adjustment/cuoi4op

style
2015-09-03 00:05:05 +02:00
b2c17a5a63 Merge pull request #1726 from Gustav-Simonsson/update_tests
Add TestBcForkUncle tests & update JSON files
2015-09-02 22:02:44 +02:00
e9b031b88b Merge pull request #1755 from fjl/coinbase
core: improve block gas tracking
2015-09-01 23:36:05 +02:00
00b45acb9e core: improve block gas tracking 2015-09-01 23:11:03 +02:00
1ffc5b0cfd Merge pull request #1751 from maran/fix_filters
core: Filter on addresses should work as an OR not an AND.
2015-09-01 20:10:27 +02:00
5e4cd599eb Merge pull request #1745 from mrdomino/obsd-build-master
Pull in ethash and go-isatty updates
2015-09-01 20:06:13 +02:00
1f1d73ab74 eth/downloader: fix race causing occasional test failure 2015-09-01 16:11:14 +03:00
67225de255 Filter on addresses should work as an OR not an AND. 2015-09-01 09:19:45 +02:00
540eb3d02d Pull in ethash and go-isatty updates
Fixes build on OpenBSD.
2015-08-31 12:14:32 -04:00
fe8093b71f Add TestBcForkUncleTests and update JSON files 2015-08-31 16:45:00 +02:00
9dc23ce284 Merge pull request #1742 from fjl/rpc-receipt-root
rpc: add receiptRoot to getBlock* responses
2015-08-31 14:50:21 +02:00
1801748ccd Merge pull request #1734 from fjl/ldflags-warning-go1.5
build: avoid -X separator warning with Go >= 1.5
2015-08-31 14:49:50 +02:00
8b12bcc0ac rpc: add receiptRoot to getBlock* responses
Fixes #1679
2015-08-29 11:12:01 +02:00
e1037bd0cf Merge pull request #1724 from Gustav-Simonsson/get_work
rpc: return error code for eth_getWork when no work ready
2015-08-29 10:54:10 +02:00
2d1ced8759 Merge pull request #1739 from bas-vk/empty-password
rpc/api allow empty password
2015-08-28 13:14:51 +02:00
39e9560600 rpc/api allow empty password 2015-08-28 12:49:41 +02:00
d9addf79fa Improve error string and remove unneeded else clause 2015-08-28 03:42:01 +02:00
cfd84a6ad9 build: avoid -X separator warning with Go >= 1.5 2015-08-27 13:26:13 +02:00
6ec13e7e2b Merge pull request #1701 from karalabe/eth62-sync-rebase
eth: implement eth/62 synchronization logic
2015-08-27 00:03:59 +02:00
79b644c7a3 Merge pull request #1717 from karalabe/forward-solidity-errors
common/compiler: fix #1598, expose solidity errors
2015-08-26 19:00:11 +02:00
14370a2260 Merge pull request #1718 from caktux/develop
add missing shh_getMessages to RPC mappings
2015-08-26 18:55:51 +02:00
3df6f3fc14 Merge pull request #1721 from bas-vk/console-error-parsing
Improved console error handling
2015-08-26 18:55:31 +02:00
847794a321 Merge pull request #1722 from bas-vk/remote-deleteaccount
Remove personal.deleteAccount from RPC interface
2015-08-26 18:02:51 +02:00
829201382b rpc: return error code for eth_getWork when no work ready 2015-08-26 12:46:50 +02:00
5dd2462816 rpc/api - remove personal.deleteAccount from RPC interface 2015-08-26 11:39:43 +02:00
f448310eef bugfix console error handling 2015-08-26 11:33:02 +02:00
101418b275 common/compiler: fix #1598, expose solidity errors 2015-08-26 10:04:23 +03:00
a1d8015817 add missing shh_getMessages to RPC mappings 2015-08-25 14:42:57 -04:00
17f65cd1e5 eth: update metrics collection to handle eth/62 algos 2015-08-25 17:48:47 +03:00
47a7fe5d22 eth: port the synchronisation algo to eth/62 2015-08-25 17:48:47 +03:00
abce09954b Merge pull request #1711 from Gustav-Simonsson/timestamp_big_int
Add tests for uncle timestamps and refactor timestamp type
2015-08-25 15:49:36 +02:00
a219159e7e Merge pull request #1710 from bas-vk/useragent
user agent messages were dumped in some cases
2015-08-25 12:23:25 +02:00
7324176f70 Add tests for uncle timestamps and refactor timestamp type 2015-08-25 04:46:11 +02:00
ca88e18f59 eth: kill off protocol eth/60 in preparation for eth/62 2015-08-24 17:57:28 +03:00
42f44dda54 eth, eth/downloader: handle header requests, table driven proto tests 2015-08-24 17:57:28 +03:00
d910148a96 Set ipc channel as user agent client 2015-08-24 12:41:34 +02:00
c51e153b5c eth, metrics, p2p: prepare metrics and net packets to eth/62 2015-08-21 10:30:57 +03:00
497 changed files with 277025 additions and 24873 deletions

@@ -20,10 +20,6 @@ env:
global:
- secure: "U2U1AmkU4NJBgKR/uUAebQY87cNL0+1JHjnLOmmXwxYYyj5ralWb1aSuSH3qSXiT93qLBmtaUkuv9fberHVqrbAeVlztVdUsKAq7JMQH+M99iFkC9UiRMqHmtjWJ0ok4COD1sRYixxi21wb/JrMe3M1iL4QJVS61iltjHhVdM64="
sudo: false
addons:
apt:
packages:
- libgmp3-dev
notifications:
webhooks:
urls:

Godeps/Godeps.json generated
@@ -5,15 +5,10 @@
"./..."
],
"Deps": [
{
"ImportPath": "code.google.com/p/go-uuid/uuid",
"Comment": "null-12",
"Rev": "7dda39b2e7d5e265014674c5af696ba4186679e9"
},
{
"ImportPath": "github.com/codegangsta/cli",
"Comment": "1.2.0-95-g9b2bd2b",
"Rev": "9b2bd2b3489748d4d0a204fa4eb2ee9e89e0ebc6"
"Comment": "1.2.0-161-gf445c89",
"Rev": "f445c894402839580d30de47551cedc152dad814"
},
{
"ImportPath": "github.com/davecgh/go-spew/spew",
@@ -21,8 +16,8 @@
},
{
"ImportPath": "github.com/ethereum/ethash",
"Comment": "v23.1-227-g8f6ccaa",
"Rev": "8f6ccaaef9b418553807a73a95cb5f49cd3ea39f"
"Comment": "v23.1-238-g9401881",
"Rev": "9401881ab040d1a3b0ae9e4780a115bc284a8a1a"
},
{
"ImportPath": "github.com/fatih/color",
@@ -39,7 +34,7 @@
},
{
"ImportPath": "github.com/huin/goupnp",
"Rev": "5cff77a69fb22f5f1774c4451ea2aab63d4d2f20"
"Rev": "90f71cb5dd6d4606388666d2cda4ce2f563d2185"
},
{
"ImportPath": "github.com/jackpal/go-nat-pmp",
@@ -51,7 +46,7 @@
},
{
"ImportPath": "github.com/mattn/go-isatty",
"Rev": "fdbe02a1b44e75977b2690062b83cf507d70c013"
"Rev": "7fcbc72f853b92b5720db4a6b8482be612daef24"
},
{
"ImportPath": "github.com/mattn/go-runewidth",
@@ -62,6 +57,10 @@
"ImportPath": "github.com/nsf/termbox-go",
"Rev": "675ffd907b7401b8a709a5ef2249978af5616bb2"
},
{
"ImportPath": "github.com/pborman/uuid",
"Rev": "cccd189d45f7ac3368a0d127efb7f4d08ae0b655"
},
{
"ImportPath": "github.com/peterh/liner",
"Rev": "29f6a646557d83e2b6e9ba05c45fbea9c006dbe8"

@@ -0,0 +1,26 @@
/*
Package cl provides a binding to the OpenCL api. It's mostly a low-level
wrapper that avoids adding functionality while still making the interface
a little more friendly and easy to use.
Resource life-cycle management:
For any CL object that gets created (buffer, queue, kernel, etc..) you should
call object.Release() when finished with it to free the CL resources. This
explicitely calls the related clXXXRelease method for the type. However,
as a fallback there is a finalizer set for every resource item that takes
care of it (eventually) if Release isn't called. In this way you can have
better control over the life cycle of resources while having a fall back
to avoid leaks. This is similar to how file handles and such are handled
in the Go standard packages.
*/
package cl
// #include "headers/1.2/opencl.h"
// #cgo CFLAGS: -Iheaders/1.2
// #cgo darwin LDFLAGS: -framework OpenCL
// #cgo linux LDFLAGS: -lOpenCL
import "C"
import "errors"
var ErrUnsupported = errors.New("cl: unsupported")

@@ -0,0 +1,254 @@
package cl
import (
"math/rand"
"reflect"
"strings"
"testing"
)
var kernelSource = `
__kernel void square(
__global float* input,
__global float* output,
const unsigned int count)
{
int i = get_global_id(0);
if(i < count)
output[i] = input[i] * input[i];
}
`
func getObjectStrings(object interface{}) map[string]string {
v := reflect.ValueOf(object)
t := reflect.TypeOf(object)
strs := make(map[string]string)
numMethods := t.NumMethod()
for i := 0; i < numMethods; i++ {
method := t.Method(i)
if method.Type.NumIn() == 1 && method.Type.NumOut() == 1 && method.Type.Out(0).Kind() == reflect.String {
// this is a string-returning method with (presumably) only a pointer receiver parameter
// call it
outs := v.Method(i).Call([]reflect.Value{})
// put the result in our map
strs[method.Name] = (outs[0].Interface()).(string)
}
}
return strs
}
func TestPlatformStringsContainNoNULs(t *testing.T) {
platforms, err := GetPlatforms()
if err != nil {
t.Fatalf("Failed to get platforms: %+v", err)
}
for _, p := range platforms {
for key, value := range getObjectStrings(p) {
if strings.Contains(value, "\x00") {
t.Fatalf("platform string %q = %+q contains NUL", key, value)
}
}
}
}
func TestDeviceStringsContainNoNULs(t *testing.T) {
platforms, err := GetPlatforms()
if err != nil {
t.Fatalf("Failed to get platforms: %+v", err)
}
for _, p := range platforms {
devs, err := p.GetDevices(DeviceTypeAll)
if err != nil {
t.Fatalf("Failed to get devices for platform %q: %+v", p.Name(), err)
}
for _, d := range devs {
for key, value := range getObjectStrings(d) {
if strings.Contains(value, "\x00") {
t.Fatalf("device string %q = %+q contains NUL", key, value)
}
}
}
}
}
func TestHello(t *testing.T) {
var data [1024]float32
for i := 0; i < len(data); i++ {
data[i] = rand.Float32()
}
platforms, err := GetPlatforms()
if err != nil {
t.Fatalf("Failed to get platforms: %+v", err)
}
for i, p := range platforms {
t.Logf("Platform %d:", i)
t.Logf(" Name: %s", p.Name())
t.Logf(" Vendor: %s", p.Vendor())
t.Logf(" Profile: %s", p.Profile())
t.Logf(" Version: %s", p.Version())
t.Logf(" Extensions: %s", p.Extensions())
}
platform := platforms[0]
devices, err := platform.GetDevices(DeviceTypeAll)
if err != nil {
t.Fatalf("Failed to get devices: %+v", err)
}
if len(devices) == 0 {
t.Fatalf("GetDevices returned no devices")
}
deviceIndex := -1
for i, d := range devices {
if deviceIndex < 0 && d.Type() == DeviceTypeGPU {
deviceIndex = i
}
t.Logf("Device %d (%s): %s", i, d.Type(), d.Name())
t.Logf(" Address Bits: %d", d.AddressBits())
t.Logf(" Available: %+v", d.Available())
// t.Logf(" Built-In Kernels: %s", d.BuiltInKernels())
t.Logf(" Compiler Available: %+v", d.CompilerAvailable())
t.Logf(" Double FP Config: %s", d.DoubleFPConfig())
t.Logf(" Driver Version: %s", d.DriverVersion())
t.Logf(" Error Correction Supported: %+v", d.ErrorCorrectionSupport())
t.Logf(" Execution Capabilities: %s", d.ExecutionCapabilities())
t.Logf(" Extensions: %s", d.Extensions())
t.Logf(" Global Memory Cache Type: %s", d.GlobalMemCacheType())
t.Logf(" Global Memory Cacheline Size: %d KB", d.GlobalMemCachelineSize()/1024)
t.Logf(" Global Memory Size: %d MB", d.GlobalMemSize()/(1024*1024))
t.Logf(" Half FP Config: %s", d.HalfFPConfig())
t.Logf(" Host Unified Memory: %+v", d.HostUnifiedMemory())
t.Logf(" Image Support: %+v", d.ImageSupport())
t.Logf(" Image2D Max Dimensions: %d x %d", d.Image2DMaxWidth(), d.Image2DMaxHeight())
t.Logf(" Image3D Max Dimenionns: %d x %d x %d", d.Image3DMaxWidth(), d.Image3DMaxHeight(), d.Image3DMaxDepth())
// t.Logf(" Image Max Buffer Size: %d", d.ImageMaxBufferSize())
// t.Logf(" Image Max Array Size: %d", d.ImageMaxArraySize())
// t.Logf(" Linker Available: %+v", d.LinkerAvailable())
t.Logf(" Little Endian: %+v", d.EndianLittle())
t.Logf(" Local Mem Size Size: %d KB", d.LocalMemSize()/1024)
t.Logf(" Local Mem Type: %s", d.LocalMemType())
t.Logf(" Max Clock Frequency: %d", d.MaxClockFrequency())
t.Logf(" Max Compute Units: %d", d.MaxComputeUnits())
t.Logf(" Max Constant Args: %d", d.MaxConstantArgs())
t.Logf(" Max Constant Buffer Size: %d KB", d.MaxConstantBufferSize()/1024)
t.Logf(" Max Mem Alloc Size: %d KB", d.MaxMemAllocSize()/1024)
t.Logf(" Max Parameter Size: %d", d.MaxParameterSize())
t.Logf(" Max Read-Image Args: %d", d.MaxReadImageArgs())
t.Logf(" Max Samplers: %d", d.MaxSamplers())
t.Logf(" Max Work Group Size: %d", d.MaxWorkGroupSize())
t.Logf(" Max Work Item Dimensions: %d", d.MaxWorkItemDimensions())
t.Logf(" Max Work Item Sizes: %d", d.MaxWorkItemSizes())
t.Logf(" Max Write-Image Args: %d", d.MaxWriteImageArgs())
t.Logf(" Memory Base Address Alignment: %d", d.MemBaseAddrAlign())
t.Logf(" Native Vector Width Char: %d", d.NativeVectorWidthChar())
t.Logf(" Native Vector Width Short: %d", d.NativeVectorWidthShort())
t.Logf(" Native Vector Width Int: %d", d.NativeVectorWidthInt())
t.Logf(" Native Vector Width Long: %d", d.NativeVectorWidthLong())
t.Logf(" Native Vector Width Float: %d", d.NativeVectorWidthFloat())
t.Logf(" Native Vector Width Double: %d", d.NativeVectorWidthDouble())
t.Logf(" Native Vector Width Half: %d", d.NativeVectorWidthHalf())
t.Logf(" OpenCL C Version: %s", d.OpenCLCVersion())
// t.Logf(" Parent Device: %+v", d.ParentDevice())
t.Logf(" Profile: %s", d.Profile())
t.Logf(" Profiling Timer Resolution: %d", d.ProfilingTimerResolution())
t.Logf(" Vendor: %s", d.Vendor())
t.Logf(" Version: %s", d.Version())
}
if deviceIndex < 0 {
deviceIndex = 0
}
device := devices[deviceIndex]
t.Logf("Using device %d", deviceIndex)
context, err := CreateContext([]*Device{device})
if err != nil {
t.Fatalf("CreateContext failed: %+v", err)
}
// imageFormats, err := context.GetSupportedImageFormats(0, MemObjectTypeImage2D)
// if err != nil {
// t.Fatalf("GetSupportedImageFormats failed: %+v", err)
// }
// t.Logf("Supported image formats: %+v", imageFormats)
queue, err := context.CreateCommandQueue(device, 0)
if err != nil {
t.Fatalf("CreateCommandQueue failed: %+v", err)
}
program, err := context.CreateProgramWithSource([]string{kernelSource})
if err != nil {
t.Fatalf("CreateProgramWithSource failed: %+v", err)
}
if err := program.BuildProgram(nil, ""); err != nil {
t.Fatalf("BuildProgram failed: %+v", err)
}
kernel, err := program.CreateKernel("square")
if err != nil {
t.Fatalf("CreateKernel failed: %+v", err)
}
for i := 0; i < 3; i++ {
name, err := kernel.ArgName(i)
if err == ErrUnsupported {
break
} else if err != nil {
t.Errorf("GetKernelArgInfo for name failed: %+v", err)
break
} else {
t.Logf("Kernel arg %d: %s", i, name)
}
}
input, err := context.CreateEmptyBuffer(MemReadOnly, 4*len(data))
if err != nil {
t.Fatalf("CreateBuffer failed for input: %+v", err)
}
output, err := context.CreateEmptyBuffer(MemWriteOnly, 4*len(data)) // kernel output; MemWriteOnly (assumed to mirror CL_MEM_WRITE_ONLY) rather than a read-only flag
if err != nil {
t.Fatalf("CreateBuffer failed for output: %+v", err)
}
if _, err := queue.EnqueueWriteBufferFloat32(input, true, 0, data[:], nil); err != nil {
t.Fatalf("EnqueueWriteBufferFloat32 failed: %+v", err)
}
if err := kernel.SetArgs(input, output, uint32(len(data))); err != nil {
t.Fatalf("SetKernelArgs failed: %+v", err)
}
local, err := kernel.WorkGroupSize(device)
if err != nil {
t.Fatalf("WorkGroupSize failed: %+v", err)
}
t.Logf("Work group size: %d", local)
size, _ := kernel.PreferredWorkGroupSizeMultiple(nil)
t.Logf("Preferred Work Group Size Multiple: %d", size)
global := len(data)
d := len(data) % local
if d != 0 {
global += local - d
}
if _, err := queue.EnqueueNDRangeKernel(kernel, nil, []int{global}, []int{local}, nil); err != nil {
t.Fatalf("EnqueueNDRangeKernel failed: %+v", err)
}
if err := queue.Finish(); err != nil {
t.Fatalf("Finish failed: %+v", err)
}
results := make([]float32, len(data))
if _, err := queue.EnqueueReadBufferFloat32(output, true, 0, results, nil); err != nil {
t.Fatalf("EnqueueReadBufferFloat32 failed: %+v", err)
}
correct := 0
for i, v := range data {
if results[i] == v*v {
correct++
}
}
if correct != len(data) {
t.Fatalf("%d/%d correct values", correct, len(data))
}
}

View File

@ -0,0 +1,161 @@
package cl
// #include <stdlib.h>
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #endif
import "C"
import (
"runtime"
"unsafe"
)
const maxImageFormats = 256
type Context struct {
clContext C.cl_context
devices []*Device
}
type MemObject struct {
clMem C.cl_mem
size int
}
func releaseContext(c *Context) {
if c.clContext != nil {
C.clReleaseContext(c.clContext)
c.clContext = nil
}
}
func releaseMemObject(b *MemObject) {
if b.clMem != nil {
C.clReleaseMemObject(b.clMem)
b.clMem = nil
}
}
func newMemObject(mo C.cl_mem, size int) *MemObject {
memObject := &MemObject{clMem: mo, size: size}
runtime.SetFinalizer(memObject, releaseMemObject)
return memObject
}
func (b *MemObject) Release() {
releaseMemObject(b)
}
// TODO: properties
func CreateContext(devices []*Device) (*Context, error) {
deviceIds := buildDeviceIdList(devices)
var err C.cl_int
clContext := C.clCreateContext(nil, C.cl_uint(len(devices)), &deviceIds[0], nil, nil, &err)
if err != C.CL_SUCCESS {
return nil, toError(err)
}
if clContext == nil {
return nil, ErrUnknown
}
context := &Context{clContext: clContext, devices: devices}
runtime.SetFinalizer(context, releaseContext)
return context, nil
}
func (ctx *Context) GetSupportedImageFormats(flags MemFlag, imageType MemObjectType) ([]ImageFormat, error) {
var formats [maxImageFormats]C.cl_image_format
var nFormats C.cl_uint
if err := C.clGetSupportedImageFormats(ctx.clContext, C.cl_mem_flags(flags), C.cl_mem_object_type(imageType), maxImageFormats, &formats[0], &nFormats); err != C.CL_SUCCESS {
return nil, toError(err)
}
fmts := make([]ImageFormat, nFormats)
for i, f := range formats[:nFormats] {
fmts[i] = ImageFormat{
ChannelOrder: ChannelOrder(f.image_channel_order),
ChannelDataType: ChannelDataType(f.image_channel_data_type),
}
}
return fmts, nil
}
func (ctx *Context) CreateCommandQueue(device *Device, properties CommandQueueProperty) (*CommandQueue, error) {
var err C.cl_int
clQueue := C.clCreateCommandQueue(ctx.clContext, device.id, C.cl_command_queue_properties(properties), &err)
if err != C.CL_SUCCESS {
return nil, toError(err)
}
if clQueue == nil {
return nil, ErrUnknown
}
commandQueue := &CommandQueue{clQueue: clQueue, device: device}
runtime.SetFinalizer(commandQueue, releaseCommandQueue)
return commandQueue, nil
}
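// Usage sketch (illustrative addition, not part of the original file): create a
// context plus an in-order command queue for one device, releasing the context
// again if queue creation fails. newContextAndQueue is a hypothetical helper name.
func newContextAndQueue(device *Device) (*Context, *CommandQueue, error) {
	context, err := CreateContext([]*Device{device})
	if err != nil {
		return nil, nil, err
	}
	// A properties value of 0 requests a default, in-order queue.
	queue, err := context.CreateCommandQueue(device, 0)
	if err != nil {
		context.Release()
		return nil, nil, err
	}
	return context, queue, nil
}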
func (ctx *Context) CreateProgramWithSource(sources []string) (*Program, error) {
cSources := make([]*C.char, len(sources))
for i, s := range sources {
cs := C.CString(s)
cSources[i] = cs
defer C.free(unsafe.Pointer(cs))
}
var err C.cl_int
clProgram := C.clCreateProgramWithSource(ctx.clContext, C.cl_uint(len(sources)), &cSources[0], nil, &err)
if err != C.CL_SUCCESS {
return nil, toError(err)
}
if clProgram == nil {
return nil, ErrUnknown
}
program := &Program{clProgram: clProgram, devices: ctx.devices}
runtime.SetFinalizer(program, releaseProgram)
return program, nil
}
func (ctx *Context) CreateBufferUnsafe(flags MemFlag, size int, dataPtr unsafe.Pointer) (*MemObject, error) {
var err C.cl_int
clBuffer := C.clCreateBuffer(ctx.clContext, C.cl_mem_flags(flags), C.size_t(size), dataPtr, &err)
if err != C.CL_SUCCESS {
return nil, toError(err)
}
if clBuffer == nil {
return nil, ErrUnknown
}
return newMemObject(clBuffer, size), nil
}
func (ctx *Context) CreateEmptyBuffer(flags MemFlag, size int) (*MemObject, error) {
return ctx.CreateBufferUnsafe(flags, size, nil)
}
func (ctx *Context) CreateEmptyBufferFloat32(flags MemFlag, size int) (*MemObject, error) {
return ctx.CreateBufferUnsafe(flags, 4*size, nil)
}
func (ctx *Context) CreateBuffer(flags MemFlag, data []byte) (*MemObject, error) {
return ctx.CreateBufferUnsafe(flags, len(data), unsafe.Pointer(&data[0]))
}
// TODO: add float64 buffer variants
func (ctx *Context) CreateBufferFloat32(flags MemFlag, data []float32) (*MemObject, error) {
return ctx.CreateBufferUnsafe(flags, 4*len(data), unsafe.Pointer(&data[0]))
}
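// Usage sketch (illustrative addition): allocate an uninitialised device buffer
// large enough for n float32 results that a kernel will fill. MemWriteOnly is
// assumed to mirror CL_MEM_WRITE_ONLY alongside the package's MemReadOnly flag;
// newResultBuffer is a hypothetical helper name.
func newResultBuffer(ctx *Context, n int) (*MemObject, error) {
	// CreateEmptyBufferFloat32 multiplies by 4 internally, so n is an element count.
	return ctx.CreateEmptyBufferFloat32(MemWriteOnly, n)
}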
func (ctx *Context) CreateUserEvent() (*Event, error) {
var err C.cl_int
clEvent := C.clCreateUserEvent(ctx.clContext, &err)
if err != C.CL_SUCCESS {
return nil, toError(err)
}
return newEvent(clEvent), nil
}
func (ctx *Context) Release() {
releaseContext(ctx)
}
// http://www.khronos.org/registry/cl/sdk/1.2/docs/man/xhtml/clCreateSubBuffer.html
// func (memObject *MemObject) CreateSubBuffer(flags MemFlag, bufferCreateType BufferCreateType, )

View File

@ -0,0 +1,510 @@
package cl
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #include "cl_ext.h"
// #endif
import "C"
import (
"strings"
"unsafe"
)
const maxDeviceCount = 64
type DeviceType uint
const (
DeviceTypeCPU DeviceType = C.CL_DEVICE_TYPE_CPU
DeviceTypeGPU DeviceType = C.CL_DEVICE_TYPE_GPU
DeviceTypeAccelerator DeviceType = C.CL_DEVICE_TYPE_ACCELERATOR
DeviceTypeDefault DeviceType = C.CL_DEVICE_TYPE_DEFAULT
DeviceTypeAll DeviceType = C.CL_DEVICE_TYPE_ALL
)
type FPConfig int
const (
FPConfigDenorm FPConfig = C.CL_FP_DENORM // denorms are supported
FPConfigInfNaN FPConfig = C.CL_FP_INF_NAN // INF and NaNs are supported
FPConfigRoundToNearest FPConfig = C.CL_FP_ROUND_TO_NEAREST // round to nearest even rounding mode supported
FPConfigRoundToZero FPConfig = C.CL_FP_ROUND_TO_ZERO // round to zero rounding mode supported
FPConfigRoundToInf FPConfig = C.CL_FP_ROUND_TO_INF // round to positive and negative infinity rounding modes supported
FPConfigFMA FPConfig = C.CL_FP_FMA // IEEE754-2008 fused multiply-add is supported
FPConfigSoftFloat FPConfig = C.CL_FP_SOFT_FLOAT // Basic floating-point operations (such as addition, subtraction, multiplication) are implemented in software
)
var fpConfigNameMap = map[FPConfig]string{
FPConfigDenorm: "Denorm",
FPConfigInfNaN: "InfNaN",
FPConfigRoundToNearest: "RoundToNearest",
FPConfigRoundToZero: "RoundToZero",
FPConfigRoundToInf: "RoundToInf",
FPConfigFMA: "FMA",
FPConfigSoftFloat: "SoftFloat",
}
func (c FPConfig) String() string {
var parts []string
for bit, name := range fpConfigNameMap {
if c&bit != 0 {
parts = append(parts, name)
}
}
if parts == nil {
return ""
}
return strings.Join(parts, "|")
}
func (dt DeviceType) String() string {
var parts []string
if dt&DeviceTypeCPU != 0 {
parts = append(parts, "CPU")
}
if dt&DeviceTypeGPU != 0 {
parts = append(parts, "GPU")
}
if dt&DeviceTypeAccelerator != 0 {
parts = append(parts, "Accelerator")
}
if dt&DeviceTypeDefault != 0 {
parts = append(parts, "Default")
}
if parts == nil {
parts = append(parts, "None")
}
return strings.Join(parts, "|")
}
type Device struct {
id C.cl_device_id
}
func buildDeviceIdList(devices []*Device) []C.cl_device_id {
deviceIds := make([]C.cl_device_id, len(devices))
for i, d := range devices {
deviceIds[i] = d.id
}
return deviceIds
}
// Obtain the list of devices available on a platform. 'platform' refers
// to the platform returned by GetPlatforms or can be nil. If platform
// is nil, the behavior is implementation-defined.
func GetDevices(platform *Platform, deviceType DeviceType) ([]*Device, error) {
var deviceIds [maxDeviceCount]C.cl_device_id
var numDevices C.cl_uint
var platformId C.cl_platform_id
if platform != nil {
platformId = platform.id
}
if err := C.clGetDeviceIDs(platformId, C.cl_device_type(deviceType), C.cl_uint(maxDeviceCount), &deviceIds[0], &numDevices); err != C.CL_SUCCESS {
return nil, toError(err)
}
if numDevices > maxDeviceCount {
numDevices = maxDeviceCount
}
devices := make([]*Device, numDevices)
for i := 0; i < int(numDevices); i++ {
devices[i] = &Device{id: deviceIds[i]}
}
return devices, nil
}
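// Usage sketch (illustrative addition): walk every platform and return the first
// GPU, falling back to the first device of any other type. ErrUnknown is reused
// here as a catch-all failure value; firstUsableDevice is a hypothetical helper name.
func firstUsableDevice() (*Device, error) {
	platforms, err := GetPlatforms()
	if err != nil {
		return nil, err
	}
	var fallback *Device
	for _, p := range platforms {
		devices, err := GetDevices(p, DeviceTypeAll)
		if err != nil {
			continue
		}
		for _, d := range devices {
			if d.Type() == DeviceTypeGPU {
				return d, nil
			}
			if fallback == nil {
				fallback = d
			}
		}
	}
	if fallback == nil {
		return nil, ErrUnknown
	}
	return fallback, nil
}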
func (d *Device) nullableId() C.cl_device_id {
if d == nil {
return nil
}
return d.id
}
func (d *Device) GetInfoString(param C.cl_device_info, panicOnError bool) (string, error) {
var strC [1024]C.char
var strN C.size_t
if err := C.clGetDeviceInfo(d.id, param, 1024, unsafe.Pointer(&strC), &strN); err != C.CL_SUCCESS {
if panicOnError {
panic("Should never fail")
}
return "", toError(err)
}
// OpenCL strings are NUL-terminated, and the terminator is included in strN
// Go strings aren't NUL-terminated, so subtract 1 from the length
return C.GoStringN((*C.char)(unsafe.Pointer(&strC)), C.int(strN-1)), nil
}
func (d *Device) getInfoUint(param C.cl_device_info, panicOnError bool) (uint, error) {
var val C.cl_uint
if err := C.clGetDeviceInfo(d.id, param, C.size_t(unsafe.Sizeof(val)), unsafe.Pointer(&val), nil); err != C.CL_SUCCESS {
if panicOnError {
panic("Should never fail")
}
return 0, toError(err)
}
return uint(val), nil
}
func (d *Device) getInfoSize(param C.cl_device_info, panicOnError bool) (int, error) {
var val C.size_t
if err := C.clGetDeviceInfo(d.id, param, C.size_t(unsafe.Sizeof(val)), unsafe.Pointer(&val), nil); err != C.CL_SUCCESS {
if panicOnError {
panic("Should never fail")
}
return 0, toError(err)
}
return int(val), nil
}
func (d *Device) getInfoUlong(param C.cl_device_info, panicOnError bool) (int64, error) {
var val C.cl_ulong
if err := C.clGetDeviceInfo(d.id, param, C.size_t(unsafe.Sizeof(val)), unsafe.Pointer(&val), nil); err != C.CL_SUCCESS {
if panicOnError {
panic("Should never fail")
}
return 0, toError(err)
}
return int64(val), nil
}
func (d *Device) getInfoBool(param C.cl_device_info, panicOnError bool) (bool, error) {
var val C.cl_bool
if err := C.clGetDeviceInfo(d.id, param, C.size_t(unsafe.Sizeof(val)), unsafe.Pointer(&val), nil); err != C.CL_SUCCESS {
if panicOnError {
panic("Should never fail")
}
return false, toError(err)
}
return val == C.CL_TRUE, nil
}
func (d *Device) Name() string {
str, _ := d.GetInfoString(C.CL_DEVICE_NAME, true)
return str
}
func (d *Device) Vendor() string {
str, _ := d.GetInfoString(C.CL_DEVICE_VENDOR, true)
return str
}
func (d *Device) Extensions() string {
str, _ := d.GetInfoString(C.CL_DEVICE_EXTENSIONS, true)
return str
}
func (d *Device) OpenCLCVersion() string {
str, _ := d.GetInfoString(C.CL_DEVICE_OPENCL_C_VERSION, true)
return str
}
func (d *Device) Profile() string {
str, _ := d.GetInfoString(C.CL_DEVICE_PROFILE, true)
return str
}
func (d *Device) Version() string {
str, _ := d.GetInfoString(C.CL_DEVICE_VERSION, true)
return str
}
func (d *Device) DriverVersion() string {
str, _ := d.GetInfoString(C.CL_DRIVER_VERSION, true)
return str
}
// The default compute device address space size specified as an
// unsigned integer value in bits. Currently supported values are 32 or 64 bits.
func (d *Device) AddressBits() int {
val, _ := d.getInfoUint(C.CL_DEVICE_ADDRESS_BITS, true)
return int(val)
}
// Size of global memory cache line in bytes.
func (d *Device) GlobalMemCachelineSize() int {
val, _ := d.getInfoUint(C.CL_DEVICE_GLOBAL_MEM_CACHELINE_SIZE, true)
return int(val)
}
// Maximum configured clock frequency of the device in MHz.
func (d *Device) MaxClockFrequency() int {
val, _ := d.getInfoUint(C.CL_DEVICE_MAX_CLOCK_FREQUENCY, true)
return int(val)
}
// The number of parallel compute units on the OpenCL device.
// A work-group executes on a single compute unit. The minimum value is 1.
func (d *Device) MaxComputeUnits() int {
val, _ := d.getInfoUint(C.CL_DEVICE_MAX_COMPUTE_UNITS, true)
return int(val)
}
// Max number of arguments declared with the __constant qualifier in a kernel.
// The minimum value is 8 for devices that are not of type CL_DEVICE_TYPE_CUSTOM.
func (d *Device) MaxConstantArgs() int {
val, _ := d.getInfoUint(C.CL_DEVICE_MAX_CONSTANT_ARGS, true)
return int(val)
}
// Max number of simultaneous image objects that can be read by a kernel.
// The minimum value is 128 if CL_DEVICE_IMAGE_SUPPORT is CL_TRUE.
func (d *Device) MaxReadImageArgs() int {
val, _ := d.getInfoUint(C.CL_DEVICE_MAX_READ_IMAGE_ARGS, true)
return int(val)
}
// Maximum number of samplers that can be used in a kernel. The minimum
// value is 16 if CL_DEVICE_IMAGE_SUPPORT is CL_TRUE. (Also see sampler_t.)
func (d *Device) MaxSamplers() int {
val, _ := d.getInfoUint(C.CL_DEVICE_MAX_SAMPLERS, true)
return int(val)
}
// Maximum dimensions that specify the global and local work-item IDs used
// by the data parallel execution model. (Refer to clEnqueueNDRangeKernel).
// The minimum value is 3 for devices that are not of type CL_DEVICE_TYPE_CUSTOM.
func (d *Device) MaxWorkItemDimensions() int {
val, _ := d.getInfoUint(C.CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS, true)
return int(val)
}
// Max number of simultaneous image objects that can be written to by a
// kernel. The minimum value is 8 if CL_DEVICE_IMAGE_SUPPORT is CL_TRUE.
func (d *Device) MaxWriteImageArgs() int {
val, _ := d.getInfoUint(C.CL_DEVICE_MAX_WRITE_IMAGE_ARGS, true)
return int(val)
}
// The minimum value is the size (in bits) of the largest OpenCL built-in
// data type supported by the device (long16 in FULL profile, long16 or
// int16 in EMBEDDED profile) for devices that are not of type CL_DEVICE_TYPE_CUSTOM.
func (d *Device) MemBaseAddrAlign() int {
val, _ := d.getInfoUint(C.CL_DEVICE_MEM_BASE_ADDR_ALIGN, true)
return int(val)
}
func (d *Device) NativeVectorWidthChar() int {
val, _ := d.getInfoUint(C.CL_DEVICE_NATIVE_VECTOR_WIDTH_CHAR, true)
return int(val)
}
func (d *Device) NativeVectorWidthShort() int {
val, _ := d.getInfoUint(C.CL_DEVICE_NATIVE_VECTOR_WIDTH_SHORT, true)
return int(val)
}
func (d *Device) NativeVectorWidthInt() int {
val, _ := d.getInfoUint(C.CL_DEVICE_NATIVE_VECTOR_WIDTH_INT, true)
return int(val)
}
func (d *Device) NativeVectorWidthLong() int {
val, _ := d.getInfoUint(C.CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG, true)
return int(val)
}
func (d *Device) NativeVectorWidthFloat() int {
val, _ := d.getInfoUint(C.CL_DEVICE_NATIVE_VECTOR_WIDTH_FLOAT, true)
return int(val)
}
func (d *Device) NativeVectorWidthDouble() int {
val, _ := d.getInfoUint(C.CL_DEVICE_NATIVE_VECTOR_WIDTH_DOUBLE, true)
return int(val)
}
func (d *Device) NativeVectorWidthHalf() int {
val, _ := d.getInfoUint(C.CL_DEVICE_NATIVE_VECTOR_WIDTH_HALF, true)
return int(val)
}
// Max height of 2D image in pixels. The minimum value is 8192
// if CL_DEVICE_IMAGE_SUPPORT is CL_TRUE.
func (d *Device) Image2DMaxHeight() int {
val, _ := d.getInfoSize(C.CL_DEVICE_IMAGE2D_MAX_HEIGHT, true)
return int(val)
}
// Max width of 2D image or 1D image not created from a buffer object in
// pixels. The minimum value is 8192 if CL_DEVICE_IMAGE_SUPPORT is CL_TRUE.
func (d *Device) Image2DMaxWidth() int {
val, _ := d.getInfoSize(C.CL_DEVICE_IMAGE2D_MAX_WIDTH, true)
return int(val)
}
// Max depth of 3D image in pixels. The minimum value is 2048 if CL_DEVICE_IMAGE_SUPPORT is CL_TRUE.
func (d *Device) Image3DMaxDepth() int {
val, _ := d.getInfoSize(C.CL_DEVICE_IMAGE3D_MAX_DEPTH, true)
return int(val)
}
// Max height of 3D image in pixels. The minimum value is 2048 if CL_DEVICE_IMAGE_SUPPORT is CL_TRUE.
func (d *Device) Image3DMaxHeight() int {
val, _ := d.getInfoSize(C.CL_DEVICE_IMAGE3D_MAX_HEIGHT, true)
return int(val)
}
// Max width of 3D image in pixels. The minimum value is 2048 if CL_DEVICE_IMAGE_SUPPORT is CL_TRUE.
func (d *Device) Image3DMaxWidth() int {
val, _ := d.getInfoSize(C.CL_DEVICE_IMAGE3D_MAX_WIDTH, true)
return int(val)
}
// Max size in bytes of the arguments that can be passed to a kernel. The
// minimum value is 1024 for devices that are not of type CL_DEVICE_TYPE_CUSTOM.
// For this minimum value, only a maximum of 128 arguments can be passed to a kernel.
func (d *Device) MaxParameterSize() int {
val, _ := d.getInfoSize(C.CL_DEVICE_MAX_PARAMETER_SIZE, true)
return int(val)
}
// Maximum number of work-items in a work-group executing a kernel on a
// single compute unit, using the data parallel execution model. (Refer
// to clEnqueueNDRangeKernel). The minimum value is 1.
func (d *Device) MaxWorkGroupSize() int {
val, _ := d.getInfoSize(C.CL_DEVICE_MAX_WORK_GROUP_SIZE, true)
return int(val)
}
// Describes the resolution of device timer. This is measured in nanoseconds.
func (d *Device) ProfilingTimerResolution() int {
val, _ := d.getInfoSize(C.CL_DEVICE_PROFILING_TIMER_RESOLUTION, true)
return int(val)
}
// Size of local memory arena in bytes. The minimum value is 32 KB for
// devices that are not of type CL_DEVICE_TYPE_CUSTOM.
func (d *Device) LocalMemSize() int64 {
val, _ := d.getInfoUlong(C.CL_DEVICE_LOCAL_MEM_SIZE, true)
return val
}
// Max size in bytes of a constant buffer allocation. The minimum value is
// 64 KB for devices that are not of type CL_DEVICE_TYPE_CUSTOM.
func (d *Device) MaxConstantBufferSize() int64 {
val, _ := d.getInfoUlong(C.CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE, true)
return val
}
// Max size of memory object allocation in bytes. The minimum value is max
// (1/4th of CL_DEVICE_GLOBAL_MEM_SIZE, 128*1024*1024) for devices that are
// not of type CL_DEVICE_TYPE_CUSTOM.
func (d *Device) MaxMemAllocSize() int64 {
val, _ := d.getInfoUlong(C.CL_DEVICE_MAX_MEM_ALLOC_SIZE, true)
return val
}
// Size of global device memory in bytes.
func (d *Device) GlobalMemSize() int64 {
val, _ := d.getInfoUlong(C.CL_DEVICE_GLOBAL_MEM_SIZE, true)
return val
}
func (d *Device) Available() bool {
val, _ := d.getInfoBool(C.CL_DEVICE_AVAILABLE, true)
return val
}
func (d *Device) CompilerAvailable() bool {
val, _ := d.getInfoBool(C.CL_DEVICE_COMPILER_AVAILABLE, true)
return val
}
func (d *Device) EndianLittle() bool {
val, _ := d.getInfoBool(C.CL_DEVICE_ENDIAN_LITTLE, true)
return val
}
// Is CL_TRUE if the device implements error correction for all
// accesses to compute device memory (global and constant). Is
// CL_FALSE if the device does not implement such error correction.
func (d *Device) ErrorCorrectionSupport() bool {
val, _ := d.getInfoBool(C.CL_DEVICE_ERROR_CORRECTION_SUPPORT, true)
return val
}
func (d *Device) HostUnifiedMemory() bool {
val, _ := d.getInfoBool(C.CL_DEVICE_HOST_UNIFIED_MEMORY, true)
return val
}
func (d *Device) ImageSupport() bool {
val, _ := d.getInfoBool(C.CL_DEVICE_IMAGE_SUPPORT, true)
return val
}
func (d *Device) Type() DeviceType {
var deviceType C.cl_device_type
if err := C.clGetDeviceInfo(d.id, C.CL_DEVICE_TYPE, C.size_t(unsafe.Sizeof(deviceType)), unsafe.Pointer(&deviceType), nil); err != C.CL_SUCCESS {
panic("Failed to get device type")
}
return DeviceType(deviceType)
}
// Describes double precision floating-point capability of the OpenCL device
func (d *Device) DoubleFPConfig() FPConfig {
var fpConfig C.cl_device_fp_config
if err := C.clGetDeviceInfo(d.id, C.CL_DEVICE_DOUBLE_FP_CONFIG, C.size_t(unsafe.Sizeof(fpConfig)), unsafe.Pointer(&fpConfig), nil); err != C.CL_SUCCESS {
panic("Failed to get double FP config")
}
return FPConfig(fpConfig)
}
// Describes the OPTIONAL half precision floating-point capability of the OpenCL device
func (d *Device) HalfFPConfig() FPConfig {
var fpConfig C.cl_device_fp_config
err := C.clGetDeviceInfo(d.id, C.CL_DEVICE_HALF_FP_CONFIG, C.size_t(unsafe.Sizeof(fpConfig)), unsafe.Pointer(&fpConfig), nil)
if err != C.CL_SUCCESS {
return FPConfig(0)
}
return FPConfig(fpConfig)
}
// Type of local memory supported. This can be set to CL_LOCAL implying dedicated
// local memory storage such as SRAM, or CL_GLOBAL. For custom devices, CL_NONE
// can also be returned indicating no local memory support.
func (d *Device) LocalMemType() LocalMemType {
var memType C.cl_device_local_mem_type
if err := C.clGetDeviceInfo(d.id, C.CL_DEVICE_LOCAL_MEM_TYPE, C.size_t(unsafe.Sizeof(memType)), unsafe.Pointer(&memType), nil); err != C.CL_SUCCESS {
return LocalMemType(C.CL_NONE)
}
return LocalMemType(memType)
}
// Describes the execution capabilities of the device. The mandated minimum capability is CL_EXEC_KERNEL.
func (d *Device) ExecutionCapabilities() ExecCapability {
var execCap C.cl_device_exec_capabilities
if err := C.clGetDeviceInfo(d.id, C.CL_DEVICE_EXECUTION_CAPABILITIES, C.size_t(unsafe.Sizeof(execCap)), unsafe.Pointer(&execCap), nil); err != C.CL_SUCCESS {
panic("Failed to get execution capabilities")
}
return ExecCapability(execCap)
}
func (d *Device) GlobalMemCacheType() MemCacheType {
var memType C.cl_device_mem_cache_type
if err := C.clGetDeviceInfo(d.id, C.CL_DEVICE_GLOBAL_MEM_CACHE_TYPE, C.size_t(unsafe.Sizeof(memType)), unsafe.Pointer(&memType), nil); err != C.CL_SUCCESS {
return MemCacheType(C.CL_NONE)
}
return MemCacheType(memType)
}
// Maximum number of work-items that can be specified in each dimension of the work-group to clEnqueueNDRangeKernel.
//
// Returns n size_t entries, where n is the value returned by the query for CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS.
//
// The minimum value is (1, 1, 1) for devices that are not of type CL_DEVICE_TYPE_CUSTOM.
func (d *Device) MaxWorkItemSizes() []int {
dims := d.MaxWorkItemDimensions()
sizes := make([]C.size_t, dims)
if err := C.clGetDeviceInfo(d.id, C.CL_DEVICE_MAX_WORK_ITEM_SIZES, C.size_t(int(unsafe.Sizeof(sizes[0]))*dims), unsafe.Pointer(&sizes[0]), nil); err != C.CL_SUCCESS {
panic("Failed to get max work item sizes")
}
intSizes := make([]int, dims)
for i, s := range sizes {
intSizes[i] = int(s)
}
return intSizes
}
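// Usage sketch (illustrative addition): clamp a requested 1-D work-group size to
// both the per-dimension and total limits reported by the device.
// clampWorkGroupSize1D is a hypothetical helper name.
func clampWorkGroupSize1D(d *Device, want int) int {
	limit := d.MaxWorkGroupSize()
	if sizes := d.MaxWorkItemSizes(); len(sizes) > 0 && sizes[0] < limit {
		limit = sizes[0]
	}
	if want > limit {
		return limit
	}
	return want
}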

View File

@ -0,0 +1,51 @@
// +build cl12
package cl
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #endif
import "C"
import "unsafe"
const FPConfigCorrectlyRoundedDivideSqrt FPConfig = C.CL_FP_CORRECTLY_ROUNDED_DIVIDE_SQRT
func init() {
fpConfigNameMap[FPConfigCorrectlyRoundedDivideSqrt] = "CorrectlyRoundedDivideSqrt"
}
func (d *Device) BuiltInKernels() string {
str, _ := d.GetInfoString(C.CL_DEVICE_BUILT_IN_KERNELS, true)
return str
}
// Is CL_FALSE if the implementation does not have a linker available. Is CL_TRUE if the linker is available. This can be CL_FALSE for the embedded platform profile only. This must be CL_TRUE if CL_DEVICE_COMPILER_AVAILABLE is CL_TRUE
func (d *Device) LinkerAvailable() bool {
val, _ := d.getInfoBool(C.CL_DEVICE_LINKER_AVAILABLE, true)
return val
}
func (d *Device) ParentDevice() *Device {
var deviceId C.cl_device_id
if err := C.clGetDeviceInfo(d.id, C.CL_DEVICE_PARENT_DEVICE, C.size_t(unsafe.Sizeof(deviceId)), unsafe.Pointer(&deviceId), nil); err != C.CL_SUCCESS {
panic("ParentDevice failed")
}
if deviceId == nil {
return nil
}
return &Device{id: deviceId}
}
// Max number of pixels for a 1D image created from a buffer object. The minimum value is 65536 if CL_DEVICE_IMAGE_SUPPORT is CL_TRUE.
func (d *Device) ImageMaxBufferSize() int {
val, _ := d.getInfoSize(C.CL_DEVICE_IMAGE_MAX_BUFFER_SIZE, true)
return int(val)
}
// Max number of images in a 1D or 2D image array. The minimum value is 2048 if CL_DEVICE_IMAGE_SUPPORT is CL_TRUE
func (d *Device) ImageMaxArraySize() int {
val, _ := d.getInfoSize(C.CL_DEVICE_IMAGE_MAX_ARRAY_SIZE, true)
return int(val)
}

File diff suppressed because it is too large

View File

@ -0,0 +1,315 @@
/*******************************************************************************
* Copyright (c) 2008-2013 The Khronos Group Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and/or associated documentation files (the
* "Materials"), to deal in the Materials without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Materials, and to
* permit persons to whom the Materials are furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included
* in all copies or substantial portions of the Materials.
*
* THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
* MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
******************************************************************************/
/* $Revision: 11928 $ on $Date: 2010-07-13 09:04:56 -0700 (Tue, 13 Jul 2010) $ */
/* cl_ext.h contains OpenCL extensions which don't have external */
/* (OpenGL, D3D) dependencies. */
#ifndef __CL_EXT_H
#define __CL_EXT_H
#ifdef __cplusplus
extern "C" {
#endif
#ifdef __APPLE__
#include <AvailabilityMacros.h>
#endif
#include <cl.h>
/* cl_khr_fp16 extension - no extension #define since it has no functions */
#define CL_DEVICE_HALF_FP_CONFIG 0x1033
/* Memory object destruction
*
* Apple extension for use to manage externally allocated buffers used with cl_mem objects with CL_MEM_USE_HOST_PTR
*
* Registers a user callback function that will be called when the memory object is deleted and its resources
* freed. Each call to clSetMemObjectCallbackFn registers the specified user callback function on a callback
* stack associated with memobj. The registered user callback functions are called in the reverse order in
* which they were registered. The user callback functions are called and then the memory object is deleted
* and its resources freed. This provides a mechanism for the application (and libraries) using memobj to be
* notified when the memory referenced by host_ptr, specified when the memory object is created and used as
* the storage bits for the memory object, can be reused or freed.
*
* The application may not call CL api's with the cl_mem object passed to the pfn_notify.
*
* Please check for the "cl_APPLE_SetMemObjectDestructor" extension using clGetDeviceInfo(CL_DEVICE_EXTENSIONS)
* before using.
*/
#define cl_APPLE_SetMemObjectDestructor 1
cl_int CL_API_ENTRY clSetMemObjectDestructorAPPLE( cl_mem /* memobj */,
void (* /*pfn_notify*/)( cl_mem /* memobj */, void* /*user_data*/),
void * /*user_data */ ) CL_EXT_SUFFIX__VERSION_1_0;
/* Context Logging Functions
*
* The next three convenience functions are intended to be used as the pfn_notify parameter to clCreateContext().
* Please check for the "cl_APPLE_ContextLoggingFunctions" extension using clGetDeviceInfo(CL_DEVICE_EXTENSIONS)
* before using.
*
* clLogMessagesToSystemLog forwards all log messages to the Apple System Logger
*/
#define cl_APPLE_ContextLoggingFunctions 1
extern void CL_API_ENTRY clLogMessagesToSystemLogAPPLE( const char * /* errstr */,
const void * /* private_info */,
size_t /* cb */,
void * /* user_data */ ) CL_EXT_SUFFIX__VERSION_1_0;
/* clLogMessagesToStdout sends all log messages to the file descriptor stdout */
extern void CL_API_ENTRY clLogMessagesToStdoutAPPLE( const char * /* errstr */,
const void * /* private_info */,
size_t /* cb */,
void * /* user_data */ ) CL_EXT_SUFFIX__VERSION_1_0;
/* clLogMessagesToStderr sends all log messages to the file descriptor stderr */
extern void CL_API_ENTRY clLogMessagesToStderrAPPLE( const char * /* errstr */,
const void * /* private_info */,
size_t /* cb */,
void * /* user_data */ ) CL_EXT_SUFFIX__VERSION_1_0;
/************************
* cl_khr_icd extension *
************************/
#define cl_khr_icd 1
/* cl_platform_info */
#define CL_PLATFORM_ICD_SUFFIX_KHR 0x0920
/* Additional Error Codes */
#define CL_PLATFORM_NOT_FOUND_KHR -1001
extern CL_API_ENTRY cl_int CL_API_CALL
clIcdGetPlatformIDsKHR(cl_uint /* num_entries */,
cl_platform_id * /* platforms */,
cl_uint * /* num_platforms */);
typedef CL_API_ENTRY cl_int (CL_API_CALL *clIcdGetPlatformIDsKHR_fn)(
cl_uint /* num_entries */,
cl_platform_id * /* platforms */,
cl_uint * /* num_platforms */);
/* Extension: cl_khr_image2D_buffer
*
* This extension allows a 2D image to be created from a cl_mem buffer without a copy.
* The type associated with a 2D image created from a buffer in an OpenCL program is image2d_t.
* Both the sampler and sampler-less read_image built-in functions are supported for 2D images
* and 2D images created from a buffer. Similarly, the write_image built-ins are also supported
* for 2D images created from a buffer.
*
* When the 2D image from buffer is created, the client must specify the width,
* height, image format (i.e. channel order and channel data type) and optionally the row pitch
*
* The pitch specified must be a multiple of CL_DEVICE_IMAGE_PITCH_ALIGNMENT pixels.
* The base address of the buffer must be aligned to CL_DEVICE_IMAGE_BASE_ADDRESS_ALIGNMENT pixels.
*/
/*************************************
* cl_khr_initalize_memory extension *
*************************************/
#define CL_CONTEXT_MEMORY_INITIALIZE_KHR 0x200E
/**************************************
* cl_khr_terminate_context extension *
**************************************/
#define CL_DEVICE_TERMINATE_CAPABILITY_KHR 0x200F
#define CL_CONTEXT_TERMINATE_KHR 0x2010
#define cl_khr_terminate_context 1
extern CL_API_ENTRY cl_int CL_API_CALL clTerminateContextKHR(cl_context /* context */) CL_EXT_SUFFIX__VERSION_1_2;
typedef CL_API_ENTRY cl_int (CL_API_CALL *clTerminateContextKHR_fn)(cl_context /* context */) CL_EXT_SUFFIX__VERSION_1_2;
/*
* Extension: cl_khr_spir
*
* This extension adds support to create an OpenCL program object from a
* Standard Portable Intermediate Representation (SPIR) instance
*/
#define CL_DEVICE_SPIR_VERSIONS 0x40E0
#define CL_PROGRAM_BINARY_TYPE_INTERMEDIATE 0x40E1
/******************************************
* cl_nv_device_attribute_query extension *
******************************************/
/* cl_nv_device_attribute_query extension - no extension #define since it has no functions */
#define CL_DEVICE_COMPUTE_CAPABILITY_MAJOR_NV 0x4000
#define CL_DEVICE_COMPUTE_CAPABILITY_MINOR_NV 0x4001
#define CL_DEVICE_REGISTERS_PER_BLOCK_NV 0x4002
#define CL_DEVICE_WARP_SIZE_NV 0x4003
#define CL_DEVICE_GPU_OVERLAP_NV 0x4004
#define CL_DEVICE_KERNEL_EXEC_TIMEOUT_NV 0x4005
#define CL_DEVICE_INTEGRATED_MEMORY_NV 0x4006
/*********************************
* cl_amd_device_attribute_query *
*********************************/
#define CL_DEVICE_PROFILING_TIMER_OFFSET_AMD 0x4036
/*********************************
* cl_arm_printf extension
*********************************/
#define CL_PRINTF_CALLBACK_ARM 0x40B0
#define CL_PRINTF_BUFFERSIZE_ARM 0x40B1
#ifdef CL_VERSION_1_1
/***********************************
* cl_ext_device_fission extension *
***********************************/
#define cl_ext_device_fission 1
extern CL_API_ENTRY cl_int CL_API_CALL
clReleaseDeviceEXT( cl_device_id /*device*/ ) CL_EXT_SUFFIX__VERSION_1_1;
typedef CL_API_ENTRY cl_int
(CL_API_CALL *clReleaseDeviceEXT_fn)( cl_device_id /*device*/ ) CL_EXT_SUFFIX__VERSION_1_1;
extern CL_API_ENTRY cl_int CL_API_CALL
clRetainDeviceEXT( cl_device_id /*device*/ ) CL_EXT_SUFFIX__VERSION_1_1;
typedef CL_API_ENTRY cl_int
(CL_API_CALL *clRetainDeviceEXT_fn)( cl_device_id /*device*/ ) CL_EXT_SUFFIX__VERSION_1_1;
typedef cl_ulong cl_device_partition_property_ext;
extern CL_API_ENTRY cl_int CL_API_CALL
clCreateSubDevicesEXT( cl_device_id /*in_device*/,
const cl_device_partition_property_ext * /* properties */,
cl_uint /*num_entries*/,
cl_device_id * /*out_devices*/,
cl_uint * /*num_devices*/ ) CL_EXT_SUFFIX__VERSION_1_1;
typedef CL_API_ENTRY cl_int
( CL_API_CALL * clCreateSubDevicesEXT_fn)( cl_device_id /*in_device*/,
const cl_device_partition_property_ext * /* properties */,
cl_uint /*num_entries*/,
cl_device_id * /*out_devices*/,
cl_uint * /*num_devices*/ ) CL_EXT_SUFFIX__VERSION_1_1;
/* cl_device_partition_property_ext */
#define CL_DEVICE_PARTITION_EQUALLY_EXT 0x4050
#define CL_DEVICE_PARTITION_BY_COUNTS_EXT 0x4051
#define CL_DEVICE_PARTITION_BY_NAMES_EXT 0x4052
#define CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN_EXT 0x4053
/* clDeviceGetInfo selectors */
#define CL_DEVICE_PARENT_DEVICE_EXT 0x4054
#define CL_DEVICE_PARTITION_TYPES_EXT 0x4055
#define CL_DEVICE_AFFINITY_DOMAINS_EXT 0x4056
#define CL_DEVICE_REFERENCE_COUNT_EXT 0x4057
#define CL_DEVICE_PARTITION_STYLE_EXT 0x4058
/* error codes */
#define CL_DEVICE_PARTITION_FAILED_EXT -1057
#define CL_INVALID_PARTITION_COUNT_EXT -1058
#define CL_INVALID_PARTITION_NAME_EXT -1059
/* CL_AFFINITY_DOMAINs */
#define CL_AFFINITY_DOMAIN_L1_CACHE_EXT 0x1
#define CL_AFFINITY_DOMAIN_L2_CACHE_EXT 0x2
#define CL_AFFINITY_DOMAIN_L3_CACHE_EXT 0x3
#define CL_AFFINITY_DOMAIN_L4_CACHE_EXT 0x4
#define CL_AFFINITY_DOMAIN_NUMA_EXT 0x10
#define CL_AFFINITY_DOMAIN_NEXT_FISSIONABLE_EXT 0x100
/* cl_device_partition_property_ext list terminators */
#define CL_PROPERTIES_LIST_END_EXT ((cl_device_partition_property_ext) 0)
#define CL_PARTITION_BY_COUNTS_LIST_END_EXT ((cl_device_partition_property_ext) 0)
#define CL_PARTITION_BY_NAMES_LIST_END_EXT ((cl_device_partition_property_ext) 0 - 1)
/*********************************
* cl_qcom_ext_host_ptr extension
*********************************/
#define CL_MEM_EXT_HOST_PTR_QCOM (1 << 29)
#define CL_DEVICE_EXT_MEM_PADDING_IN_BYTES_QCOM 0x40A0
#define CL_DEVICE_PAGE_SIZE_QCOM 0x40A1
#define CL_IMAGE_ROW_ALIGNMENT_QCOM 0x40A2
#define CL_IMAGE_SLICE_ALIGNMENT_QCOM 0x40A3
#define CL_MEM_HOST_UNCACHED_QCOM 0x40A4
#define CL_MEM_HOST_WRITEBACK_QCOM 0x40A5
#define CL_MEM_HOST_WRITETHROUGH_QCOM 0x40A6
#define CL_MEM_HOST_WRITE_COMBINING_QCOM 0x40A7
typedef cl_uint cl_image_pitch_info_qcom;
extern CL_API_ENTRY cl_int CL_API_CALL
clGetDeviceImageInfoQCOM(cl_device_id device,
size_t image_width,
size_t image_height,
const cl_image_format *image_format,
cl_image_pitch_info_qcom param_name,
size_t param_value_size,
void *param_value,
size_t *param_value_size_ret);
typedef struct _cl_mem_ext_host_ptr
{
/* Type of external memory allocation. */
/* Legal values will be defined in layered extensions. */
cl_uint allocation_type;
/* Host cache policy for this external memory allocation. */
cl_uint host_cache_policy;
} cl_mem_ext_host_ptr;
/*********************************
* cl_qcom_ion_host_ptr extension
*********************************/
#define CL_MEM_ION_HOST_PTR_QCOM 0x40A8
typedef struct _cl_mem_ion_host_ptr
{
/* Type of external memory allocation. */
/* Must be CL_MEM_ION_HOST_PTR_QCOM for ION allocations. */
cl_mem_ext_host_ptr ext_host_ptr;
/* ION file descriptor */
int ion_filedesc;
/* Host pointer to the ION allocated memory */
void* ion_hostptr;
} cl_mem_ion_host_ptr;
#endif /* CL_VERSION_1_1 */
#ifdef __cplusplus
}
#endif
#endif /* __CL_EXT_H */

View File

@ -0,0 +1,158 @@
/**********************************************************************************
* Copyright (c) 2008 - 2012 The Khronos Group Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and/or associated documentation files (the
* "Materials"), to deal in the Materials without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Materials, and to
* permit persons to whom the Materials are furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included
* in all copies or substantial portions of the Materials.
*
* THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
* MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
**********************************************************************************/
#ifndef __OPENCL_CL_GL_H
#define __OPENCL_CL_GL_H
#include <cl.h>
#ifdef __cplusplus
extern "C" {
#endif
typedef cl_uint cl_gl_object_type;
typedef cl_uint cl_gl_texture_info;
typedef cl_uint cl_gl_platform_info;
typedef struct __GLsync *cl_GLsync;
/* cl_gl_object_type = 0x2000 - 0x200F enum values are currently taken */
#define CL_GL_OBJECT_BUFFER 0x2000
#define CL_GL_OBJECT_TEXTURE2D 0x2001
#define CL_GL_OBJECT_TEXTURE3D 0x2002
#define CL_GL_OBJECT_RENDERBUFFER 0x2003
#define CL_GL_OBJECT_TEXTURE2D_ARRAY 0x200E
#define CL_GL_OBJECT_TEXTURE1D 0x200F
#define CL_GL_OBJECT_TEXTURE1D_ARRAY 0x2010
#define CL_GL_OBJECT_TEXTURE_BUFFER 0x2011
/* cl_gl_texture_info */
#define CL_GL_TEXTURE_TARGET 0x2004
#define CL_GL_MIPMAP_LEVEL 0x2005
#define CL_GL_NUM_SAMPLES 0x2012
extern CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromGLBuffer(cl_context /* context */,
cl_mem_flags /* flags */,
cl_GLuint /* bufobj */,
int * /* errcode_ret */) CL_API_SUFFIX__VERSION_1_0;
extern CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromGLTexture(cl_context /* context */,
cl_mem_flags /* flags */,
cl_GLenum /* target */,
cl_GLint /* miplevel */,
cl_GLuint /* texture */,
cl_int * /* errcode_ret */) CL_API_SUFFIX__VERSION_1_2;
extern CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromGLRenderbuffer(cl_context /* context */,
cl_mem_flags /* flags */,
cl_GLuint /* renderbuffer */,
cl_int * /* errcode_ret */) CL_API_SUFFIX__VERSION_1_0;
extern CL_API_ENTRY cl_int CL_API_CALL
clGetGLObjectInfo(cl_mem /* memobj */,
cl_gl_object_type * /* gl_object_type */,
cl_GLuint * /* gl_object_name */) CL_API_SUFFIX__VERSION_1_0;
extern CL_API_ENTRY cl_int CL_API_CALL
clGetGLTextureInfo(cl_mem /* memobj */,
cl_gl_texture_info /* param_name */,
size_t /* param_value_size */,
void * /* param_value */,
size_t * /* param_value_size_ret */) CL_API_SUFFIX__VERSION_1_0;
extern CL_API_ENTRY cl_int CL_API_CALL
clEnqueueAcquireGLObjects(cl_command_queue /* command_queue */,
cl_uint /* num_objects */,
const cl_mem * /* mem_objects */,
cl_uint /* num_events_in_wait_list */,
const cl_event * /* event_wait_list */,
cl_event * /* event */) CL_API_SUFFIX__VERSION_1_0;
extern CL_API_ENTRY cl_int CL_API_CALL
clEnqueueReleaseGLObjects(cl_command_queue /* command_queue */,
cl_uint /* num_objects */,
const cl_mem * /* mem_objects */,
cl_uint /* num_events_in_wait_list */,
const cl_event * /* event_wait_list */,
cl_event * /* event */) CL_API_SUFFIX__VERSION_1_0;
/* Deprecated OpenCL 1.1 APIs */
extern CL_API_ENTRY CL_EXT_PREFIX__VERSION_1_1_DEPRECATED cl_mem CL_API_CALL
clCreateFromGLTexture2D(cl_context /* context */,
cl_mem_flags /* flags */,
cl_GLenum /* target */,
cl_GLint /* miplevel */,
cl_GLuint /* texture */,
cl_int * /* errcode_ret */) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED;
extern CL_API_ENTRY CL_EXT_PREFIX__VERSION_1_1_DEPRECATED cl_mem CL_API_CALL
clCreateFromGLTexture3D(cl_context /* context */,
cl_mem_flags /* flags */,
cl_GLenum /* target */,
cl_GLint /* miplevel */,
cl_GLuint /* texture */,
cl_int * /* errcode_ret */) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED;
/* cl_khr_gl_sharing extension */
#define cl_khr_gl_sharing 1
typedef cl_uint cl_gl_context_info;
/* Additional Error Codes */
#define CL_INVALID_GL_SHAREGROUP_REFERENCE_KHR -1000
/* cl_gl_context_info */
#define CL_CURRENT_DEVICE_FOR_GL_CONTEXT_KHR 0x2006
#define CL_DEVICES_FOR_GL_CONTEXT_KHR 0x2007
/* Additional cl_context_properties */
#define CL_GL_CONTEXT_KHR 0x2008
#define CL_EGL_DISPLAY_KHR 0x2009
#define CL_GLX_DISPLAY_KHR 0x200A
#define CL_WGL_HDC_KHR 0x200B
#define CL_CGL_SHAREGROUP_KHR 0x200C
extern CL_API_ENTRY cl_int CL_API_CALL
clGetGLContextInfoKHR(const cl_context_properties * /* properties */,
cl_gl_context_info /* param_name */,
size_t /* param_value_size */,
void * /* param_value */,
size_t * /* param_value_size_ret */) CL_API_SUFFIX__VERSION_1_0;
typedef CL_API_ENTRY cl_int (CL_API_CALL *clGetGLContextInfoKHR_fn)(
const cl_context_properties * properties,
cl_gl_context_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret);
#ifdef __cplusplus
}
#endif
#endif /* __OPENCL_CL_GL_H */

View File

@ -0,0 +1,65 @@
/**********************************************************************************
* Copyright (c) 2008-2012 The Khronos Group Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and/or associated documentation files (the
* "Materials"), to deal in the Materials without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Materials, and to
* permit persons to whom the Materials are furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included
* in all copies or substantial portions of the Materials.
*
* THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
* MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
**********************************************************************************/
/* $Revision: 11708 $ on $Date: 2010-06-13 23:36:24 -0700 (Sun, 13 Jun 2010) $ */
/* cl_gl_ext.h contains vendor (non-KHR) OpenCL extensions which have */
/* OpenGL dependencies. */
#ifndef __OPENCL_CL_GL_EXT_H
#define __OPENCL_CL_GL_EXT_H
#ifdef __cplusplus
extern "C" {
#endif
#include <cl_gl.h>
/*
* For each extension, follow this template
* cl_VEN_extname extension */
/* #define cl_VEN_extname 1
* ... define new types, if any
* ... define new tokens, if any
* ... define new APIs, if any
*
* If you need GLtypes here, mirror them with a cl_GLtype, rather than including a GL header
* This allows us to avoid having to decide whether to include GL headers or GLES here.
*/
/*
* cl_khr_gl_event extension
* See section 9.9 in the OpenCL 1.1 spec for more information
*/
#define CL_COMMAND_GL_FENCE_SYNC_OBJECT_KHR 0x200D
extern CL_API_ENTRY cl_event CL_API_CALL
clCreateEventFromGLsyncKHR(cl_context /* context */,
cl_GLsync /* cl_GLsync */,
cl_int * /* errcode_ret */) CL_EXT_SUFFIX__VERSION_1_1;
#ifdef __cplusplus
}
#endif
#endif /* __OPENCL_CL_GL_EXT_H */

File diff suppressed because it is too large

View File

@ -0,0 +1,43 @@
/*******************************************************************************
* Copyright (c) 2008-2012 The Khronos Group Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and/or associated documentation files (the
* "Materials"), to deal in the Materials without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Materials, and to
* permit persons to whom the Materials are furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included
* in all copies or substantial portions of the Materials.
*
* THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
* MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
******************************************************************************/
/* $Revision: 11708 $ on $Date: 2010-06-13 23:36:24 -0700 (Sun, 13 Jun 2010) $ */
#ifndef __OPENCL_H
#define __OPENCL_H
#ifdef __cplusplus
extern "C" {
#endif
#include <cl.h>
#include <cl_gl.h>
#include <cl_gl_ext.h>
#include <cl_ext.h>
#ifdef __cplusplus
}
#endif
#endif /* __OPENCL_H */

View File

@ -0,0 +1,83 @@
// +build cl12
package cl
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #endif
import "C"
import (
"image"
"unsafe"
)
func (ctx *Context) CreateImage(flags MemFlag, imageFormat ImageFormat, imageDesc ImageDescription, data []byte) (*MemObject, error) {
format := imageFormat.toCl()
desc := imageDesc.toCl()
var dataPtr unsafe.Pointer
if data != nil {
dataPtr = unsafe.Pointer(&data[0])
}
var err C.cl_int
clBuffer := C.clCreateImage(ctx.clContext, C.cl_mem_flags(flags), &format, &desc, dataPtr, &err)
if err != C.CL_SUCCESS {
return nil, toError(err)
}
if clBuffer == nil {
return nil, ErrUnknown
}
return newMemObject(clBuffer, len(data)), nil
}
func (ctx *Context) CreateImageSimple(flags MemFlag, width, height int, channelOrder ChannelOrder, channelDataType ChannelDataType, data []byte) (*MemObject, error) {
format := ImageFormat{channelOrder, channelDataType}
desc := ImageDescription{
Type: MemObjectTypeImage2D,
Width: width,
Height: height,
}
return ctx.CreateImage(flags, format, desc, data)
}
func (ctx *Context) CreateImageFromImage(flags MemFlag, img image.Image) (*MemObject, error) {
switch m := img.(type) {
case *image.Gray:
format := ImageFormat{ChannelOrderIntensity, ChannelDataTypeUNormInt8}
desc := ImageDescription{
Type: MemObjectTypeImage2D,
Width: m.Bounds().Dx(),
Height: m.Bounds().Dy(),
RowPitch: m.Stride,
}
return ctx.CreateImage(flags, format, desc, m.Pix)
case *image.RGBA:
format := ImageFormat{ChannelOrderRGBA, ChannelDataTypeUNormInt8}
desc := ImageDescription{
Type: MemObjectTypeImage2D,
Width: m.Bounds().Dx(),
Height: m.Bounds().Dy(),
RowPitch: m.Stride,
}
return ctx.CreateImage(flags, format, desc, m.Pix)
}
b := img.Bounds()
w := b.Dx()
h := b.Dy()
data := make([]byte, w*h*4)
dataOffset := 0
for y := 0; y < h; y++ {
for x := 0; x < w; x++ {
c := img.At(x+b.Min.X, y+b.Min.Y)
r, g, b, a := c.RGBA()
data[dataOffset] = uint8(r >> 8)
data[dataOffset+1] = uint8(g >> 8)
data[dataOffset+2] = uint8(b >> 8)
data[dataOffset+3] = uint8(a >> 8)
dataOffset += 4
}
}
return ctx.CreateImageSimple(flags, w, h, ChannelOrderRGBA, ChannelDataTypeUNormInt8, data)
}
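// Usage sketch (illustrative addition): build a small RGBA test pattern with the
// standard image package and hand it to CreateImageFromImage. MemCopyHostPtr is
// assumed to mirror CL_MEM_COPY_HOST_PTR, which clCreateImage requires whenever
// host pixel data is supplied at creation time; createTestPattern is a
// hypothetical helper name.
func createTestPattern(ctx *Context, w, h int) (*MemObject, error) {
	m := image.NewRGBA(image.Rect(0, 0, w, h))
	for y := 0; y < h; y++ {
		for x := 0; x < w; x++ {
			i := m.PixOffset(x, y)
			m.Pix[i+0] = uint8(x * 255 / w) // red ramps left to right
			m.Pix[i+1] = uint8(y * 255 / h) // green ramps top to bottom
			m.Pix[i+2] = 0                  // no blue
			m.Pix[i+3] = 255                // fully opaque
		}
	}
	return ctx.CreateImageFromImage(MemReadOnly|MemCopyHostPtr, m)
}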

View File

@ -0,0 +1,127 @@
package cl
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #endif
import "C"
import (
"fmt"
"unsafe"
)
type ErrUnsupportedArgumentType struct {
Index int
Value interface{}
}
func (e ErrUnsupportedArgumentType) Error() string {
return fmt.Sprintf("cl: unsupported argument type for index %d: %+v", e.Index, e.Value)
}
type Kernel struct {
clKernel C.cl_kernel
name string
}
type LocalBuffer int
func releaseKernel(k *Kernel) {
if k.clKernel != nil {
C.clReleaseKernel(k.clKernel)
k.clKernel = nil
}
}
func (k *Kernel) Release() {
releaseKernel(k)
}
func (k *Kernel) SetArgs(args ...interface{}) error {
for index, arg := range args {
if err := k.SetArg(index, arg); err != nil {
return err
}
}
return nil
}
func (k *Kernel) SetArg(index int, arg interface{}) error {
switch val := arg.(type) {
case uint8:
return k.SetArgUint8(index, val)
case int8:
return k.SetArgInt8(index, val)
case uint32:
return k.SetArgUint32(index, val)
case uint64:
return k.SetArgUint64(index, val)
case int32:
return k.SetArgInt32(index, val)
case float32:
return k.SetArgFloat32(index, val)
case *MemObject:
return k.SetArgBuffer(index, val)
case LocalBuffer:
return k.SetArgLocal(index, int(val))
default:
return ErrUnsupportedArgumentType{Index: index, Value: arg}
}
}
func (k *Kernel) SetArgBuffer(index int, buffer *MemObject) error {
return k.SetArgUnsafe(index, int(unsafe.Sizeof(buffer.clMem)), unsafe.Pointer(&buffer.clMem))
}
func (k *Kernel) SetArgFloat32(index int, val float32) error {
return k.SetArgUnsafe(index, int(unsafe.Sizeof(val)), unsafe.Pointer(&val))
}
func (k *Kernel) SetArgInt8(index int, val int8) error {
return k.SetArgUnsafe(index, int(unsafe.Sizeof(val)), unsafe.Pointer(&val))
}
func (k *Kernel) SetArgUint8(index int, val uint8) error {
return k.SetArgUnsafe(index, int(unsafe.Sizeof(val)), unsafe.Pointer(&val))
}
func (k *Kernel) SetArgInt32(index int, val int32) error {
return k.SetArgUnsafe(index, int(unsafe.Sizeof(val)), unsafe.Pointer(&val))
}
func (k *Kernel) SetArgUint32(index int, val uint32) error {
return k.SetArgUnsafe(index, int(unsafe.Sizeof(val)), unsafe.Pointer(&val))
}
func (k *Kernel) SetArgUint64(index int, val uint64) error {
return k.SetArgUnsafe(index, int(unsafe.Sizeof(val)), unsafe.Pointer(&val))
}
func (k *Kernel) SetArgLocal(index int, size int) error {
return k.SetArgUnsafe(index, size, nil)
}
func (k *Kernel) SetArgUnsafe(index, argSize int, arg unsafe.Pointer) error {
//fmt.Println("FUNKY: ", index, argSize)
return toError(C.clSetKernelArg(k.clKernel, C.cl_uint(index), C.size_t(argSize), arg))
}
func (k *Kernel) PreferredWorkGroupSizeMultiple(device *Device) (int, error) {
var size C.size_t
err := C.clGetKernelWorkGroupInfo(k.clKernel, device.nullableId(), C.CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE, C.size_t(unsafe.Sizeof(size)), unsafe.Pointer(&size), nil)
return int(size), toError(err)
}
func (k *Kernel) WorkGroupSize(device *Device) (int, error) {
var size C.size_t
err := C.clGetKernelWorkGroupInfo(k.clKernel, device.nullableId(), C.CL_KERNEL_WORK_GROUP_SIZE, C.size_t(unsafe.Sizeof(size)), unsafe.Pointer(&size), nil)
return int(size), toError(err)
}
func (k *Kernel) NumArgs() (int, error) {
var num C.cl_uint
err := C.clGetKernelInfo(k.clKernel, C.CL_KERNEL_NUM_ARGS, C.size_t(unsafe.Sizeof(num)), unsafe.Pointer(&num), nil)
return int(num), toError(err)
}
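// Usage sketch (illustrative addition): pass a __local scratch buffer through the
// LocalBuffer argument type and pad the global size up to a multiple of the
// chosen work-group size. runWithLocalScratch and the assumed kernel signature
// (global in, global out, local float* scratch, uint n) are hypothetical.
func runWithLocalScratch(queue *CommandQueue, kernel *Kernel, device *Device, in, out *MemObject, n int) error {
	local, err := kernel.WorkGroupSize(device)
	if err != nil {
		return err
	}
	// LocalBuffer only carries a byte size; SetArgLocal passes a nil pointer to clSetKernelArg.
	if err := kernel.SetArgs(in, out, LocalBuffer(4*local), uint32(n)); err != nil {
		return err
	}
	global := n
	if r := n % local; r != 0 {
		global += local - r
	}
	if _, err := queue.EnqueueNDRangeKernel(kernel, nil, []int{global}, []int{local}, nil); err != nil {
		return err
	}
	return queue.Finish()
}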

View File

@ -0,0 +1,7 @@
// +build !cl12
package cl
func (k *Kernel) ArgName(index int) (string, error) {
return "", ErrUnsupported
}

View File

@ -0,0 +1,20 @@
// +build cl12
package cl
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #endif
import "C"
import "unsafe"
func (k *Kernel) ArgName(index int) (string, error) {
var strC [1024]byte
var strN C.size_t
if err := C.clGetKernelArgInfo(k.clKernel, C.cl_uint(index), C.CL_KERNEL_ARG_NAME, 1024, unsafe.Pointer(&strC[0]), &strN); err != C.CL_SUCCESS {
return "", toError(err)
}
return string(strC[:strN-1]), nil // strN includes the trailing NUL reported by clGetKernelArgInfo
}

View File

@ -0,0 +1,83 @@
package cl
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #endif
import "C"
import "unsafe"
const maxPlatforms = 32
type Platform struct {
id C.cl_platform_id
}
// Obtain the list of platforms available.
func GetPlatforms() ([]*Platform, error) {
var platformIds [maxPlatforms]C.cl_platform_id
var nPlatforms C.cl_uint
if err := C.clGetPlatformIDs(C.cl_uint(maxPlatforms), &platformIds[0], &nPlatforms); err != C.CL_SUCCESS {
return nil, toError(err)
}
platforms := make([]*Platform, nPlatforms)
for i := 0; i < int(nPlatforms); i++ {
platforms[i] = &Platform{id: platformIds[i]}
}
return platforms, nil
}
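// Usage sketch (illustrative addition): collect a one-line human-readable
// description of every installed platform. describePlatforms is a hypothetical
// helper name.
func describePlatforms() ([]string, error) {
	platforms, err := GetPlatforms()
	if err != nil {
		return nil, err
	}
	descriptions := make([]string, len(platforms))
	for i, p := range platforms {
		descriptions[i] = p.Name() + " (" + p.Vendor() + ", " + p.Version() + ")"
	}
	return descriptions, nil
}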
func (p *Platform) GetDevices(deviceType DeviceType) ([]*Device, error) {
return GetDevices(p, deviceType)
}
func (p *Platform) getInfoString(param C.cl_platform_info) (string, error) {
var strC [2048]byte
var strN C.size_t
if err := C.clGetPlatformInfo(p.id, param, 2048, unsafe.Pointer(&strC[0]), &strN); err != C.CL_SUCCESS {
return "", toError(err)
}
return string(strC[:(strN - 1)]), nil
}
func (p *Platform) Name() string {
if str, err := p.getInfoString(C.CL_PLATFORM_NAME); err != nil {
panic("Platform.Name() should never fail")
} else {
return str
}
}
func (p *Platform) Vendor() string {
if str, err := p.getInfoString(C.CL_PLATFORM_VENDOR); err != nil {
panic("Platform.Vendor() should never fail")
} else {
return str
}
}
func (p *Platform) Profile() string {
if str, err := p.getInfoString(C.CL_PLATFORM_PROFILE); err != nil {
panic("Platform.Profile() should never fail")
} else {
return str
}
}
func (p *Platform) Version() string {
if str, err := p.getInfoString(C.CL_PLATFORM_VERSION); err != nil {
panic("Platform.Version() should never fail")
} else {
return str
}
}
func (p *Platform) Extensions() string {
if str, err := p.getInfoString(C.CL_PLATFORM_EXTENSIONS); err != nil {
panic("Platform.Extensions() should never fail")
} else {
return str
}
}

View File

@ -0,0 +1,105 @@
package cl
// #include <stdlib.h>
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #endif
import "C"
import (
"fmt"
"runtime"
"unsafe"
)
type BuildError struct {
Message string
Device *Device
}
func (e BuildError) Error() string {
if e.Device != nil {
return fmt.Sprintf("cl: build error on %q: %s", e.Device.Name(), e.Message)
} else {
return fmt.Sprintf("cl: build error: %s", e.Message)
}
}
type Program struct {
clProgram C.cl_program
devices []*Device
}
func releaseProgram(p *Program) {
if p.clProgram != nil {
C.clReleaseProgram(p.clProgram)
p.clProgram = nil
}
}
func (p *Program) Release() {
releaseProgram(p)
}
func (p *Program) BuildProgram(devices []*Device, options string) error {
var cOptions *C.char
if options != "" {
cOptions = C.CString(options)
defer C.free(unsafe.Pointer(cOptions))
}
var deviceList []C.cl_device_id
var deviceListPtr *C.cl_device_id
numDevices := C.cl_uint(len(devices))
if len(devices) > 0 {
deviceList = buildDeviceIdList(devices)
deviceListPtr = &deviceList[0]
}
if err := C.clBuildProgram(p.clProgram, numDevices, deviceListPtr, cOptions, nil, nil); err != C.CL_SUCCESS {
buffer := make([]byte, 4096)
var bLen C.size_t
var err C.cl_int
for _, dev := range p.devices {
for i := 2; i >= 0; i-- {
err = C.clGetProgramBuildInfo(p.clProgram, dev.id, C.CL_PROGRAM_BUILD_LOG, C.size_t(len(buffer)), unsafe.Pointer(&buffer[0]), &bLen)
if err == C.CL_INVALID_VALUE && i > 0 && bLen < 1024*1024 {
// INVALID_VALUE probably means our buffer isn't large enough
buffer = make([]byte, bLen)
} else {
break
}
}
if err != C.CL_SUCCESS {
return toError(err)
}
if bLen > 1 {
return BuildError{
Device: dev,
Message: string(buffer[:bLen-1]),
}
}
}
return BuildError{
Device: nil,
Message: "build failed and produced no log entries",
}
}
return nil
}
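// Usage sketch (illustrative addition): compile one source string, surface the
// compiler log carried by BuildError separately from other failures, and create
// the named kernel. buildKernelSource is a hypothetical helper name; the option
// string is a standard OpenCL C compiler flag.
func buildKernelSource(ctx *Context, source, kernelName string) (*Kernel, error) {
	program, err := ctx.CreateProgramWithSource([]string{source})
	if err != nil {
		return nil, err
	}
	if err := program.BuildProgram(nil, "-cl-fast-relaxed-math"); err != nil {
		if buildErr, ok := err.(BuildError); ok {
			return nil, fmt.Errorf("cl: compile log: %s", buildErr.Message)
		}
		return nil, err
	}
	return program.CreateKernel(kernelName)
}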
func (p *Program) CreateKernel(name string) (*Kernel, error) {
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
var err C.cl_int
clKernel := C.clCreateKernel(p.clProgram, cName, &err)
if err != C.CL_SUCCESS {
return nil, toError(err)
}
kernel := &Kernel{clKernel: clKernel, name: name}
runtime.SetFinalizer(kernel, releaseKernel)
return kernel, nil
}

View File

@ -0,0 +1,193 @@
package cl
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #endif
import "C"
import "unsafe"
type CommandQueueProperty int
const (
CommandQueueOutOfOrderExecModeEnable CommandQueueProperty = C.CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE
CommandQueueProfilingEnable CommandQueueProperty = C.CL_QUEUE_PROFILING_ENABLE
)
type CommandQueue struct {
clQueue C.cl_command_queue
device *Device
}
func releaseCommandQueue(q *CommandQueue) {
if q.clQueue != nil {
C.clReleaseCommandQueue(q.clQueue)
q.clQueue = nil
}
}
// Call clReleaseCommandQueue on the CommandQueue. Using the CommandQueue after Release will cause a panic.
func (q *CommandQueue) Release() {
releaseCommandQueue(q)
}
// Blocks until all previously queued OpenCL commands in a command-queue are issued to the associated device and have completed.
func (q *CommandQueue) Finish() error {
return toError(C.clFinish(q.clQueue))
}
// Issues all previously queued OpenCL commands in a command-queue to the device associated with the command-queue.
func (q *CommandQueue) Flush() error {
return toError(C.clFlush(q.clQueue))
}
// Enqueues a command to map a region of the buffer object given by buffer into the host address space and returns a pointer to this mapped region.
func (q *CommandQueue) EnqueueMapBuffer(buffer *MemObject, blocking bool, flags MapFlag, offset, size int, eventWaitList []*Event) (*MappedMemObject, *Event, error) {
var event C.cl_event
var err C.cl_int
ptr := C.clEnqueueMapBuffer(q.clQueue, buffer.clMem, clBool(blocking), flags.toCl(), C.size_t(offset), C.size_t(size), C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event, &err)
if err != C.CL_SUCCESS {
return nil, nil, toError(err)
}
ev := newEvent(event)
if ptr == nil {
return nil, ev, ErrUnknown
}
return &MappedMemObject{ptr: ptr, size: size}, ev, nil
}
// Enqueues a command to map a region of an image object into the host address space and returns a pointer to this mapped region.
func (q *CommandQueue) EnqueueMapImage(buffer *MemObject, blocking bool, flags MapFlag, origin, region [3]int, eventWaitList []*Event) (*MappedMemObject, *Event, error) {
cOrigin := sizeT3(origin)
cRegion := sizeT3(region)
var event C.cl_event
var err C.cl_int
var rowPitch, slicePitch C.size_t
ptr := C.clEnqueueMapImage(q.clQueue, buffer.clMem, clBool(blocking), flags.toCl(), &cOrigin[0], &cRegion[0], &rowPitch, &slicePitch, C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event, &err)
if err != C.CL_SUCCESS {
return nil, nil, toError(err)
}
ev := newEvent(event)
if ptr == nil {
return nil, ev, ErrUnknown
}
size := 0 // TODO: could calculate this
return &MappedMemObject{ptr: ptr, size: size, rowPitch: int(rowPitch), slicePitch: int(slicePitch)}, ev, nil
}
// Enqueues a command to unmap a previously mapped region of a memory object.
func (q *CommandQueue) EnqueueUnmapMemObject(buffer *MemObject, mappedObj *MappedMemObject, eventWaitList []*Event) (*Event, error) {
var event C.cl_event
if err := C.clEnqueueUnmapMemObject(q.clQueue, buffer.clMem, mappedObj.ptr, C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event); err != C.CL_SUCCESS {
return nil, toError(err)
}
return newEvent(event), nil
}
// Enqueues a command to copy a buffer object to another buffer object.
func (q *CommandQueue) EnqueueCopyBuffer(srcBuffer, dstBuffer *MemObject, srcOffset, dstOffset, byteCount int, eventWaitList []*Event) (*Event, error) {
var event C.cl_event
err := toError(C.clEnqueueCopyBuffer(q.clQueue, srcBuffer.clMem, dstBuffer.clMem, C.size_t(srcOffset), C.size_t(dstOffset), C.size_t(byteCount), C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event))
return newEvent(event), err
}
// Enqueue commands to write to a buffer object from host memory.
func (q *CommandQueue) EnqueueWriteBuffer(buffer *MemObject, blocking bool, offset, dataSize int, dataPtr unsafe.Pointer, eventWaitList []*Event) (*Event, error) {
var event C.cl_event
err := toError(C.clEnqueueWriteBuffer(q.clQueue, buffer.clMem, clBool(blocking), C.size_t(offset), C.size_t(dataSize), dataPtr, C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event))
return newEvent(event), err
}
func (q *CommandQueue) EnqueueWriteBufferFloat32(buffer *MemObject, blocking bool, offset int, data []float32, eventWaitList []*Event) (*Event, error) {
dataPtr := unsafe.Pointer(&data[0])
dataSize := int(unsafe.Sizeof(data[0])) * len(data)
return q.EnqueueWriteBuffer(buffer, blocking, offset, dataSize, dataPtr, eventWaitList)
}
// Enqueue commands to read from a buffer object to host memory.
func (q *CommandQueue) EnqueueReadBuffer(buffer *MemObject, blocking bool, offset, dataSize int, dataPtr unsafe.Pointer, eventWaitList []*Event) (*Event, error) {
var event C.cl_event
err := toError(C.clEnqueueReadBuffer(q.clQueue, buffer.clMem, clBool(blocking), C.size_t(offset), C.size_t(dataSize), dataPtr, C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event))
return newEvent(event), err
}
func (q *CommandQueue) EnqueueReadBufferFloat32(buffer *MemObject, blocking bool, offset int, data []float32, eventWaitList []*Event) (*Event, error) {
dataPtr := unsafe.Pointer(&data[0])
dataSize := int(unsafe.Sizeof(data[0])) * len(data)
return q.EnqueueReadBuffer(buffer, blocking, offset, dataSize, dataPtr, eventWaitList)
}
// Enqueues a command to execute a kernel on a device.
func (q *CommandQueue) EnqueueNDRangeKernel(kernel *Kernel, globalWorkOffset, globalWorkSize, localWorkSize []int, eventWaitList []*Event) (*Event, error) {
workDim := len(globalWorkSize)
var globalWorkOffsetList []C.size_t
var globalWorkOffsetPtr *C.size_t
if globalWorkOffset != nil {
globalWorkOffsetList = make([]C.size_t, len(globalWorkOffset))
for i, off := range globalWorkOffset {
globalWorkOffsetList[i] = C.size_t(off)
}
globalWorkOffsetPtr = &globalWorkOffsetList[0]
}
var globalWorkSizeList []C.size_t
var globalWorkSizePtr *C.size_t
if globalWorkSize != nil {
globalWorkSizeList = make([]C.size_t, len(globalWorkSize))
for i, off := range globalWorkSize {
globalWorkSizeList[i] = C.size_t(off)
}
globalWorkSizePtr = &globalWorkSizeList[0]
}
var localWorkSizeList []C.size_t
var localWorkSizePtr *C.size_t
if localWorkSize != nil {
localWorkSizeList = make([]C.size_t, len(localWorkSize))
for i, off := range localWorkSize {
localWorkSizeList[i] = C.size_t(off)
}
localWorkSizePtr = &localWorkSizeList[0]
}
var event C.cl_event
err := toError(C.clEnqueueNDRangeKernel(q.clQueue, kernel.clKernel, C.cl_uint(workDim), globalWorkOffsetPtr, globalWorkSizePtr, localWorkSizePtr, C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event))
return newEvent(event), err
}
// Enqueues a command to read from a 2D or 3D image object to host memory.
func (q *CommandQueue) EnqueueReadImage(image *MemObject, blocking bool, origin, region [3]int, rowPitch, slicePitch int, data []byte, eventWaitList []*Event) (*Event, error) {
cOrigin := sizeT3(origin)
cRegion := sizeT3(region)
var event C.cl_event
err := toError(C.clEnqueueReadImage(q.clQueue, image.clMem, clBool(blocking), &cOrigin[0], &cRegion[0], C.size_t(rowPitch), C.size_t(slicePitch), unsafe.Pointer(&data[0]), C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event))
return newEvent(event), err
}
// Enqueues a command to write to a 2D or 3D image object from host memory.
func (q *CommandQueue) EnqueueWriteImage(image *MemObject, blocking bool, origin, region [3]int, rowPitch, slicePitch int, data []byte, eventWaitList []*Event) (*Event, error) {
cOrigin := sizeT3(origin)
cRegion := sizeT3(region)
var event C.cl_event
err := toError(C.clEnqueueWriteImage(q.clQueue, image.clMem, clBool(blocking), &cOrigin[0], &cRegion[0], C.size_t(rowPitch), C.size_t(slicePitch), unsafe.Pointer(&data[0]), C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event))
return newEvent(event), err
}
func (q *CommandQueue) EnqueueFillBuffer(buffer *MemObject, pattern unsafe.Pointer, patternSize, offset, size int, eventWaitList []*Event) (*Event, error) {
var event C.cl_event
err := toError(C.clEnqueueFillBuffer(q.clQueue, buffer.clMem, pattern, C.size_t(patternSize), C.size_t(offset), C.size_t(size), C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event))
return newEvent(event), err
}
// A synchronization point that enqueues a barrier operation.
func (q *CommandQueue) EnqueueBarrierWithWaitList(eventWaitList []*Event) (*Event, error) {
var event C.cl_event
err := toError(C.clEnqueueBarrierWithWaitList(q.clQueue, C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event))
return newEvent(event), err
}
// Enqueues a marker command which waits for either a list of events to complete, or all previously enqueued commands to complete.
func (q *CommandQueue) EnqueueMarkerWithWaitList(eventWaitList []*Event) (*Event, error) {
var event C.cl_event
err := toError(C.clEnqueueMarkerWithWaitList(q.clQueue, C.cl_uint(len(eventWaitList)), eventListPtr(eventWaitList), &event))
return newEvent(event), err
}
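A hedged sketch of driving the queue wrappers above. It assumes `inBuf`/`outBuf` (`*cl.MemObject`) and `kernel` (`*cl.Kernel`) were created and had their arguments set elsewhere; those helpers are not part of this diff:

```go
input := []float32{1, 2, 3, 4}
output := make([]float32, len(input))

// Blocking write of the input slice into the device buffer.
if _, err := queue.EnqueueWriteBufferFloat32(inBuf, true, 0, input, nil); err != nil {
	log.Fatal(err)
}
// Launch a 1-D kernel over len(input) work items; the local size is left to the driver.
if _, err := queue.EnqueueNDRangeKernel(kernel, nil, []int{len(input)}, nil, nil); err != nil {
	log.Fatal(err)
}
// Blocking read of the result back into host memory.
if _, err := queue.EnqueueReadBufferFloat32(outBuf, true, 0, output, nil); err != nil {
	log.Fatal(err)
}
// Drain anything still queued (not strictly needed after blocking calls).
if err := queue.Finish(); err != nil {
	log.Fatal(err)
}
```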

View File

@ -0,0 +1,487 @@
package cl
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #endif
import "C"
import (
"errors"
"fmt"
"reflect"
"runtime"
"strings"
"unsafe"
)
var (
ErrUnknown = errors.New("cl: unknown error") // Generally an unexpected result from an OpenCL function (e.g. CL_SUCCESS but null pointer)
)
type ErrOther int
func (e ErrOther) Error() string {
return fmt.Sprintf("cl: error %d", int(e))
}
var (
ErrDeviceNotFound = errors.New("cl: Device Not Found")
ErrDeviceNotAvailable = errors.New("cl: Device Not Available")
ErrCompilerNotAvailable = errors.New("cl: Compiler Not Available")
ErrMemObjectAllocationFailure = errors.New("cl: Mem Object Allocation Failure")
ErrOutOfResources = errors.New("cl: Out Of Resources")
ErrOutOfHostMemory = errors.New("cl: Out Of Host Memory")
ErrProfilingInfoNotAvailable = errors.New("cl: Profiling Info Not Available")
ErrMemCopyOverlap = errors.New("cl: Mem Copy Overlap")
ErrImageFormatMismatch = errors.New("cl: Image Format Mismatch")
ErrImageFormatNotSupported = errors.New("cl: Image Format Not Supported")
ErrBuildProgramFailure = errors.New("cl: Build Program Failure")
ErrMapFailure = errors.New("cl: Map Failure")
ErrMisalignedSubBufferOffset = errors.New("cl: Misaligned Sub Buffer Offset")
ErrExecStatusErrorForEventsInWaitList = errors.New("cl: Exec Status Error For Events In Wait List")
ErrCompileProgramFailure = errors.New("cl: Compile Program Failure")
ErrLinkerNotAvailable = errors.New("cl: Linker Not Available")
ErrLinkProgramFailure = errors.New("cl: Link Program Failure")
ErrDevicePartitionFailed = errors.New("cl: Device Partition Failed")
ErrKernelArgInfoNotAvailable = errors.New("cl: Kernel Arg Info Not Available")
ErrInvalidValue = errors.New("cl: Invalid Value")
ErrInvalidDeviceType = errors.New("cl: Invalid Device Type")
ErrInvalidPlatform = errors.New("cl: Invalid Platform")
ErrInvalidDevice = errors.New("cl: Invalid Device")
ErrInvalidContext = errors.New("cl: Invalid Context")
ErrInvalidQueueProperties = errors.New("cl: Invalid Queue Properties")
ErrInvalidCommandQueue = errors.New("cl: Invalid Command Queue")
ErrInvalidHostPtr = errors.New("cl: Invalid Host Ptr")
ErrInvalidMemObject = errors.New("cl: Invalid Mem Object")
ErrInvalidImageFormatDescriptor = errors.New("cl: Invalid Image Format Descriptor")
ErrInvalidImageSize = errors.New("cl: Invalid Image Size")
ErrInvalidSampler = errors.New("cl: Invalid Sampler")
ErrInvalidBinary = errors.New("cl: Invalid Binary")
ErrInvalidBuildOptions = errors.New("cl: Invalid Build Options")
ErrInvalidProgram = errors.New("cl: Invalid Program")
ErrInvalidProgramExecutable = errors.New("cl: Invalid Program Executable")
ErrInvalidKernelName = errors.New("cl: Invalid Kernel Name")
ErrInvalidKernelDefinition = errors.New("cl: Invalid Kernel Definition")
ErrInvalidKernel = errors.New("cl: Invalid Kernel")
ErrInvalidArgIndex = errors.New("cl: Invalid Arg Index")
ErrInvalidArgValue = errors.New("cl: Invalid Arg Value")
ErrInvalidArgSize = errors.New("cl: Invalid Arg Size")
ErrInvalidKernelArgs = errors.New("cl: Invalid Kernel Args")
ErrInvalidWorkDimension = errors.New("cl: Invalid Work Dimension")
ErrInvalidWorkGroupSize = errors.New("cl: Invalid Work Group Size")
ErrInvalidWorkItemSize = errors.New("cl: Invalid Work Item Size")
ErrInvalidGlobalOffset = errors.New("cl: Invalid Global Offset")
ErrInvalidEventWaitList = errors.New("cl: Invalid Event Wait List")
ErrInvalidEvent = errors.New("cl: Invalid Event")
ErrInvalidOperation = errors.New("cl: Invalid Operation")
ErrInvalidGlObject = errors.New("cl: Invalid Gl Object")
ErrInvalidBufferSize = errors.New("cl: Invalid Buffer Size")
ErrInvalidMipLevel = errors.New("cl: Invalid Mip Level")
ErrInvalidGlobalWorkSize = errors.New("cl: Invalid Global Work Size")
ErrInvalidProperty = errors.New("cl: Invalid Property")
ErrInvalidImageDescriptor = errors.New("cl: Invalid Image Descriptor")
ErrInvalidCompilerOptions = errors.New("cl: Invalid Compiler Options")
ErrInvalidLinkerOptions = errors.New("cl: Invalid Linker Options")
ErrInvalidDevicePartitionCount = errors.New("cl: Invalid Device Partition Count")
)
var errorMap = map[C.cl_int]error{
C.CL_SUCCESS: nil,
C.CL_DEVICE_NOT_FOUND: ErrDeviceNotFound,
C.CL_DEVICE_NOT_AVAILABLE: ErrDeviceNotAvailable,
C.CL_COMPILER_NOT_AVAILABLE: ErrCompilerNotAvailable,
C.CL_MEM_OBJECT_ALLOCATION_FAILURE: ErrMemObjectAllocationFailure,
C.CL_OUT_OF_RESOURCES: ErrOutOfResources,
C.CL_OUT_OF_HOST_MEMORY: ErrOutOfHostMemory,
C.CL_PROFILING_INFO_NOT_AVAILABLE: ErrProfilingInfoNotAvailable,
C.CL_MEM_COPY_OVERLAP: ErrMemCopyOverlap,
C.CL_IMAGE_FORMAT_MISMATCH: ErrImageFormatMismatch,
C.CL_IMAGE_FORMAT_NOT_SUPPORTED: ErrImageFormatNotSupported,
C.CL_BUILD_PROGRAM_FAILURE: ErrBuildProgramFailure,
C.CL_MAP_FAILURE: ErrMapFailure,
C.CL_MISALIGNED_SUB_BUFFER_OFFSET: ErrMisalignedSubBufferOffset,
C.CL_EXEC_STATUS_ERROR_FOR_EVENTS_IN_WAIT_LIST: ErrExecStatusErrorForEventsInWaitList,
C.CL_INVALID_VALUE: ErrInvalidValue,
C.CL_INVALID_DEVICE_TYPE: ErrInvalidDeviceType,
C.CL_INVALID_PLATFORM: ErrInvalidPlatform,
C.CL_INVALID_DEVICE: ErrInvalidDevice,
C.CL_INVALID_CONTEXT: ErrInvalidContext,
C.CL_INVALID_QUEUE_PROPERTIES: ErrInvalidQueueProperties,
C.CL_INVALID_COMMAND_QUEUE: ErrInvalidCommandQueue,
C.CL_INVALID_HOST_PTR: ErrInvalidHostPtr,
C.CL_INVALID_MEM_OBJECT: ErrInvalidMemObject,
C.CL_INVALID_IMAGE_FORMAT_DESCRIPTOR: ErrInvalidImageFormatDescriptor,
C.CL_INVALID_IMAGE_SIZE: ErrInvalidImageSize,
C.CL_INVALID_SAMPLER: ErrInvalidSampler,
C.CL_INVALID_BINARY: ErrInvalidBinary,
C.CL_INVALID_BUILD_OPTIONS: ErrInvalidBuildOptions,
C.CL_INVALID_PROGRAM: ErrInvalidProgram,
C.CL_INVALID_PROGRAM_EXECUTABLE: ErrInvalidProgramExecutable,
C.CL_INVALID_KERNEL_NAME: ErrInvalidKernelName,
C.CL_INVALID_KERNEL_DEFINITION: ErrInvalidKernelDefinition,
C.CL_INVALID_KERNEL: ErrInvalidKernel,
C.CL_INVALID_ARG_INDEX: ErrInvalidArgIndex,
C.CL_INVALID_ARG_VALUE: ErrInvalidArgValue,
C.CL_INVALID_ARG_SIZE: ErrInvalidArgSize,
C.CL_INVALID_KERNEL_ARGS: ErrInvalidKernelArgs,
C.CL_INVALID_WORK_DIMENSION: ErrInvalidWorkDimension,
C.CL_INVALID_WORK_GROUP_SIZE: ErrInvalidWorkGroupSize,
C.CL_INVALID_WORK_ITEM_SIZE: ErrInvalidWorkItemSize,
C.CL_INVALID_GLOBAL_OFFSET: ErrInvalidGlobalOffset,
C.CL_INVALID_EVENT_WAIT_LIST: ErrInvalidEventWaitList,
C.CL_INVALID_EVENT: ErrInvalidEvent,
C.CL_INVALID_OPERATION: ErrInvalidOperation,
C.CL_INVALID_GL_OBJECT: ErrInvalidGlObject,
C.CL_INVALID_BUFFER_SIZE: ErrInvalidBufferSize,
C.CL_INVALID_MIP_LEVEL: ErrInvalidMipLevel,
C.CL_INVALID_GLOBAL_WORK_SIZE: ErrInvalidGlobalWorkSize,
C.CL_INVALID_PROPERTY: ErrInvalidProperty,
}
func toError(code C.cl_int) error {
if err, ok := errorMap[code]; ok {
return err
}
return ErrOther(code)
}
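Callers can compare the returned error directly against the sentinel values above; codes without a named sentinel come back wrapped as `ErrOther`. A small sketch, assuming a `queue` created elsewhere:

```go
switch err := queue.Finish(); err {
case nil:
	// ok
case cl.ErrOutOfResources, cl.ErrOutOfHostMemory:
	log.Fatal("device is out of memory: ", err)
default:
	// Codes without a named sentinel come back as cl.ErrOther(code).
	log.Fatal(err)
}
```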
type LocalMemType int
const (
LocalMemTypeNone LocalMemType = C.CL_NONE
LocalMemTypeGlobal LocalMemType = C.CL_GLOBAL
LocalMemTypeLocal LocalMemType = C.CL_LOCAL
)
var localMemTypeMap = map[LocalMemType]string{
LocalMemTypeNone: "None",
LocalMemTypeGlobal: "Global",
LocalMemTypeLocal: "Local",
}
func (t LocalMemType) String() string {
name := localMemTypeMap[t]
if name == "" {
name = "Unknown"
}
return name
}
type ExecCapability int
const (
ExecCapabilityKernel ExecCapability = C.CL_EXEC_KERNEL // The OpenCL device can execute OpenCL kernels.
ExecCapabilityNativeKernel ExecCapability = C.CL_EXEC_NATIVE_KERNEL // The OpenCL device can execute native kernels.
)
func (ec ExecCapability) String() string {
var parts []string
if ec&ExecCapabilityKernel != 0 {
parts = append(parts, "Kernel")
}
if ec&ExecCapabilityNativeKernel != 0 {
parts = append(parts, "NativeKernel")
}
if parts == nil {
return ""
}
return strings.Join(parts, "|")
}
type MemCacheType int
const (
MemCacheTypeNone MemCacheType = C.CL_NONE
MemCacheTypeReadOnlyCache MemCacheType = C.CL_READ_ONLY_CACHE
MemCacheTypeReadWriteCache MemCacheType = C.CL_READ_WRITE_CACHE
)
func (ct MemCacheType) String() string {
switch ct {
case MemCacheTypeNone:
return "None"
case MemCacheTypeReadOnlyCache:
return "ReadOnly"
case MemCacheTypeReadWriteCache:
return "ReadWrite"
}
return fmt.Sprintf("Unknown(%x)", int(ct))
}
type MemFlag int
const (
MemReadWrite MemFlag = C.CL_MEM_READ_WRITE
MemWriteOnly MemFlag = C.CL_MEM_WRITE_ONLY
MemReadOnly MemFlag = C.CL_MEM_READ_ONLY
MemUseHostPtr MemFlag = C.CL_MEM_USE_HOST_PTR
MemAllocHostPtr MemFlag = C.CL_MEM_ALLOC_HOST_PTR
MemCopyHostPtr MemFlag = C.CL_MEM_COPY_HOST_PTR
MemWriteOnlyHost MemFlag = C.CL_MEM_HOST_WRITE_ONLY
MemReadOnlyHost MemFlag = C.CL_MEM_HOST_READ_ONLY
MemNoAccessHost MemFlag = C.CL_MEM_HOST_NO_ACCESS
)
type MemObjectType int
const (
MemObjectTypeBuffer MemObjectType = C.CL_MEM_OBJECT_BUFFER
MemObjectTypeImage2D MemObjectType = C.CL_MEM_OBJECT_IMAGE2D
MemObjectTypeImage3D MemObjectType = C.CL_MEM_OBJECT_IMAGE3D
)
type MapFlag int
const (
// This flag specifies that the region being mapped in the memory object is being mapped for reading.
MapFlagRead MapFlag = C.CL_MAP_READ
MapFlagWrite MapFlag = C.CL_MAP_WRITE
MapFlagWriteInvalidateRegion MapFlag = C.CL_MAP_WRITE_INVALIDATE_REGION
)
func (mf MapFlag) toCl() C.cl_map_flags {
return C.cl_map_flags(mf)
}
type ChannelOrder int
const (
ChannelOrderR ChannelOrder = C.CL_R
ChannelOrderA ChannelOrder = C.CL_A
ChannelOrderRG ChannelOrder = C.CL_RG
ChannelOrderRA ChannelOrder = C.CL_RA
ChannelOrderRGB ChannelOrder = C.CL_RGB
ChannelOrderRGBA ChannelOrder = C.CL_RGBA
ChannelOrderBGRA ChannelOrder = C.CL_BGRA
ChannelOrderARGB ChannelOrder = C.CL_ARGB
ChannelOrderIntensity ChannelOrder = C.CL_INTENSITY
ChannelOrderLuminance ChannelOrder = C.CL_LUMINANCE
ChannelOrderRx ChannelOrder = C.CL_Rx
ChannelOrderRGx ChannelOrder = C.CL_RGx
ChannelOrderRGBx ChannelOrder = C.CL_RGBx
)
var channelOrderNameMap = map[ChannelOrder]string{
ChannelOrderR: "R",
ChannelOrderA: "A",
ChannelOrderRG: "RG",
ChannelOrderRA: "RA",
ChannelOrderRGB: "RGB",
ChannelOrderRGBA: "RGBA",
ChannelOrderBGRA: "BGRA",
ChannelOrderARGB: "ARGB",
ChannelOrderIntensity: "Intensity",
ChannelOrderLuminance: "Luminance",
ChannelOrderRx: "Rx",
ChannelOrderRGx: "RGx",
ChannelOrderRGBx: "RGBx",
}
func (co ChannelOrder) String() string {
name := channelOrderNameMap[co]
if name == "" {
name = fmt.Sprintf("Unknown(%x)", int(co))
}
return name
}
type ChannelDataType int
const (
ChannelDataTypeSNormInt8 ChannelDataType = C.CL_SNORM_INT8
ChannelDataTypeSNormInt16 ChannelDataType = C.CL_SNORM_INT16
ChannelDataTypeUNormInt8 ChannelDataType = C.CL_UNORM_INT8
ChannelDataTypeUNormInt16 ChannelDataType = C.CL_UNORM_INT16
ChannelDataTypeUNormShort565 ChannelDataType = C.CL_UNORM_SHORT_565
ChannelDataTypeUNormShort555 ChannelDataType = C.CL_UNORM_SHORT_555
ChannelDataTypeUNormInt101010 ChannelDataType = C.CL_UNORM_INT_101010
ChannelDataTypeSignedInt8 ChannelDataType = C.CL_SIGNED_INT8
ChannelDataTypeSignedInt16 ChannelDataType = C.CL_SIGNED_INT16
ChannelDataTypeSignedInt32 ChannelDataType = C.CL_SIGNED_INT32
ChannelDataTypeUnsignedInt8 ChannelDataType = C.CL_UNSIGNED_INT8
ChannelDataTypeUnsignedInt16 ChannelDataType = C.CL_UNSIGNED_INT16
ChannelDataTypeUnsignedInt32 ChannelDataType = C.CL_UNSIGNED_INT32
ChannelDataTypeHalfFloat ChannelDataType = C.CL_HALF_FLOAT
ChannelDataTypeFloat ChannelDataType = C.CL_FLOAT
)
var channelDataTypeNameMap = map[ChannelDataType]string{
ChannelDataTypeSNormInt8: "SNormInt8",
ChannelDataTypeSNormInt16: "SNormInt16",
ChannelDataTypeUNormInt8: "UNormInt8",
ChannelDataTypeUNormInt16: "UNormInt16",
ChannelDataTypeUNormShort565: "UNormShort565",
ChannelDataTypeUNormShort555: "UNormShort555",
ChannelDataTypeUNormInt101010: "UNormInt101010",
ChannelDataTypeSignedInt8: "SignedInt8",
ChannelDataTypeSignedInt16: "SignedInt16",
ChannelDataTypeSignedInt32: "SignedInt32",
ChannelDataTypeUnsignedInt8: "UnsignedInt8",
ChannelDataTypeUnsignedInt16: "UnsignedInt16",
ChannelDataTypeUnsignedInt32: "UnsignedInt32",
ChannelDataTypeHalfFloat: "HalfFloat",
ChannelDataTypeFloat: "Float",
}
func (ct ChannelDataType) String() string {
name := channelDataTypeNameMap[ct]
if name == "" {
name = fmt.Sprintf("Unknown(%x)", int(ct))
}
return name
}
type ImageFormat struct {
ChannelOrder ChannelOrder
ChannelDataType ChannelDataType
}
func (f ImageFormat) toCl() C.cl_image_format {
var format C.cl_image_format
format.image_channel_order = C.cl_channel_order(f.ChannelOrder)
format.image_channel_data_type = C.cl_channel_type(f.ChannelDataType)
return format
}
type ProfilingInfo int
const (
// A 64-bit value that describes the current device time counter in
// nanoseconds when the command identified by event is enqueued in
// a command-queue by the host.
ProfilingInfoCommandQueued ProfilingInfo = C.CL_PROFILING_COMMAND_QUEUED
// A 64-bit value that describes the current device time counter in
// nanoseconds when the command identified by event that has been
// enqueued is submitted by the host to the device associated with the command-queue.
ProfilingInfoCommandSubmit ProfilingInfo = C.CL_PROFILING_COMMAND_SUBMIT
// A 64-bit value that describes the current device time counter in
// nanoseconds when the command identified by event starts execution on the device.
ProfilingInfoCommandStart ProfilingInfo = C.CL_PROFILING_COMMAND_START
// A 64-bit value that describes the current device time counter in
// nanoseconds when the command identified by event has finished
// execution on the device.
ProfilingInfoCommandEnd ProfilingInfo = C.CL_PROFILING_COMMAND_END
)
type CommmandExecStatus int
const (
CommmandExecStatusComplete CommmandExecStatus = C.CL_COMPLETE
CommmandExecStatusRunning CommmandExecStatus = C.CL_RUNNING
CommmandExecStatusSubmitted CommmandExecStatus = C.CL_SUBMITTED
CommmandExecStatusQueued CommmandExecStatus = C.CL_QUEUED
)
type Event struct {
clEvent C.cl_event
}
func releaseEvent(ev *Event) {
if ev.clEvent != nil {
C.clReleaseEvent(ev.clEvent)
ev.clEvent = nil
}
}
func (e *Event) Release() {
releaseEvent(e)
}
func (e *Event) GetEventProfilingInfo(paramName ProfilingInfo) (int64, error) {
var paramValue C.cl_ulong
if err := C.clGetEventProfilingInfo(e.clEvent, C.cl_profiling_info(paramName), C.size_t(unsafe.Sizeof(paramValue)), unsafe.Pointer(&paramValue), nil); err != C.CL_SUCCESS {
return 0, toError(err)
}
return int64(paramValue), nil
}
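A usage sketch for the profiling accessor above. It assumes the command queue was created with `CommandQueueProfilingEnable` and that `ev` is the `*cl.Event` returned by an earlier enqueue call:

```go
if err := cl.WaitForEvents([]*cl.Event{ev}); err != nil {
	log.Fatal(err)
}
start, err := ev.GetEventProfilingInfo(cl.ProfilingInfoCommandStart)
if err != nil {
	log.Fatal(err)
}
end, err := ev.GetEventProfilingInfo(cl.ProfilingInfoCommandEnd)
if err != nil {
	log.Fatal(err)
}
// Both values are device timestamps in nanoseconds.
fmt.Printf("kernel took %s\n", time.Duration(end-start))
```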
// Sets the execution status of a user event object.
//
// `status` specifies the new execution status to be set and
// can be CL_COMPLETE or a negative integer value to indicate
// an error. A negative integer value causes all enqueued commands
// that wait on this user event to be terminated. clSetUserEventStatus
// can only be called once to change the execution status of event.
func (e *Event) SetUserEventStatus(status int) error {
return toError(C.clSetUserEventStatus(e.clEvent, C.cl_int(status)))
}
// Waits on the host thread for commands identified by event objects in
// events to complete. A command is considered complete if its execution
// status is CL_COMPLETE or a negative value. The events specified in
// event_list act as synchronization points.
//
// If the cl_khr_gl_event extension is enabled, event objects can also be
// used to reflect the status of an OpenGL sync object. The sync object
// in turn refers to a fence command executing in an OpenGL command
// stream. This provides another method of coordinating sharing of buffers
// and images between OpenGL and OpenCL.
func WaitForEvents(events []*Event) error {
return toError(C.clWaitForEvents(C.cl_uint(len(events)), eventListPtr(events)))
}
func newEvent(clEvent C.cl_event) *Event {
ev := &Event{clEvent: clEvent}
runtime.SetFinalizer(ev, releaseEvent)
return ev
}
func eventListPtr(el []*Event) *C.cl_event {
if el == nil {
return nil
}
elist := make([]C.cl_event, len(el))
for i, e := range el {
elist[i] = e.clEvent
}
return (*C.cl_event)(&elist[0])
}
func clBool(b bool) C.cl_bool {
if b {
return C.CL_TRUE
}
return C.CL_FALSE
}
func sizeT3(i3 [3]int) [3]C.size_t {
var val [3]C.size_t
val[0] = C.size_t(i3[0])
val[1] = C.size_t(i3[1])
val[2] = C.size_t(i3[2])
return val
}
type MappedMemObject struct {
ptr unsafe.Pointer
size int
rowPitch int
slicePitch int
}
func (mb *MappedMemObject) ByteSlice() []byte {
var byteSlice []byte
sliceHeader := (*reflect.SliceHeader)(unsafe.Pointer(&byteSlice))
sliceHeader.Cap = mb.size
sliceHeader.Len = mb.size
sliceHeader.Data = uintptr(mb.ptr)
return byteSlice
}
func (mb *MappedMemObject) Ptr() unsafe.Pointer {
return mb.ptr
}
func (mb *MappedMemObject) Size() int {
return mb.size
}
func (mb *MappedMemObject) RowPitch() int {
return mb.rowPitch
}
func (mb *MappedMemObject) SlicePitch() int {
return mb.slicePitch
}
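A hedged sketch of the map/unmap round trip using the `ByteSlice` view above; `buf` is a `*cl.MemObject` and `size` its length in bytes, both assumed to exist already:

```go
mapped, _, err := queue.EnqueueMapBuffer(buf, true, cl.MapFlagRead, 0, size, nil)
if err != nil {
	log.Fatal(err)
}
// The ByteSlice view aliases the mapped region, so copy it out before unmapping.
data := make([]byte, size)
copy(data, mapped.ByteSlice())
if _, err := queue.EnqueueUnmapMemObject(buf, mapped, nil); err != nil {
	log.Fatal(err)
}
```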

View File

@ -0,0 +1,71 @@
// +build cl12
package cl
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #endif
import "C"
const (
ChannelDataTypeUNormInt24 ChannelDataType = C.CL_UNORM_INT24
ChannelOrderDepth ChannelOrder = C.CL_DEPTH
ChannelOrderDepthStencil ChannelOrder = C.CL_DEPTH_STENCIL
MemHostNoAccess MemFlag = C.CL_MEM_HOST_NO_ACCESS // OpenCL 1.2
MemHostReadOnly MemFlag = C.CL_MEM_HOST_READ_ONLY // OpenCL 1.2
MemHostWriteOnly MemFlag = C.CL_MEM_HOST_WRITE_ONLY // OpenCL 1.2
MemObjectTypeImage1D MemObjectType = C.CL_MEM_OBJECT_IMAGE1D
MemObjectTypeImage1DArray MemObjectType = C.CL_MEM_OBJECT_IMAGE1D_ARRAY
MemObjectTypeImage1DBuffer MemObjectType = C.CL_MEM_OBJECT_IMAGE1D_BUFFER
MemObjectTypeImage2DArray MemObjectType = C.CL_MEM_OBJECT_IMAGE2D_ARRAY
// This flag specifies that the region being mapped in the memory object is being mapped for writing.
//
// The contents of the region being mapped are to be discarded. This is typically the case when the
// region being mapped is overwritten by the host. This flag allows the implementation to no longer
// guarantee that the pointer returned by clEnqueueMapBuffer or clEnqueueMapImage contains the
// latest bits in the region being mapped which can be a significant performance enhancement.
MapFlagWriteInvalidateRegion MapFlag = C.CL_MAP_WRITE_INVALIDATE_REGION
)
func init() {
errorMap[C.CL_COMPILE_PROGRAM_FAILURE] = ErrCompileProgramFailure
errorMap[C.CL_DEVICE_PARTITION_FAILED] = ErrDevicePartitionFailed
errorMap[C.CL_INVALID_COMPILER_OPTIONS] = ErrInvalidCompilerOptions
errorMap[C.CL_INVALID_DEVICE_PARTITION_COUNT] = ErrInvalidDevicePartitionCount
errorMap[C.CL_INVALID_IMAGE_DESCRIPTOR] = ErrInvalidImageDescriptor
errorMap[C.CL_INVALID_LINKER_OPTIONS] = ErrInvalidLinkerOptions
errorMap[C.CL_KERNEL_ARG_INFO_NOT_AVAILABLE] = ErrKernelArgInfoNotAvailable
errorMap[C.CL_LINK_PROGRAM_FAILURE] = ErrLinkProgramFailure
errorMap[C.CL_LINKER_NOT_AVAILABLE] = ErrLinkerNotAvailable
channelOrderNameMap[ChannelOrderDepth] = "Depth"
channelOrderNameMap[ChannelOrderDepthStencil] = "DepthStencil"
channelDataTypeNameMap[ChannelDataTypeUNormInt24] = "UNormInt24"
}
type ImageDescription struct {
Type MemObjectType
Width, Height, Depth int
ArraySize, RowPitch, SlicePitch int
NumMipLevels, NumSamples int
Buffer *MemObject
}
func (d ImageDescription) toCl() C.cl_image_desc {
var desc C.cl_image_desc
desc.image_type = C.cl_mem_object_type(d.Type)
desc.image_width = C.size_t(d.Width)
desc.image_height = C.size_t(d.Height)
desc.image_depth = C.size_t(d.Depth)
desc.image_array_size = C.size_t(d.ArraySize)
desc.image_row_pitch = C.size_t(d.RowPitch)
desc.image_slice_pitch = C.size_t(d.SlicePitch)
desc.num_mip_levels = C.cl_uint(d.NumMipLevels)
desc.num_samples = C.cl_uint(d.NumSamples)
desc.buffer = nil
if d.Buffer != nil {
desc.buffer = d.Buffer.clMem
}
return desc
}
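This file only compiles when the `cl12` build tag is set (`go build -tags cl12`). A sketch of filling the OpenCL 1.2 descriptor for a 512x512 RGBA image; the context call that consumes it is not part of this diff:

```go
desc := cl.ImageDescription{
	Type:   cl.MemObjectTypeImage2D,
	Width:  512,
	Height: 512,
}
format := cl.ImageFormat{
	ChannelOrder:    cl.ChannelOrderRGBA,
	ChannelDataType: cl.ChannelDataTypeUNormInt8,
}
_, _ = desc, format // both are handed to an image-creation wrapper (not shown here)
```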

View File

@ -0,0 +1,45 @@
package cl
// #ifdef __APPLE__
// #include "OpenCL/opencl.h"
// #else
// #include "cl.h"
// #endif
import "C"
// Extension: cl_APPLE_fixed_alpha_channel_orders
//
// These selectors may be passed to clCreateImage2D() in the cl_image_format.image_channel_order field.
// They are like CL_BGRA and CL_ARGB except that the alpha channel is ignored. On calls to read_imagef,
// the alpha will be 0xff (1.0f) if the sample falls in the image and 0 if it does not fall in the image.
// On calls to write_imagef, the alpha value is ignored and 0xff (1.0f) is written. These formats are
// currently only available for the CL_UNORM_INT8 cl_channel_type. They are intended to support legacy
// image formats.
const (
ChannelOrder1RGBApple ChannelOrder = C.CL_1RGB_APPLE // Introduced in MacOS X.7.
ChannelOrderBGR1Apple ChannelOrder = C.CL_BGR1_APPLE // Introduced in MacOS X.7.
)
// Extension: cl_APPLE_biased_fixed_point_image_formats
//
// This selector may be passed to clCreateImage2D() in the cl_image_format.image_channel_data_type field.
// It defines a biased signed 1.14 fixed point storage format, with range [-1, 3). The conversion from
// float to this fixed point format is defined as follows:
//
// ushort float_to_sfixed14( float x ){
// int i = convert_int_sat_rte( x * 0x1.0p14f ); // scale [-1, 3.0) to [-16384, 3*16384), round to nearest integer
// i = add_sat( i, 0x4000 ); // apply bias, to convert to [0, 65535) range
// return convert_ushort_sat(i); // clamp to destination size
// }
//
// The inverse conversion is the reverse process. The formats are currently only available on the CPU with
// the CL_RGBA channel layout.
const (
ChannelDataTypeSFixed14Apple ChannelDataType = C.CL_SFIXED14_APPLE // Introduced in MacOS X.7.
)
func init() {
channelOrderNameMap[ChannelOrder1RGBApple] = "1RGBApple"
channelOrderNameMap[ChannelOrderBGR1Apple] = "RGB1Apple"
channelDataTypeNameMap[ChannelDataTypeSFixed14Apple] = "SFixed14Apple"
}
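For reference, a small Go transcription of the `float_to_sfixed14` conversion documented above (illustrative only; the real conversion happens inside the OpenCL implementation):

```go
package main

import (
	"fmt"
	"math"
)

// floatToSFixed14 mirrors the biased 1.14 fixed-point conversion described above.
func floatToSFixed14(x float32) uint16 {
	i := int64(math.Round(float64(x) * 16384)) // scale [-1, 3) onto [-16384, 49152)
	i += 0x4000                                // apply bias into [0, 65536)
	if i < 0 {                                 // clamp to the destination range
		i = 0
	} else if i > 0xFFFF {
		i = 0xFFFF
	}
	return uint16(i)
}

func main() {
	fmt.Println(floatToSFixed14(-1), floatToSFixed14(0), floatToSFixed14(1.5)) // 0 16384 40960
}
```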

View File

@ -1,5 +1,12 @@
language: go
go: 1.1
sudo: false
go:
- 1.0.3
- 1.1.2
- 1.2.2
- 1.3.3
- 1.4.2
script:
- go vet ./...

View File

@ -1,18 +1,17 @@
[![Coverage](http://gocover.io/_badge/github.com/codegangsta/cli?0)](http://gocover.io/github.com/codegangsta/cli)
[![Build Status](https://travis-ci.org/codegangsta/cli.png?branch=master)](https://travis-ci.org/codegangsta/cli)
[![GoDoc](https://godoc.org/github.com/codegangsta/cli?status.svg)](https://godoc.org/github.com/codegangsta/cli)
# cli.go
cli.go is simple, fast, and fun package for building command line apps in Go. The goal is to enable developers to write fast and distributable command line applications in an expressive way.
You can view the API docs here:
http://godoc.org/github.com/codegangsta/cli
`cli.go` is a simple, fast, and fun package for building command line apps in Go. The goal is to enable developers to write fast and distributable command line applications in an expressive way.
## Overview
Command line apps are usually so tiny that there is absolutely no reason why your code should *not* be self-documenting. Things like generating help text and parsing command flags/options should not hinder productivity when writing a command line app.
**This is where cli.go comes into play.** cli.go makes command line programming fun, organized, and expressive!
**This is where `cli.go` comes into play.** `cli.go` makes command line programming fun, organized, and expressive!
## Installation
Make sure you have a working Go environment (go 1.1 is *required*). [See the install instructions](http://golang.org/doc/install.html).
Make sure you have a working Go environment (go 1.1+ is *required*). [See the install instructions](http://golang.org/doc/install.html).
To install `cli.go`, simply run:
```
@ -25,7 +24,7 @@ export PATH=$PATH:$GOPATH/bin
```
## Getting Started
One of the philosophies behind cli.go is that an API should be playful and full of discovery. So a cli.go app can be as little as one line of code in `main()`.
One of the philosophies behind `cli.go` is that an API should be playful and full of discovery. So a `cli.go` app can be as little as one line of code in `main()`.
``` go
package main
@ -103,7 +102,8 @@ $ greet
Hello friend!
```
cli.go also generates some bitchass help text:
`cli.go` also generates neat help text:
```
$ greet help
NAME:
@ -158,6 +158,8 @@ app.Action = func(c *cli.Context) {
...
```
See the full list of flags at http://godoc.org/github.com/codegangsta/cli
#### Alternate Names
You can set alternate (or short) names for flags by providing a comma-delimited list for the `Name`, e.g.
@ -289,6 +291,21 @@ setting the `PROG` variable to the name of your program:
`PROG=myprogram source /.../cli/autocomplete/bash_autocomplete`
#### To Distribute
Copy `autocomplete/bash_autocomplete` into `/etc/bash_completion.d/` and rename
it to the name of the program you wish to add autocomplete support for (or
automatically install it there if you are distributing a package). Don't forget
to source the file to make it active in the current shell.
```
sudo cp src/bash_autocomplete /etc/bash_completion.d/<myprogram>
source /etc/bash_completion.d/<myprogram>
```
Alternatively, you can just document that users should source the generic
`autocomplete/bash_autocomplete` in their bash configuration with `$PROG` set
to the name of their program (as above).
## Contribution Guidelines
Feel free to put up a pull request to fix a bug or maybe add a feature. I will give it a code review and make sure that it does not break backwards compatibility. If I or any other collaborators agree that it is in line with the vision of the project, we will work with you to get the code into a mergeable state and merge it into the master branch.

View File

@ -5,19 +5,20 @@ import (
"io"
"io/ioutil"
"os"
"strings"
"text/tabwriter"
"text/template"
"time"
)
// App is the main structure of a cli application. It is recommended that
// and app be created with the cli.NewApp() function
// an app be created with the cli.NewApp() function
type App struct {
// The name of the program. Defaults to os.Args[0]
Name string
// Full name of command for help, defaults to Name
HelpName string
// Description of the program.
Usage string
// Description of the program argument format.
ArgsUsage string
// Version of the program
Version string
// List of commands to execute
@ -46,6 +47,8 @@ type App struct {
Compiled time.Time
// List of all authors who contributed
Authors []Author
// Copyright of the binary if any
Copyright string
// Name of Author (Note: Use App.Authors, this is deprecated)
Author string
// Email of Author (Note: Use App.Authors, this is deprecated)
@ -68,6 +71,7 @@ func compileTime() time.Time {
func NewApp() *App {
return &App{
Name: os.Args[0],
HelpName: os.Args[0],
Usage: "A new cli application",
Version: "0.0.0",
BashComplete: DefaultAppComplete,
@ -83,25 +87,14 @@ func (a *App) Run(arguments []string) (err error) {
a.Authors = append(a.Authors, Author{Name: a.Author, Email: a.Email})
}
if HelpPrinter == nil {
defer func() {
HelpPrinter = nil
}()
HelpPrinter = func(templ string, data interface{}) {
funcMap := template.FuncMap{
"join": strings.Join,
}
w := tabwriter.NewWriter(a.Writer, 0, 8, 1, '\t', 0)
t := template.Must(template.New("help").Funcs(funcMap).Parse(templ))
err := t.Execute(w, data)
if err != nil {
panic(err)
}
w.Flush()
newCmds := []Command{}
for _, c := range a.Commands {
if c.HelpName == "" {
c.HelpName = fmt.Sprintf("%s %s", a.HelpName, c.Name)
}
newCmds = append(newCmds, c)
}
a.Commands = newCmds
// append help to commands
if a.Command(helpCommand.Name) == nil && !a.HideHelp {
@ -127,17 +120,16 @@ func (a *App) Run(arguments []string) (err error) {
nerr := normalizeFlags(a.Flags, set)
if nerr != nil {
fmt.Fprintln(a.Writer, nerr)
context := NewContext(a, set, set)
context := NewContext(a, set, nil)
ShowAppHelp(context)
fmt.Fprintln(a.Writer)
return nerr
}
context := NewContext(a, set, set)
context := NewContext(a, set, nil)
if err != nil {
fmt.Fprintf(a.Writer, "Incorrect Usage.\n\n")
ShowAppHelp(context)
fmt.Fprintln(a.Writer, "Incorrect Usage.")
fmt.Fprintln(a.Writer)
ShowAppHelp(context)
return err
}
@ -145,20 +137,26 @@ func (a *App) Run(arguments []string) (err error) {
return nil
}
if checkHelp(context) {
if !a.HideHelp && checkHelp(context) {
ShowAppHelp(context)
return nil
}
if checkVersion(context) {
if !a.HideVersion && checkVersion(context) {
ShowVersion(context)
return nil
}
if a.After != nil {
defer func() {
// err is always nil here.
// There is a check to see if it is non-nil
// just few lines before.
err = a.After(context)
afterErr := a.After(context)
if afterErr != nil {
if err != nil {
err = NewMultiError(err, afterErr)
} else {
err = afterErr
}
}
}()
}
@ -203,6 +201,15 @@ func (a *App) RunAsSubcommand(ctx *Context) (err error) {
}
}
newCmds := []Command{}
for _, c := range a.Commands {
if c.HelpName == "" {
c.HelpName = fmt.Sprintf("%s %s", a.HelpName, c.Name)
}
newCmds = append(newCmds, c)
}
a.Commands = newCmds
// append flags
if a.EnableBashCompletion {
a.appendFlag(BashCompletionFlag)
@ -213,21 +220,22 @@ func (a *App) RunAsSubcommand(ctx *Context) (err error) {
set.SetOutput(ioutil.Discard)
err = set.Parse(ctx.Args().Tail())
nerr := normalizeFlags(a.Flags, set)
context := NewContext(a, set, ctx.globalSet)
context := NewContext(a, set, ctx)
if nerr != nil {
fmt.Fprintln(a.Writer, nerr)
fmt.Fprintln(a.Writer)
if len(a.Commands) > 0 {
ShowSubcommandHelp(context)
} else {
ShowCommandHelp(ctx, context.Args().First())
}
fmt.Fprintln(a.Writer)
return nerr
}
if err != nil {
fmt.Fprintf(a.Writer, "Incorrect Usage.\n\n")
fmt.Fprintln(a.Writer, "Incorrect Usage.")
fmt.Fprintln(a.Writer)
ShowSubcommandHelp(context)
return err
}
@ -248,10 +256,14 @@ func (a *App) RunAsSubcommand(ctx *Context) (err error) {
if a.After != nil {
defer func() {
// err is always nil here.
// There is a check to see if it is non-nil
// just few lines before.
err = a.After(context)
afterErr := a.After(context)
if afterErr != nil {
if err != nil {
err = NewMultiError(err, afterErr)
} else {
err = afterErr
}
}
}()
}

View File

@ -1,622 +0,0 @@
package cli_test
import (
"flag"
"fmt"
"os"
"testing"
"github.com/codegangsta/cli"
)
func ExampleApp() {
// set args for examples sake
os.Args = []string{"greet", "--name", "Jeremy"}
app := cli.NewApp()
app.Name = "greet"
app.Flags = []cli.Flag{
cli.StringFlag{Name: "name", Value: "bob", Usage: "a name to say"},
}
app.Action = func(c *cli.Context) {
fmt.Printf("Hello %v\n", c.String("name"))
}
app.Author = "Harrison"
app.Email = "harrison@lolwut.com"
app.Authors = []cli.Author{cli.Author{Name: "Oliver Allen", Email: "oliver@toyshop.com"}}
app.Run(os.Args)
// Output:
// Hello Jeremy
}
func ExampleAppSubcommand() {
// set args for examples sake
os.Args = []string{"say", "hi", "english", "--name", "Jeremy"}
app := cli.NewApp()
app.Name = "say"
app.Commands = []cli.Command{
{
Name: "hello",
Aliases: []string{"hi"},
Usage: "use it to see a description",
Description: "This is how we describe hello the function",
Subcommands: []cli.Command{
{
Name: "english",
Aliases: []string{"en"},
Usage: "sends a greeting in english",
Description: "greets someone in english",
Flags: []cli.Flag{
cli.StringFlag{
Name: "name",
Value: "Bob",
Usage: "Name of the person to greet",
},
},
Action: func(c *cli.Context) {
fmt.Println("Hello,", c.String("name"))
},
},
},
},
}
app.Run(os.Args)
// Output:
// Hello, Jeremy
}
func ExampleAppHelp() {
// set args for examples sake
os.Args = []string{"greet", "h", "describeit"}
app := cli.NewApp()
app.Name = "greet"
app.Flags = []cli.Flag{
cli.StringFlag{Name: "name", Value: "bob", Usage: "a name to say"},
}
app.Commands = []cli.Command{
{
Name: "describeit",
Aliases: []string{"d"},
Usage: "use it to see a description",
Description: "This is how we describe describeit the function",
Action: func(c *cli.Context) {
fmt.Printf("i like to describe things")
},
},
}
app.Run(os.Args)
// Output:
// NAME:
// describeit - use it to see a description
//
// USAGE:
// command describeit [arguments...]
//
// DESCRIPTION:
// This is how we describe describeit the function
}
func ExampleAppBashComplete() {
// set args for examples sake
os.Args = []string{"greet", "--generate-bash-completion"}
app := cli.NewApp()
app.Name = "greet"
app.EnableBashCompletion = true
app.Commands = []cli.Command{
{
Name: "describeit",
Aliases: []string{"d"},
Usage: "use it to see a description",
Description: "This is how we describe describeit the function",
Action: func(c *cli.Context) {
fmt.Printf("i like to describe things")
},
}, {
Name: "next",
Usage: "next example",
Description: "more stuff to see when generating bash completion",
Action: func(c *cli.Context) {
fmt.Printf("the next example")
},
},
}
app.Run(os.Args)
// Output:
// describeit
// d
// next
// help
// h
}
func TestApp_Run(t *testing.T) {
s := ""
app := cli.NewApp()
app.Action = func(c *cli.Context) {
s = s + c.Args().First()
}
err := app.Run([]string{"command", "foo"})
expect(t, err, nil)
err = app.Run([]string{"command", "bar"})
expect(t, err, nil)
expect(t, s, "foobar")
}
var commandAppTests = []struct {
name string
expected bool
}{
{"foobar", true},
{"batbaz", true},
{"b", true},
{"f", true},
{"bat", false},
{"nothing", false},
}
func TestApp_Command(t *testing.T) {
app := cli.NewApp()
fooCommand := cli.Command{Name: "foobar", Aliases: []string{"f"}}
batCommand := cli.Command{Name: "batbaz", Aliases: []string{"b"}}
app.Commands = []cli.Command{
fooCommand,
batCommand,
}
for _, test := range commandAppTests {
expect(t, app.Command(test.name) != nil, test.expected)
}
}
func TestApp_CommandWithArgBeforeFlags(t *testing.T) {
var parsedOption, firstArg string
app := cli.NewApp()
command := cli.Command{
Name: "cmd",
Flags: []cli.Flag{
cli.StringFlag{Name: "option", Value: "", Usage: "some option"},
},
Action: func(c *cli.Context) {
parsedOption = c.String("option")
firstArg = c.Args().First()
},
}
app.Commands = []cli.Command{command}
app.Run([]string{"", "cmd", "my-arg", "--option", "my-option"})
expect(t, parsedOption, "my-option")
expect(t, firstArg, "my-arg")
}
func TestApp_RunAsSubcommandParseFlags(t *testing.T) {
var context *cli.Context
a := cli.NewApp()
a.Commands = []cli.Command{
{
Name: "foo",
Action: func(c *cli.Context) {
context = c
},
Flags: []cli.Flag{
cli.StringFlag{
Name: "lang",
Value: "english",
Usage: "language for the greeting",
},
},
Before: func(_ *cli.Context) error { return nil },
},
}
a.Run([]string{"", "foo", "--lang", "spanish", "abcd"})
expect(t, context.Args().Get(0), "abcd")
expect(t, context.String("lang"), "spanish")
}
func TestApp_CommandWithFlagBeforeTerminator(t *testing.T) {
var parsedOption string
var args []string
app := cli.NewApp()
command := cli.Command{
Name: "cmd",
Flags: []cli.Flag{
cli.StringFlag{Name: "option", Value: "", Usage: "some option"},
},
Action: func(c *cli.Context) {
parsedOption = c.String("option")
args = c.Args()
},
}
app.Commands = []cli.Command{command}
app.Run([]string{"", "cmd", "my-arg", "--option", "my-option", "--", "--notARealFlag"})
expect(t, parsedOption, "my-option")
expect(t, args[0], "my-arg")
expect(t, args[1], "--")
expect(t, args[2], "--notARealFlag")
}
func TestApp_CommandWithNoFlagBeforeTerminator(t *testing.T) {
var args []string
app := cli.NewApp()
command := cli.Command{
Name: "cmd",
Action: func(c *cli.Context) {
args = c.Args()
},
}
app.Commands = []cli.Command{command}
app.Run([]string{"", "cmd", "my-arg", "--", "notAFlagAtAll"})
expect(t, args[0], "my-arg")
expect(t, args[1], "--")
expect(t, args[2], "notAFlagAtAll")
}
func TestApp_Float64Flag(t *testing.T) {
var meters float64
app := cli.NewApp()
app.Flags = []cli.Flag{
cli.Float64Flag{Name: "height", Value: 1.5, Usage: "Set the height, in meters"},
}
app.Action = func(c *cli.Context) {
meters = c.Float64("height")
}
app.Run([]string{"", "--height", "1.93"})
expect(t, meters, 1.93)
}
func TestApp_ParseSliceFlags(t *testing.T) {
var parsedOption, firstArg string
var parsedIntSlice []int
var parsedStringSlice []string
app := cli.NewApp()
command := cli.Command{
Name: "cmd",
Flags: []cli.Flag{
cli.IntSliceFlag{Name: "p", Value: &cli.IntSlice{}, Usage: "set one or more ip addr"},
cli.StringSliceFlag{Name: "ip", Value: &cli.StringSlice{}, Usage: "set one or more ports to open"},
},
Action: func(c *cli.Context) {
parsedIntSlice = c.IntSlice("p")
parsedStringSlice = c.StringSlice("ip")
parsedOption = c.String("option")
firstArg = c.Args().First()
},
}
app.Commands = []cli.Command{command}
app.Run([]string{"", "cmd", "my-arg", "-p", "22", "-p", "80", "-ip", "8.8.8.8", "-ip", "8.8.4.4"})
IntsEquals := func(a, b []int) bool {
if len(a) != len(b) {
return false
}
for i, v := range a {
if v != b[i] {
return false
}
}
return true
}
StrsEquals := func(a, b []string) bool {
if len(a) != len(b) {
return false
}
for i, v := range a {
if v != b[i] {
return false
}
}
return true
}
var expectedIntSlice = []int{22, 80}
var expectedStringSlice = []string{"8.8.8.8", "8.8.4.4"}
if !IntsEquals(parsedIntSlice, expectedIntSlice) {
t.Errorf("%v does not match %v", parsedIntSlice, expectedIntSlice)
}
if !StrsEquals(parsedStringSlice, expectedStringSlice) {
t.Errorf("%v does not match %v", parsedStringSlice, expectedStringSlice)
}
}
func TestApp_DefaultStdout(t *testing.T) {
app := cli.NewApp()
if app.Writer != os.Stdout {
t.Error("Default output writer not set.")
}
}
type mockWriter struct {
written []byte
}
func (fw *mockWriter) Write(p []byte) (n int, err error) {
if fw.written == nil {
fw.written = p
} else {
fw.written = append(fw.written, p...)
}
return len(p), nil
}
func (fw *mockWriter) GetWritten() (b []byte) {
return fw.written
}
func TestApp_SetStdout(t *testing.T) {
w := &mockWriter{}
app := cli.NewApp()
app.Name = "test"
app.Writer = w
err := app.Run([]string{"help"})
if err != nil {
t.Fatalf("Run error: %s", err)
}
if len(w.written) == 0 {
t.Error("App did not write output to desired writer.")
}
}
func TestApp_BeforeFunc(t *testing.T) {
beforeRun, subcommandRun := false, false
beforeError := fmt.Errorf("fail")
var err error
app := cli.NewApp()
app.Before = func(c *cli.Context) error {
beforeRun = true
s := c.String("opt")
if s == "fail" {
return beforeError
}
return nil
}
app.Commands = []cli.Command{
cli.Command{
Name: "sub",
Action: func(c *cli.Context) {
subcommandRun = true
},
},
}
app.Flags = []cli.Flag{
cli.StringFlag{Name: "opt"},
}
// run with the Before() func succeeding
err = app.Run([]string{"command", "--opt", "succeed", "sub"})
if err != nil {
t.Fatalf("Run error: %s", err)
}
if beforeRun == false {
t.Errorf("Before() not executed when expected")
}
if subcommandRun == false {
t.Errorf("Subcommand not executed when expected")
}
// reset
beforeRun, subcommandRun = false, false
// run with the Before() func failing
err = app.Run([]string{"command", "--opt", "fail", "sub"})
// should be the same error produced by the Before func
if err != beforeError {
t.Errorf("Run error expected, but not received")
}
if beforeRun == false {
t.Errorf("Before() not executed when expected")
}
if subcommandRun == true {
t.Errorf("Subcommand executed when NOT expected")
}
}
func TestApp_AfterFunc(t *testing.T) {
afterRun, subcommandRun := false, false
afterError := fmt.Errorf("fail")
var err error
app := cli.NewApp()
app.After = func(c *cli.Context) error {
afterRun = true
s := c.String("opt")
if s == "fail" {
return afterError
}
return nil
}
app.Commands = []cli.Command{
cli.Command{
Name: "sub",
Action: func(c *cli.Context) {
subcommandRun = true
},
},
}
app.Flags = []cli.Flag{
cli.StringFlag{Name: "opt"},
}
// run with the After() func succeeding
err = app.Run([]string{"command", "--opt", "succeed", "sub"})
if err != nil {
t.Fatalf("Run error: %s", err)
}
if afterRun == false {
t.Errorf("After() not executed when expected")
}
if subcommandRun == false {
t.Errorf("Subcommand not executed when expected")
}
// reset
afterRun, subcommandRun = false, false
// run with the Before() func failing
err = app.Run([]string{"command", "--opt", "fail", "sub"})
// should be the same error produced by the Before func
if err != afterError {
t.Errorf("Run error expected, but not received")
}
if afterRun == false {
t.Errorf("After() not executed when expected")
}
if subcommandRun == false {
t.Errorf("Subcommand not executed when expected")
}
}
func TestAppNoHelpFlag(t *testing.T) {
oldFlag := cli.HelpFlag
defer func() {
cli.HelpFlag = oldFlag
}()
cli.HelpFlag = cli.BoolFlag{}
app := cli.NewApp()
err := app.Run([]string{"test", "-h"})
if err != flag.ErrHelp {
t.Errorf("expected error about missing help flag, but got: %s (%T)", err, err)
}
}
func TestAppHelpPrinter(t *testing.T) {
oldPrinter := cli.HelpPrinter
defer func() {
cli.HelpPrinter = oldPrinter
}()
var wasCalled = false
cli.HelpPrinter = func(template string, data interface{}) {
wasCalled = true
}
app := cli.NewApp()
app.Run([]string{"-h"})
if wasCalled == false {
t.Errorf("Help printer expected to be called, but was not")
}
}
func TestAppVersionPrinter(t *testing.T) {
oldPrinter := cli.VersionPrinter
defer func() {
cli.VersionPrinter = oldPrinter
}()
var wasCalled = false
cli.VersionPrinter = func(c *cli.Context) {
wasCalled = true
}
app := cli.NewApp()
ctx := cli.NewContext(app, nil, nil)
cli.ShowVersion(ctx)
if wasCalled == false {
t.Errorf("Version printer expected to be called, but was not")
}
}
func TestAppCommandNotFound(t *testing.T) {
beforeRun, subcommandRun := false, false
app := cli.NewApp()
app.CommandNotFound = func(c *cli.Context, command string) {
beforeRun = true
}
app.Commands = []cli.Command{
cli.Command{
Name: "bar",
Action: func(c *cli.Context) {
subcommandRun = true
},
},
}
app.Run([]string{"command", "foo"})
expect(t, beforeRun, true)
expect(t, subcommandRun, false)
}
func TestGlobalFlagsInSubcommands(t *testing.T) {
subcommandRun := false
app := cli.NewApp()
app.Flags = []cli.Flag{
cli.BoolFlag{Name: "debug, d", Usage: "Enable debugging"},
}
app.Commands = []cli.Command{
cli.Command{
Name: "foo",
Subcommands: []cli.Command{
{
Name: "bar",
Action: func(c *cli.Context) {
if c.GlobalBool("debug") {
subcommandRun = true
}
},
},
},
},
}
app.Run([]string{"command", "-d", "foo", "bar"})
expect(t, subcommandRun, true)
}

View File

@ -1,5 +1,7 @@
#! /bin/bash
: ${PROG:=$(basename ${BASH_SOURCE})}
_cli_bash_autocomplete() {
local cur prev opts base
COMPREPLY=()

View File

@ -17,3 +17,24 @@
// app.Run(os.Args)
// }
package cli
import (
"strings"
)
type MultiError struct {
Errors []error
}
func NewMultiError(err ...error) MultiError {
return MultiError{Errors: err}
}
func (m MultiError) Error() string {
errs := make([]string, len(m.Errors))
for i, err := range m.Errors {
errs[i] = err.Error()
}
return strings.Join(errs, "\n")
}
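A small sketch, in the style of the package's example tests, of how a combined error reads to the caller (e.g. when both the Action and the After hook fail):

```go
func ExampleNewMultiError() {
	err := cli.NewMultiError(errors.New("action failed"), errors.New("cleanup failed"))
	fmt.Println(err)
	// Output:
	// action failed
	// cleanup failed
}
```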

View File

@ -1,100 +0,0 @@
package cli_test
import (
"os"
"github.com/codegangsta/cli"
)
func Example() {
app := cli.NewApp()
app.Name = "todo"
app.Usage = "task list on the command line"
app.Commands = []cli.Command{
{
Name: "add",
Aliases: []string{"a"},
Usage: "add a task to the list",
Action: func(c *cli.Context) {
println("added task: ", c.Args().First())
},
},
{
Name: "complete",
Aliases: []string{"c"},
Usage: "complete a task on the list",
Action: func(c *cli.Context) {
println("completed task: ", c.Args().First())
},
},
}
app.Run(os.Args)
}
func ExampleSubcommand() {
app := cli.NewApp()
app.Name = "say"
app.Commands = []cli.Command{
{
Name: "hello",
Aliases: []string{"hi"},
Usage: "use it to see a description",
Description: "This is how we describe hello the function",
Subcommands: []cli.Command{
{
Name: "english",
Aliases: []string{"en"},
Usage: "sends a greeting in english",
Description: "greets someone in english",
Flags: []cli.Flag{
cli.StringFlag{
Name: "name",
Value: "Bob",
Usage: "Name of the person to greet",
},
},
Action: func(c *cli.Context) {
println("Hello, ", c.String("name"))
},
}, {
Name: "spanish",
Aliases: []string{"sp"},
Usage: "sends a greeting in spanish",
Flags: []cli.Flag{
cli.StringFlag{
Name: "surname",
Value: "Jones",
Usage: "Surname of the person to greet",
},
},
Action: func(c *cli.Context) {
println("Hola, ", c.String("surname"))
},
}, {
Name: "french",
Aliases: []string{"fr"},
Usage: "sends a greeting in french",
Flags: []cli.Flag{
cli.StringFlag{
Name: "nickname",
Value: "Stevie",
Usage: "Nickname of the person to greet",
},
},
Action: func(c *cli.Context) {
println("Bonjour, ", c.String("nickname"))
},
},
},
}, {
Name: "bye",
Usage: "says goodbye",
Action: func(c *cli.Context) {
println("bye")
},
},
}
app.Run(os.Args)
}

View File

@ -18,6 +18,8 @@ type Command struct {
Usage string
// A longer explanation of how the command works
Description string
// A short description of the arguments of this command
ArgsUsage string
// The function to call when checking for bash command completions
BashComplete func(context *Context)
// An action to execute before any sub-subcommands are run, but after the context is ready
@ -36,11 +38,23 @@ type Command struct {
SkipFlagParsing bool
// Boolean to hide built-in help command
HideHelp bool
// Full name of command for help, defaults to full command name, including parent commands.
HelpName string
commandNamePath []string
}
// Returns the full name of the command.
// For subcommands this ensures that parent commands are part of the command path
func (c Command) FullName() string {
if c.commandNamePath == nil {
return c.Name
}
return strings.Join(c.commandNamePath, " ")
}
// Invokes the command given the context, parses ctx.Args() to generate command-specific flags
func (c Command) Run(ctx *Context) error {
if len(c.Subcommands) > 0 || c.Before != nil || c.After != nil {
return c.startApp(ctx)
}
@ -91,9 +105,9 @@ func (c Command) Run(ctx *Context) error {
}
if err != nil {
fmt.Fprint(ctx.App.Writer, "Incorrect Usage.\n\n")
ShowCommandHelp(ctx, c.Name)
fmt.Fprintln(ctx.App.Writer, "Incorrect Usage.")
fmt.Fprintln(ctx.App.Writer)
ShowCommandHelp(ctx, c.Name)
return err
}
@ -102,10 +116,9 @@ func (c Command) Run(ctx *Context) error {
fmt.Fprintln(ctx.App.Writer, nerr)
fmt.Fprintln(ctx.App.Writer)
ShowCommandHelp(ctx, c.Name)
fmt.Fprintln(ctx.App.Writer)
return nerr
}
context := NewContext(ctx.App, set, ctx.globalSet)
context := NewContext(ctx.App, set, ctx)
if checkCommandCompletions(context, c.Name) {
return nil
@ -144,6 +157,12 @@ func (c Command) startApp(ctx *Context) error {
// set the name and usage
app.Name = fmt.Sprintf("%s %s", ctx.App.Name, c.Name)
if c.HelpName == "" {
app.HelpName = c.HelpName
} else {
app.HelpName = fmt.Sprintf("%s %s", ctx.App.Name, c.Name)
}
if c.Description != "" {
app.Usage = c.Description
} else {
@ -158,6 +177,13 @@ func (c Command) startApp(ctx *Context) error {
app.Flags = c.Flags
app.HideHelp = c.HideHelp
app.Version = ctx.App.Version
app.HideVersion = ctx.App.HideVersion
app.Compiled = ctx.App.Compiled
app.Author = ctx.App.Author
app.Email = ctx.App.Email
app.Writer = ctx.App.Writer
// bash completion
app.EnableBashCompletion = ctx.App.EnableBashCompletion
if c.BashComplete != nil {
@ -173,5 +199,12 @@ func (c Command) startApp(ctx *Context) error {
app.Action = helpSubcommand.Action
}
var newCmds []Command
for _, cc := range app.Commands {
cc.commandNamePath = []string{c.Name, cc.Name}
newCmds = append(newCmds, cc)
}
app.Commands = newCmds
return app.RunAsSubcommand(ctx)
}

View File

@ -1,49 +0,0 @@
package cli_test
import (
"flag"
"testing"
"github.com/codegangsta/cli"
)
func TestCommandDoNotIgnoreFlags(t *testing.T) {
app := cli.NewApp()
set := flag.NewFlagSet("test", 0)
test := []string{"blah", "blah", "-break"}
set.Parse(test)
c := cli.NewContext(app, set, set)
command := cli.Command{
Name: "test-cmd",
Aliases: []string{"tc"},
Usage: "this is for testing",
Description: "testing",
Action: func(_ *cli.Context) {},
}
err := command.Run(c)
expect(t, err.Error(), "flag provided but not defined: -break")
}
func TestCommandIgnoreFlags(t *testing.T) {
app := cli.NewApp()
set := flag.NewFlagSet("test", 0)
test := []string{"blah", "blah"}
set.Parse(test)
c := cli.NewContext(app, set, set)
command := cli.Command{
Name: "test-cmd",
Aliases: []string{"tc"},
Usage: "this is for testing",
Description: "testing",
Action: func(_ *cli.Context) {},
SkipFlagParsing: true,
}
err := command.Run(c)
expect(t, err, nil)
}

View File

@ -16,14 +16,14 @@ type Context struct {
App *App
Command Command
flagSet *flag.FlagSet
globalSet *flag.FlagSet
setFlags map[string]bool
globalSetFlags map[string]bool
parentContext *Context
}
// Creates a new context. For use in when invoking an App or Command action.
func NewContext(app *App, set *flag.FlagSet, globalSet *flag.FlagSet) *Context {
return &Context{App: app, flagSet: set, globalSet: globalSet}
func NewContext(app *App, set *flag.FlagSet, parentCtx *Context) *Context {
return &Context{App: app, flagSet: set, parentContext: parentCtx}
}
// Looks up the value of a local int flag, returns 0 if no int flag exists
@ -73,37 +73,58 @@ func (c *Context) Generic(name string) interface{} {
// Looks up the value of a global int flag, returns 0 if no int flag exists
func (c *Context) GlobalInt(name string) int {
return lookupInt(name, c.globalSet)
if fs := lookupGlobalFlagSet(name, c); fs != nil {
return lookupInt(name, fs)
}
return 0
}
// Looks up the value of a global time.Duration flag, returns 0 if no time.Duration flag exists
func (c *Context) GlobalDuration(name string) time.Duration {
return lookupDuration(name, c.globalSet)
if fs := lookupGlobalFlagSet(name, c); fs != nil {
return lookupDuration(name, fs)
}
return 0
}
// Looks up the value of a global bool flag, returns false if no bool flag exists
func (c *Context) GlobalBool(name string) bool {
return lookupBool(name, c.globalSet)
if fs := lookupGlobalFlagSet(name, c); fs != nil {
return lookupBool(name, fs)
}
return false
}
// Looks up the value of a global string flag, returns "" if no string flag exists
func (c *Context) GlobalString(name string) string {
return lookupString(name, c.globalSet)
if fs := lookupGlobalFlagSet(name, c); fs != nil {
return lookupString(name, fs)
}
return ""
}
// Looks up the value of a global string slice flag, returns nil if no string slice flag exists
func (c *Context) GlobalStringSlice(name string) []string {
return lookupStringSlice(name, c.globalSet)
if fs := lookupGlobalFlagSet(name, c); fs != nil {
return lookupStringSlice(name, fs)
}
return nil
}
// Looks up the value of a global int slice flag, returns nil if no int slice flag exists
func (c *Context) GlobalIntSlice(name string) []int {
return lookupIntSlice(name, c.globalSet)
if fs := lookupGlobalFlagSet(name, c); fs != nil {
return lookupIntSlice(name, fs)
}
return nil
}
// Looks up the value of a global generic flag, returns nil if no generic flag exists
func (c *Context) GlobalGeneric(name string) interface{} {
return lookupGeneric(name, c.globalSet)
if fs := lookupGlobalFlagSet(name, c); fs != nil {
return lookupGeneric(name, fs)
}
return nil
}
// Returns the number of flags set
@ -126,11 +147,17 @@ func (c *Context) IsSet(name string) bool {
func (c *Context) GlobalIsSet(name string) bool {
if c.globalSetFlags == nil {
c.globalSetFlags = make(map[string]bool)
c.globalSet.Visit(func(f *flag.Flag) {
c.globalSetFlags[f.Name] = true
})
ctx := c
if ctx.parentContext != nil {
ctx = ctx.parentContext
}
for ; ctx != nil && c.globalSetFlags[name] == false; ctx = ctx.parentContext {
ctx.flagSet.Visit(func(f *flag.Flag) {
c.globalSetFlags[f.Name] = true
})
}
}
return c.globalSetFlags[name] == true
return c.globalSetFlags[name]
}
// Returns a slice of flag names used in this context.
@ -157,6 +184,11 @@ func (c *Context) GlobalFlagNames() (names []string) {
return
}
// Returns the parent context, if any
func (c *Context) Parent() *Context {
return c.parentContext
}
type Args []string
// Returns the command line arguments associated with the context.
@ -201,6 +233,18 @@ func (a Args) Swap(from, to int) error {
return nil
}
func lookupGlobalFlagSet(name string, ctx *Context) *flag.FlagSet {
if ctx.parentContext != nil {
ctx = ctx.parentContext
}
for ; ctx != nil; ctx = ctx.parentContext {
if f := ctx.flagSet.Lookup(name); f != nil {
return ctx.flagSet
}
}
return nil
}
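// A minimal sketch, not part of the vendored diff, illustrating the refactor
// above: a Context now keeps a pointer to its parent instead of a copy of the
// global flag set, and the Global* lookups walk that chain via
// lookupGlobalFlagSet. The flag name "verbosity" is illustrative; the API is
// the one shown in this file.
package main

import (
	"flag"
	"fmt"

	"github.com/codegangsta/cli"
)

func main() {
	// Flags parsed at the top level live in the parent context's flag set.
	globalSet := flag.NewFlagSet("global", 0)
	globalSet.Int("verbosity", 0, "log verbosity")
	globalSet.Parse([]string{"--verbosity", "3"})
	parent := cli.NewContext(nil, globalSet, nil)

	// A subcommand gets its own flag set but keeps a pointer to its parent.
	subSet := flag.NewFlagSet("sub", 0)
	child := cli.NewContext(nil, subSet, parent)

	fmt.Println(child.GlobalInt("verbosity"))   // 3, resolved through the parent chain
	fmt.Println(child.GlobalIsSet("verbosity")) // true
	fmt.Println(child.Parent() == parent)       // true
}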
func lookupInt(name string, set *flag.FlagSet) int {
f := set.Lookup(name)
if f != nil {

View File

@ -1,111 +0,0 @@
package cli_test
import (
"flag"
"testing"
"time"
"github.com/codegangsta/cli"
)
func TestNewContext(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.Int("myflag", 12, "doc")
globalSet := flag.NewFlagSet("test", 0)
globalSet.Int("myflag", 42, "doc")
command := cli.Command{Name: "mycommand"}
c := cli.NewContext(nil, set, globalSet)
c.Command = command
expect(t, c.Int("myflag"), 12)
expect(t, c.GlobalInt("myflag"), 42)
expect(t, c.Command.Name, "mycommand")
}
func TestContext_Int(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.Int("myflag", 12, "doc")
c := cli.NewContext(nil, set, set)
expect(t, c.Int("myflag"), 12)
}
func TestContext_Duration(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.Duration("myflag", time.Duration(12*time.Second), "doc")
c := cli.NewContext(nil, set, set)
expect(t, c.Duration("myflag"), time.Duration(12*time.Second))
}
func TestContext_String(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.String("myflag", "hello world", "doc")
c := cli.NewContext(nil, set, set)
expect(t, c.String("myflag"), "hello world")
}
func TestContext_Bool(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.Bool("myflag", false, "doc")
c := cli.NewContext(nil, set, set)
expect(t, c.Bool("myflag"), false)
}
func TestContext_BoolT(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.Bool("myflag", true, "doc")
c := cli.NewContext(nil, set, set)
expect(t, c.BoolT("myflag"), true)
}
func TestContext_Args(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.Bool("myflag", false, "doc")
c := cli.NewContext(nil, set, set)
set.Parse([]string{"--myflag", "bat", "baz"})
expect(t, len(c.Args()), 2)
expect(t, c.Bool("myflag"), true)
}
func TestContext_IsSet(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.Bool("myflag", false, "doc")
set.String("otherflag", "hello world", "doc")
globalSet := flag.NewFlagSet("test", 0)
globalSet.Bool("myflagGlobal", true, "doc")
c := cli.NewContext(nil, set, globalSet)
set.Parse([]string{"--myflag", "bat", "baz"})
globalSet.Parse([]string{"--myflagGlobal", "bat", "baz"})
expect(t, c.IsSet("myflag"), true)
expect(t, c.IsSet("otherflag"), false)
expect(t, c.IsSet("bogusflag"), false)
expect(t, c.IsSet("myflagGlobal"), false)
}
func TestContext_GlobalIsSet(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.Bool("myflag", false, "doc")
set.String("otherflag", "hello world", "doc")
globalSet := flag.NewFlagSet("test", 0)
globalSet.Bool("myflagGlobal", true, "doc")
globalSet.Bool("myflagGlobalUnset", true, "doc")
c := cli.NewContext(nil, set, globalSet)
set.Parse([]string{"--myflag", "bat", "baz"})
globalSet.Parse([]string{"--myflagGlobal", "bat", "baz"})
expect(t, c.GlobalIsSet("myflag"), false)
expect(t, c.GlobalIsSet("otherflag"), false)
expect(t, c.GlobalIsSet("bogusflag"), false)
expect(t, c.GlobalIsSet("myflagGlobal"), true)
expect(t, c.GlobalIsSet("myflagGlobalUnset"), false)
expect(t, c.GlobalIsSet("bogusGlobal"), false)
}
func TestContext_NumFlags(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.Bool("myflag", false, "doc")
set.String("otherflag", "hello world", "doc")
globalSet := flag.NewFlagSet("test", 0)
globalSet.Bool("myflagGlobal", true, "doc")
c := cli.NewContext(nil, set, globalSet)
set.Parse([]string{"--myflag", "--otherflag=foo"})
globalSet.Parse([]string{"--myflagGlobal"})
expect(t, c.NumFlags(), 2)
}

View File

@ -99,21 +99,27 @@ func (f GenericFlag) getName() string {
return f.Name
}
// StringSlice is an opaque type for []string to satisfy flag.Value
type StringSlice []string
// Set appends the string value to the list of values
func (f *StringSlice) Set(value string) error {
*f = append(*f, value)
return nil
}
// String returns a readable representation of this value (for usage defaults)
func (f *StringSlice) String() string {
return fmt.Sprintf("%s", *f)
}
// Value returns the slice of strings set by this flag
func (f *StringSlice) Value() []string {
return *f
}
// StringSliceFlag is a string flag that can be specified multiple times on the
// command-line
type StringSliceFlag struct {
Name string
Value *StringSlice
@ -121,12 +127,14 @@ type StringSliceFlag struct {
EnvVar string
}
// String returns the usage
func (f StringSliceFlag) String() string {
firstName := strings.Trim(strings.Split(f.Name, ",")[0], " ")
pref := prefixFor(firstName)
return withEnvHint(f.EnvVar, fmt.Sprintf("%s [%v]\t%v", prefixedNames(f.Name), pref+firstName+" option "+pref+firstName+" option", f.Usage))
}
// Apply populates the flag given the flag set and environment
func (f StringSliceFlag) Apply(set *flag.FlagSet) {
if f.EnvVar != "" {
for _, envVar := range strings.Split(f.EnvVar, ",") {
@ -144,6 +152,9 @@ func (f StringSliceFlag) Apply(set *flag.FlagSet) {
}
eachName(f.Name, func(name string) {
if f.Value == nil {
f.Value = &StringSlice{}
}
set.Var(f.Value, name, f.Usage)
})
}
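// A minimal sketch, not part of the vendored diff: with the nil check added
// to Apply above, a StringSliceFlag now works without an explicit Value. The
// app and flag names are illustrative.
package main

import (
	"fmt"

	"github.com/codegangsta/cli"
)

func main() {
	app := cli.NewApp()
	app.Name = "demo"
	app.Flags = []cli.Flag{
		cli.StringSliceFlag{Name: "tag"}, // Value deliberately left nil
	}
	app.Action = func(c *cli.Context) {
		fmt.Println(c.StringSlice("tag")) // [a b]
	}
	app.Run([]string{"demo", "--tag", "a", "--tag", "b"})
}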
@ -152,10 +163,11 @@ func (f StringSliceFlag) getName() string {
return f.Name
}
// IntSlice is an opaque type for []int to satisfy flag.Value
type IntSlice []int
// Set parses the value into an integer and appends it to the list of values
func (f *IntSlice) Set(value string) error {
tmp, err := strconv.Atoi(value)
if err != nil {
return err
@ -165,14 +177,18 @@ func (f *IntSlice) Set(value string) error {
return nil
}
// String returns a readable representation of this value (for usage defaults)
func (f *IntSlice) String() string {
return fmt.Sprintf("%d", *f)
}
// Value returns the slice of ints set by this flag
func (f *IntSlice) Value() []int {
return *f
}
// IntSliceFlag is an int flag that can be specified multiple times on the
// command-line
type IntSliceFlag struct {
Name string
Value *IntSlice
@ -180,12 +196,14 @@ type IntSliceFlag struct {
EnvVar string
}
// String returns the usage
func (f IntSliceFlag) String() string {
firstName := strings.Trim(strings.Split(f.Name, ",")[0], " ")
pref := prefixFor(firstName)
return withEnvHint(f.EnvVar, fmt.Sprintf("%s [%v]\t%v", prefixedNames(f.Name), pref+firstName+" option "+pref+firstName+" option", f.Usage))
}
// Apply populates the flag given the flag set and environment
func (f IntSliceFlag) Apply(set *flag.FlagSet) {
if f.EnvVar != "" {
for _, envVar := range strings.Split(f.EnvVar, ",") {
@ -206,6 +224,9 @@ func (f IntSliceFlag) Apply(set *flag.FlagSet) {
}
eachName(f.Name, func(name string) {
if f.Value == nil {
f.Value = &IntSlice{}
}
set.Var(f.Value, name, f.Usage)
})
}
@ -214,16 +235,19 @@ func (f IntSliceFlag) getName() string {
return f.Name
}
// BoolFlag is a switch that defaults to false
type BoolFlag struct {
Name string
Usage string
EnvVar string
}
// String returns a readable representation of this value (for usage defaults)
func (f BoolFlag) String() string {
return withEnvHint(f.EnvVar, fmt.Sprintf("%s\t%v", prefixedNames(f.Name), f.Usage))
}
// Apply populates the flag given the flag set and environment
func (f BoolFlag) Apply(set *flag.FlagSet) {
val := false
if f.EnvVar != "" {
@ -248,16 +272,20 @@ func (f BoolFlag) getName() string {
return f.Name
}
// BoolTFlag is a boolean flag that is true by default, but can
// still be set to false by --some-flag=false
type BoolTFlag struct {
Name string
Usage string
EnvVar string
}
// String returns a readable representation of this value (for usage defaults)
func (f BoolTFlag) String() string {
return withEnvHint(f.EnvVar, fmt.Sprintf("%s\t%v", prefixedNames(f.Name), f.Usage))
}
// Apply populates the flag given the flag set and environment
func (f BoolTFlag) Apply(set *flag.FlagSet) {
val := true
if f.EnvVar != "" {
@ -282,6 +310,7 @@ func (f BoolTFlag) getName() string {
return f.Name
}
// StringFlag represents a flag that takes a string value
type StringFlag struct {
Name string
Value string
@ -289,6 +318,7 @@ type StringFlag struct {
EnvVar string
}
// String returns the usage
func (f StringFlag) String() string {
var fmtString string
fmtString = "%s %v\t%v"
@ -302,6 +332,7 @@ func (f StringFlag) String() string {
return withEnvHint(f.EnvVar, fmt.Sprintf(fmtString, prefixedNames(f.Name), f.Value, f.Usage))
}
// Apply populates the flag given the flag set and environment
func (f StringFlag) Apply(set *flag.FlagSet) {
if f.EnvVar != "" {
for _, envVar := range strings.Split(f.EnvVar, ",") {
@ -322,6 +353,8 @@ func (f StringFlag) getName() string {
return f.Name
}
// IntFlag is a flag that takes an integer
// Errors if the value provided cannot be parsed
type IntFlag struct {
Name string
Value int
@ -329,10 +362,12 @@ type IntFlag struct {
EnvVar string
}
// String returns the usage
func (f IntFlag) String() string {
return withEnvHint(f.EnvVar, fmt.Sprintf("%s \"%v\"\t%v", prefixedNames(f.Name), f.Value, f.Usage))
}
// Apply populates the flag given the flag set and environment
func (f IntFlag) Apply(set *flag.FlagSet) {
if f.EnvVar != "" {
for _, envVar := range strings.Split(f.EnvVar, ",") {
@ -356,6 +391,8 @@ func (f IntFlag) getName() string {
return f.Name
}
// DurationFlag is a flag that takes a duration specified in Go's duration
// format: https://golang.org/pkg/time/#ParseDuration
type DurationFlag struct {
Name string
Value time.Duration
@ -363,10 +400,12 @@ type DurationFlag struct {
EnvVar string
}
// String returns a readable representation of this value (for usage defaults)
func (f DurationFlag) String() string {
return withEnvHint(f.EnvVar, fmt.Sprintf("%s \"%v\"\t%v", prefixedNames(f.Name), f.Value, f.Usage))
}
// Apply populates the flag given the flag set and environment
func (f DurationFlag) Apply(set *flag.FlagSet) {
if f.EnvVar != "" {
for _, envVar := range strings.Split(f.EnvVar, ",") {
@ -390,6 +429,8 @@ func (f DurationFlag) getName() string {
return f.Name
}
// Float64Flag is a flag that takes a float value
// Errors if the value provided cannot be parsed
type Float64Flag struct {
Name string
Value float64
@ -397,10 +438,12 @@ type Float64Flag struct {
EnvVar string
}
// String returns the usage
func (f Float64Flag) String() string {
return withEnvHint(f.EnvVar, fmt.Sprintf("%s \"%v\"\t%v", prefixedNames(f.Name), f.Value, f.Usage))
}
// Apply populates the flag given the flag set and environment
func (f Float64Flag) Apply(set *flag.FlagSet) {
if f.EnvVar != "" {
for _, envVar := range strings.Split(f.EnvVar, ",") {

View File

@ -1,742 +0,0 @@
package cli_test
import (
"fmt"
"os"
"reflect"
"strings"
"testing"
"github.com/codegangsta/cli"
)
var boolFlagTests = []struct {
name string
expected string
}{
{"help", "--help\t"},
{"h", "-h\t"},
}
func TestBoolFlagHelpOutput(t *testing.T) {
for _, test := range boolFlagTests {
flag := cli.BoolFlag{Name: test.name}
output := flag.String()
if output != test.expected {
t.Errorf("%s does not match %s", output, test.expected)
}
}
}
var stringFlagTests = []struct {
name string
value string
expected string
}{
{"help", "", "--help \t"},
{"h", "", "-h \t"},
{"h", "", "-h \t"},
{"test", "Something", "--test \"Something\"\t"},
}
func TestStringFlagHelpOutput(t *testing.T) {
for _, test := range stringFlagTests {
flag := cli.StringFlag{Name: test.name, Value: test.value}
output := flag.String()
if output != test.expected {
t.Errorf("%s does not match %s", output, test.expected)
}
}
}
func TestStringFlagWithEnvVarHelpOutput(t *testing.T) {
os.Clearenv()
os.Setenv("APP_FOO", "derp")
for _, test := range stringFlagTests {
flag := cli.StringFlag{Name: test.name, Value: test.value, EnvVar: "APP_FOO"}
output := flag.String()
if !strings.HasSuffix(output, " [$APP_FOO]") {
t.Errorf("%s does not end with [$APP_FOO]", output)
}
}
}
var stringSliceFlagTests = []struct {
name string
value *cli.StringSlice
expected string
}{
{"help", func() *cli.StringSlice {
s := &cli.StringSlice{}
s.Set("")
return s
}(), "--help [--help option --help option]\t"},
{"h", func() *cli.StringSlice {
s := &cli.StringSlice{}
s.Set("")
return s
}(), "-h [-h option -h option]\t"},
{"h", func() *cli.StringSlice {
s := &cli.StringSlice{}
s.Set("")
return s
}(), "-h [-h option -h option]\t"},
{"test", func() *cli.StringSlice {
s := &cli.StringSlice{}
s.Set("Something")
return s
}(), "--test [--test option --test option]\t"},
}
func TestStringSliceFlagHelpOutput(t *testing.T) {
for _, test := range stringSliceFlagTests {
flag := cli.StringSliceFlag{Name: test.name, Value: test.value}
output := flag.String()
if output != test.expected {
t.Errorf("%q does not match %q", output, test.expected)
}
}
}
func TestStringSliceFlagWithEnvVarHelpOutput(t *testing.T) {
os.Clearenv()
os.Setenv("APP_QWWX", "11,4")
for _, test := range stringSliceFlagTests {
flag := cli.StringSliceFlag{Name: test.name, Value: test.value, EnvVar: "APP_QWWX"}
output := flag.String()
if !strings.HasSuffix(output, " [$APP_QWWX]") {
t.Errorf("%q does not end with [$APP_QWWX]", output)
}
}
}
var intFlagTests = []struct {
name string
expected string
}{
{"help", "--help \"0\"\t"},
{"h", "-h \"0\"\t"},
}
func TestIntFlagHelpOutput(t *testing.T) {
for _, test := range intFlagTests {
flag := cli.IntFlag{Name: test.name}
output := flag.String()
if output != test.expected {
t.Errorf("%s does not match %s", output, test.expected)
}
}
}
func TestIntFlagWithEnvVarHelpOutput(t *testing.T) {
os.Clearenv()
os.Setenv("APP_BAR", "2")
for _, test := range intFlagTests {
flag := cli.IntFlag{Name: test.name, EnvVar: "APP_BAR"}
output := flag.String()
if !strings.HasSuffix(output, " [$APP_BAR]") {
t.Errorf("%s does not end with [$APP_BAR]", output)
}
}
}
var durationFlagTests = []struct {
name string
expected string
}{
{"help", "--help \"0\"\t"},
{"h", "-h \"0\"\t"},
}
func TestDurationFlagHelpOutput(t *testing.T) {
for _, test := range durationFlagTests {
flag := cli.DurationFlag{Name: test.name}
output := flag.String()
if output != test.expected {
t.Errorf("%s does not match %s", output, test.expected)
}
}
}
func TestDurationFlagWithEnvVarHelpOutput(t *testing.T) {
os.Clearenv()
os.Setenv("APP_BAR", "2h3m6s")
for _, test := range durationFlagTests {
flag := cli.DurationFlag{Name: test.name, EnvVar: "APP_BAR"}
output := flag.String()
if !strings.HasSuffix(output, " [$APP_BAR]") {
t.Errorf("%s does not end with [$APP_BAR]", output)
}
}
}
var intSliceFlagTests = []struct {
name string
value *cli.IntSlice
expected string
}{
{"help", &cli.IntSlice{}, "--help [--help option --help option]\t"},
{"h", &cli.IntSlice{}, "-h [-h option -h option]\t"},
{"h", &cli.IntSlice{}, "-h [-h option -h option]\t"},
{"test", func() *cli.IntSlice {
i := &cli.IntSlice{}
i.Set("9")
return i
}(), "--test [--test option --test option]\t"},
}
func TestIntSliceFlagHelpOutput(t *testing.T) {
for _, test := range intSliceFlagTests {
flag := cli.IntSliceFlag{Name: test.name, Value: test.value}
output := flag.String()
if output != test.expected {
t.Errorf("%q does not match %q", output, test.expected)
}
}
}
func TestIntSliceFlagWithEnvVarHelpOutput(t *testing.T) {
os.Clearenv()
os.Setenv("APP_SMURF", "42,3")
for _, test := range intSliceFlagTests {
flag := cli.IntSliceFlag{Name: test.name, Value: test.value, EnvVar: "APP_SMURF"}
output := flag.String()
if !strings.HasSuffix(output, " [$APP_SMURF]") {
t.Errorf("%q does not end with [$APP_SMURF]", output)
}
}
}
var float64FlagTests = []struct {
name string
expected string
}{
{"help", "--help \"0\"\t"},
{"h", "-h \"0\"\t"},
}
func TestFloat64FlagHelpOutput(t *testing.T) {
for _, test := range float64FlagTests {
flag := cli.Float64Flag{Name: test.name}
output := flag.String()
if output != test.expected {
t.Errorf("%s does not match %s", output, test.expected)
}
}
}
func TestFloat64FlagWithEnvVarHelpOutput(t *testing.T) {
os.Clearenv()
os.Setenv("APP_BAZ", "99.4")
for _, test := range float64FlagTests {
flag := cli.Float64Flag{Name: test.name, EnvVar: "APP_BAZ"}
output := flag.String()
if !strings.HasSuffix(output, " [$APP_BAZ]") {
t.Errorf("%s does not end with [$APP_BAZ]", output)
}
}
}
var genericFlagTests = []struct {
name string
value cli.Generic
expected string
}{
{"test", &Parser{"abc", "def"}, "--test \"abc,def\"\ttest flag"},
{"t", &Parser{"abc", "def"}, "-t \"abc,def\"\ttest flag"},
}
func TestGenericFlagHelpOutput(t *testing.T) {
for _, test := range genericFlagTests {
flag := cli.GenericFlag{Name: test.name, Value: test.value, Usage: "test flag"}
output := flag.String()
if output != test.expected {
t.Errorf("%q does not match %q", output, test.expected)
}
}
}
func TestGenericFlagWithEnvVarHelpOutput(t *testing.T) {
os.Clearenv()
os.Setenv("APP_ZAP", "3")
for _, test := range genericFlagTests {
flag := cli.GenericFlag{Name: test.name, EnvVar: "APP_ZAP"}
output := flag.String()
if !strings.HasSuffix(output, " [$APP_ZAP]") {
t.Errorf("%s does not end with [$APP_ZAP]", output)
}
}
}
func TestParseMultiString(t *testing.T) {
(&cli.App{
Flags: []cli.Flag{
cli.StringFlag{Name: "serve, s"},
},
Action: func(ctx *cli.Context) {
if ctx.String("serve") != "10" {
t.Errorf("main name not set")
}
if ctx.String("s") != "10" {
t.Errorf("short name not set")
}
},
}).Run([]string{"run", "-s", "10"})
}
func TestParseMultiStringFromEnv(t *testing.T) {
os.Clearenv()
os.Setenv("APP_COUNT", "20")
(&cli.App{
Flags: []cli.Flag{
cli.StringFlag{Name: "count, c", EnvVar: "APP_COUNT"},
},
Action: func(ctx *cli.Context) {
if ctx.String("count") != "20" {
t.Errorf("main name not set")
}
if ctx.String("c") != "20" {
t.Errorf("short name not set")
}
},
}).Run([]string{"run"})
}
func TestParseMultiStringFromEnvCascade(t *testing.T) {
os.Clearenv()
os.Setenv("APP_COUNT", "20")
(&cli.App{
Flags: []cli.Flag{
cli.StringFlag{Name: "count, c", EnvVar: "COMPAT_COUNT,APP_COUNT"},
},
Action: func(ctx *cli.Context) {
if ctx.String("count") != "20" {
t.Errorf("main name not set")
}
if ctx.String("c") != "20" {
t.Errorf("short name not set")
}
},
}).Run([]string{"run"})
}
func TestParseMultiStringSlice(t *testing.T) {
(&cli.App{
Flags: []cli.Flag{
cli.StringSliceFlag{Name: "serve, s", Value: &cli.StringSlice{}},
},
Action: func(ctx *cli.Context) {
if !reflect.DeepEqual(ctx.StringSlice("serve"), []string{"10", "20"}) {
t.Errorf("main name not set")
}
if !reflect.DeepEqual(ctx.StringSlice("s"), []string{"10", "20"}) {
t.Errorf("short name not set")
}
},
}).Run([]string{"run", "-s", "10", "-s", "20"})
}
func TestParseMultiStringSliceFromEnv(t *testing.T) {
os.Clearenv()
os.Setenv("APP_INTERVALS", "20,30,40")
(&cli.App{
Flags: []cli.Flag{
cli.StringSliceFlag{Name: "intervals, i", Value: &cli.StringSlice{}, EnvVar: "APP_INTERVALS"},
},
Action: func(ctx *cli.Context) {
if !reflect.DeepEqual(ctx.StringSlice("intervals"), []string{"20", "30", "40"}) {
t.Errorf("main name not set from env")
}
if !reflect.DeepEqual(ctx.StringSlice("i"), []string{"20", "30", "40"}) {
t.Errorf("short name not set from env")
}
},
}).Run([]string{"run"})
}
func TestParseMultiStringSliceFromEnvCascade(t *testing.T) {
os.Clearenv()
os.Setenv("APP_INTERVALS", "20,30,40")
(&cli.App{
Flags: []cli.Flag{
cli.StringSliceFlag{Name: "intervals, i", Value: &cli.StringSlice{}, EnvVar: "COMPAT_INTERVALS,APP_INTERVALS"},
},
Action: func(ctx *cli.Context) {
if !reflect.DeepEqual(ctx.StringSlice("intervals"), []string{"20", "30", "40"}) {
t.Errorf("main name not set from env")
}
if !reflect.DeepEqual(ctx.StringSlice("i"), []string{"20", "30", "40"}) {
t.Errorf("short name not set from env")
}
},
}).Run([]string{"run"})
}
func TestParseMultiInt(t *testing.T) {
a := cli.App{
Flags: []cli.Flag{
cli.IntFlag{Name: "serve, s"},
},
Action: func(ctx *cli.Context) {
if ctx.Int("serve") != 10 {
t.Errorf("main name not set")
}
if ctx.Int("s") != 10 {
t.Errorf("short name not set")
}
},
}
a.Run([]string{"run", "-s", "10"})
}
func TestParseMultiIntFromEnv(t *testing.T) {
os.Clearenv()
os.Setenv("APP_TIMEOUT_SECONDS", "10")
a := cli.App{
Flags: []cli.Flag{
cli.IntFlag{Name: "timeout, t", EnvVar: "APP_TIMEOUT_SECONDS"},
},
Action: func(ctx *cli.Context) {
if ctx.Int("timeout") != 10 {
t.Errorf("main name not set")
}
if ctx.Int("t") != 10 {
t.Errorf("short name not set")
}
},
}
a.Run([]string{"run"})
}
func TestParseMultiIntFromEnvCascade(t *testing.T) {
os.Clearenv()
os.Setenv("APP_TIMEOUT_SECONDS", "10")
a := cli.App{
Flags: []cli.Flag{
cli.IntFlag{Name: "timeout, t", EnvVar: "COMPAT_TIMEOUT_SECONDS,APP_TIMEOUT_SECONDS"},
},
Action: func(ctx *cli.Context) {
if ctx.Int("timeout") != 10 {
t.Errorf("main name not set")
}
if ctx.Int("t") != 10 {
t.Errorf("short name not set")
}
},
}
a.Run([]string{"run"})
}
func TestParseMultiIntSlice(t *testing.T) {
(&cli.App{
Flags: []cli.Flag{
cli.IntSliceFlag{Name: "serve, s", Value: &cli.IntSlice{}},
},
Action: func(ctx *cli.Context) {
if !reflect.DeepEqual(ctx.IntSlice("serve"), []int{10, 20}) {
t.Errorf("main name not set")
}
if !reflect.DeepEqual(ctx.IntSlice("s"), []int{10, 20}) {
t.Errorf("short name not set")
}
},
}).Run([]string{"run", "-s", "10", "-s", "20"})
}
func TestParseMultiIntSliceFromEnv(t *testing.T) {
os.Clearenv()
os.Setenv("APP_INTERVALS", "20,30,40")
(&cli.App{
Flags: []cli.Flag{
cli.IntSliceFlag{Name: "intervals, i", Value: &cli.IntSlice{}, EnvVar: "APP_INTERVALS"},
},
Action: func(ctx *cli.Context) {
if !reflect.DeepEqual(ctx.IntSlice("intervals"), []int{20, 30, 40}) {
t.Errorf("main name not set from env")
}
if !reflect.DeepEqual(ctx.IntSlice("i"), []int{20, 30, 40}) {
t.Errorf("short name not set from env")
}
},
}).Run([]string{"run"})
}
func TestParseMultiIntSliceFromEnvCascade(t *testing.T) {
os.Clearenv()
os.Setenv("APP_INTERVALS", "20,30,40")
(&cli.App{
Flags: []cli.Flag{
cli.IntSliceFlag{Name: "intervals, i", Value: &cli.IntSlice{}, EnvVar: "COMPAT_INTERVALS,APP_INTERVALS"},
},
Action: func(ctx *cli.Context) {
if !reflect.DeepEqual(ctx.IntSlice("intervals"), []int{20, 30, 40}) {
t.Errorf("main name not set from env")
}
if !reflect.DeepEqual(ctx.IntSlice("i"), []int{20, 30, 40}) {
t.Errorf("short name not set from env")
}
},
}).Run([]string{"run"})
}
func TestParseMultiFloat64(t *testing.T) {
a := cli.App{
Flags: []cli.Flag{
cli.Float64Flag{Name: "serve, s"},
},
Action: func(ctx *cli.Context) {
if ctx.Float64("serve") != 10.2 {
t.Errorf("main name not set")
}
if ctx.Float64("s") != 10.2 {
t.Errorf("short name not set")
}
},
}
a.Run([]string{"run", "-s", "10.2"})
}
func TestParseMultiFloat64FromEnv(t *testing.T) {
os.Clearenv()
os.Setenv("APP_TIMEOUT_SECONDS", "15.5")
a := cli.App{
Flags: []cli.Flag{
cli.Float64Flag{Name: "timeout, t", EnvVar: "APP_TIMEOUT_SECONDS"},
},
Action: func(ctx *cli.Context) {
if ctx.Float64("timeout") != 15.5 {
t.Errorf("main name not set")
}
if ctx.Float64("t") != 15.5 {
t.Errorf("short name not set")
}
},
}
a.Run([]string{"run"})
}
func TestParseMultiFloat64FromEnvCascade(t *testing.T) {
os.Clearenv()
os.Setenv("APP_TIMEOUT_SECONDS", "15.5")
a := cli.App{
Flags: []cli.Flag{
cli.Float64Flag{Name: "timeout, t", EnvVar: "COMPAT_TIMEOUT_SECONDS,APP_TIMEOUT_SECONDS"},
},
Action: func(ctx *cli.Context) {
if ctx.Float64("timeout") != 15.5 {
t.Errorf("main name not set")
}
if ctx.Float64("t") != 15.5 {
t.Errorf("short name not set")
}
},
}
a.Run([]string{"run"})
}
func TestParseMultiBool(t *testing.T) {
a := cli.App{
Flags: []cli.Flag{
cli.BoolFlag{Name: "serve, s"},
},
Action: func(ctx *cli.Context) {
if ctx.Bool("serve") != true {
t.Errorf("main name not set")
}
if ctx.Bool("s") != true {
t.Errorf("short name not set")
}
},
}
a.Run([]string{"run", "--serve"})
}
func TestParseMultiBoolFromEnv(t *testing.T) {
os.Clearenv()
os.Setenv("APP_DEBUG", "1")
a := cli.App{
Flags: []cli.Flag{
cli.BoolFlag{Name: "debug, d", EnvVar: "APP_DEBUG"},
},
Action: func(ctx *cli.Context) {
if ctx.Bool("debug") != true {
t.Errorf("main name not set from env")
}
if ctx.Bool("d") != true {
t.Errorf("short name not set from env")
}
},
}
a.Run([]string{"run"})
}
func TestParseMultiBoolFromEnvCascade(t *testing.T) {
os.Clearenv()
os.Setenv("APP_DEBUG", "1")
a := cli.App{
Flags: []cli.Flag{
cli.BoolFlag{Name: "debug, d", EnvVar: "COMPAT_DEBUG,APP_DEBUG"},
},
Action: func(ctx *cli.Context) {
if ctx.Bool("debug") != true {
t.Errorf("main name not set from env")
}
if ctx.Bool("d") != true {
t.Errorf("short name not set from env")
}
},
}
a.Run([]string{"run"})
}
func TestParseMultiBoolT(t *testing.T) {
a := cli.App{
Flags: []cli.Flag{
cli.BoolTFlag{Name: "serve, s"},
},
Action: func(ctx *cli.Context) {
if ctx.BoolT("serve") != true {
t.Errorf("main name not set")
}
if ctx.BoolT("s") != true {
t.Errorf("short name not set")
}
},
}
a.Run([]string{"run", "--serve"})
}
func TestParseMultiBoolTFromEnv(t *testing.T) {
os.Clearenv()
os.Setenv("APP_DEBUG", "0")
a := cli.App{
Flags: []cli.Flag{
cli.BoolTFlag{Name: "debug, d", EnvVar: "APP_DEBUG"},
},
Action: func(ctx *cli.Context) {
if ctx.BoolT("debug") != false {
t.Errorf("main name not set from env")
}
if ctx.BoolT("d") != false {
t.Errorf("short name not set from env")
}
},
}
a.Run([]string{"run"})
}
func TestParseMultiBoolTFromEnvCascade(t *testing.T) {
os.Clearenv()
os.Setenv("APP_DEBUG", "0")
a := cli.App{
Flags: []cli.Flag{
cli.BoolTFlag{Name: "debug, d", EnvVar: "COMPAT_DEBUG,APP_DEBUG"},
},
Action: func(ctx *cli.Context) {
if ctx.BoolT("debug") != false {
t.Errorf("main name not set from env")
}
if ctx.BoolT("d") != false {
t.Errorf("short name not set from env")
}
},
}
a.Run([]string{"run"})
}
type Parser [2]string
func (p *Parser) Set(value string) error {
parts := strings.Split(value, ",")
if len(parts) != 2 {
return fmt.Errorf("invalid format")
}
(*p)[0] = parts[0]
(*p)[1] = parts[1]
return nil
}
func (p *Parser) String() string {
return fmt.Sprintf("%s,%s", p[0], p[1])
}
func TestParseGeneric(t *testing.T) {
a := cli.App{
Flags: []cli.Flag{
cli.GenericFlag{Name: "serve, s", Value: &Parser{}},
},
Action: func(ctx *cli.Context) {
if !reflect.DeepEqual(ctx.Generic("serve"), &Parser{"10", "20"}) {
t.Errorf("main name not set")
}
if !reflect.DeepEqual(ctx.Generic("s"), &Parser{"10", "20"}) {
t.Errorf("short name not set")
}
},
}
a.Run([]string{"run", "-s", "10,20"})
}
func TestParseGenericFromEnv(t *testing.T) {
os.Clearenv()
os.Setenv("APP_SERVE", "20,30")
a := cli.App{
Flags: []cli.Flag{
cli.GenericFlag{Name: "serve, s", Value: &Parser{}, EnvVar: "APP_SERVE"},
},
Action: func(ctx *cli.Context) {
if !reflect.DeepEqual(ctx.Generic("serve"), &Parser{"20", "30"}) {
t.Errorf("main name not set from env")
}
if !reflect.DeepEqual(ctx.Generic("s"), &Parser{"20", "30"}) {
t.Errorf("short name not set from env")
}
},
}
a.Run([]string{"run"})
}
func TestParseGenericFromEnvCascade(t *testing.T) {
os.Clearenv()
os.Setenv("APP_FOO", "99,2000")
a := cli.App{
Flags: []cli.Flag{
cli.GenericFlag{Name: "foos", Value: &Parser{}, EnvVar: "COMPAT_FOO,APP_FOO"},
},
Action: func(ctx *cli.Context) {
if !reflect.DeepEqual(ctx.Generic("foos"), &Parser{"99", "2000"}) {
t.Errorf("value not set from env")
}
},
}
a.Run([]string{"run"})
}

View File

@ -1,6 +1,12 @@
package cli
import "fmt"
import (
"fmt"
"io"
"strings"
"text/tabwriter"
"text/template"
)
// The text template for the Default help topic.
// cli.go uses text/template to render templates. You can
@ -9,30 +15,33 @@ var AppHelpTemplate = `NAME:
{{.Name}} - {{.Usage}}
USAGE:
{{.Name}} {{if .Flags}}[global options] {{end}}command{{if .Flags}} [command options]{{end}} [arguments...]
{{.HelpName}} {{if .Flags}}[global options]{{end}}{{if .Commands}} command [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}
{{if .Version}}
VERSION:
{{.Version}}
AUTHOR(S):
{{range .Authors}}{{ . }}
{{end}}
{{end}}{{if len .Authors}}
AUTHOR(S):
{{range .Authors}}{{ . }}{{end}}
{{end}}{{if .Commands}}
COMMANDS:
{{range .Commands}}{{join .Names ", "}}{{ "\t" }}{{.Usage}}
{{end}}{{if .Flags}}
{{end}}{{end}}{{if .Flags}}
GLOBAL OPTIONS:
{{range .Flags}}{{.}}
{{end}}{{end}}
{{end}}{{end}}{{if .Copyright }}
COPYRIGHT:
{{.Copyright}}
{{end}}
`
// The text template for the command help topic.
// cli.go uses text/template to render templates. You can
// render custom help text by setting this variable.
var CommandHelpTemplate = `NAME:
{{.Name}} - {{.Usage}}
{{.HelpName}} - {{.Usage}}
USAGE:
command {{.Name}}{{if .Flags}} [command options]{{end}} [arguments...]{{if .Description}}
{{.HelpName}}{{if .Flags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}{{if .Description}}
DESCRIPTION:
{{.Description}}{{end}}{{if .Flags}}
@ -46,10 +55,10 @@ OPTIONS:
// cli.go uses text/template to render templates. You can
// render custom help text by setting this variable.
var SubcommandHelpTemplate = `NAME:
{{.Name}} - {{.Usage}}
{{.HelpName}} - {{.Usage}}
USAGE:
{{.Name}} command{{if .Flags}} [command options]{{end}} [arguments...]
{{.HelpName}} command{{if .Flags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}}
COMMANDS:
{{range .Commands}}{{join .Names ", "}}{{ "\t" }}{{.Usage}}
@ -60,9 +69,10 @@ OPTIONS:
`
var helpCommand = Command{
Name: "help",
Aliases: []string{"h"},
Usage: "Shows a list of commands or help for one command",
Name: "help",
Aliases: []string{"h"},
Usage: "Shows a list of commands or help for one command",
ArgsUsage: "[command]",
Action: func(c *Context) {
args := c.Args()
if args.Present() {
@ -74,9 +84,10 @@ var helpCommand = Command{
}
var helpSubcommand = Command{
Name: "help",
Aliases: []string{"h"},
Usage: "Shows a list of commands or help for one command",
Name: "help",
Aliases: []string{"h"},
Usage: "Shows a list of commands or help for one command",
ArgsUsage: "[command]",
Action: func(c *Context) {
args := c.Args()
if args.Present() {
@ -87,16 +98,16 @@ var helpSubcommand = Command{
},
}
// Prints help for the App
type helpPrinter func(templ string, data interface{})
// Prints help for the App or Command
type helpPrinter func(w io.Writer, templ string, data interface{})
var HelpPrinter helpPrinter = nil
var HelpPrinter helpPrinter = printHelp
// Prints version for the App
var VersionPrinter = printVersion
func ShowAppHelp(c *Context) {
HelpPrinter(AppHelpTemplate, c.App)
HelpPrinter(c.App.Writer, AppHelpTemplate, c.App)
}
// Prints the list of subcommands as the default app completion method
@ -109,24 +120,24 @@ func DefaultAppComplete(c *Context) {
}
// Prints help for the given command
func ShowCommandHelp(c *Context, command string) {
func ShowCommandHelp(ctx *Context, command string) {
// show the subcommand help for a command with subcommands
if command == "" {
HelpPrinter(SubcommandHelpTemplate, c.App)
HelpPrinter(ctx.App.Writer, SubcommandHelpTemplate, ctx.App)
return
}
for _, c := range c.App.Commands {
for _, c := range ctx.App.Commands {
if c.HasName(command) {
HelpPrinter(CommandHelpTemplate, c)
HelpPrinter(ctx.App.Writer, CommandHelpTemplate, c)
return
}
}
if c.App.CommandNotFound != nil {
c.App.CommandNotFound(c, command)
if ctx.App.CommandNotFound != nil {
ctx.App.CommandNotFound(ctx, command)
} else {
fmt.Fprintf(c.App.Writer, "No help topic for '%v'\n", command)
fmt.Fprintf(ctx.App.Writer, "No help topic for '%v'\n", command)
}
}
@ -160,22 +171,42 @@ func ShowCommandCompletions(ctx *Context, command string) {
}
}
func checkVersion(c *Context) bool {
if c.GlobalBool("version") {
ShowVersion(c)
return true
func printHelp(out io.Writer, templ string, data interface{}) {
funcMap := template.FuncMap{
"join": strings.Join,
}
return false
w := tabwriter.NewWriter(out, 0, 8, 1, '\t', 0)
t := template.Must(template.New("help").Funcs(funcMap).Parse(templ))
err := t.Execute(w, data)
if err != nil {
panic(err)
}
w.Flush()
}
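// A minimal sketch, not part of the vendored diff: HelpPrinter now receives
// the io.Writer explicitly (see printHelp above), so help output can be
// redirected or replaced. It assumes the usual --help handling in app.Run,
// which is not shown in this diff; the app name and buffer are illustrative.
package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/codegangsta/cli"
)

func main() {
	var buf bytes.Buffer

	app := cli.NewApp()
	app.Name = "demo"
	app.Usage = "capture help output"
	app.Writer = &buf // the default printHelp renders the template here

	// Or swap in a custom printer with the new signature.
	cli.HelpPrinter = func(w io.Writer, templ string, data interface{}) {
		fmt.Fprintln(w, "custom help for", app.Name)
	}

	app.Run([]string{"demo", "--help"})
	fmt.Print(buf.String()) // "custom help for demo"
}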
func checkVersion(c *Context) bool {
found := false
if VersionFlag.Name != "" {
eachName(VersionFlag.Name, func(name string) {
if c.GlobalBool(name) || c.Bool(name) {
found = true
}
})
}
return found
}
func checkHelp(c *Context) bool {
if c.GlobalBool("h") || c.GlobalBool("help") {
ShowAppHelp(c)
return true
found := false
if HelpFlag.Name != "" {
eachName(HelpFlag.Name, func(name string) {
if c.GlobalBool(name) || c.Bool(name) {
found = true
}
})
}
return false
return found
}
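// A minimal sketch, not part of the vendored diff: checkVersion and checkHelp
// above now read the flag names from the package-level VersionFlag and
// HelpFlag variables, so an application can rename them before calling Run.
// This assumes the usual HelpFlag wiring in app.go, which is not part of this
// diff; the names are illustrative.
package main

import (
	"fmt"

	"github.com/codegangsta/cli"
)

func main() {
	cli.HelpFlag = cli.BoolFlag{Name: "manual, m", Usage: "show the manual"}

	app := cli.NewApp()
	app.Name = "demo"
	app.Action = func(c *cli.Context) { fmt.Println("running") }

	app.Run([]string{"demo", "--manual"}) // shows help instead of running the action
}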
func checkCommandHelp(c *Context, name string) bool {

View File

@ -1,19 +0,0 @@
package cli_test
import (
"reflect"
"testing"
)
/* Test Helpers */
func expect(t *testing.T, a interface{}, b interface{}) {
if a != b {
t.Errorf("Expected %v (type %v) - Got %v (type %v)", b, reflect.TypeOf(b), a, reflect.TypeOf(a))
}
}
func refute(t *testing.T, a interface{}, b interface{}) {
if a == b {
t.Errorf("Did not expect %v (type %v) - Got %v (type %v)", b, reflect.TypeOf(b), a, reflect.TypeOf(a))
}
}

View File

@ -1,3 +1,21 @@
// Copyright 2015 The go-ethereum Authors
// Copyright 2015 Lefteris Karapetsas <lefteris@refu.co>
// Copyright 2015 Matthew Wampler-Doty <matthew.wampler.doty@gmail.com>
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethash
/*
@ -30,8 +48,8 @@ import (
)
var (
minDifficulty = new(big.Int).Exp(big.NewInt(2), big.NewInt(256), big.NewInt(0))
sharedLight = new(Light)
maxUint256 = new(big.Int).Exp(big.NewInt(2), big.NewInt(256), big.NewInt(0))
sharedLight = new(Light)
)
const (
@ -140,7 +158,7 @@ func (l *Light) Verify(block pow.Block) bool {
// the finalizer before the call completes.
_ = cache
// The actual check.
target := new(big.Int).Div(minDifficulty, difficulty)
target := new(big.Int).Div(maxUint256, difficulty)
return h256ToHash(ret.result).Big().Cmp(target) <= 0
}
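// A standalone sketch, not part of the diff, of the check performed above:
// the renamed maxUint256 constant is 2^256, and a sealed block is valid when
// its ethash result hash, read as a 256-bit integer, is at most
// 2^256 / difficulty. The difficulty and hash values are illustrative.
package main

import (
	"fmt"
	"math/big"
)

func validPoW(resultHash, difficulty *big.Int) bool {
	maxUint256 := new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)
	target := new(big.Int).Div(maxUint256, difficulty)
	return resultHash.Cmp(target) <= 0
}

func main() {
	difficulty := big.NewInt(131072)             // the Frontier minimum difficulty
	hash := new(big.Int).Lsh(big.NewInt(1), 200) // a small 256-bit value, well below the target
	fmt.Println(validPoW(hash, difficulty))      // true: 2^200 <= 2^256 / 2^17
}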
@ -199,7 +217,7 @@ func (d *dag) generate() {
if d.dir == "" {
d.dir = DefaultDir
}
glog.V(logger.Info).Infof("Generating DAG for epoch %d (%x)", d.epoch, seedHash)
glog.V(logger.Info).Infof("Generating DAG for epoch %d (size %d) (%x)", d.epoch, dagSize, seedHash)
// Generate a temporary cache.
// TODO: this could share the cache with Light
cache := C.ethash_light_new_internal(cacheSize, (*C.ethash_h256_t)(unsafe.Pointer(&seedHash[0])))
@ -220,14 +238,18 @@ func (d *dag) generate() {
})
}
func freeDAG(h *dag) {
C.ethash_full_delete(h.ptr)
h.ptr = nil
func freeDAG(d *dag) {
C.ethash_full_delete(d.ptr)
d.ptr = nil
}
func (d *dag) Ptr() unsafe.Pointer {
return unsafe.Pointer(d.ptr.data)
}
//export ethashGoCallback
func ethashGoCallback(percent C.unsigned) C.int {
glog.V(logger.Info).Infof("Still generating DAG: %d%%", percent)
glog.V(logger.Info).Infof("Generating DAG: %d%%", percent)
return 0
}
@ -273,7 +295,7 @@ func (pow *Full) getDAG(blockNum uint64) (d *dag) {
return d
}
func (pow *Full) Search(block pow.Block, stop <-chan struct{}) (nonce uint64, mixDigest []byte) {
func (pow *Full) Search(block pow.Block, stop <-chan struct{}, index int) (nonce uint64, mixDigest []byte) {
dag := pow.getDAG(block.NumberU64())
r := rand.New(rand.NewSource(time.Now().UnixNano()))
@ -286,7 +308,7 @@ func (pow *Full) Search(block pow.Block, stop <-chan struct{}) (nonce uint64, mi
nonce = uint64(r.Int63())
hash := hashToH256(block.HashNoNonce())
target := new(big.Int).Div(minDifficulty, diff)
target := new(big.Int).Div(maxUint256, diff)
for {
select {
case <-stop:

View File

@ -0,0 +1,629 @@
// Copyright 2014 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// +build opencl
package ethash
//#cgo LDFLAGS: -w
//#include <stdint.h>
//#include <string.h>
//#include "src/libethash/internal.h"
import "C"
import (
crand "crypto/rand"
"encoding/binary"
"fmt"
"math"
"math/big"
mrand "math/rand"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"unsafe"
"github.com/Gustav-Simonsson/go-opencl/cl"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/pow"
)
/*
This code has two main entry points:
1. The initCL(...) function configures one or more OpenCL devices
(for now only GPUs) and loads the Ethash DAG onto device memory
2. The Search(...) function loads an Ethash nonce into device(s) memory and
executes the Ethash OpenCL kernel.
Throughout the code, we refer to "host memory" and "device memory".
For most systems (e.g. regular PC GPU miner) the host memory is RAM and
device memory is the GPU global memory (e.g. GDDR5).
References mentioned in code comments:
1. https://github.com/ethereum/wiki/wiki/Ethash
2. https://github.com/ethereum/cpp-ethereum/blob/develop/libethash-cl/ethash_cl_miner.cpp
3. https://www.khronos.org/registry/cl/sdk/1.2/docs/man/xhtml/
4. http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_OpenCL_Programming_User_Guide.pdf
*/
type OpenCLDevice struct {
deviceId int
device *cl.Device
openCL11 bool // OpenCL versions 1.1 and 1.2 are handled a bit differently
openCL12 bool
dagBuf *cl.MemObject // Ethash full DAG in device mem
headerBuf *cl.MemObject // Hash of block-to-mine in device mem
searchBuffers []*cl.MemObject
searchKernel *cl.Kernel
hashKernel *cl.Kernel
queue *cl.CommandQueue
ctx *cl.Context
workGroupSize int
nonceRand *mrand.Rand // seeded by crypto/rand, see comments where it's initialised
result common.Hash
}
type OpenCLMiner struct {
mu sync.Mutex
ethash *Ethash // Ethash full DAG & cache in host mem
deviceIds []int
devices []*OpenCLDevice
dagSize uint64
hashRate int32 // Go atomics & uint64 have some issues; int32 is supported on all platforms
}
type pendingSearch struct {
bufIndex uint32
startNonce uint64
}
const (
SIZEOF_UINT32 = 4
// See [1]
ethashMixBytesLen = 128
ethashAccesses = 64
// See [4]
workGroupSize = 32 // must be multiple of 8
maxSearchResults = 63
searchBufSize = 2
globalWorkSize = 1024 * 256
)
func NewCL(deviceIds []int) *OpenCLMiner {
ids := make([]int, len(deviceIds))
copy(ids, deviceIds)
return &OpenCLMiner{
ethash: New(),
dagSize: 0, // to see if we need to update DAG.
deviceIds: ids,
}
}
func PrintDevices() {
fmt.Println("=============================================")
fmt.Println("============ OpenCL Device Info =============")
fmt.Println("=============================================")
var found []*cl.Device
platforms, err := cl.GetPlatforms()
if err != nil {
fmt.Println("Plaform error (check your OpenCL installation): %v", err)
return
}
for i, p := range platforms {
fmt.Println("Platform id ", i)
fmt.Println("Platform Name ", p.Name())
fmt.Println("Platform Vendor ", p.Vendor())
fmt.Println("Platform Version ", p.Version())
fmt.Println("Platform Extensions ", p.Extensions())
fmt.Println("Platform Profile ", p.Profile())
fmt.Println("")
devices, err := cl.GetDevices(p, cl.DeviceTypeGPU)
if err != nil {
fmt.Println("Device error (check your GPU drivers) :", err)
return
}
for _, d := range devices {
fmt.Println("Device OpenCL id ", i)
fmt.Println("Device id for mining ", len(found))
fmt.Println("Device Name ", d.Name())
fmt.Println("Vendor ", d.Vendor())
fmt.Println("Version ", d.Version())
fmt.Println("Driver version ", d.DriverVersion())
fmt.Println("Address bits ", d.AddressBits())
fmt.Println("Max clock freq ", d.MaxClockFrequency())
fmt.Println("Global mem size ", d.GlobalMemSize())
fmt.Println("Max constant buffer size", d.MaxConstantBufferSize())
fmt.Println("Max mem alloc size ", d.MaxMemAllocSize())
fmt.Println("Max compute units ", d.MaxComputeUnits())
fmt.Println("Max work group size ", d.MaxWorkGroupSize())
fmt.Println("Max work item sizes ", d.MaxWorkItemSizes())
fmt.Println("=============================================")
found = append(found, d)
}
}
if len(found) == 0 {
fmt.Println("Found no GPU(s). Check that your OS can see the GPU(s)")
} else {
var idsFormat string
for i := 0; i < len(found); i++ {
idsFormat += strconv.Itoa(i)
if i != len(found)-1 {
idsFormat += ","
}
}
fmt.Printf("Found %v devices. Benchmark first GPU: geth gpubench 0\n", len(found))
fmt.Printf("Mine using all GPUs: geth --minegpu %v\n", idsFormat)
}
}
// See [2]. We basically do the same here, but the Go OpenCL bindings
// are at a slightly higher abstraction level.
func InitCL(blockNum uint64, c *OpenCLMiner) error {
platforms, err := cl.GetPlatforms()
if err != nil {
return fmt.Errorf("Plaform error: %v\nCheck your OpenCL installation and then run geth gpuinfo", err)
}
var devices []*cl.Device
for _, p := range platforms {
ds, err := cl.GetDevices(p, cl.DeviceTypeGPU)
if err != nil {
return fmt.Errorf("Devices error: %v\nCheck your GPU drivers and then run geth gpuinfo", err)
}
for _, d := range ds {
devices = append(devices, d)
}
}
pow := New()
_ = pow.getDAG(blockNum) // generates DAG if we don't have it
pow.Light.getCache(blockNum) // and cache
c.ethash = pow
dagSize := uint64(C.ethash_get_datasize(C.uint64_t(blockNum)))
c.dagSize = dagSize
for _, id := range c.deviceIds {
if id > len(devices)-1 {
return fmt.Errorf("Device id not found. See available device ids with: geth gpuinfo")
} else {
err := initCLDevice(id, devices[id], c)
if err != nil {
return err
}
}
}
if len(c.devices) == 0 {
return fmt.Errorf("No GPU devices found")
}
return nil
}
func initCLDevice(deviceId int, device *cl.Device, c *OpenCLMiner) error {
devMaxAlloc := uint64(device.MaxMemAllocSize())
devGlobalMem := uint64(device.GlobalMemSize())
// TODO: more fine grained version logic
if device.Version() == "OpenCL 1.0" {
fmt.Println("Device OpenCL version not supported: ", device.Version())
return fmt.Errorf("opencl version not supported")
}
var cl11, cl12 bool
if device.Version() == "OpenCL 1.1" {
cl11 = true
}
if device.Version() == "OpenCL 1.2" {
cl12 = true
}
// log warnings but carry on; some device drivers report inaccurate values
if c.dagSize > devGlobalMem {
fmt.Printf("WARNING: device memory may be insufficient: %v. DAG size: %v.\n", devGlobalMem, c.dagSize)
}
if c.dagSize > devMaxAlloc {
fmt.Printf("WARNING: DAG size (%v) larger than device max memory allocation size (%v).\n", c.dagSize, devMaxAlloc)
fmt.Printf("You probably have to export GPU_MAX_ALLOC_PERCENT=95\n")
}
fmt.Printf("Initialising device %v: %v\n", deviceId, device.Name())
context, err := cl.CreateContext([]*cl.Device{device})
if err != nil {
return fmt.Errorf("failed creating context:", err)
}
// TODO: test running with CL_QUEUE_PROFILING_ENABLE for profiling?
queue, err := context.CreateCommandQueue(device, 0)
if err != nil {
return fmt.Errorf("command queue err:", err)
}
// See [4] section 3.2 and [3] "clBuildProgram".
// The OpenCL kernel code is compiled at run-time.
kvs := make(map[string]string, 4)
kvs["GROUP_SIZE"] = strconv.FormatUint(workGroupSize, 10)
kvs["DAG_SIZE"] = strconv.FormatUint(c.dagSize/ethashMixBytesLen, 10)
kvs["ACCESSES"] = strconv.FormatUint(ethashAccesses, 10)
kvs["MAX_OUTPUTS"] = strconv.FormatUint(maxSearchResults, 10)
kernelCode := replaceWords(kernel, kvs)
program, err := context.CreateProgramWithSource([]string{kernelCode})
if err != nil {
return fmt.Errorf("program err:", err)
}
/* if using AMD OpenCL impl, you can set this to debug on x86 CPU device.
see AMD OpenCL programming guide section 4.2
export in shell before running:
export AMD_OCL_BUILD_OPTIONS_APPEND="-g -O0"
export CPU_MAX_COMPUTE_UNITS=1
buildOpts := "-g -cl-opt-disable"
*/
buildOpts := ""
err = program.BuildProgram([]*cl.Device{device}, buildOpts)
if err != nil {
return fmt.Errorf("program build err:", err)
}
var searchKernelName, hashKernelName string
searchKernelName = "ethash_search"
hashKernelName = "ethash_hash"
searchKernel, err := program.CreateKernel(searchKernelName)
hashKernel, err := program.CreateKernel(hashKernelName)
if err != nil {
return fmt.Errorf("kernel err:", err)
}
// TODO: when this DAG size appears, patch the Go bindings
// (context.go) to work with uint64 as size_t
if c.dagSize > math.MaxInt32 {
fmt.Println("DAG too large for allocation.")
return fmt.Errorf("DAG too large for alloc")
}
// TODO: patch up Go bindings to work with size_t, will overflow if > maxint32
// TODO: fuck. shit's gonna overflow around 2017-06-09 12:17:02
dagBuf := *(new(*cl.MemObject))
dagBuf, err = context.CreateEmptyBuffer(cl.MemReadOnly, int(c.dagSize))
if err != nil {
return fmt.Errorf("allocating dag buf failed: ", err)
}
// write DAG to device mem
dagPtr := unsafe.Pointer(c.ethash.Full.current.ptr.data)
_, err = queue.EnqueueWriteBuffer(dagBuf, true, 0, int(c.dagSize), dagPtr, nil)
if err != nil {
return fmt.Errorf("writing to dag buf failed: ", err)
}
searchBuffers := make([]*cl.MemObject, searchBufSize)
for i := 0; i < searchBufSize; i++ {
searchBuff, err := context.CreateEmptyBuffer(cl.MemWriteOnly, (1+maxSearchResults)*SIZEOF_UINT32)
if err != nil {
return fmt.Errorf("search buffer err:", err)
}
searchBuffers[i] = searchBuff
}
headerBuf, err := context.CreateEmptyBuffer(cl.MemReadOnly, 32)
if err != nil {
return fmt.Errorf("header buffer err:", err)
}
// Unique, random nonces are crucial for mining efficiency.
// While we do not need a cryptographically secure PRNG for nonces,
// we want a uniform distribution and minimal repetition of nonces.
// We could guarantee strict uniqueness of nonces by generating unique ranges,
// but an int64 seed from crypto/rand should be good enough.
// We then use math/rand for speed and to avoid draining the OS entropy pool.
seed, err := crand.Int(crand.Reader, big.NewInt(math.MaxInt64))
if err != nil {
return err
}
nonceRand := mrand.New(mrand.NewSource(seed.Int64()))
deviceStruct := &OpenCLDevice{
deviceId: deviceId,
device: device,
openCL11: cl11,
openCL12: cl12,
dagBuf: dagBuf,
headerBuf: headerBuf,
searchBuffers: searchBuffers,
searchKernel: searchKernel,
hashKernel: hashKernel,
queue: queue,
ctx: context,
workGroupSize: workGroupSize,
nonceRand: nonceRand,
}
c.devices = append(c.devices, deviceStruct)
return nil
}
func (c *OpenCLMiner) Search(block pow.Block, stop <-chan struct{}, index int) (uint64, []byte) {
c.mu.Lock()
newDagSize := uint64(C.ethash_get_datasize(C.uint64_t(block.NumberU64())))
if newDagSize > c.dagSize {
// TODO: clean up buffers from previous DAG?
err := InitCL(block.NumberU64(), c)
if err != nil {
fmt.Println("OpenCL init error: ", err)
return 0, []byte{0}
}
}
defer c.mu.Unlock()
// Avoid unneeded OpenCL initialisation if we received stop while running InitCL
select {
case <-stop:
return 0, []byte{0}
default:
}
headerHash := block.HashNoNonce()
diff := block.Difficulty()
target256 := new(big.Int).Div(maxUint256, diff)
target64 := new(big.Int).Rsh(target256, 192).Uint64()
var zero uint32 = 0
d := c.devices[index]
_, err := d.queue.EnqueueWriteBuffer(d.headerBuf, false, 0, 32, unsafe.Pointer(&headerHash[0]), nil)
if err != nil {
fmt.Println("Error in Search clEnqueueWriterBuffer : ", err)
return 0, []byte{0}
}
for i := 0; i < searchBufSize; i++ {
_, err := d.queue.EnqueueWriteBuffer(d.searchBuffers[i], false, 0, 4, unsafe.Pointer(&zero), nil)
if err != nil {
fmt.Println("Error in Search clEnqueueWriterBuffer : ", err)
return 0, []byte{0}
}
}
// wait for all search buffers to complete
err = d.queue.Finish()
if err != nil {
fmt.Println("Error in Search clFinish : ", err)
return 0, []byte{0}
}
err = d.searchKernel.SetArg(1, d.headerBuf)
if err != nil {
fmt.Println("Error in Search clSetKernelArg : ", err)
return 0, []byte{0}
}
err = d.searchKernel.SetArg(2, d.dagBuf)
if err != nil {
fmt.Println("Error in Search clSetKernelArg : ", err)
return 0, []byte{0}
}
err = d.searchKernel.SetArg(4, target64)
if err != nil {
fmt.Println("Error in Search clSetKernelArg : ", err)
return 0, []byte{0}
}
err = d.searchKernel.SetArg(5, uint32(math.MaxUint32))
if err != nil {
fmt.Println("Error in Search clSetKernelArg : ", err)
return 0, []byte{0}
}
// wait on this before returning
var preReturnEvent *cl.Event
if d.openCL12 {
preReturnEvent, err = d.ctx.CreateUserEvent()
if err != nil {
fmt.Println("Error in Search create CL user event : ", err)
return 0, []byte{0}
}
}
pending := make([]pendingSearch, 0, searchBufSize)
var p *pendingSearch
searchBufIndex := uint32(0)
var checkNonce uint64
loops := int64(0)
prevHashRate := int32(0)
start := time.Now().UnixNano()
// We grab a single random nonce and set it as an argument to the kernel search function.
// The device will then add each local thread's gid to the nonce, creating a unique nonce
// for each device compute unit executing in parallel.
initNonce := uint64(d.nonceRand.Int63())
for nonce := initNonce; ; nonce += uint64(globalWorkSize) {
select {
case <-stop:
/*
if d.openCL12 {
err = cl.WaitForEvents([]*cl.Event{preReturnEvent})
if err != nil {
fmt.Println("Error in Search WaitForEvents: ", err)
}
}
*/
atomic.AddInt32(&c.hashRate, -prevHashRate)
return 0, []byte{0}
default:
}
if (loops % (1 << 7)) == 0 {
elapsed := time.Now().UnixNano() - start
// TODO: verify if this is correct hash rate calculation
hashes := (float64(1e9) / float64(elapsed)) * float64(loops*1024*256)
hashrateDiff := int32(hashes) - prevHashRate
prevHashRate = int32(hashes)
atomic.AddInt32(&c.hashRate, hashrateDiff)
}
loops++
err = d.searchKernel.SetArg(0, d.searchBuffers[searchBufIndex])
if err != nil {
fmt.Println("Error in Search clSetKernelArg : ", err)
return 0, []byte{0}
}
err = d.searchKernel.SetArg(3, nonce)
if err != nil {
fmt.Println("Error in Search clSetKernelArg : ", err)
return 0, []byte{0}
}
// execute kernel
_, err := d.queue.EnqueueNDRangeKernel(
d.searchKernel,
[]int{0},
[]int{globalWorkSize},
[]int{d.workGroupSize},
nil)
if err != nil {
fmt.Println("Error in Search clEnqueueNDRangeKernel : ", err)
return 0, []byte{0}
}
pending = append(pending, pendingSearch{bufIndex: searchBufIndex, startNonce: nonce})
searchBufIndex = (searchBufIndex + 1) % searchBufSize
if len(pending) == searchBufSize {
p = &(pending[searchBufIndex])
cres, _, err := d.queue.EnqueueMapBuffer(d.searchBuffers[p.bufIndex], true,
cl.MapFlagRead, 0, (1+maxSearchResults)*SIZEOF_UINT32,
nil)
if err != nil {
fmt.Println("Error in Search clEnqueueMapBuffer: ", err)
return 0, []byte{0}
}
results := cres.ByteSlice()
nfound := binary.LittleEndian.Uint32(results)
nfound = uint32(math.Min(float64(nfound), float64(maxSearchResults)))
// OpenCL returns the offsets from the start nonce
for i := uint32(0); i < nfound; i++ {
lo := (i + 1) * SIZEOF_UINT32
hi := (i + 2) * SIZEOF_UINT32
upperNonce := uint64(binary.LittleEndian.Uint32(results[lo:hi]))
checkNonce = p.startNonce + upperNonce
if checkNonce != 0 {
cn := C.uint64_t(checkNonce)
ds := C.uint64_t(c.dagSize)
// We verify that the nonce is indeed a solution by
// executing the Ethash verification function (on the CPU).
ret := C.ethash_light_compute_internal(c.ethash.Light.current.ptr, ds, hashToH256(headerHash), cn)
// TODO: return result first
if ret.success && h256ToHash(ret.result).Big().Cmp(target256) <= 0 {
_, err = d.queue.EnqueueUnmapMemObject(d.searchBuffers[p.bufIndex], cres, nil)
if err != nil {
fmt.Println("Error in Search clEnqueueUnmapMemObject: ", err)
}
if d.openCL12 {
err = cl.WaitForEvents([]*cl.Event{preReturnEvent})
if err != nil {
fmt.Println("Error in Search WaitForEvents: ", err)
}
}
return checkNonce, C.GoBytes(unsafe.Pointer(&ret.mix_hash), C.int(32))
}
_, err := d.queue.EnqueueWriteBuffer(d.searchBuffers[p.bufIndex], false, 0, 4, unsafe.Pointer(&zero), nil)
if err != nil {
fmt.Println("Error in Search cl: EnqueueWriteBuffer", err)
return 0, []byte{0}
}
}
}
_, err = d.queue.EnqueueUnmapMemObject(d.searchBuffers[p.bufIndex], cres, nil)
if err != nil {
fmt.Println("Error in Search clEnqueueUnMapMemObject: ", err)
return 0, []byte{0}
}
pending = append(pending[:searchBufIndex], pending[searchBufIndex+1:]...)
}
}
if d.openCL12 {
err := cl.WaitForEvents([]*cl.Event{preReturnEvent})
if err != nil {
fmt.Println("Error in Search clWaitForEvents: ", err)
return 0, []byte{0}
}
}
return 0, []byte{0}
}
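// A standalone sketch, not part of the diff, of the target64 computation used
// in Search above: the OpenCL kernel compares only the upper 64 bits of the
// hash, so the 256-bit target is reduced to its top quadword with a 192-bit
// right shift. The difficulty value is illustrative.
package main

import (
	"fmt"
	"math/big"
)

func main() {
	maxUint256 := new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)
	difficulty := big.NewInt(17000000000)

	target256 := new(big.Int).Div(maxUint256, difficulty)
	target64 := new(big.Int).Rsh(target256, 192).Uint64()

	// floor(2^256/d) >> 192 equals floor(2^64/d), here 1085102592.
	fmt.Println(target64)
}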
func (c *OpenCLMiner) Verify(block pow.Block) bool {
return c.ethash.Light.Verify(block)
}
func (c *OpenCLMiner) GetHashrate() int64 {
return int64(atomic.LoadInt32(&c.hashRate))
}
func (c *OpenCLMiner) Turbo(on bool) {
// This is GPU mining. Always be turbo.
}
func replaceWords(text string, kvs map[string]string) string {
for k, v := range kvs {
text = strings.Replace(text, k, v, -1)
}
return text
}
func logErr(err error) {
if err != nil {
fmt.Println("Error in OpenCL call:", err)
}
}
func argErr(err error) error {
return fmt.Errorf("arg err: %v", err)
}

View File

@ -0,0 +1,600 @@
package ethash
/* DO NOT EDIT!!!
This code is version controlled at
https://github.com/ethereum/cpp-ethereum/blob/develop/libethash-cl/ethash_cl_miner_kernel.cl
If needed change it there first, then copy over here.
*/
const kernel = `
// author Tim Hughes <tim@twistedfury.com>
// Tested on Radeon HD 7850
// Hashrate: 15940347 hashes/s
// Bandwidth: 124533 MB/s
// search kernel should fit in <= 84 VGPRS (3 wavefronts)
#define THREADS_PER_HASH (128 / 16)
#define HASHES_PER_LOOP (GROUP_SIZE / THREADS_PER_HASH)
#define FNV_PRIME 0x01000193
__constant uint2 const Keccak_f1600_RC[24] = {
(uint2)(0x00000001, 0x00000000),
(uint2)(0x00008082, 0x00000000),
(uint2)(0x0000808a, 0x80000000),
(uint2)(0x80008000, 0x80000000),
(uint2)(0x0000808b, 0x00000000),
(uint2)(0x80000001, 0x00000000),
(uint2)(0x80008081, 0x80000000),
(uint2)(0x00008009, 0x80000000),
(uint2)(0x0000008a, 0x00000000),
(uint2)(0x00000088, 0x00000000),
(uint2)(0x80008009, 0x00000000),
(uint2)(0x8000000a, 0x00000000),
(uint2)(0x8000808b, 0x00000000),
(uint2)(0x0000008b, 0x80000000),
(uint2)(0x00008089, 0x80000000),
(uint2)(0x00008003, 0x80000000),
(uint2)(0x00008002, 0x80000000),
(uint2)(0x00000080, 0x80000000),
(uint2)(0x0000800a, 0x00000000),
(uint2)(0x8000000a, 0x80000000),
(uint2)(0x80008081, 0x80000000),
(uint2)(0x00008080, 0x80000000),
(uint2)(0x80000001, 0x00000000),
(uint2)(0x80008008, 0x80000000),
};
void keccak_f1600_round(uint2* a, uint r, uint out_size)
{
#if !__ENDIAN_LITTLE__
for (uint i = 0; i != 25; ++i)
a[i] = a[i].yx;
#endif
uint2 b[25];
uint2 t;
// Theta
b[0] = a[0] ^ a[5] ^ a[10] ^ a[15] ^ a[20];
b[1] = a[1] ^ a[6] ^ a[11] ^ a[16] ^ a[21];
b[2] = a[2] ^ a[7] ^ a[12] ^ a[17] ^ a[22];
b[3] = a[3] ^ a[8] ^ a[13] ^ a[18] ^ a[23];
b[4] = a[4] ^ a[9] ^ a[14] ^ a[19] ^ a[24];
t = b[4] ^ (uint2)(b[1].x << 1 | b[1].y >> 31, b[1].y << 1 | b[1].x >> 31);
a[0] ^= t;
a[5] ^= t;
a[10] ^= t;
a[15] ^= t;
a[20] ^= t;
t = b[0] ^ (uint2)(b[2].x << 1 | b[2].y >> 31, b[2].y << 1 | b[2].x >> 31);
a[1] ^= t;
a[6] ^= t;
a[11] ^= t;
a[16] ^= t;
a[21] ^= t;
t = b[1] ^ (uint2)(b[3].x << 1 | b[3].y >> 31, b[3].y << 1 | b[3].x >> 31);
a[2] ^= t;
a[7] ^= t;
a[12] ^= t;
a[17] ^= t;
a[22] ^= t;
t = b[2] ^ (uint2)(b[4].x << 1 | b[4].y >> 31, b[4].y << 1 | b[4].x >> 31);
a[3] ^= t;
a[8] ^= t;
a[13] ^= t;
a[18] ^= t;
a[23] ^= t;
t = b[3] ^ (uint2)(b[0].x << 1 | b[0].y >> 31, b[0].y << 1 | b[0].x >> 31);
a[4] ^= t;
a[9] ^= t;
a[14] ^= t;
a[19] ^= t;
a[24] ^= t;
// Rho Pi
b[0] = a[0];
b[10] = (uint2)(a[1].x << 1 | a[1].y >> 31, a[1].y << 1 | a[1].x >> 31);
b[7] = (uint2)(a[10].x << 3 | a[10].y >> 29, a[10].y << 3 | a[10].x >> 29);
b[11] = (uint2)(a[7].x << 6 | a[7].y >> 26, a[7].y << 6 | a[7].x >> 26);
b[17] = (uint2)(a[11].x << 10 | a[11].y >> 22, a[11].y << 10 | a[11].x >> 22);
b[18] = (uint2)(a[17].x << 15 | a[17].y >> 17, a[17].y << 15 | a[17].x >> 17);
b[3] = (uint2)(a[18].x << 21 | a[18].y >> 11, a[18].y << 21 | a[18].x >> 11);
b[5] = (uint2)(a[3].x << 28 | a[3].y >> 4, a[3].y << 28 | a[3].x >> 4);
b[16] = (uint2)(a[5].y << 4 | a[5].x >> 28, a[5].x << 4 | a[5].y >> 28);
b[8] = (uint2)(a[16].y << 13 | a[16].x >> 19, a[16].x << 13 | a[16].y >> 19);
b[21] = (uint2)(a[8].y << 23 | a[8].x >> 9, a[8].x << 23 | a[8].y >> 9);
b[24] = (uint2)(a[21].x << 2 | a[21].y >> 30, a[21].y << 2 | a[21].x >> 30);
b[4] = (uint2)(a[24].x << 14 | a[24].y >> 18, a[24].y << 14 | a[24].x >> 18);
b[15] = (uint2)(a[4].x << 27 | a[4].y >> 5, a[4].y << 27 | a[4].x >> 5);
b[23] = (uint2)(a[15].y << 9 | a[15].x >> 23, a[15].x << 9 | a[15].y >> 23);
b[19] = (uint2)(a[23].y << 24 | a[23].x >> 8, a[23].x << 24 | a[23].y >> 8);
b[13] = (uint2)(a[19].x << 8 | a[19].y >> 24, a[19].y << 8 | a[19].x >> 24);
b[12] = (uint2)(a[13].x << 25 | a[13].y >> 7, a[13].y << 25 | a[13].x >> 7);
b[2] = (uint2)(a[12].y << 11 | a[12].x >> 21, a[12].x << 11 | a[12].y >> 21);
b[20] = (uint2)(a[2].y << 30 | a[2].x >> 2, a[2].x << 30 | a[2].y >> 2);
b[14] = (uint2)(a[20].x << 18 | a[20].y >> 14, a[20].y << 18 | a[20].x >> 14);
b[22] = (uint2)(a[14].y << 7 | a[14].x >> 25, a[14].x << 7 | a[14].y >> 25);
b[9] = (uint2)(a[22].y << 29 | a[22].x >> 3, a[22].x << 29 | a[22].y >> 3);
b[6] = (uint2)(a[9].x << 20 | a[9].y >> 12, a[9].y << 20 | a[9].x >> 12);
b[1] = (uint2)(a[6].y << 12 | a[6].x >> 20, a[6].x << 12 | a[6].y >> 20);
// Chi
a[0] = bitselect(b[0] ^ b[2], b[0], b[1]);
a[1] = bitselect(b[1] ^ b[3], b[1], b[2]);
a[2] = bitselect(b[2] ^ b[4], b[2], b[3]);
a[3] = bitselect(b[3] ^ b[0], b[3], b[4]);
if (out_size >= 4)
{
a[4] = bitselect(b[4] ^ b[1], b[4], b[0]);
a[5] = bitselect(b[5] ^ b[7], b[5], b[6]);
a[6] = bitselect(b[6] ^ b[8], b[6], b[7]);
a[7] = bitselect(b[7] ^ b[9], b[7], b[8]);
a[8] = bitselect(b[8] ^ b[5], b[8], b[9]);
if (out_size >= 8)
{
a[9] = bitselect(b[9] ^ b[6], b[9], b[5]);
a[10] = bitselect(b[10] ^ b[12], b[10], b[11]);
a[11] = bitselect(b[11] ^ b[13], b[11], b[12]);
a[12] = bitselect(b[12] ^ b[14], b[12], b[13]);
a[13] = bitselect(b[13] ^ b[10], b[13], b[14]);
a[14] = bitselect(b[14] ^ b[11], b[14], b[10]);
a[15] = bitselect(b[15] ^ b[17], b[15], b[16]);
a[16] = bitselect(b[16] ^ b[18], b[16], b[17]);
a[17] = bitselect(b[17] ^ b[19], b[17], b[18]);
a[18] = bitselect(b[18] ^ b[15], b[18], b[19]);
a[19] = bitselect(b[19] ^ b[16], b[19], b[15]);
a[20] = bitselect(b[20] ^ b[22], b[20], b[21]);
a[21] = bitselect(b[21] ^ b[23], b[21], b[22]);
a[22] = bitselect(b[22] ^ b[24], b[22], b[23]);
a[23] = bitselect(b[23] ^ b[20], b[23], b[24]);
a[24] = bitselect(b[24] ^ b[21], b[24], b[20]);
}
}
// Iota
a[0] ^= Keccak_f1600_RC[r];
#if !__ENDIAN_LITTLE__
for (uint i = 0; i != 25; ++i)
a[i] = a[i].yx;
#endif
}
void keccak_f1600_no_absorb(ulong* a, uint in_size, uint out_size, uint isolate)
{
for (uint i = in_size; i != 25; ++i)
{
a[i] = 0;
}
#if __ENDIAN_LITTLE__
a[in_size] ^= 0x0000000000000001;
a[24-out_size*2] ^= 0x8000000000000000;
#else
a[in_size] ^= 0x0100000000000000;
a[24-out_size*2] ^= 0x0000000000000080;
#endif
// Originally I unrolled the first and last rounds to interface
// better with surrounding code, however I haven't done this
// without causing the AMD compiler to blow up the VGPR usage.
uint r = 0;
do
{
// This dynamic branch stops the AMD compiler unrolling the loop
// and additionally saves about 33% of the VGPRs, enough to gain another
// wavefront. Ideally we'd get 4 in flight, but 3 is the best I can
// massage out of the compiler. It doesn't really seem to matter how
// much we try and help the compiler save VGPRs because it seems to throw
// that information away, hence the implementation of keccak here
// doesn't bother.
if (isolate)
{
keccak_f1600_round((uint2*)a, r++, 25);
}
}
while (r < 23);
// final round optimised for digest size
keccak_f1600_round((uint2*)a, r++, out_size);
}
#define copy(dst, src, count) for (uint i = 0; i != count; ++i) { (dst)[i] = (src)[i]; }
#define countof(x) (sizeof(x) / sizeof(x[0]))
uint fnv(uint x, uint y)
{
return x * FNV_PRIME ^ y;
}
uint4 fnv4(uint4 x, uint4 y)
{
return x * FNV_PRIME ^ y;
}
uint fnv_reduce(uint4 v)
{
return fnv(fnv(fnv(v.x, v.y), v.z), v.w);
}
typedef union
{
ulong ulongs[32 / sizeof(ulong)];
uint uints[32 / sizeof(uint)];
} hash32_t;
typedef union
{
ulong ulongs[64 / sizeof(ulong)];
uint4 uint4s[64 / sizeof(uint4)];
} hash64_t;
typedef union
{
uint uints[128 / sizeof(uint)];
uint4 uint4s[128 / sizeof(uint4)];
} hash128_t;
hash64_t init_hash(__constant hash32_t const* header, ulong nonce, uint isolate)
{
hash64_t init;
uint const init_size = countof(init.ulongs);
uint const hash_size = countof(header->ulongs);
// sha3_512(header .. nonce)
ulong state[25];
copy(state, header->ulongs, hash_size);
state[hash_size] = nonce;
keccak_f1600_no_absorb(state, hash_size + 1, init_size, isolate);
copy(init.ulongs, state, init_size);
return init;
}
uint inner_loop_chunks(uint4 init, uint thread_id, __local uint* share, __global hash128_t const* g_dag, __global hash128_t const* g_dag1, __global hash128_t const* g_dag2, __global hash128_t const* g_dag3, uint isolate)
{
uint4 mix = init;
// share init0
if (thread_id == 0)
*share = mix.x;
barrier(CLK_LOCAL_MEM_FENCE);
uint init0 = *share;
uint a = 0;
do
{
bool update_share = thread_id == (a/4) % THREADS_PER_HASH;
#pragma unroll
for (uint i = 0; i != 4; ++i)
{
if (update_share)
{
uint m[4] = { mix.x, mix.y, mix.z, mix.w };
*share = fnv(init0 ^ (a+i), m[i]) % DAG_SIZE;
}
barrier(CLK_LOCAL_MEM_FENCE);
mix = fnv4(mix, *share>=3 * DAG_SIZE / 4 ? g_dag3[*share - 3 * DAG_SIZE / 4].uint4s[thread_id] : *share>=DAG_SIZE / 2 ? g_dag2[*share - DAG_SIZE / 2].uint4s[thread_id] : *share>=DAG_SIZE / 4 ? g_dag1[*share - DAG_SIZE / 4].uint4s[thread_id]:g_dag[*share].uint4s[thread_id]);
}
} while ((a += 4) != (ACCESSES & isolate));
return fnv_reduce(mix);
}
uint inner_loop(uint4 init, uint thread_id, __local uint* share, __global hash128_t const* g_dag, uint isolate)
{
uint4 mix = init;
// share init0
if (thread_id == 0)
*share = mix.x;
barrier(CLK_LOCAL_MEM_FENCE);
uint init0 = *share;
uint a = 0;
do
{
bool update_share = thread_id == (a/4) % THREADS_PER_HASH;
#pragma unroll
for (uint i = 0; i != 4; ++i)
{
if (update_share)
{
uint m[4] = { mix.x, mix.y, mix.z, mix.w };
*share = fnv(init0 ^ (a+i), m[i]) % DAG_SIZE;
}
barrier(CLK_LOCAL_MEM_FENCE);
mix = fnv4(mix, g_dag[*share].uint4s[thread_id]);
}
}
while ((a += 4) != (ACCESSES & isolate));
return fnv_reduce(mix);
}
hash32_t final_hash(hash64_t const* init, hash32_t const* mix, uint isolate)
{
ulong state[25];
hash32_t hash;
uint const hash_size = countof(hash.ulongs);
uint const init_size = countof(init->ulongs);
uint const mix_size = countof(mix->ulongs);
// keccak_256(keccak_512(header..nonce) .. mix);
copy(state, init->ulongs, init_size);
copy(state + init_size, mix->ulongs, mix_size);
keccak_f1600_no_absorb(state, init_size+mix_size, hash_size, isolate);
// copy out
copy(hash.ulongs, state, hash_size);
return hash;
}
hash32_t compute_hash_simple(
__constant hash32_t const* g_header,
__global hash128_t const* g_dag,
ulong nonce,
uint isolate
)
{
hash64_t init = init_hash(g_header, nonce, isolate);
hash128_t mix;
for (uint i = 0; i != countof(mix.uint4s); ++i)
{
mix.uint4s[i] = init.uint4s[i % countof(init.uint4s)];
}
uint mix_val = mix.uints[0];
uint init0 = mix.uints[0];
uint a = 0;
do
{
uint pi = fnv(init0 ^ a, mix_val) % DAG_SIZE;
uint n = (a+1) % countof(mix.uints);
#pragma unroll
for (uint i = 0; i != countof(mix.uints); ++i)
{
mix.uints[i] = fnv(mix.uints[i], g_dag[pi].uints[i]);
mix_val = i == n ? mix.uints[i] : mix_val;
}
}
while (++a != (ACCESSES & isolate));
// reduce to output
hash32_t fnv_mix;
for (uint i = 0; i != countof(fnv_mix.uints); ++i)
{
fnv_mix.uints[i] = fnv_reduce(mix.uint4s[i]);
}
return final_hash(&init, &fnv_mix, isolate);
}
typedef union
{
struct
{
hash64_t init;
uint pad; // avoid lds bank conflicts
};
hash32_t mix;
} compute_hash_share;
hash32_t compute_hash(
__local compute_hash_share* share,
__constant hash32_t const* g_header,
__global hash128_t const* g_dag,
ulong nonce,
uint isolate
)
{
uint const gid = get_global_id(0);
// Compute one init hash per work item.
hash64_t init = init_hash(g_header, nonce, isolate);
// Threads work together in this phase in groups of 8.
uint const thread_id = gid % THREADS_PER_HASH;
uint const hash_id = (gid % GROUP_SIZE) / THREADS_PER_HASH;
hash32_t mix;
uint i = 0;
do
{
// share init with other threads
if (i == thread_id)
share[hash_id].init = init;
barrier(CLK_LOCAL_MEM_FENCE);
uint4 thread_init = share[hash_id].init.uint4s[thread_id % (64 / sizeof(uint4))];
barrier(CLK_LOCAL_MEM_FENCE);
uint thread_mix = inner_loop(thread_init, thread_id, share[hash_id].mix.uints, g_dag, isolate);
share[hash_id].mix.uints[thread_id] = thread_mix;
barrier(CLK_LOCAL_MEM_FENCE);
if (i == thread_id)
mix = share[hash_id].mix;
barrier(CLK_LOCAL_MEM_FENCE);
}
while (++i != (THREADS_PER_HASH & isolate));
return final_hash(&init, &mix, isolate);
}
hash32_t compute_hash_chunks(
__local compute_hash_share* share,
__constant hash32_t const* g_header,
__global hash128_t const* g_dag,
__global hash128_t const* g_dag1,
__global hash128_t const* g_dag2,
__global hash128_t const* g_dag3,
ulong nonce,
uint isolate
)
{
uint const gid = get_global_id(0);
// Compute one init hash per work item.
hash64_t init = init_hash(g_header, nonce, isolate);
// Threads work together in this phase in groups of 8.
uint const thread_id = gid % THREADS_PER_HASH;
uint const hash_id = (gid % GROUP_SIZE) / THREADS_PER_HASH;
hash32_t mix;
uint i = 0;
do
{
// share init with other threads
if (i == thread_id)
share[hash_id].init = init;
barrier(CLK_LOCAL_MEM_FENCE);
uint4 thread_init = share[hash_id].init.uint4s[thread_id % (64 / sizeof(uint4))];
barrier(CLK_LOCAL_MEM_FENCE);
uint thread_mix = inner_loop_chunks(thread_init, thread_id, share[hash_id].mix.uints, g_dag, g_dag1, g_dag2, g_dag3, isolate);
share[hash_id].mix.uints[thread_id] = thread_mix;
barrier(CLK_LOCAL_MEM_FENCE);
if (i == thread_id)
mix = share[hash_id].mix;
barrier(CLK_LOCAL_MEM_FENCE);
}
while (++i != (THREADS_PER_HASH & isolate));
return final_hash(&init, &mix, isolate);
}
__attribute__((reqd_work_group_size(GROUP_SIZE, 1, 1)))
__kernel void ethash_hash_simple(
__global hash32_t* g_hashes,
__constant hash32_t const* g_header,
__global hash128_t const* g_dag,
ulong start_nonce,
uint isolate
)
{
uint const gid = get_global_id(0);
g_hashes[gid] = compute_hash_simple(g_header, g_dag, start_nonce + gid, isolate);
}
__attribute__((reqd_work_group_size(GROUP_SIZE, 1, 1)))
__kernel void ethash_search_simple(
__global volatile uint* restrict g_output,
__constant hash32_t const* g_header,
__global hash128_t const* g_dag,
ulong start_nonce,
ulong target,
uint isolate
)
{
uint const gid = get_global_id(0);
hash32_t hash = compute_hash_simple(g_header, g_dag, start_nonce + gid, isolate);
if (hash.ulongs[countof(hash.ulongs)-1] < target)
{
uint slot = min(convert_uint(MAX_OUTPUTS), convert_uint(atomic_inc(&g_output[0]) + 1));
g_output[slot] = gid;
}
}
__attribute__((reqd_work_group_size(GROUP_SIZE, 1, 1)))
__kernel void ethash_hash(
__global hash32_t* g_hashes,
__constant hash32_t const* g_header,
__global hash128_t const* g_dag,
ulong start_nonce,
uint isolate
)
{
__local compute_hash_share share[HASHES_PER_LOOP];
uint const gid = get_global_id(0);
g_hashes[gid] = compute_hash(share, g_header, g_dag, start_nonce + gid, isolate);
}
__attribute__((reqd_work_group_size(GROUP_SIZE, 1, 1)))
__kernel void ethash_search(
__global volatile uint* restrict g_output,
__constant hash32_t const* g_header,
__global hash128_t const* g_dag,
ulong start_nonce,
ulong target,
uint isolate
)
{
__local compute_hash_share share[HASHES_PER_LOOP];
uint const gid = get_global_id(0);
hash32_t hash = compute_hash(share, g_header, g_dag, start_nonce + gid, isolate);
if (as_ulong(as_uchar8(hash.ulongs[0]).s76543210) < target)
{
uint slot = min((uint)MAX_OUTPUTS, atomic_inc(&g_output[0]) + 1);
g_output[slot] = gid;
}
}
__attribute__((reqd_work_group_size(GROUP_SIZE, 1, 1)))
__kernel void ethash_hash_chunks(
__global hash32_t* g_hashes,
__constant hash32_t const* g_header,
__global hash128_t const* g_dag,
__global hash128_t const* g_dag1,
__global hash128_t const* g_dag2,
__global hash128_t const* g_dag3,
ulong start_nonce,
uint isolate
)
{
__local compute_hash_share share[HASHES_PER_LOOP];
uint const gid = get_global_id(0);
g_hashes[gid] = compute_hash_chunks(share, g_header, g_dag, g_dag1, g_dag2, g_dag3,start_nonce + gid, isolate);
}
__attribute__((reqd_work_group_size(GROUP_SIZE, 1, 1)))
__kernel void ethash_search_chunks(
__global volatile uint* restrict g_output,
__constant hash32_t const* g_header,
__global hash128_t const* g_dag,
__global hash128_t const* g_dag1,
__global hash128_t const* g_dag2,
__global hash128_t const* g_dag3,
ulong start_nonce,
ulong target,
uint isolate
)
{
__local compute_hash_share share[HASHES_PER_LOOP];
uint const gid = get_global_id(0);
hash32_t hash = compute_hash_chunks(share, g_header, g_dag, g_dag1, g_dag2, g_dag3, start_nonce + gid, isolate);
if (as_ulong(as_uchar8(hash.ulongs[0]).s76543210) < target)
{
uint slot = min(convert_uint(MAX_OUTPUTS), convert_uint(atomic_inc(&g_output[0]) + 1));
g_output[slot] = gid;
}
}
`
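For orientation, the ethash_search* kernels above report results through g_output: slot 0 is an atomically incremented hit counter, and slots 1..MAX_OUTPUTS receive the global work-item IDs of candidate nonces, which the host adds to the batch's start nonce (this is what the Go Search loop earlier reads out of results before re-verifying on the CPU). A minimal, hypothetical host-side sketch of that decoding — the helper name is invented for illustration and is not part of the miner:
package ethash

import "encoding/binary"

// decodeSearchResults is illustrative only (not the miner's actual code).
// Slot 0 of the search buffer holds the hit count written via atomic_inc in
// the kernel; slots 1..maxOutputs hold candidate work-item IDs, each an
// offset from the batch's start nonce.
func decodeSearchResults(results []byte, startNonce uint64, maxOutputs int) []uint64 {
	found := int(binary.LittleEndian.Uint32(results[0:4]))
	if found > maxOutputs {
		found = maxOutputs // the kernel clamps the slot index, so extra hits are dropped
	}
	nonces := make([]uint64, 0, found)
	for i := 0; i < found; i++ {
		gid := binary.LittleEndian.Uint32(results[(i+1)*4 : (i+2)*4])
		nonces = append(nonces, startNonce+uint64(gid))
	}
	return nonces
}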

View File

@ -1,3 +1,20 @@
// Copyright 2015 The go-ethereum Authors
// Copyright 2015 Lefteris Karapetsas <lefteris@refu.co>
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethash
import (
@ -92,7 +109,7 @@ func TestEthashConcurrentVerify(t *testing.T) {
defer os.RemoveAll(eth.Full.Dir)
block := &testBlock{difficulty: big.NewInt(10)}
nonce, md := eth.Search(block, nil)
nonce, md := eth.Search(block, nil, 0)
block.nonce = nonce
block.mixDigest = common.BytesToHash(md)
@ -135,7 +152,7 @@ func TestEthashConcurrentSearch(t *testing.T) {
// launch n searches concurrently.
for i := 0; i < nsearch; i++ {
go func() {
nonce, md := eth.Search(block, stop)
nonce, md := eth.Search(block, stop, 0)
select {
case found <- searchRes{n: nonce, md: md}:
case <-stop:
@ -167,7 +184,7 @@ func TestEthashSearchAcrossEpoch(t *testing.T) {
for i := epochLength - 40; i < epochLength+40; i++ {
block := &testBlock{number: i, difficulty: big.NewInt(90)}
rand.Read(block.hashNoNonce[:])
nonce, md := eth.Search(block, nil)
nonce, md := eth.Search(block, nil, 0)
block.nonce = nonce
block.mixDigest = common.BytesToHash(md)
if !eth.Verify(block) {

View File

@ -1,3 +1,19 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package ethash
/*

View File

@ -35,10 +35,14 @@
#elif defined(__FreeBSD__) || defined(__DragonFly__) || defined(__NetBSD__)
#define ethash_swap_u32(input_) bswap32(input_)
#define ethash_swap_u64(input_) bswap64(input_)
#elif defined(__OpenBSD__)
#include <endian.h>
#define ethash_swap_u32(input_) swap32(input_)
#define ethash_swap_u64(input_) swap64(input_)
#else // posix
#include <byteswap.h>
#define ethash_swap_u32(input_) __bswap_32(input_)
#define ethash_swap_u64(input_) __bswap_64(input_)
#define ethash_swap_u32(input_) bswap_32(input_)
#define ethash_swap_u64(input_) bswap_64(input_)
#endif

View File

@ -29,6 +29,10 @@ extern "C" {
#define FNV_PRIME 0x01000193
/* The FNV-1 spec multiplies the prime with the input one byte (octet) in turn.
We instead multiply it with the full 32-bit input.
This gives a different result compared to a canonical FNV-1 implementation.
*/
static inline uint32_t fnv_hash(uint32_t const x, uint32_t const y)
{
return x * FNV_PRIME ^ y;

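To make that difference concrete, here is a small self-contained Go sketch (the helper names are invented for illustration) contrasting the word-wise multiply used by ethash with a canonical byte-wise FNV-1 over the same 32-bit input:
package main

import "fmt"

const fnvPrime = 0x01000193

// fnvEthash mirrors fnv_hash above: multiply the whole 32-bit word, then XOR.
func fnvEthash(x, y uint32) uint32 {
	return x*fnvPrime ^ y
}

// fnv1 applies a canonical FNV-1 step octet by octet.
func fnv1(h uint32, data []byte) uint32 {
	for _, b := range data {
		h *= fnvPrime
		h ^= uint32(b)
	}
	return h
}

func main() {
	x, y := uint32(0x811c9dc5), uint32(0x12345678)
	fmt.Printf("ethash fnv:      %08x\n", fnvEthash(x, y))
	fmt.Printf("canonical FNV-1: %08x\n", fnv1(x, []byte{0x12, 0x34, 0x56, 0x78}))
	// The two outputs differ, illustrating the deviation noted above.
}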
View File

@ -0,0 +1 @@
/gotasks/specs

View File

@ -5,10 +5,40 @@ Installation
Run `go get -u github.com/huin/goupnp`.
Documentation
-------------
All doc links below point to GoDoc: ![GoDoc](https://godoc.org/github.com/huin/goupnp?status.svg).
Supported DCPs (you probably want to start with one of these):
* [av1](https://godoc.org/github.com/huin/goupnp/dcps/av1) - Client for UPnP Device Control Protocol MediaServer v1 and MediaRenderer v1.
* [internetgateway1](https://godoc.org/github.com/huin/goupnp/dcps/internetgateway1) - Client for UPnP Device Control Protocol Internet Gateway Device v1.
* [internetgateway2](https://godoc.org/github.com/huin/goupnp/dcps/internetgateway2) - Client for UPnP Device Control Protocol Internet Gateway Device v2.
Core components:
* [(goupnp)](https://godoc.org/github.com/huin/goupnp) core library - contains datastructures and utilities typically used by the implemented DCPs.
* [httpu](https://godoc.org/github.com/huin/goupnp/httpu) HTTPU implementation, underlies SSDP.
* [ssdp](https://godoc.org/github.com/huin/goupnp/ssdp) SSDP client implementation (simple service discovery protocol) - used to discover UPnP services on a network.
* [soap](https://godoc.org/github.com/huin/goupnp/soap) SOAP client implementation (simple object access protocol) - used to communicate with discovered services.
Regenerating dcps generated source code:
----------------------------------------
1. Install gotasks: `go get -u github.com/jingweno/gotask`
2. Change to the gotasks directory: `cd gotasks`
3. Download UPnP specification data (if not done already): `wget http://upnp.org/resources/upnpresources.zip`
4. Regenerate source code: `gotask specgen -s upnpresources.zip -o ../dcps`
3. Run specgen task: `gotask specgen`
Supporting additional UPnP devices and services:
------------------------------------------------
Supporting additional services is, in the trivial case, simply a matter of
adding the service to the `dcpMetadata` whitelist in `gotasks/specgen_task.go`
(a sketch of such an entry appears below), regenerating the source code (see
above), and committing that source code. However, it would be helpful if anyone
needing such a service could test the generated code against the service they
have, and then report any trouble encountered as an [issue on this
project](https://github.com/huin/goupnp/issues/new). If it just works, then
please report at least minimal working functionality as an issue, and
optionally contribute the metadata upstream.
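As sketched here, a new entry is appended to the `dcpMetadata` slice; the package name and URLs below are hypothetical placeholders, not real spec locations:

    // Hypothetical entry appended to dcpMetadata in gotasks/specgen_task.go.
    {
    	Name:         "exampleservice1",    // Go package name to generate (made up)
    	OfficialName: "Example Service v1", // human-readable DCP name (made up)
    	DocURL:       "http://upnp.org/specs/example/ExampleService-v1.pdf", // placeholder URL
    	XMLSpecURL:   "http://upnp.org/specs/example/Example-TestFiles.zip", // placeholder URL
    },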

View File

@ -0,0 +1,27 @@
package main
import (
"log"
"github.com/huin/goupnp/ssdp"
)
func main() {
c := make(chan ssdp.Update)
srv, reg := ssdp.NewServerAndRegistry()
reg.AddListener(c)
go listener(c)
if err := srv.ListenAndServe(); err != nil {
log.Print("ListenAndServe failed: ", err)
}
}
func listener(c <-chan ssdp.Update) {
for u := range c {
if u.Entry != nil {
log.Printf("Event: %v USN: %s Entry: %#v", u.EventType, u.USN, *u.Entry)
} else {
log.Printf("Event: %v USN: %s Entry: <nil>", u.EventType, u.USN)
}
}
}

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.

View File

@ -1,62 +0,0 @@
package example_test
import (
"fmt"
"os"
"github.com/huin/goupnp"
"github.com/huin/goupnp/dcps/internetgateway1"
)
// Use discovered WANPPPConnection1 services to find external IP addresses.
func Example_WANPPPConnection1_GetExternalIPAddress() {
clients, errors, err := internetgateway1.NewWANPPPConnection1Clients()
extIPClients := make([]GetExternalIPAddresser, len(clients))
for i, client := range clients {
extIPClients[i] = client
}
DisplayExternalIPResults(extIPClients, errors, err)
// Output:
}
// Use discovered WANIPConnection services to find external IP addresses.
func Example_WANIPConnection_GetExternalIPAddress() {
clients, errors, err := internetgateway1.NewWANIPConnection1Clients()
extIPClients := make([]GetExternalIPAddresser, len(clients))
for i, client := range clients {
extIPClients[i] = client
}
DisplayExternalIPResults(extIPClients, errors, err)
// Output:
}
type GetExternalIPAddresser interface {
GetExternalIPAddress() (NewExternalIPAddress string, err error)
GetServiceClient() *goupnp.ServiceClient
}
func DisplayExternalIPResults(clients []GetExternalIPAddresser, errors []error, err error) {
if err != nil {
fmt.Fprintln(os.Stderr, "Error discovering service with UPnP: ", err)
return
}
if len(errors) > 0 {
fmt.Fprintf(os.Stderr, "Error discovering %d services:\n", len(errors))
for _, err := range errors {
fmt.Println(" ", err)
}
}
fmt.Fprintf(os.Stderr, "Successfully discovered %d services:\n", len(clients))
for _, client := range clients {
device := &client.GetServiceClient().RootDevice.Device
fmt.Fprintln(os.Stderr, " Device:", device.FriendlyName)
if addr, err := client.GetExternalIPAddress(); err != nil {
fmt.Fprintf(os.Stderr, " Failed to get external IP address: %v\n", err)
} else {
fmt.Fprintf(os.Stderr, " External IP address: %v\n", addr)
}
}
}

View File

@ -4,12 +4,11 @@ package gotasks
import (
"archive/zip"
"bytes"
"encoding/xml"
"fmt"
"io"
"io/ioutil"
"log"
"net/http"
"os"
"path"
"path/filepath"
@ -28,6 +27,53 @@ var (
serviceURNPrefix = "urn:schemas-upnp-org:service:"
)
// DCP contains extra metadata to use when generating DCP source files.
type DCPMetadata struct {
Name string // What to name the Go DCP package.
OfficialName string // Official name for the DCP.
DocURL string // Optional - URL for further documentation about the DCP.
XMLSpecURL string // Where to download the XML spec from.
// Any special-case functions to run against the DCP before writing it out.
Hacks []DCPHackFn
}
var dcpMetadata = []DCPMetadata{
{
Name: "internetgateway1",
OfficialName: "Internet Gateway Device v1",
DocURL: "http://upnp.org/specs/gw/UPnP-gw-InternetGatewayDevice-v1-Device.pdf",
XMLSpecURL: "http://upnp.org/specs/gw/UPnP-gw-IGD-TestFiles-20010921.zip",
},
{
Name: "internetgateway2",
OfficialName: "Internet Gateway Device v2",
DocURL: "http://upnp.org/specs/gw/UPnP-gw-InternetGatewayDevice-v2-Device.pdf",
XMLSpecURL: "http://upnp.org/specs/gw/UPnP-gw-IGD-Testfiles-20110224.zip",
Hacks: []DCPHackFn{
func(dcp *DCP) error {
missingURN := "urn:schemas-upnp-org:service:WANIPv6FirewallControl:1"
if _, ok := dcp.ServiceTypes[missingURN]; ok {
return nil
}
urnParts, err := extractURNParts(missingURN, serviceURNPrefix)
if err != nil {
return err
}
dcp.ServiceTypes[missingURN] = urnParts
return nil
},
},
},
{
Name: "av1",
OfficialName: "MediaServer v1 and MediaRenderer v1",
DocURL: "http://upnp.org/specs/av/av1/",
XMLSpecURL: "http://upnp.org/specs/av/UPnP-av-TestFiles-20070927.zip",
},
}
type DCPHackFn func(*DCP) error
// NAME
// specgen - generates Go code from the UPnP specification files.
//
@ -35,104 +81,90 @@ var (
// The specification is available for download from:
//
// OPTIONS
// -s, --spec_filename=<upnpresources.zip>
// Path to the specification file, available from http://upnp.org/resources/upnpresources.zip
// -s, --specs_dir=<spec directory>
// Path to the specification storage directory. This is used to find (and download if not present) the specification ZIP files. Defaults to 'specs'
// -o, --out_dir=<output directory>
// Path to the output directory. This is where the DCP source files will be placed. Should normally correspond to the directory for github.com/huin/goupnp/dcps
// Path to the output directory. This is where the DCP source files will be placed. Should normally correspond to the directory for github.com/huin/goupnp/dcps. Defaults to '../dcps'
// --nogofmt
// Disable passing the output through gofmt. Do this if debugging code output problems and needing to see the generated code prior to being passed through gofmt.
func TaskSpecgen(t *tasking.T) {
specFilename := t.Flags.String("spec-filename")
if specFilename == "" {
specFilename = t.Flags.String("s")
}
if specFilename == "" {
t.Fatal("--spec_filename is required")
}
outDir := t.Flags.String("out-dir")
if outDir == "" {
outDir = t.Flags.String("o")
}
if outDir == "" {
log.Fatal("--out_dir is required")
specsDir := fallbackStrValue("specs", t.Flags.String("specs_dir"), t.Flags.String("s"))
if err := os.MkdirAll(specsDir, os.ModePerm); err != nil {
t.Fatalf("Could not create specs-dir %q: %v\n", specsDir, err)
}
outDir := fallbackStrValue("../dcps", t.Flags.String("out_dir"), t.Flags.String("o"))
useGofmt := !t.Flags.Bool("nogofmt")
specArchive, err := openZipfile(specFilename)
if err != nil {
t.Fatalf("Error opening spec file: %v", err)
}
defer specArchive.Close()
dcpCol := newDcpsCollection()
for _, f := range globFiles("standardizeddcps/*/*.zip", specArchive.Reader) {
dirName := strings.TrimPrefix(f.Name, "standardizeddcps/")
slashIndex := strings.Index(dirName, "/")
if slashIndex == -1 {
// Should not happen.
t.Logf("Could not find / in %q", dirName)
return
NEXT_DCP:
for _, d := range dcpMetadata {
specFilename := filepath.Join(specsDir, d.Name+".zip")
err := acquireFile(specFilename, d.XMLSpecURL)
if err != nil {
t.Logf("Could not acquire spec for %s, skipping: %v\n", d.Name, err)
continue NEXT_DCP
}
dirName = dirName[:slashIndex]
dcp := dcpCol.dcpForDir(dirName)
if dcp == nil {
t.Logf("No alias defined for directory %q: skipping %s\n", dirName, f.Name)
continue
} else {
t.Logf("Alias found for directory %q: processing %s\n", dirName, f.Name)
dcp := newDCP(d)
if err := dcp.processZipFile(specFilename); err != nil {
log.Printf("Error processing spec for %s in file %q: %v", d.Name, specFilename, err)
continue NEXT_DCP
}
dcp.processZipFile(f)
}
for _, dcp := range dcpCol.dcpByAlias {
for i, hack := range d.Hacks {
if err := hack(dcp); err != nil {
log.Printf("Error with Hack[%d] for %s: %v", i, d.Name, err)
continue NEXT_DCP
}
}
dcp.writePackage(outDir, useGofmt)
if err := dcp.writePackage(outDir, useGofmt); err != nil {
log.Printf("Error writing package %q: %v", dcp.Metadata.Name, err)
continue NEXT_DCP
}
}
}
// DCP contains extra metadata to use when generating DCP source files.
type DCPMetadata struct {
Name string // What to name the Go DCP package.
OfficialName string // Official name for the DCP.
DocURL string // Optional - URL for further documentation about the DCP.
}
var dcpMetadataByDir = map[string]DCPMetadata{
"Internet Gateway_1": {
Name: "internetgateway1",
OfficialName: "Internet Gateway Device v1",
DocURL: "http://upnp.org/specs/gw/UPnP-gw-InternetGatewayDevice-v1-Device.pdf",
},
"Internet Gateway_2": {
Name: "internetgateway2",
OfficialName: "Internet Gateway Device v2",
DocURL: "http://upnp.org/specs/gw/UPnP-gw-InternetGatewayDevice-v2-Device.pdf",
},
}
type dcpCollection struct {
dcpByAlias map[string]*DCP
}
func newDcpsCollection() *dcpCollection {
c := &dcpCollection{
dcpByAlias: make(map[string]*DCP),
func fallbackStrValue(defaultValue string, values ...string) string {
for _, v := range values {
if v != "" {
return v
}
}
for _, metadata := range dcpMetadataByDir {
c.dcpByAlias[metadata.Name] = newDCP(metadata)
}
return c
return defaultValue
}
func (c *dcpCollection) dcpForDir(dirName string) *DCP {
metadata, ok := dcpMetadataByDir[dirName]
if !ok {
func acquireFile(specFilename string, xmlSpecURL string) error {
if f, err := os.Open(specFilename); err != nil {
if !os.IsNotExist(err) {
return err
}
} else {
f.Close()
return nil
}
return c.dcpByAlias[metadata.Name]
resp, err := http.Get(xmlSpecURL)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("could not download spec %q from %q: ",
specFilename, xmlSpecURL, resp.Status)
}
tmpFilename := specFilename + ".download"
w, err := os.Create(tmpFilename)
if err != nil {
return err
}
defer w.Close()
_, err = io.Copy(w, resp.Body)
if err != nil {
return err
}
return os.Rename(tmpFilename, specFilename)
}
// DCP collects together information about a UPnP Device Control Protocol.
@ -151,33 +183,37 @@ func newDCP(metadata DCPMetadata) *DCP {
}
}
func (dcp *DCP) processZipFile(file *zip.File) {
archive, err := openChildZip(file)
func (dcp *DCP) processZipFile(filename string) error {
archive, err := zip.OpenReader(filename)
if err != nil {
log.Println("Error reading child zip file:", err)
return
return fmt.Errorf("error reading zip file %q: %v", filename, err)
}
defer archive.Close()
for _, deviceFile := range globFiles("*/device/*.xml", archive) {
dcp.processDeviceFile(deviceFile)
if err := dcp.processDeviceFile(deviceFile); err != nil {
return err
}
}
for _, scpdFile := range globFiles("*/service/*.xml", archive) {
dcp.processSCPDFile(scpdFile)
if err := dcp.processSCPDFile(scpdFile); err != nil {
return err
}
}
return nil
}
func (dcp *DCP) processDeviceFile(file *zip.File) {
func (dcp *DCP) processDeviceFile(file *zip.File) error {
var device goupnp.Device
if err := unmarshalXmlFile(file, &device); err != nil {
log.Printf("Error decoding device XML from file %q: %v", file.Name, err)
return
return fmt.Errorf("error decoding device XML from file %q: %v", file.Name, err)
}
var mainErr error
device.VisitDevices(func(d *goupnp.Device) {
t := strings.TrimSpace(d.DeviceType)
if t != "" {
u, err := extractURNParts(t, deviceURNPrefix)
if err != nil {
log.Println(err)
return
mainErr = err
}
dcp.DeviceTypes[t] = u
}
@ -185,11 +221,11 @@ func (dcp *DCP) processDeviceFile(file *zip.File) {
device.VisitServices(func(s *goupnp.Service) {
u, err := extractURNParts(s.ServiceType, serviceURNPrefix)
if err != nil {
log.Println(err)
return
mainErr = err
}
dcp.ServiceTypes[s.ServiceType] = u
})
return mainErr
}
func (dcp *DCP) writePackage(outDir string, useGofmt bool) error {
@ -217,22 +253,21 @@ func (dcp *DCP) writePackage(outDir string, useGofmt bool) error {
return output.Close()
}
func (dcp *DCP) processSCPDFile(file *zip.File) {
func (dcp *DCP) processSCPDFile(file *zip.File) error {
scpd := new(scpd.SCPD)
if err := unmarshalXmlFile(file, scpd); err != nil {
log.Printf("Error decoding SCPD XML from file %q: %v", file.Name, err)
return
return fmt.Errorf("error decoding SCPD XML from file %q: %v", file.Name, err)
}
scpd.Clean()
urnParts, err := urnPartsFromSCPDFilename(file.Name)
if err != nil {
log.Printf("Could not recognize SCPD filename %q: %v", file.Name, err)
return
return fmt.Errorf("could not recognize SCPD filename %q: %v", file.Name, err)
}
dcp.Services = append(dcp.Services, SCPDWithURN{
URNParts: urnParts,
SCPD: scpd,
})
return nil
}
type SCPDWithURN struct {
@ -240,7 +275,19 @@ type SCPDWithURN struct {
SCPD *scpd.SCPD
}
func (s *SCPDWithURN) WrapArgument(arg scpd.Argument) (*argumentWrapper, error) {
func (s *SCPDWithURN) WrapArguments(args []*scpd.Argument) (argumentWrapperList, error) {
wrappedArgs := make(argumentWrapperList, len(args))
for i, arg := range args {
wa, err := s.wrapArgument(arg)
if err != nil {
return nil, err
}
wrappedArgs[i] = wa
}
return wrappedArgs, nil
}
func (s *SCPDWithURN) wrapArgument(arg *scpd.Argument) (*argumentWrapper, error) {
relVar := s.SCPD.GetStateVariable(arg.RelatedStateVariable)
if relVar == nil {
return nil, fmt.Errorf("no such state variable: %q, for argument %q", arg.RelatedStateVariable, arg.Name)
@ -250,7 +297,7 @@ func (s *SCPDWithURN) WrapArgument(arg scpd.Argument) (*argumentWrapper, error)
return nil, fmt.Errorf("unknown data type: %q, for state variable %q, for argument %q", relVar.DataType.Type, arg.RelatedStateVariable, arg.Name)
}
return &argumentWrapper{
Argument: arg,
Argument: *arg,
relVar: relVar,
conv: cnv,
}, nil
@ -266,6 +313,12 @@ func (arg *argumentWrapper) AsParameter() string {
return fmt.Sprintf("%s %s", arg.Name, arg.conv.ExtType)
}
func (arg *argumentWrapper) HasDoc() bool {
rng := arg.relVar.AllowedValueRange
return ((rng != nil && (rng.Minimum != "" || rng.Maximum != "" || rng.Step != "")) ||
len(arg.relVar.AllowedValues) > 0)
}
func (arg *argumentWrapper) Document() string {
relVar := arg.relVar
if rng := relVar.AllowedValueRange; rng != nil {
@ -295,6 +348,17 @@ func (arg *argumentWrapper) Unmarshal(objVar string) string {
return fmt.Sprintf("soap.Unmarshal%s(%s.%s)", arg.conv.FuncSuffix, objVar, arg.Name)
}
type argumentWrapperList []*argumentWrapper
func (args argumentWrapperList) HasDoc() bool {
for _, arg := range args {
if arg.HasDoc() {
return true
}
}
return false
}
type conv struct {
FuncSuffix string
ExtType string
@ -325,49 +389,10 @@ var typeConvs = map[string]conv{
"boolean": conv{"Boolean", "bool"},
"bin.base64": conv{"BinBase64", "[]byte"},
"bin.hex": conv{"BinHex", "[]byte"},
"uri": conv{"URI", "*url.URL"},
}
type closeableZipReader struct {
io.Closer
*zip.Reader
}
func openZipfile(filename string) (*closeableZipReader, error) {
file, err := os.Open(filename)
if err != nil {
return nil, err
}
fi, err := file.Stat()
if err != nil {
return nil, err
}
archive, err := zip.NewReader(file, fi.Size())
if err != nil {
return nil, err
}
return &closeableZipReader{
Closer: file,
Reader: archive,
}, nil
}
// openChildZip opens a zip file within another zip file.
func openChildZip(file *zip.File) (*zip.Reader, error) {
zipFile, err := file.Open()
if err != nil {
return nil, err
}
defer zipFile.Close()
zipBytes, err := ioutil.ReadAll(zipFile)
if err != nil {
return nil, err
}
return zip.NewReader(bytes.NewReader(zipBytes), int64(len(zipBytes)))
}
func globFiles(pattern string, archive *zip.Reader) []*zip.File {
func globFiles(pattern string, archive *zip.ReadCloser) []*zip.File {
var files []*zip.File
for _, f := range archive.File {
if matched, err := path.Match(pattern, f.Name); err != nil {
@ -435,14 +460,14 @@ var packageTmpl = template.Must(template.New("package").Parse(`{{$name := .Metad
// {{if .Metadata.DocURL}}
// This DCP is documented in detail at: {{.Metadata.DocURL}}{{end}}
//
// Typically, use one of the New* functions to discover services on the local
// network.
// Typically, use one of the New* functions to create clients for services.
package {{$name}}
// Generated file - do not edit by hand. See README.md
import (
"net/url"
"time"
"github.com/huin/goupnp"
@ -484,38 +509,77 @@ func New{{$srvIdent}}Clients() (clients []*{{$srvIdent}}, errors []error, err er
if genericClients, errors, err = goupnp.NewServiceClients({{$srv.Const}}); err != nil {
return
}
clients = make([]*{{$srvIdent}}, len(genericClients))
clients = new{{$srvIdent}}ClientsFromGenericClients(genericClients)
return
}
// New{{$srvIdent}}ClientsByURL discovers instances of the service at the given
// URL, and returns clients to any that are found. An error is returned if
// there was an error probing the service.
//
// This is a typical entry point into this package when reusing a
// previously discovered service URL.
func New{{$srvIdent}}ClientsByURL(loc *url.URL) ([]*{{$srvIdent}}, error) {
genericClients, err := goupnp.NewServiceClientsByURL(loc, {{$srv.Const}})
if err != nil {
return nil, err
}
return new{{$srvIdent}}ClientsFromGenericClients(genericClients), nil
}
// New{{$srvIdent}}ClientsFromRootDevice discovers instances of the service in
// a given root device, and returns clients to any that are found. An error is
// returned if there was not at least one instance of the service within the
// device. The location parameter is simply assigned to the Location attribute
// of the wrapped ServiceClient(s).
//
// This is a typical entry point into this package when reusing a
// previously discovered root device.
func New{{$srvIdent}}ClientsFromRootDevice(rootDevice *goupnp.RootDevice, loc *url.URL) ([]*{{$srvIdent}}, error) {
genericClients, err := goupnp.NewServiceClientsFromRootDevice(rootDevice, loc, {{$srv.Const}})
if err != nil {
return nil, err
}
return new{{$srvIdent}}ClientsFromGenericClients(genericClients), nil
}
func new{{$srvIdent}}ClientsFromGenericClients(genericClients []goupnp.ServiceClient) []*{{$srvIdent}} {
clients := make([]*{{$srvIdent}}, len(genericClients))
for i := range genericClients {
clients[i] = &{{$srvIdent}}{genericClients[i]}
}
return
return clients
}
{{range .SCPD.Actions}}{{/* loops over *SCPDWithURN values */}}
{{$inargs := .InputArguments}}{{$outargs := .OutputArguments}}
// {{if $inargs}}Arguments:{{range $inargs}}{{$argWrap := $srv.WrapArgument .}}
{{$winargs := $srv.WrapArguments .InputArguments}}
{{$woutargs := $srv.WrapArguments .OutputArguments}}
{{if $winargs.HasDoc}}
//
// * {{.Name}}: {{$argWrap.Document}}{{end}}{{end}}
// Arguments:{{range $winargs}}{{if .HasDoc}}
//
// {{if $outargs}}Return values:{{range $outargs}}{{$argWrap := $srv.WrapArgument .}}
// * {{.Name}}: {{.Document}}{{end}}{{end}}{{end}}
{{if $woutargs.HasDoc}}
//
// * {{.Name}}: {{$argWrap.Document}}{{end}}{{end}}
func (client *{{$srvIdent}}) {{.Name}}({{range $inargs}}{{/*
*/}}{{$argWrap := $srv.WrapArgument .}}{{$argWrap.AsParameter}}, {{end}}{{/*
*/}}) ({{range $outargs}}{{/*
*/}}{{$argWrap := $srv.WrapArgument .}}{{$argWrap.AsParameter}}, {{end}} err error) {
// Return values:{{range $woutargs}}{{if .HasDoc}}
//
// * {{.Name}}: {{.Document}}{{end}}{{end}}{{end}}
func (client *{{$srvIdent}}) {{.Name}}({{range $winargs}}{{/*
*/}}{{.AsParameter}}, {{end}}{{/*
*/}}) ({{range $woutargs}}{{/*
*/}}{{.AsParameter}}, {{end}} err error) {
// Request structure.
request := {{if $inargs}}&{{template "argstruct" $inargs}}{{"{}"}}{{else}}{{"interface{}(nil)"}}{{end}}
request := {{if $winargs}}&{{template "argstruct" $winargs}}{{"{}"}}{{else}}{{"interface{}(nil)"}}{{end}}
// BEGIN Marshal arguments into request.
{{range $inargs}}{{$argWrap := $srv.WrapArgument .}}
if request.{{.Name}}, err = {{$argWrap.Marshal}}; err != nil {
{{range $winargs}}
if request.{{.Name}}, err = {{.Marshal}}; err != nil {
return
}{{end}}
// END Marshal arguments into request.
// Response structure.
response := {{if $outargs}}&{{template "argstruct" $outargs}}{{"{}"}}{{else}}{{"interface{}(nil)"}}{{end}}
response := {{if $woutargs}}&{{template "argstruct" $woutargs}}{{"{}"}}{{else}}{{"interface{}(nil)"}}{{end}}
// Perform the SOAP call.
if err = client.SOAPClient.PerformAction({{$srv.URNParts.Const}}, "{{.Name}}", request, response); err != nil {
@ -523,8 +587,8 @@ func (client *{{$srvIdent}}) {{.Name}}({{range $inargs}}{{/*
}
// BEGIN Unmarshal arguments from response.
{{range $outargs}}{{$argWrap := $srv.WrapArgument .}}
if {{.Name}}, err = {{$argWrap.Unmarshal "response"}}; err != nil {
{{range $woutargs}}
if {{.Name}}, err = {{.Unmarshal "response"}}; err != nil {
return
}{{end}}
// END Unmarshal arguments from response.

View File

@ -20,6 +20,7 @@ import (
"net/http"
"net/url"
"time"
"golang.org/x/net/html/charset"
"github.com/huin/goupnp/httpu"
@ -38,8 +39,16 @@ func (err ContextError) Error() string {
// MaybeRootDevice contains either a RootDevice or an error.
type MaybeRootDevice struct {
// Set iff Err == nil.
Root *RootDevice
Err error
// The location the device was discovered at. This can be used with
// DeviceByURL, assuming the device is still present. A location represents
// the discovery of a device, regardless of whether there was an error probing it.
Location *url.URL
// Any error encountered probing a discovered device.
Err error
}
// DiscoverDevices attempts to find targets of the given type. This is
@ -67,30 +76,37 @@ func DiscoverDevices(searchTarget string) ([]MaybeRootDevice, error) {
maybe.Err = ContextError{"unexpected bad location from search", err}
continue
}
locStr := loc.String()
root := new(RootDevice)
if err := requestXml(locStr, DeviceXMLNamespace, root); err != nil {
maybe.Err = ContextError{fmt.Sprintf("error requesting root device details from %q", locStr), err}
continue
}
var urlBaseStr string
if root.URLBaseStr != "" {
urlBaseStr = root.URLBaseStr
maybe.Location = loc
if root, err := DeviceByURL(loc); err != nil {
maybe.Err = err
} else {
urlBaseStr = locStr
maybe.Root = root
}
urlBase, err := url.Parse(urlBaseStr)
if err != nil {
maybe.Err = ContextError{fmt.Sprintf("error parsing location URL %q", locStr), err}
continue
}
root.SetURLBase(urlBase)
maybe.Root = root
}
return results, nil
}
func DeviceByURL(loc *url.URL) (*RootDevice, error) {
locStr := loc.String()
root := new(RootDevice)
if err := requestXml(locStr, DeviceXMLNamespace, root); err != nil {
return nil, ContextError{fmt.Sprintf("error requesting root device details from %q", locStr), err}
}
var urlBaseStr string
if root.URLBaseStr != "" {
urlBaseStr = root.URLBaseStr
} else {
urlBaseStr = locStr
}
urlBase, err := url.Parse(urlBaseStr)
if err != nil {
return nil, ContextError{fmt.Sprintf("error parsing location URL %q", locStr), err}
}
root.SetURLBase(urlBase)
return root, nil
}
func requestXml(url string, defaultSpace string, doc interface{}) error {
timeout := time.Duration(3 * time.Second)
client := http.Client{

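A minimal usage sketch of the location-caching flow introduced above, assuming only the API shown in this diff (the `upnp:rootdevice` search target is just an example): discover once, remember each MaybeRootDevice.Location, and later call DeviceByURL directly instead of repeating SSDP discovery.
package main

import (
	"log"
	"net/url"

	"github.com/huin/goupnp"
)

func main() {
	// First pass: full SSDP discovery; remember where each device was found.
	results, err := goupnp.DiscoverDevices("upnp:rootdevice")
	if err != nil {
		log.Fatal(err)
	}
	var cached []*url.URL
	for _, maybe := range results {
		if maybe.Err == nil {
			cached = append(cached, maybe.Location)
		}
	}
	// Later: probe the remembered locations directly, bypassing discovery.
	for _, loc := range cached {
		root, err := goupnp.DeviceByURL(loc)
		if err != nil {
			log.Printf("device at %v is gone or unreachable: %v", loc, err)
			continue
		}
		log.Printf("re-fetched %q", root.Device.FriendlyName)
	}
}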
View File

@ -2,18 +2,26 @@ package goupnp
import (
"fmt"
"net/url"
"github.com/huin/goupnp/soap"
)
// ServiceClient is a SOAP client, root device and the service for the SOAP
// client rolled into one value. The root device and service are intended to be
// informational.
// client rolled into one value. The root device, location, and service are
// intended to be informational. Location can be used to later recreate a
// ServiceClient with NewServiceClientsByURL if the service is still present,
// bypassing the discovery process.
type ServiceClient struct {
SOAPClient *soap.SOAPClient
RootDevice *RootDevice
Location *url.URL
Service *Service
}
// NewServiceClients discovers services, and returns clients for them. err will
// report any error with the discovery process (blocking any device/service
// discovery), errors reports errors on a per-root-device basis.
func NewServiceClients(searchTarget string) (clients []ServiceClient, errors []error, err error) {
var maybeRootDevices []MaybeRootDevice
if maybeRootDevices, err = DiscoverDevices(searchTarget); err != nil {
@ -28,26 +36,50 @@ func NewServiceClients(searchTarget string) (clients []ServiceClient, errors []e
continue
}
device := &maybeRootDevice.Root.Device
srvs := device.FindService(searchTarget)
if len(srvs) == 0 {
errors = append(errors, fmt.Errorf("goupnp: service %q not found within device %q (UDN=%q)",
searchTarget, device.FriendlyName, device.UDN))
deviceClients, err := NewServiceClientsFromRootDevice(maybeRootDevice.Root, maybeRootDevice.Location, searchTarget)
if err != nil {
errors = append(errors, err)
continue
}
for _, srv := range srvs {
clients = append(clients, ServiceClient{
SOAPClient: srv.NewSOAPClient(),
RootDevice: maybeRootDevice.Root,
Service: srv,
})
}
clients = append(clients, deviceClients...)
}
return
}
// NewServiceClientsByURL creates client(s) for the given service URN, for a
// root device at the given URL.
func NewServiceClientsByURL(loc *url.URL, searchTarget string) ([]ServiceClient, error) {
rootDevice, err := DeviceByURL(loc)
if err != nil {
return nil, err
}
return NewServiceClientsFromRootDevice(rootDevice, loc, searchTarget)
}
// NewServiceClientsFromRootDevice creates client(s) for the given service URN, in
// a given root device. The loc parameter is simply assigned to the
// Location attribute of the returned ServiceClient(s).
func NewServiceClientsFromRootDevice(rootDevice *RootDevice, loc *url.URL, searchTarget string) ([]ServiceClient, error) {
device := &rootDevice.Device
srvs := device.FindService(searchTarget)
if len(srvs) == 0 {
return nil, fmt.Errorf("goupnp: service %q not found within device %q (UDN=%q)",
searchTarget, device.FriendlyName, device.UDN)
}
clients := make([]ServiceClient, 0, len(srvs))
for _, srv := range srvs {
clients = append(clients, ServiceClient{
SOAPClient: srv.NewSOAPClient(),
RootDevice: rootDevice,
Location: loc,
Service: srv,
})
}
return clients, nil
}
// GetServiceClient returns the ServiceClient itself. This is provided so that the
// service client attributes can be accessed via an interface method on a
// wrapping type.
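The same idea at the service-client level, as a hedged sketch assuming the constructors added in this diff (the WANIPConnection URN is only an example search target): discover clients once, keep a Location, and rebuild clients later with NewServiceClientsByURL.
package main

import (
	"log"

	"github.com/huin/goupnp"
)

func main() {
	const target = "urn:schemas-upnp-org:service:WANIPConnection:1" // example URN
	clients, errs, err := goupnp.NewServiceClients(target)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range errs {
		log.Println("per-device error:", e)
	}
	if len(clients) == 0 {
		return
	}
	// Remember where the first client was found, then recreate it later
	// without another SSDP discovery round.
	loc := clients[0].Location
	again, err := goupnp.NewServiceClientsByURL(loc, target)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("recreated %d client(s) from %v", len(again), loc)
}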

View File

@ -1,85 +0,0 @@
package soap
import (
"bytes"
"io/ioutil"
"net/http"
"net/url"
"reflect"
"testing"
)
type capturingRoundTripper struct {
err error
resp *http.Response
capturedReq *http.Request
}
func (rt *capturingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
rt.capturedReq = req
return rt.resp, rt.err
}
func TestActionInputs(t *testing.T) {
url, err := url.Parse("http://example.com/soap")
if err != nil {
t.Fatal(err)
}
rt := &capturingRoundTripper{
err: nil,
resp: &http.Response{
StatusCode: 200,
Body: ioutil.NopCloser(bytes.NewBufferString(`
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body>
<u:myactionResponse xmlns:u="mynamespace">
<A>valueA</A>
<B>valueB</B>
</u:myactionResponse>
</s:Body>
</s:Envelope>
`)),
},
}
client := SOAPClient{
EndpointURL: *url,
HTTPClient: http.Client{
Transport: rt,
},
}
type In struct {
Foo string
Bar string `soap:"bar"`
}
type Out struct {
A string
B string
}
in := In{"foo", "bar"}
gotOut := Out{}
err = client.PerformAction("mynamespace", "myaction", &in, &gotOut)
if err != nil {
t.Fatal(err)
}
wantBody := (soapPrefix +
`<u:myaction xmlns:u="mynamespace">` +
`<Foo>foo</Foo>` +
`<bar>bar</bar>` +
`</u:myaction>` +
soapSuffix)
body, err := ioutil.ReadAll(rt.capturedReq.Body)
if err != nil {
t.Fatal(err)
}
gotBody := string(body)
if wantBody != gotBody {
t.Errorf("Bad request body\nwant: %q\n got: %q", wantBody, gotBody)
}
wantOut := Out{"valueA", "valueB"}
if !reflect.DeepEqual(wantOut, gotOut) {
t.Errorf("Bad output\nwant: %+v\n got: %+v", wantOut, gotOut)
}
}

View File

@ -5,6 +5,7 @@ import (
"encoding/hex"
"errors"
"fmt"
"net/url"
"regexp"
"strconv"
"strings"
@ -506,3 +507,13 @@ func MarshalBinHex(v []byte) (string, error) {
func UnmarshalBinHex(s string) ([]byte, error) {
return hex.DecodeString(s)
}
// MarshalURI marshals *url.URL to SOAP "uri" type.
func MarshalURI(v *url.URL) (string, error) {
return v.String(), nil
}
// UnmarshalURI unmarshals *url.URL from the SOAP "uri" type.
func UnmarshalURI(s string) (*url.URL, error) {
return url.Parse(s)
}

View File

@ -1,481 +0,0 @@
package soap
import (
"bytes"
"math"
"testing"
"time"
)
type convTest interface {
Marshal() (string, error)
Unmarshal(string) (interface{}, error)
Equal(result interface{}) bool
}
// duper is an interface that convTest values may optionally also implement to
// generate another convTest for a value in an otherwise identical testCase.
type duper interface {
Dupe(tag string) []convTest
}
type testCase struct {
value convTest
str string
wantMarshalErr bool
wantUnmarshalErr bool
noMarshal bool
noUnMarshal bool
tag string
}
type Ui1Test uint8
func (v Ui1Test) Marshal() (string, error) {
return MarshalUi1(uint8(v))
}
func (v Ui1Test) Unmarshal(s string) (interface{}, error) {
return UnmarshalUi1(s)
}
func (v Ui1Test) Equal(result interface{}) bool {
return uint8(v) == result.(uint8)
}
func (v Ui1Test) Dupe(tag string) []convTest {
if tag == "dupe" {
return []convTest{
Ui2Test(v),
Ui4Test(v),
}
}
return nil
}
type Ui2Test uint16
func (v Ui2Test) Marshal() (string, error) {
return MarshalUi2(uint16(v))
}
func (v Ui2Test) Unmarshal(s string) (interface{}, error) {
return UnmarshalUi2(s)
}
func (v Ui2Test) Equal(result interface{}) bool {
return uint16(v) == result.(uint16)
}
type Ui4Test uint32
func (v Ui4Test) Marshal() (string, error) {
return MarshalUi4(uint32(v))
}
func (v Ui4Test) Unmarshal(s string) (interface{}, error) {
return UnmarshalUi4(s)
}
func (v Ui4Test) Equal(result interface{}) bool {
return uint32(v) == result.(uint32)
}
type I1Test int8
func (v I1Test) Marshal() (string, error) {
return MarshalI1(int8(v))
}
func (v I1Test) Unmarshal(s string) (interface{}, error) {
return UnmarshalI1(s)
}
func (v I1Test) Equal(result interface{}) bool {
return int8(v) == result.(int8)
}
func (v I1Test) Dupe(tag string) []convTest {
if tag == "dupe" {
return []convTest{
I2Test(v),
I4Test(v),
}
}
return nil
}
type I2Test int16
func (v I2Test) Marshal() (string, error) {
return MarshalI2(int16(v))
}
func (v I2Test) Unmarshal(s string) (interface{}, error) {
return UnmarshalI2(s)
}
func (v I2Test) Equal(result interface{}) bool {
return int16(v) == result.(int16)
}
type I4Test int32
func (v I4Test) Marshal() (string, error) {
return MarshalI4(int32(v))
}
func (v I4Test) Unmarshal(s string) (interface{}, error) {
return UnmarshalI4(s)
}
func (v I4Test) Equal(result interface{}) bool {
return int32(v) == result.(int32)
}
type IntTest int64
func (v IntTest) Marshal() (string, error) {
return MarshalInt(int64(v))
}
func (v IntTest) Unmarshal(s string) (interface{}, error) {
return UnmarshalInt(s)
}
func (v IntTest) Equal(result interface{}) bool {
return int64(v) == result.(int64)
}
type Fixed14_4Test float64
func (v Fixed14_4Test) Marshal() (string, error) {
return MarshalFixed14_4(float64(v))
}
func (v Fixed14_4Test) Unmarshal(s string) (interface{}, error) {
return UnmarshalFixed14_4(s)
}
func (v Fixed14_4Test) Equal(result interface{}) bool {
return math.Abs(float64(v)-result.(float64)) < 0.001
}
type CharTest rune
func (v CharTest) Marshal() (string, error) {
return MarshalChar(rune(v))
}
func (v CharTest) Unmarshal(s string) (interface{}, error) {
return UnmarshalChar(s)
}
func (v CharTest) Equal(result interface{}) bool {
return rune(v) == result.(rune)
}
type DateTest struct{ time.Time }
func (v DateTest) Marshal() (string, error) {
return MarshalDate(time.Time(v.Time))
}
func (v DateTest) Unmarshal(s string) (interface{}, error) {
return UnmarshalDate(s)
}
func (v DateTest) Equal(result interface{}) bool {
return v.Time.Equal(result.(time.Time))
}
func (v DateTest) Dupe(tag string) []convTest {
if tag != "no:dateTime" {
return []convTest{DateTimeTest{v.Time}}
}
return nil
}
type TimeOfDayTest struct {
TimeOfDay
}
func (v TimeOfDayTest) Marshal() (string, error) {
return MarshalTimeOfDay(v.TimeOfDay)
}
func (v TimeOfDayTest) Unmarshal(s string) (interface{}, error) {
return UnmarshalTimeOfDay(s)
}
func (v TimeOfDayTest) Equal(result interface{}) bool {
return v.TimeOfDay == result.(TimeOfDay)
}
func (v TimeOfDayTest) Dupe(tag string) []convTest {
if tag != "no:time.tz" {
return []convTest{TimeOfDayTzTest{v.TimeOfDay}}
}
return nil
}
type TimeOfDayTzTest struct {
TimeOfDay
}
func (v TimeOfDayTzTest) Marshal() (string, error) {
return MarshalTimeOfDayTz(v.TimeOfDay)
}
func (v TimeOfDayTzTest) Unmarshal(s string) (interface{}, error) {
return UnmarshalTimeOfDayTz(s)
}
func (v TimeOfDayTzTest) Equal(result interface{}) bool {
return v.TimeOfDay == result.(TimeOfDay)
}
type DateTimeTest struct{ time.Time }
func (v DateTimeTest) Marshal() (string, error) {
return MarshalDateTime(time.Time(v.Time))
}
func (v DateTimeTest) Unmarshal(s string) (interface{}, error) {
return UnmarshalDateTime(s)
}
func (v DateTimeTest) Equal(result interface{}) bool {
return v.Time.Equal(result.(time.Time))
}
func (v DateTimeTest) Dupe(tag string) []convTest {
if tag != "no:dateTime.tz" {
return []convTest{DateTimeTzTest{v.Time}}
}
return nil
}
type DateTimeTzTest struct{ time.Time }
func (v DateTimeTzTest) Marshal() (string, error) {
return MarshalDateTimeTz(time.Time(v.Time))
}
func (v DateTimeTzTest) Unmarshal(s string) (interface{}, error) {
return UnmarshalDateTimeTz(s)
}
func (v DateTimeTzTest) Equal(result interface{}) bool {
return v.Time.Equal(result.(time.Time))
}
type BooleanTest bool
func (v BooleanTest) Marshal() (string, error) {
return MarshalBoolean(bool(v))
}
func (v BooleanTest) Unmarshal(s string) (interface{}, error) {
return UnmarshalBoolean(s)
}
func (v BooleanTest) Equal(result interface{}) bool {
return bool(v) == result.(bool)
}
type BinBase64Test []byte
func (v BinBase64Test) Marshal() (string, error) {
return MarshalBinBase64([]byte(v))
}
func (v BinBase64Test) Unmarshal(s string) (interface{}, error) {
return UnmarshalBinBase64(s)
}
func (v BinBase64Test) Equal(result interface{}) bool {
return bytes.Equal([]byte(v), result.([]byte))
}
type BinHexTest []byte
func (v BinHexTest) Marshal() (string, error) {
return MarshalBinHex([]byte(v))
}
func (v BinHexTest) Unmarshal(s string) (interface{}, error) {
return UnmarshalBinHex(s)
}
func (v BinHexTest) Equal(result interface{}) bool {
return bytes.Equal([]byte(v), result.([]byte))
}
func Test(t *testing.T) {
const time010203 time.Duration = (1*3600 + 2*60 + 3) * time.Second
const time0102 time.Duration = (1*3600 + 2*60) * time.Second
const time01 time.Duration = (1 * 3600) * time.Second
const time235959 time.Duration = (23*3600 + 59*60 + 59) * time.Second
// Fake out the local time for the implementation.
localLoc = time.FixedZone("Fake/Local", 6*3600)
defer func() {
localLoc = time.Local
}()
tests := []testCase{
// ui1
{str: "", value: Ui1Test(0), wantUnmarshalErr: true, noMarshal: true, tag: "dupe"},
{str: " ", value: Ui1Test(0), wantUnmarshalErr: true, noMarshal: true, tag: "dupe"},
{str: "abc", value: Ui1Test(0), wantUnmarshalErr: true, noMarshal: true, tag: "dupe"},
{str: "-1", value: Ui1Test(0), wantUnmarshalErr: true, noMarshal: true, tag: "dupe"},
{str: "0", value: Ui1Test(0), tag: "dupe"},
{str: "1", value: Ui1Test(1), tag: "dupe"},
{str: "255", value: Ui1Test(255), tag: "dupe"},
{str: "256", value: Ui1Test(0), wantUnmarshalErr: true, noMarshal: true},
// ui2
{str: "65535", value: Ui2Test(65535)},
{str: "65536", value: Ui2Test(0), wantUnmarshalErr: true, noMarshal: true},
// ui4
{str: "4294967295", value: Ui4Test(4294967295)},
{str: "4294967296", value: Ui4Test(0), wantUnmarshalErr: true, noMarshal: true},
// i1
{str: "", value: I1Test(0), wantUnmarshalErr: true, noMarshal: true, tag: "dupe"},
{str: " ", value: I1Test(0), wantUnmarshalErr: true, noMarshal: true, tag: "dupe"},
{str: "abc", value: I1Test(0), wantUnmarshalErr: true, noMarshal: true, tag: "dupe"},
{str: "0", value: I1Test(0), tag: "dupe"},
{str: "-1", value: I1Test(-1), tag: "dupe"},
{str: "127", value: I1Test(127), tag: "dupe"},
{str: "-128", value: I1Test(-128), tag: "dupe"},
{str: "128", value: I1Test(0), wantUnmarshalErr: true, noMarshal: true},
{str: "-129", value: I1Test(0), wantUnmarshalErr: true, noMarshal: true},
// i2
{str: "32767", value: I2Test(32767)},
{str: "-32768", value: I2Test(-32768)},
{str: "32768", value: I2Test(0), wantUnmarshalErr: true, noMarshal: true},
{str: "-32769", value: I2Test(0), wantUnmarshalErr: true, noMarshal: true},
// i4
{str: "2147483647", value: I4Test(2147483647)},
{str: "-2147483648", value: I4Test(-2147483648)},
{str: "2147483648", value: I4Test(0), wantUnmarshalErr: true, noMarshal: true},
{str: "-2147483649", value: I4Test(0), wantUnmarshalErr: true, noMarshal: true},
// int
{str: "9223372036854775807", value: IntTest(9223372036854775807)},
{str: "-9223372036854775808", value: IntTest(-9223372036854775808)},
{str: "9223372036854775808", value: IntTest(0), wantUnmarshalErr: true, noMarshal: true},
{str: "-9223372036854775809", value: IntTest(0), wantUnmarshalErr: true, noMarshal: true},
// fixed.14.4
{str: "0.0000", value: Fixed14_4Test(0)},
{str: "1.0000", value: Fixed14_4Test(1)},
{str: "1.2346", value: Fixed14_4Test(1.23456)},
{str: "-1.0000", value: Fixed14_4Test(-1)},
{str: "-1.2346", value: Fixed14_4Test(-1.23456)},
{str: "10000000000000.0000", value: Fixed14_4Test(1e13)},
{str: "100000000000000.0000", value: Fixed14_4Test(1e14), wantMarshalErr: true, wantUnmarshalErr: true},
{str: "-10000000000000.0000", value: Fixed14_4Test(-1e13)},
{str: "-100000000000000.0000", value: Fixed14_4Test(-1e14), wantMarshalErr: true, wantUnmarshalErr: true},
// char
{str: "a", value: CharTest('a')},
{str: "z", value: CharTest('z')},
{str: "\u1234", value: CharTest(0x1234)},
{str: "aa", value: CharTest(0), wantMarshalErr: true, wantUnmarshalErr: true},
{str: "", value: CharTest(0), wantMarshalErr: true, wantUnmarshalErr: true},
// date
{str: "2013-10-08", value: DateTest{time.Date(2013, 10, 8, 0, 0, 0, 0, localLoc)}, tag: "no:dateTime"},
{str: "20131008", value: DateTest{time.Date(2013, 10, 8, 0, 0, 0, 0, localLoc)}, noMarshal: true, tag: "no:dateTime"},
{str: "2013-10-08T10:30:50", value: DateTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:dateTime"},
{str: "2013-10-08T10:30:50Z", value: DateTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:dateTime"},
{str: "", value: DateTest{}, wantMarshalErr: true, wantUnmarshalErr: true, noMarshal: true},
{str: "-1", value: DateTest{}, wantUnmarshalErr: true, noMarshal: true},
// time
{str: "00:00:00", value: TimeOfDayTest{TimeOfDay{FromMidnight: 0}}},
{str: "000000", value: TimeOfDayTest{TimeOfDay{FromMidnight: 0}}, noMarshal: true},
{str: "24:00:00", value: TimeOfDayTest{TimeOfDay{FromMidnight: 24 * time.Hour}}, noMarshal: true}, // ISO8601 special case
{str: "24:01:00", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true},
{str: "24:00:01", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true},
{str: "25:00:00", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true},
{str: "00:60:00", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true},
{str: "00:00:60", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true},
{str: "01:02:03", value: TimeOfDayTest{TimeOfDay{FromMidnight: time010203}}},
{str: "010203", value: TimeOfDayTest{TimeOfDay{FromMidnight: time010203}}, noMarshal: true},
{str: "23:59:59", value: TimeOfDayTest{TimeOfDay{FromMidnight: time235959}}},
{str: "235959", value: TimeOfDayTest{TimeOfDay{FromMidnight: time235959}}, noMarshal: true},
{str: "01:02", value: TimeOfDayTest{TimeOfDay{FromMidnight: time0102}}, noMarshal: true},
{str: "0102", value: TimeOfDayTest{TimeOfDay{FromMidnight: time0102}}, noMarshal: true},
{str: "01", value: TimeOfDayTest{TimeOfDay{FromMidnight: time01}}, noMarshal: true},
{str: "foo 01:02:03", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:time.tz"},
{str: "foo\n01:02:03", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:time.tz"},
{str: "01:02:03 foo", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:time.tz"},
{str: "01:02:03\nfoo", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:time.tz"},
{str: "01:02:03Z", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:time.tz"},
{str: "01:02:03+01", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:time.tz"},
{str: "01:02:03+01:23", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:time.tz"},
{str: "01:02:03+0123", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:time.tz"},
{str: "01:02:03-01", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:time.tz"},
{str: "01:02:03-01:23", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:time.tz"},
{str: "01:02:03-0123", value: TimeOfDayTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:time.tz"},
// time.tz
{str: "24:00:01", value: TimeOfDayTzTest{}, wantUnmarshalErr: true, noMarshal: true},
{str: "01Z", value: TimeOfDayTzTest{TimeOfDay{time01, true, 0}}, noMarshal: true},
{str: "01:02:03Z", value: TimeOfDayTzTest{TimeOfDay{time010203, true, 0}}},
{str: "01+01", value: TimeOfDayTzTest{TimeOfDay{time01, true, 3600}}, noMarshal: true},
{str: "01:02:03+01", value: TimeOfDayTzTest{TimeOfDay{time010203, true, 3600}}, noMarshal: true},
{str: "01:02:03+01:23", value: TimeOfDayTzTest{TimeOfDay{time010203, true, 3600 + 23*60}}},
{str: "01:02:03+0123", value: TimeOfDayTzTest{TimeOfDay{time010203, true, 3600 + 23*60}}, noMarshal: true},
{str: "01:02:03-01", value: TimeOfDayTzTest{TimeOfDay{time010203, true, -3600}}, noMarshal: true},
{str: "01:02:03-01:23", value: TimeOfDayTzTest{TimeOfDay{time010203, true, -(3600 + 23*60)}}},
{str: "01:02:03-0123", value: TimeOfDayTzTest{TimeOfDay{time010203, true, -(3600 + 23*60)}}, noMarshal: true},
// dateTime
{str: "2013-10-08T00:00:00", value: DateTimeTest{time.Date(2013, 10, 8, 0, 0, 0, 0, localLoc)}, tag: "no:dateTime.tz"},
{str: "20131008", value: DateTimeTest{time.Date(2013, 10, 8, 0, 0, 0, 0, localLoc)}, noMarshal: true},
{str: "2013-10-08T10:30:50", value: DateTimeTest{time.Date(2013, 10, 8, 10, 30, 50, 0, localLoc)}, tag: "no:dateTime.tz"},
{str: "2013-10-08T10:30:50T", value: DateTimeTest{}, wantUnmarshalErr: true, noMarshal: true},
{str: "2013-10-08T10:30:50+01", value: DateTimeTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:dateTime.tz"},
{str: "2013-10-08T10:30:50+01:23", value: DateTimeTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:dateTime.tz"},
{str: "2013-10-08T10:30:50+0123", value: DateTimeTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:dateTime.tz"},
{str: "2013-10-08T10:30:50-01", value: DateTimeTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:dateTime.tz"},
{str: "2013-10-08T10:30:50-01:23", value: DateTimeTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:dateTime.tz"},
{str: "2013-10-08T10:30:50-0123", value: DateTimeTest{}, wantUnmarshalErr: true, noMarshal: true, tag: "no:dateTime.tz"},
// dateTime.tz
{str: "2013-10-08T10:30:50", value: DateTimeTzTest{time.Date(2013, 10, 8, 10, 30, 50, 0, localLoc)}, noMarshal: true},
{str: "2013-10-08T10:30:50+01", value: DateTimeTzTest{time.Date(2013, 10, 8, 10, 30, 50, 0, time.FixedZone("+01:00", 3600))}, noMarshal: true},
{str: "2013-10-08T10:30:50+01:23", value: DateTimeTzTest{time.Date(2013, 10, 8, 10, 30, 50, 0, time.FixedZone("+01:23", 3600+23*60))}},
{str: "2013-10-08T10:30:50+0123", value: DateTimeTzTest{time.Date(2013, 10, 8, 10, 30, 50, 0, time.FixedZone("+01:23", 3600+23*60))}, noMarshal: true},
{str: "2013-10-08T10:30:50-01", value: DateTimeTzTest{time.Date(2013, 10, 8, 10, 30, 50, 0, time.FixedZone("-01:00", -3600))}, noMarshal: true},
{str: "2013-10-08T10:30:50-01:23", value: DateTimeTzTest{time.Date(2013, 10, 8, 10, 30, 50, 0, time.FixedZone("-01:23", -(3600+23*60)))}},
{str: "2013-10-08T10:30:50-0123", value: DateTimeTzTest{time.Date(2013, 10, 8, 10, 30, 50, 0, time.FixedZone("-01:23", -(3600+23*60)))}, noMarshal: true},
// boolean
{str: "0", value: BooleanTest(false)},
{str: "1", value: BooleanTest(true)},
{str: "false", value: BooleanTest(false), noMarshal: true},
{str: "true", value: BooleanTest(true), noMarshal: true},
{str: "no", value: BooleanTest(false), noMarshal: true},
{str: "yes", value: BooleanTest(true), noMarshal: true},
{str: "", value: BooleanTest(false), noMarshal: true, wantUnmarshalErr: true},
{str: "other", value: BooleanTest(false), noMarshal: true, wantUnmarshalErr: true},
{str: "2", value: BooleanTest(false), noMarshal: true, wantUnmarshalErr: true},
{str: "-1", value: BooleanTest(false), noMarshal: true, wantUnmarshalErr: true},
// bin.base64
{str: "", value: BinBase64Test{}},
{str: "YQ==", value: BinBase64Test("a")},
{str: "TG9uZ2VyIFN0cmluZy4=", value: BinBase64Test("Longer String.")},
{str: "TG9uZ2VyIEFsaWduZWQu", value: BinBase64Test("Longer Aligned.")},
// bin.hex
{str: "", value: BinHexTest{}},
{str: "61", value: BinHexTest("a")},
{str: "4c6f6e67657220537472696e672e", value: BinHexTest("Longer String.")},
{str: "4C6F6E67657220537472696E672E", value: BinHexTest("Longer String."), noMarshal: true},
}
// Generate extra test cases from convTests that implement duper.
var extras []testCase
for i := range tests {
if duper, ok := tests[i].value.(duper); ok {
dupes := duper.Dupe(tests[i].tag)
for _, duped := range dupes {
dupedCase := testCase(tests[i])
dupedCase.value = duped
extras = append(extras, dupedCase)
}
}
}
tests = append(tests, extras...)
for _, test := range tests {
if test.noMarshal {
} else if resultStr, err := test.value.Marshal(); err != nil && !test.wantMarshalErr {
t.Errorf("For %T marshal %v, want %q, got error: %v", test.value, test.value, test.str, err)
} else if err == nil && test.wantMarshalErr {
t.Errorf("For %T marshal %v, want error, got %q", test.value, test.value, resultStr)
} else if err == nil && resultStr != test.str {
t.Errorf("For %T marshal %v, want %q, got %q", test.value, test.value, test.str, resultStr)
}
if test.noUnMarshal {
} else if resultValue, err := test.value.Unmarshal(test.str); err != nil && !test.wantUnmarshalErr {
t.Errorf("For %T unmarshal %q, want %v, got error: %v", test.value, test.str, test.value, err)
} else if err == nil && test.wantUnmarshalErr {
t.Errorf("For %T unmarshal %q, want error, got %v", test.value, test.str, resultValue)
} else if err == nil && !test.value.Equal(resultValue) {
t.Errorf("For %T unmarshal %q, want %v, got %v", test.value, test.str, test.value, resultValue)
}
}
}


@ -21,6 +21,40 @@ var (
maxAgeRx = regexp.MustCompile("max-age=([0-9]+)")
)
const (
EventAlive = EventType(iota)
EventUpdate
EventByeBye
)
type EventType int8
func (et EventType) String() string {
switch et {
case EventAlive:
return "EventAlive"
case EventUpdate:
return "EventUpdate"
case EventByeBye:
return "EventByeBye"
default:
return fmt.Sprintf("EventUnknown(%d)", int8(et))
}
}
type Update struct {
// The USN of the service.
USN string
// What happened.
EventType EventType
// The entry, which is nil if the service was not known and
// EventType==EventByeBye. The contents of this must not be modified as it is
// shared with the registry and other listeners. Once created, the Registry
// does not modify the Entry value - any updates are replaced with a new
// Entry value.
Entry *Entry
}
type Entry struct {
// The address that the entry data was actually received from.
RemoteAddr string
@ -32,7 +66,7 @@ type Entry struct {
Server string
Host string
// Location of the UPnP root device description.
Location *url.URL
Location url.URL
// Despite BOOTID,CONFIGID being required fields, apparently they are not
// always set by devices. Set to -1 if not present.
@ -83,7 +117,7 @@ func newEntryFromRequest(r *http.Request) (*Entry, error) {
NT: r.Header.Get("NT"),
Server: r.Header.Get("SERVER"),
Host: r.Header.Get("HOST"),
Location: loc,
Location: *loc,
BootID: bootID,
ConfigID: configID,
SearchPort: uint16(searchPort),
@ -125,17 +159,73 @@ func parseUpnpIntHeader(headers http.Header, headerName string, def int32) (int3
var _ httpu.Handler = new(Registry)
// Registry maintains knowledge of discovered devices and services.
//
// NOTE: the interface for this is experimental and may change, or go away
// entirely.
type Registry struct {
lock sync.Mutex
byUSN map[string]*Entry
listenersLock sync.RWMutex
listeners map[chan<- Update]struct{}
}
func NewRegistry() *Registry {
return &Registry{
byUSN: make(map[string]*Entry),
byUSN: make(map[string]*Entry),
listeners: make(map[chan<- Update]struct{}),
}
}
// NewServerAndRegistry is a convenience function to create a registry, and an
// httpu server to pass it messages. Call ListenAndServe on the server for
// messages to be processed.
func NewServerAndRegistry() (*httpu.Server, *Registry) {
reg := NewRegistry()
srv := &httpu.Server{
Addr: ssdpUDP4Addr,
Multicast: true,
Handler: reg,
}
return srv, reg
}
func (reg *Registry) AddListener(c chan<- Update) {
reg.listenersLock.Lock()
defer reg.listenersLock.Unlock()
reg.listeners[c] = struct{}{}
}
func (reg *Registry) RemoveListener(c chan<- Update) {
reg.listenersLock.Lock()
defer reg.listenersLock.Unlock()
delete(reg.listeners, c)
}
func (reg *Registry) sendUpdate(u Update) {
reg.listenersLock.RLock()
defer reg.listenersLock.RUnlock()
for c := range reg.listeners {
c <- u
}
}
// GetService returns known service (or device) entries for the given service
// URN.
func (reg *Registry) GetService(serviceURN string) []*Entry {
// Currently assumes that the map is small, so we do a linear search rather
// than an indexed lookup, to avoid maintaining two maps.
var results []*Entry
reg.lock.Lock()
defer reg.lock.Unlock()
for _, entry := range reg.byUSN {
if entry.NT == serviceURN {
results = append(results, entry)
}
}
return results
}
// ServeMessage implements httpu.Handler, and uses SSDP NOTIFY requests to
// maintain the registry of devices and services.
func (reg *Registry) ServeMessage(r *http.Request) {
@ -156,7 +246,9 @@ func (reg *Registry) ServeMessage(r *http.Request) {
default:
err = fmt.Errorf("unknown NTS value: %q", nts)
}
log.Printf("In %s request from %s: %v", nts, r.RemoteAddr, err)
if err != nil {
log.Printf("goupnp/ssdp: failed to handle %s message from %s: %v", nts, r.RemoteAddr, err)
}
}
func (reg *Registry) handleNTSAlive(r *http.Request) error {
@ -166,9 +258,14 @@ func (reg *Registry) handleNTSAlive(r *http.Request) error {
}
reg.lock.Lock()
defer reg.lock.Unlock()
reg.byUSN[entry.USN] = entry
reg.lock.Unlock()
reg.sendUpdate(Update{
USN: entry.USN,
EventType: EventAlive,
Entry: entry,
})
return nil
}
@ -185,18 +282,31 @@ func (reg *Registry) handleNTSUpdate(r *http.Request) error {
entry.BootID = nextBootID
reg.lock.Lock()
defer reg.lock.Unlock()
reg.byUSN[entry.USN] = entry
reg.lock.Unlock()
reg.sendUpdate(Update{
USN: entry.USN,
EventType: EventUpdate,
Entry: entry,
})
return nil
}
func (reg *Registry) handleNTSByebye(r *http.Request) error {
reg.lock.Lock()
defer reg.lock.Unlock()
usn := r.Header.Get("USN")
delete(reg.byUSN, r.Header.Get("USN"))
reg.lock.Lock()
entry := reg.byUSN[usn]
delete(reg.byUSN, usn)
reg.lock.Unlock()
reg.sendUpdate(Update{
USN: usn,
EventType: EventByeBye,
Entry: entry,
})
return nil
}
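For orientation, the listener API added above can be driven roughly as follows. This is a minimal sketch, not code from the change set: it assumes the registry lives in the goupnp ssdp package (imported here as "github.com/huin/goupnp/ssdp"), that the httpu server's ListenAndServe blocks as the NewServerAndRegistry comment implies, and the helper name watchSSDP is made up for illustration.

package main

import (
	"log"

	"github.com/huin/goupnp/ssdp"
)

func watchSSDP() {
	srv, reg := ssdp.NewServerAndRegistry()

	// sendUpdate blocks on each listener channel, so give the channel some
	// buffer and keep draining it.
	updates := make(chan ssdp.Update, 16)
	reg.AddListener(updates)
	defer reg.RemoveListener(updates)

	go func() {
		if err := srv.ListenAndServe(); err != nil {
			log.Printf("ssdp: server stopped: %v", err)
		}
	}()

	// Runs until the process exits; the channel is never closed in this sketch.
	for u := range updates {
		log.Printf("ssdp: %v for USN %s (entry known: %v)", u.EventType, u.USN, u.Entry != nil)
	}
}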


@ -0,0 +1,9 @@
Copyright (c) Yasuhiro MATSUMOTO <mattn.jp@gmail.com>
MIT License (Expat)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@ -1,4 +1,4 @@
// +build darwin freebsd
// +build darwin freebsd openbsd netbsd
package isatty


@ -0,0 +1 @@
Paul Borman <borman@google.com>


@ -1,4 +1,4 @@
Copyright (c) 2009 Google Inc. All rights reserved.
Copyright (c) 2009,2014 Google Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are

30
Godeps/_workspace/src/github.com/pborman/uuid/json.go generated vendored Normal file

@ -0,0 +1,30 @@
// Copyright 2014 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import "errors"
func (u UUID) MarshalJSON() ([]byte, error) {
if len(u) == 0 {
return []byte(`""`), nil
}
return []byte(`"` + u.String() + `"`), nil
}
func (u *UUID) UnmarshalJSON(data []byte) error {
if len(data) == 0 || string(data) == `""` {
return nil
}
if len(data) < 2 || data[0] != '"' || data[len(data)-1] != '"' {
return errors.New("invalid UUID format")
}
data = data[1 : len(data)-1]
uu := Parse(string(data))
if uu == nil {
return errors.New("invalid UUID format")
}
*u = uu
return nil
}


@ -0,0 +1,32 @@
// Copyright 2014 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"encoding/json"
"reflect"
"testing"
)
var testUUID = Parse("f47ac10b-58cc-0372-8567-0e02b2c3d479")
func TestJSON(t *testing.T) {
type S struct {
ID1 UUID
ID2 UUID
}
s1 := S{ID1: testUUID}
data, err := json.Marshal(&s1)
if err != nil {
t.Fatal(err)
}
var s2 S
if err := json.Unmarshal(data, &s2); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(&s1, &s2) {
t.Errorf("got %#v, want %#v", s2, s1)
}
}


@ -0,0 +1,66 @@
// Copyright 2014 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"flag"
"runtime"
"testing"
"time"
)
// This test is only run when --regressions is passed on the go test line.
var regressions = flag.Bool("regressions", false, "run uuid regression tests")
// TestClockSeqRace tests for a particular race condition of returning two
// identical Version1 UUIDs. The duration of 1 minute was chosen because the race
// condition, before being fixed, nearly always occurred in under 30 seconds.
func TestClockSeqRace(t *testing.T) {
if !*regressions {
t.Skip("skipping regression tests")
}
duration := time.Minute
done := make(chan struct{})
defer close(done)
ch := make(chan UUID, 10000)
ncpu := runtime.NumCPU()
switch ncpu {
case 0, 1:
// We can't run the test effectively.
t.Skip("skipping race test, only one CPU detected")
return
default:
runtime.GOMAXPROCS(ncpu)
}
for i := 0; i < ncpu; i++ {
go func() {
for {
select {
case <-done:
return
case ch <- NewUUID():
}
}
}()
}
uuids := make(map[string]bool)
cnt := 0
start := time.Now()
for u := range ch {
s := u.String()
if uuids[s] {
t.Errorf("duplicate uuid after %d in %v: %s", cnt, time.Since(start), s)
return
}
uuids[s] = true
if time.Since(start) > duration {
return
}
cnt++
}
}

40
Godeps/_workspace/src/github.com/pborman/uuid/sql.go generated vendored Normal file

@ -0,0 +1,40 @@
// Copyright 2015 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"errors"
"fmt"
)
// Scan implements sql.Scanner so UUIDs can be read from databases transparently.
// Currently, database types that map to string and []byte are supported. Please
// consult database-specific driver documentation for matching types.
func (uuid *UUID) Scan(src interface{}) error {
switch src.(type) {
case string:
// see uuid.Parse for required string format
parsed := Parse(src.(string))
if parsed == nil {
return errors.New("Scan: invalid UUID format")
}
*uuid = parsed
case []byte:
// assumes a simple slice of bytes, just check validity and store
u := UUID(src.([]byte))
if u.Variant() == Invalid {
return errors.New("Scan: invalid UUID format")
}
*uuid = u
default:
return fmt.Errorf("Scan: unable to scan type %T into UUID", src)
}
return nil
}
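Since Scan satisfies sql.Scanner, a UUID column can be read straight out of database/sql rows. A minimal sketch follows, assuming the package is imported as github.com/pborman/uuid; the "sessions" table, its "id" column, and the package/function names are hypothetical placeholders, not part of this change.

package sessiondb // hypothetical package, for illustration only

import (
	"database/sql"

	"github.com/pborman/uuid"
)

// LookupSession shows the Scan method above in use: passing &id to
// database/sql's Row.Scan lets the driver hand back either a string or a
// []byte, which Scan parses and validates into a UUID.
func LookupSession(db *sql.DB, key string) (uuid.UUID, error) {
	var id uuid.UUID
	err := db.QueryRow("SELECT id FROM sessions WHERE key = ?", key).Scan(&id)
	if err != nil {
		return nil, err
	}
	return id, nil
}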


@ -0,0 +1,53 @@
// Copyright 2015 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"strings"
"testing"
)
func TestScan(t *testing.T) {
var stringTest string = "f47ac10b-58cc-0372-8567-0e02b2c3d479"
var byteTest []byte = Parse(stringTest)
var badTypeTest int = 6
var invalidTest string = "f47ac10b-58cc-0372-8567-0e02b2c3d4"
var invalidByteTest []byte = Parse(invalidTest)
var uuid UUID
err := (&uuid).Scan(stringTest)
if err != nil {
t.Fatal(err)
}
err = (&uuid).Scan(byteTest)
if err != nil {
t.Fatal(err)
}
err = (&uuid).Scan(badTypeTest)
if err == nil {
t.Error("int correctly parsed and shouldn't have")
}
if !strings.Contains(err.Error(), "unable to scan type") {
t.Error("attempting to parse an int returned an incorrect error message")
}
err = (&uuid).Scan(invalidTest)
if err == nil {
t.Error("invalid uuid was parsed without error")
}
if !strings.Contains(err.Error(), "invalid UUID") {
t.Error("attempting to parse an invalid UUID returned an incorrect error message")
}
err = (&uuid).Scan(invalidByteTest)
if err == nil {
t.Error("invalid byte uuid was parsed without error")
}
if !strings.Contains(err.Error(), "invalid UUID") {
t.Error("attempting to parse an invalid byte UUID returned an incorrect error message")
}
}


@ -40,15 +40,15 @@ func (t Time) UnixTime() (sec, nsec int64) {
}
// GetTime returns the current Time (100s of nanoseconds since 15 Oct 1582) and
// adjusts the clock sequence as needed. An error is returned if the current
// time cannot be determined.
func GetTime() (Time, error) {
// clock sequence as well as adjusting the clock sequence as needed. An error
// is returned if the current time cannot be determined.
func GetTime() (Time, uint16, error) {
defer mu.Unlock()
mu.Lock()
return getTime()
}
func getTime() (Time, error) {
func getTime() (Time, uint16, error) {
t := timeNow()
// If we don't have a clock sequence already, set one.
@ -63,7 +63,7 @@ func getTime() (Time, error) {
clock_seq = ((clock_seq + 1) & 0x3fff) | 0x8000
}
lasttime = now
return Time(now), nil
return Time(now), clock_seq, nil
}
// ClockSequence returns the current clock sequence, generating one if not


@ -19,7 +19,7 @@ func NewUUID() UUID {
SetNodeInterface("")
}
now, err := GetTime()
now, seq, err := GetTime()
if err != nil {
return nil
}
@ -34,7 +34,7 @@ func NewUUID() UUID {
binary.BigEndian.PutUint32(uuid[0:], time_low)
binary.BigEndian.PutUint16(uuid[4:], time_mid)
binary.BigEndian.PutUint16(uuid[6:], time_hi)
binary.BigEndian.PutUint16(uuid[8:], clock_seq)
binary.BigEndian.PutUint16(uuid[8:], seq)
copy(uuid[10:], nodeID)
return uuid

112
Makefile

@ -2,25 +2,118 @@
# with Go source code. If you know what GOPATH is then you probably
# don't need to bother with make.
.PHONY: geth evm mist all test travis-test-with-coverage clean
.PHONY: geth geth-cross evm all test travis-test-with-coverage xgo clean
.PHONY: geth-linux geth-linux-arm geth-linux-386 geth-linux-amd64
.PHONY: geth-darwin geth-darwin-386 geth-darwin-amd64
.PHONY: geth-windows geth-windows-386 geth-windows-amd64
.PHONY: geth-android geth-android-16 geth-android-21
GOBIN = build/bin
MODE ?= default
GO ?= latest
geth:
build/env.sh go install -v $(shell build/ldflags.sh) ./cmd/geth
build/env.sh go install -v $(shell build/flags.sh) ./cmd/geth
@echo "Done building."
@echo "Run \"$(GOBIN)/geth\" to launch geth."
geth-cross: geth-linux geth-darwin geth-windows geth-android
@echo "Full cross compilation done:"
@ls -l $(GOBIN)/geth-*
geth-linux: xgo geth-linux-arm geth-linux-386 geth-linux-amd64
@echo "Linux cross compilation done:"
@ls -l $(GOBIN)/geth-linux-*
geth-linux-386: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=linux/386 -v $(shell build/flags.sh) ./cmd/geth
@echo "Linux 386 cross compilation done:"
@ls -l $(GOBIN)/geth-linux-* | grep 386
geth-linux-amd64: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=linux/amd64 -v $(shell build/flags.sh) ./cmd/geth
@echo "Linux amd64 cross compilation done:"
@ls -l $(GOBIN)/geth-linux-* | grep amd64
geth-linux-arm: geth-linux-arm-5 geth-linux-arm-6 geth-linux-arm-7 geth-linux-arm64
@echo "Linux ARM cross compilation done:"
@ls -l $(GOBIN)/geth-linux-* | grep arm
geth-linux-arm-5: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=linux/arm-5 -v $(shell build/flags.sh) ./cmd/geth
@echo "Linux ARMv5 cross compilation done:"
@ls -l $(GOBIN)/geth-linux-* | grep arm-5
geth-linux-arm-6: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=linux/arm-6 -v $(shell build/flags.sh) ./cmd/geth
@echo "Linux ARMv6 cross compilation done:"
@ls -l $(GOBIN)/geth-linux-* | grep arm-6
geth-linux-arm-7: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=linux/arm-7 -v $(shell build/flags.sh) ./cmd/geth
@echo "Linux ARMv7 cross compilation done:"
@ls -l $(GOBIN)/geth-linux-* | grep arm-7
geth-linux-arm64: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=linux/arm64 -v $(shell build/flags.sh) ./cmd/geth
@echo "Linux ARM64 cross compilation done:"
@ls -l $(GOBIN)/geth-linux-* | grep arm64
geth-darwin: geth-darwin-386 geth-darwin-amd64
@echo "Darwin cross compilation done:"
@ls -l $(GOBIN)/geth-darwin-*
geth-darwin-386: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=darwin/386 -v $(shell build/flags.sh) ./cmd/geth
@echo "Darwin 386 cross compilation done:"
@ls -l $(GOBIN)/geth-darwin-* | grep 386
geth-darwin-amd64: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=darwin/amd64 -v $(shell build/flags.sh) ./cmd/geth
@echo "Darwin amd64 cross compilation done:"
@ls -l $(GOBIN)/geth-darwin-* | grep amd64
geth-windows: xgo geth-windows-386 geth-windows-amd64
@echo "Windows cross compilation done:"
@ls -l $(GOBIN)/geth-windows-*
geth-windows-386: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=windows/386 -v $(shell build/flags.sh) ./cmd/geth
@echo "Windows 386 cross compilation done:"
@ls -l $(GOBIN)/geth-windows-* | grep 386
geth-windows-amd64: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=windows/amd64 -v $(shell build/flags.sh) ./cmd/geth
@echo "Windows amd64 cross compilation done:"
@ls -l $(GOBIN)/geth-windows-* | grep amd64
geth-android: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=android/* -v $(shell build/flags.sh) ./cmd/geth
@echo "Android cross compilation done:"
@ls -l $(GOBIN)/geth-android-*
geth-ios: geth-ios-arm-7 geth-ios-arm64
@echo "iOS cross compilation done:"
@ls -l $(GOBIN)/geth-ios-*
geth-ios-arm-7: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=ios/arm-7 -v $(shell build/flags.sh) ./cmd/geth
@echo "iOS ARMv7 cross compilation done:"
@ls -l $(GOBIN)/geth-ios-* | grep arm-7
geth-ios-arm64: xgo
build/env.sh $(GOBIN)/xgo --go=$(GO) --buildmode=$(MODE) --dest=$(GOBIN) --targets=ios-7.0/arm64 -v $(shell build/flags.sh) ./cmd/geth
@echo "iOS ARM64 cross compilation done:"
@ls -l $(GOBIN)/geth-ios-* | grep arm64
evm:
build/env.sh $(GOROOT)/bin/go install -v $(shell build/ldflags.sh) ./cmd/evm
build/env.sh $(GOROOT)/bin/go install -v $(shell build/flags.sh) ./cmd/evm
@echo "Done building."
@echo "Run \"$(GOBIN)/evm to start the evm."
mist:
build/env.sh go install -v $(shell build/ldflags.sh) ./cmd/mist
@echo "Done building."
@echo "Run \"$(GOBIN)/mist --asset_path=cmd/mist/assets\" to launch mist."
all:
build/env.sh go install -v $(shell build/ldflags.sh) ./...
build/env.sh go install -v $(shell build/flags.sh) ./...
test: all
build/env.sh go test ./...
@ -28,5 +121,8 @@ test: all
travis-test-with-coverage: all
build/env.sh build/test-global-coverage.sh
xgo:
build/env.sh go get github.com/karalabe/xgo
clean:
rm -fr build/_workspace/pkg/ Godeps/_workspace/pkg $(GOBIN)/*


@ -30,7 +30,7 @@ For prerequisites and detailed build instructions please read the
[Installation Instructions](https://github.com/ethereum/go-ethereum/wiki/Building-Ethereum)
on the wiki.
Building geth requires two external dependencies, Go and GMP.
Building geth requires both a Go and a C compiler.
You can install them using your favourite package manager.
Once the dependencies are installed, run

1
VERSION Normal file

@ -0,0 +1 @@
1.3.3


@ -36,7 +36,7 @@ import (
type Method struct {
Name string
Const bool
Input []Argument
Inputs []Argument
Return Type // not yet implemented
}
@ -49,9 +49,9 @@ type Method struct {
// Please note that "int" is substitute for its canonical representation "int256"
func (m Method) String() (out string) {
out += m.Name
types := make([]string, len(m.Input))
types := make([]string, len(m.Inputs))
i := 0
for _, input := range m.Input {
for _, input := range m.Inputs {
types[i] = input.Type.String()
i++
}
@ -104,7 +104,7 @@ func (abi ABI) pack(name string, args ...interface{}) ([]byte, error) {
var ret []byte
for i, a := range args {
input := method.Input[i]
input := method.Inputs[i]
packed, err := input.Type.pack(a)
if err != nil {
@ -129,8 +129,8 @@ func (abi ABI) Pack(name string, args ...interface{}) ([]byte, error) {
}
// start with argument count match
if len(args) != len(method.Input) {
return nil, fmt.Errorf("argument count mismatch: %d for %d", len(args), len(method.Input))
if len(args) != len(method.Inputs) {
return nil, fmt.Errorf("argument count mismatch: %d for %d", len(args), len(method.Inputs))
}
arguments, err := abi.pack(name, args...)


@ -18,35 +18,38 @@ package abi
import (
"bytes"
"fmt"
"log"
"math/big"
"reflect"
"strings"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
)
const jsondata = `
[
{ "name" : "balance", "const" : true },
{ "name" : "send", "const" : false, "input" : [ { "name" : "amount", "type" : "uint256" } ] }
{ "name" : "send", "const" : false, "inputs" : [ { "name" : "amount", "type" : "uint256" } ] }
]`
const jsondata2 = `
[
{ "name" : "balance", "const" : true },
{ "name" : "send", "const" : false, "input" : [ { "name" : "amount", "type" : "uint256" } ] },
{ "name" : "test", "const" : false, "input" : [ { "name" : "number", "type" : "uint32" } ] },
{ "name" : "string", "const" : false, "input" : [ { "name" : "input", "type" : "string" } ] },
{ "name" : "bool", "const" : false, "input" : [ { "name" : "input", "type" : "bool" } ] },
{ "name" : "address", "const" : false, "input" : [ { "name" : "input", "type" : "address" } ] },
{ "name" : "string32", "const" : false, "input" : [ { "name" : "input", "type" : "string32" } ] },
{ "name" : "uint64[2]", "const" : false, "input" : [ { "name" : "input", "type" : "uint64[2]" } ] },
{ "name" : "uint64[]", "const" : false, "input" : [ { "name" : "input", "type" : "uint64[]" } ] },
{ "name" : "foo", "const" : false, "input" : [ { "name" : "input", "type" : "uint32" } ] },
{ "name" : "bar", "const" : false, "input" : [ { "name" : "input", "type" : "uint32" }, { "name" : "string", "type" : "uint16" } ] },
{ "name" : "slice", "const" : false, "input" : [ { "name" : "input", "type" : "uint32[2]" } ] },
{ "name" : "slice256", "const" : false, "input" : [ { "name" : "input", "type" : "uint256[2]" } ] }
{ "name" : "send", "const" : false, "inputs" : [ { "name" : "amount", "type" : "uint256" } ] },
{ "name" : "test", "const" : false, "inputs" : [ { "name" : "number", "type" : "uint32" } ] },
{ "name" : "string", "const" : false, "inputs" : [ { "name" : "inputs", "type" : "string" } ] },
{ "name" : "bool", "const" : false, "inputs" : [ { "name" : "inputs", "type" : "bool" } ] },
{ "name" : "address", "const" : false, "inputs" : [ { "name" : "inputs", "type" : "address" } ] },
{ "name" : "string32", "const" : false, "inputs" : [ { "name" : "inputs", "type" : "string32" } ] },
{ "name" : "uint64[2]", "const" : false, "inputs" : [ { "name" : "inputs", "type" : "uint64[2]" } ] },
{ "name" : "uint64[]", "const" : false, "inputs" : [ { "name" : "inputs", "type" : "uint64[]" } ] },
{ "name" : "foo", "const" : false, "inputs" : [ { "name" : "inputs", "type" : "uint32" } ] },
{ "name" : "bar", "const" : false, "inputs" : [ { "name" : "inputs", "type" : "uint32" }, { "name" : "string", "type" : "uint16" } ] },
{ "name" : "slice", "const" : false, "inputs" : [ { "name" : "inputs", "type" : "uint32[2]" } ] },
{ "name" : "slice256", "const" : false, "inputs" : [ { "name" : "inputs", "type" : "uint256[2]" } ] }
]`
func TestType(t *testing.T) {
@ -344,3 +347,49 @@ func TestPackSliceBig(t *testing.T) {
t.Errorf("expected %x got %x", sig, packed)
}
}
func ExampleJSON() {
const definition = `[{"constant":true,"inputs":[{"name":"","type":"address"}],"name":"isBar","outputs":[{"name":"","type":"bool"}],"type":"function"}]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
log.Fatalln(err)
}
out, err := abi.Pack("isBar", common.HexToAddress("01"))
if err != nil {
log.Fatalln(err)
}
fmt.Printf("%x\n", out)
// Output:
// 1f2c40920000000000000000000000000000000000000000000000000000000000000001
}
func TestBytes(t *testing.T) {
const definition = `[
{ "name" : "balance", "const" : true, "inputs" : [ { "name" : "address", "type" : "bytes20" } ] },
{ "name" : "send", "const" : false, "inputs" : [ { "name" : "amount", "type" : "uint256" } ] }
]`
abi, err := JSON(strings.NewReader(definition))
if err != nil {
t.Fatal(err)
}
ok := make([]byte, 20)
_, err = abi.Pack("balance", ok)
if err != nil {
t.Error(err)
}
toosmall := make([]byte, 19)
_, err = abi.Pack("balance", toosmall)
if err != nil {
t.Error(err)
}
toobig := make([]byte, 21)
_, err = abi.Pack("balance", toobig)
if err == nil {
t.Error("expected error")
}
}


@ -43,7 +43,7 @@ type Type struct {
stringKind string // holds the unparsed string for deriving signatures
}
// New type returns a fully parsed Type given by the input string or an error if it can't be parsed.
// NewType returns a fully parsed Type given by the input string or an error if it can't be parsed.
//
// Strings can be in the format of:
//
@ -130,6 +130,10 @@ func NewType(t string) (typ Type, err error) {
if vsize > 0 {
typ.Size = 32
}
case "bytes":
typ.Kind = reflect.Slice
typ.Type = byte_ts
typ.Size = vsize
default:
return Type{}, fmt.Errorf("unsupported arg type: %s", t)
}
@ -200,7 +204,13 @@ func (t Type) pack(v interface{}) ([]byte, error) {
} else {
return common.LeftPadBytes(common.Big0.Bytes(), 32), nil
}
case reflect.Array:
if v, ok := value.Interface().(common.Address); ok {
return t.pack(v[:])
} else if v, ok := value.Interface().(common.Hash); ok {
return t.pack(v[:])
}
}
panic("unreached")
return nil, fmt.Errorf("ABI: bad input given %T", value.Kind())
}

22
build/flags.sh Executable file

@ -0,0 +1,22 @@
#!/bin/sh
set -e
if [ ! -f "build/env.sh" ]; then
echo "$0 must be run from the root of the repository."
exit 2
fi
# Since Go 1.5, the separator char for link time assignments
# is '=' and using ' ' prints a warning. However, Go < 1.5 does
# not support using '='.
sep=$(go version | awk '{ if ($3 >= "go1.5" || index($3, "devel")) print "="; else print " "; }' -)
# set gitCommit when running from a Git checkout.
if [ -f ".git/HEAD" ]; then
echo "-ldflags '-X main.gitCommit$sep$(git rev-parse HEAD)'"
fi
if [ ! -z "$GO_OPENCL" ]; then
echo "-tags opencl"
fi


@ -1,13 +0,0 @@
#!/bin/sh
set -e
if [ ! -f "build/env.sh" ]; then
echo "$0 must be run from the root of the repository."
exit 2
fi
# set gitCommit when running from a Git checkout.
if [ -f ".git/HEAD" ]; then
echo "-ldflags '-X main.gitCommit $(git rev-parse HEAD)'"
fi


@ -46,9 +46,10 @@ var (
skipPrefixes = []string{
// boring stuff
"Godeps/", "tests/files/", "build/",
// don't relicense vendored packages
// don't relicense vendored sources
"crypto/sha3/", "crypto/ecies/", "logger/glog/",
"crypto/curve.go",
"trie/arc.go",
}
// paths with this prefix are licensed as GPL. all other files are LGPL.


@ -80,12 +80,17 @@ var (
Name: "sysstat",
Usage: "display system stats",
}
VerbosityFlag = cli.IntFlag{
Name: "verbosity",
Usage: "sets the verbosity level",
}
)
func init() {
app = utils.NewApp("0.2", "the evm command line interface")
app.Flags = []cli.Flag{
DebugFlag,
VerbosityFlag,
ForceJitFlag,
DisableJitFlag,
SysStatFlag,
@ -105,9 +110,10 @@ func run(ctx *cli.Context) {
vm.EnableJit = !ctx.GlobalBool(DisableJitFlag.Name)
glog.SetToStderr(true)
glog.SetV(ctx.GlobalInt(VerbosityFlag.Name))
db, _ := ethdb.NewMemDatabase()
statedb := state.New(common.Hash{}, db)
statedb, _ := state.New(common.Hash{}, db)
sender := statedb.CreateAccount(common.StringToAddress("sender"))
receiver := statedb.CreateAccount(common.StringToAddress("receiver"))
receiver.SetCode(common.Hex2Bytes(ctx.GlobalString(CodeFlag.Name)))
@ -179,18 +185,20 @@ func NewEnv(state *state.StateDB, transactor common.Address, value *big.Int) *VM
}
}
func (self *VMEnv) State() *state.StateDB { return self.state }
func (self *VMEnv) Origin() common.Address { return *self.transactor }
func (self *VMEnv) BlockNumber() *big.Int { return common.Big0 }
func (self *VMEnv) Coinbase() common.Address { return *self.transactor }
func (self *VMEnv) Time() *big.Int { return self.time }
func (self *VMEnv) Difficulty() *big.Int { return common.Big1 }
func (self *VMEnv) BlockHash() []byte { return make([]byte, 32) }
func (self *VMEnv) Value() *big.Int { return self.value }
func (self *VMEnv) GasLimit() *big.Int { return big.NewInt(1000000000) }
func (self *VMEnv) VmType() vm.Type { return vm.StdVmTy }
func (self *VMEnv) Depth() int { return 0 }
func (self *VMEnv) SetDepth(i int) { self.depth = i }
func (self *VMEnv) Db() vm.Database { return self.state }
func (self *VMEnv) MakeSnapshot() vm.Database { return self.state.Copy() }
func (self *VMEnv) SetSnapshot(db vm.Database) { self.state.Set(db.(*state.StateDB)) }
func (self *VMEnv) Origin() common.Address { return *self.transactor }
func (self *VMEnv) BlockNumber() *big.Int { return common.Big0 }
func (self *VMEnv) Coinbase() common.Address { return *self.transactor }
func (self *VMEnv) Time() *big.Int { return self.time }
func (self *VMEnv) Difficulty() *big.Int { return common.Big1 }
func (self *VMEnv) BlockHash() []byte { return make([]byte, 32) }
func (self *VMEnv) Value() *big.Int { return self.value }
func (self *VMEnv) GasLimit() *big.Int { return big.NewInt(1000000000) }
func (self *VMEnv) VmType() vm.Type { return vm.StdVmTy }
func (self *VMEnv) Depth() int { return 0 }
func (self *VMEnv) SetDepth(i int) { self.depth = i }
func (self *VMEnv) GetHash(n uint64) common.Hash {
if self.block.Number().Cmp(big.NewInt(int64(n))) == 0 {
return self.block.Hash()
@ -203,34 +211,29 @@ func (self *VMEnv) AddStructLog(log vm.StructLog) {
func (self *VMEnv) StructLogs() []vm.StructLog {
return self.logs
}
func (self *VMEnv) AddLog(log *state.Log) {
func (self *VMEnv) AddLog(log *vm.Log) {
self.state.AddLog(log)
}
func (self *VMEnv) CanTransfer(from vm.Account, balance *big.Int) bool {
return from.Balance().Cmp(balance) >= 0
func (self *VMEnv) CanTransfer(from common.Address, balance *big.Int) bool {
return self.state.GetBalance(from).Cmp(balance) >= 0
}
func (self *VMEnv) Transfer(from, to vm.Account, amount *big.Int) error {
return vm.Transfer(from, to, amount)
func (self *VMEnv) Transfer(from, to vm.Account, amount *big.Int) {
core.Transfer(from, to, amount)
}
func (self *VMEnv) vm(addr *common.Address, data []byte, gas, price, value *big.Int) *core.Execution {
return core.NewExecution(self, addr, data, gas, price, value)
func (self *VMEnv) Call(caller vm.ContractRef, addr common.Address, data []byte, gas, price, value *big.Int) ([]byte, error) {
self.Gas = gas
return core.Call(self, caller, addr, data, gas, price, value)
}
func (self *VMEnv) Call(caller vm.ContextRef, addr common.Address, data []byte, gas, price, value *big.Int) ([]byte, error) {
exe := self.vm(&addr, data, gas, price, value)
ret, err := exe.Call(addr, caller)
self.Gas = exe.Gas
return ret, err
}
func (self *VMEnv) CallCode(caller vm.ContextRef, addr common.Address, data []byte, gas, price, value *big.Int) ([]byte, error) {
a := caller.Address()
exe := self.vm(&a, data, gas, price, value)
return exe.Call(addr, caller)
func (self *VMEnv) CallCode(caller vm.ContractRef, addr common.Address, data []byte, gas, price, value *big.Int) ([]byte, error) {
return core.CallCode(self, caller, addr, data, gas, price, value)
}
func (self *VMEnv) Create(caller vm.ContextRef, data []byte, gas, price, value *big.Int) ([]byte, error, vm.ContextRef) {
exe := self.vm(nil, data, gas, price, value)
return exe.Create(caller)
func (self *VMEnv) DelegateCall(caller vm.ContractRef, addr common.Address, data []byte, gas, price *big.Int) ([]byte, error) {
return core.DelegateCall(self, caller, addr, data, gas, price)
}
func (self *VMEnv) Create(caller vm.ContractRef, data []byte, gas, price, value *big.Int) ([]byte, common.Address, error) {
return core.Create(self, caller, data, gas, price, value)
}


@ -22,7 +22,6 @@ import (
"github.com/codegangsta/cli"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/eth"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/tests"
@ -92,7 +91,6 @@ func runBlockTest(ctx *cli.Context) {
if err != nil {
utils.Fatalf("%v", err)
}
defer ethereum.Stop()
if rpc {
fmt.Println("Block Test post state validated, starting RPC interface.")
startEth(ctx, ethereum)
@ -103,34 +101,35 @@ func runBlockTest(ctx *cli.Context) {
func runOneBlockTest(ctx *cli.Context, test *tests.BlockTest) (*eth.Ethereum, error) {
cfg := utils.MakeEthConfig(ClientIdentifier, Version, ctx)
cfg.NewDB = func(path string) (common.Database, error) { return ethdb.NewMemDatabase() }
db, _ := ethdb.NewMemDatabase()
cfg.NewDB = func(path string) (ethdb.Database, error) { return db, nil }
cfg.MaxPeers = 0 // disable network
cfg.Shh = false // disable whisper
cfg.NAT = nil // disable port mapping
ethereum, err := eth.New(cfg)
if err != nil {
return nil, err
}
// if err := ethereum.Start(); err != nil {
// return nil, err
// }
// import the genesis block
ethereum.ResetWithGenesisBlock(test.Genesis)
// import pre accounts
statedb, err := test.InsertPreState(ethereum)
_, err = test.InsertPreState(db, cfg.AccountManager)
if err != nil {
return ethereum, fmt.Errorf("InsertPreState: %v", err)
}
if err := test.TryBlocksInsert(ethereum.ChainManager()); err != nil {
cm := ethereum.BlockChain()
validBlocks, err := test.TryBlocksInsert(cm)
if err != nil {
return ethereum, fmt.Errorf("Block Test load error: %v", err)
}
if err := test.ValidatePostState(statedb); err != nil {
newDB, err := cm.State()
if err != nil {
return ethereum, fmt.Errorf("Block Test get state error: %v", err)
}
if err := test.ValidatePostState(newDB); err != nil {
return ethereum, fmt.Errorf("post state validation failed: %v", err)
}
return ethereum, nil
return ethereum, test.ValidateImportedHeaders(cm, validBlocks)
}


@ -29,6 +29,7 @@ import (
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/logger/glog"
)
@ -178,7 +179,11 @@ func dump(ctx *cli.Context) {
fmt.Println("{}")
utils.Fatalf("block not found")
} else {
state := state.New(block.Root(), chainDb)
state, err := state.New(block.Root(), chainDb)
if err != nil {
utils.Fatalf("could not create new state: %v", err)
return
}
fmt.Printf("%s\n", state.Dump())
}
}
@ -191,7 +196,7 @@ func hashish(x string) bool {
return err != nil
}
func closeAll(dbs ...common.Database) {
func closeAll(dbs ...ethdb.Database) {
for _, db := range dbs {
db.Close()
}


@ -30,7 +30,6 @@ import (
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/docserver"
"github.com/ethereum/go-ethereum/common/natspec"
"github.com/ethereum/go-ethereum/common/registrar"
"github.com/ethereum/go-ethereum/eth"
@ -45,9 +44,12 @@ import (
"github.com/robertkrimen/otto"
)
var passwordRegexp = regexp.MustCompile("personal.[nu]")
const passwordRepl = ""
var (
passwordRegexp = regexp.MustCompile("personal.[nu]")
leadingSpace = regexp.MustCompile("^ ")
onlyws = regexp.MustCompile("^\\s*$")
exit = regexp.MustCompile("^\\s*exit\\s*;*\\s*$")
)
type prompter interface {
AppendHistory(string)
@ -74,7 +76,6 @@ func (r dumbterm) PasswordPrompt(p string) (string, error) {
func (r dumbterm) AppendHistory(string) {}
type jsre struct {
ds *docserver.DocServer
re *re.JSRE
ethereum *eth.Ethereum
xeth *xeth.XEth
@ -121,7 +122,7 @@ func keywordCompleter(line string) []string {
}
func apiWordCompleter(line string, pos int) (head string, completions []string, tail string) {
if len(line) == 0 {
if len(line) == 0 || pos == 0 {
return "", nil, ""
}
@ -145,14 +146,13 @@ func apiWordCompleter(line string, pos int) (head string, completions []string,
return begin, completionWords, end
}
func newLightweightJSRE(libPath string, client comms.EthereumClient, interactive bool) *jsre {
func newLightweightJSRE(docRoot string, client comms.EthereumClient, datadir string, interactive bool) *jsre {
js := &jsre{ps1: "> "}
js.wait = make(chan *big.Int)
js.client = client
js.ds = docserver.New("/")
// update state in separate forever blocks
js.re = re.New(libPath)
js.re = re.New(docRoot)
if err := js.apiBindings(js); err != nil {
utils.Fatalf("Unable to initialize console - %v", err)
}
@ -161,14 +161,14 @@ func newLightweightJSRE(libPath string, client comms.EthereumClient, interactive
js.prompter = dumbterm{bufio.NewReader(os.Stdin)}
} else {
lr := liner.NewLiner()
js.withHistory(func(hist *os.File) { lr.ReadHistory(hist) })
js.withHistory(datadir, func(hist *os.File) { lr.ReadHistory(hist) })
lr.SetCtrlCAborts(true)
js.loadAutoCompletion()
lr.SetWordCompleter(apiWordCompleter)
lr.SetTabCompletionStyle(liner.TabPrints)
js.prompter = lr
js.atexit = func() {
js.withHistory(func(hist *os.File) { hist.Truncate(0); lr.WriteHistory(hist) })
js.withHistory(datadir, func(hist *os.File) { hist.Truncate(0); lr.WriteHistory(hist) })
lr.Close()
close(js.wait)
}
@ -176,14 +176,13 @@ func newLightweightJSRE(libPath string, client comms.EthereumClient, interactive
return js
}
func newJSRE(ethereum *eth.Ethereum, libPath, corsDomain string, client comms.EthereumClient, interactive bool, f xeth.Frontend) *jsre {
func newJSRE(ethereum *eth.Ethereum, docRoot, corsDomain string, client comms.EthereumClient, interactive bool, f xeth.Frontend) *jsre {
js := &jsre{ethereum: ethereum, ps1: "> "}
// set default cors domain used by startRpc from CLI flag
js.corsDomain = corsDomain
if f == nil {
f = js
}
js.ds = docserver.New("/")
js.xeth = xeth.New(ethereum, f)
js.wait = js.xeth.UpdateState()
js.client = client
@ -194,7 +193,7 @@ func newJSRE(ethereum *eth.Ethereum, libPath, corsDomain string, client comms.Et
}
// update state in separate forever blocks
js.re = re.New(libPath)
js.re = re.New(docRoot)
if err := js.apiBindings(f); err != nil {
utils.Fatalf("Unable to connect - %v", err)
}
@ -203,14 +202,14 @@ func newJSRE(ethereum *eth.Ethereum, libPath, corsDomain string, client comms.Et
js.prompter = dumbterm{bufio.NewReader(os.Stdin)}
} else {
lr := liner.NewLiner()
js.withHistory(func(hist *os.File) { lr.ReadHistory(hist) })
js.withHistory(ethereum.DataDir, func(hist *os.File) { lr.ReadHistory(hist) })
lr.SetCtrlCAborts(true)
js.loadAutoCompletion()
lr.SetWordCompleter(apiWordCompleter)
lr.SetTabCompletionStyle(liner.TabPrints)
js.prompter = lr
js.atexit = func() {
js.withHistory(func(hist *os.File) { hist.Truncate(0); lr.WriteHistory(hist) })
js.withHistory(ethereum.DataDir, func(hist *os.File) { hist.Truncate(0); lr.WriteHistory(hist) })
lr.Close()
close(js.wait)
}
@ -244,14 +243,14 @@ func (self *jsre) batch(statement string) {
// show summary of current geth instance
func (self *jsre) welcome() {
self.re.Run(`
(function () {
console.log('instance: ' + web3.version.client);
console.log(' datadir: ' + admin.datadir);
console.log("coinbase: " + eth.coinbase);
var ts = 1000 * eth.getBlock(eth.blockNumber).timestamp;
console.log("at block: " + eth.blockNumber + " (" + new Date(ts) + ")");
})();
`)
(function () {
console.log('instance: ' + web3.version.client);
console.log(' datadir: ' + admin.datadir);
console.log("coinbase: " + eth.coinbase);
var ts = 1000 * eth.getBlock(eth.blockNumber).timestamp;
console.log("at block: " + eth.blockNumber + " (" + new Date(ts) + ")");
})();
`)
if modules, err := self.supportedApis(); err == nil {
loadedModules := make([]string, 0)
for api, version := range modules {
@ -330,13 +329,21 @@ func (js *jsre) apiBindings(f xeth.Frontend) error {
utils.Fatalf("Error setting namespaces: %v", err)
}
js.re.Run(`var GlobalRegistrar = eth.contract(` + registrar.GlobalRegistrarAbi + `); registrar = GlobalRegistrar.at("` + registrar.GlobalRegistrarAddr + `");`)
js.re.Run(`var GlobalRegistrar = eth.contract(` + registrar.GlobalRegistrarAbi + `); registrar = GlobalRegistrar.at("` + registrar.GlobalRegistrarAddr + `");`)
return nil
}
func (self *jsre) AskPassword() (string, bool) {
pass, err := self.PasswordPrompt("Passphrase: ")
if err != nil {
return "", false
}
return pass, true
}
func (self *jsre) ConfirmTransaction(tx string) bool {
if self.ethereum.NatSpec {
notice := natspec.GetNotice(self.xeth, tx, self.ds)
notice := natspec.GetNotice(self.xeth, tx, self.ethereum.HTTPClient())
fmt.Println(notice)
answer, _ := self.Prompt("Confirm Transaction [y/n]")
return strings.HasPrefix(strings.Trim(answer, " "), "y")
@ -405,18 +412,17 @@ func (self *jsre) interactive() {
fmt.Println("caught interrupt, exiting")
return
case input, ok := <-inputln:
if !ok || indentCount <= 0 && input == "exit" {
if !ok || indentCount <= 0 && exit.MatchString(input) {
return
}
if input == "" {
if onlyws.MatchString(input) {
continue
}
str += input + "\n"
self.setIndent()
if indentCount <= 0 {
hist := hidepassword(str[:len(str)-1])
if len(hist) > 0 {
self.AppendHistory(hist)
if mustLogInHistory(str) {
self.AppendHistory(str[:len(str)-1])
}
self.parseInput(str)
str = ""
@ -425,20 +431,13 @@ func (self *jsre) interactive() {
}
}
func hidepassword(input string) string {
if passwordRegexp.MatchString(input) {
return passwordRepl
} else {
return input
}
func mustLogInHistory(input string) bool {
return len(input) == 0 ||
passwordRegexp.MatchString(input) ||
!leadingSpace.MatchString(input)
}
func (self *jsre) withHistory(op func(*os.File)) {
datadir := common.DefaultDataDir()
if self.ethereum != nil {
datadir = self.ethereum.DataDir
}
func (self *jsre) withHistory(datadir string, op func(*os.File)) {
hist, err := os.OpenFile(filepath.Join(datadir, "history"), os.O_RDWR|os.O_CREATE, os.ModePerm)
if err != nil {
fmt.Printf("unable to open history file: %v\n", err)


@ -31,7 +31,7 @@ import (
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/compiler"
"github.com/ethereum/go-ethereum/common/docserver"
"github.com/ethereum/go-ethereum/common/httpclient"
"github.com/ethereum/go-ethereum/common/natspec"
"github.com/ethereum/go-ethereum/common/registrar"
"github.com/ethereum/go-ethereum/core"
@ -62,7 +62,7 @@ var (
type testjethre struct {
*jsre
lastConfirm string
ds *docserver.DocServer
client *httpclient.HTTPClient
}
func (self *testjethre) UnlockAccount(acc []byte) bool {
@ -75,7 +75,7 @@ func (self *testjethre) UnlockAccount(acc []byte) bool {
func (self *testjethre) ConfirmTransaction(tx string) bool {
if self.ethereum.NatSpec {
self.lastConfirm = natspec.GetNotice(self.xeth, tx, self.ds)
self.lastConfirm = natspec.GetNotice(self.xeth, tx, self.client)
}
return true
}
@ -92,7 +92,7 @@ func testREPL(t *testing.T, config func(*eth.Config)) (string, *testjethre, *eth
db, _ := ethdb.NewMemDatabase()
core.WriteGenesisBlockForTesting(db, common.HexToAddress(testAddress), common.String2Big(testBalance))
core.WriteGenesisBlockForTesting(db, core.GenesisAccount{common.HexToAddress(testAddress), common.String2Big(testBalance)})
ks := crypto.NewKeyStorePlain(filepath.Join(tmp, "keystore"))
am := accounts.NewManager(ks)
conf := &eth.Config{
@ -101,9 +101,10 @@ func testREPL(t *testing.T, config func(*eth.Config)) (string, *testjethre, *eth
AccountManager: am,
MaxPeers: 0,
Name: "test",
DocRoot: "/",
SolcPath: testSolcPath,
PowTest: true,
NewDB: func(path string) (common.Database, error) { return db, nil },
NewDB: func(path string) (ethdb.Database, error) { return db, nil },
}
if config != nil {
config(conf)
@ -130,8 +131,7 @@ func testREPL(t *testing.T, config func(*eth.Config)) (string, *testjethre, *eth
assetPath := filepath.Join(os.Getenv("GOPATH"), "src", "github.com", "ethereum", "go-ethereum", "cmd", "mist", "assets", "ext")
client := comms.NewInProcClient(codec.JSON)
ds := docserver.New("/")
tf := &testjethre{ds: ds}
tf := &testjethre{client: ethereum.HTTPClient()}
repl := newJSRE(ethereum, assetPath, "", client, false, tf)
tf.jsre = repl
return tmp, tf, ethereum
@ -196,7 +196,7 @@ func TestBlockChain(t *testing.T) {
tmpfile := filepath.Join(extmp, "export.chain")
tmpfileq := strconv.Quote(tmpfile)
ethereum.ChainManager().Reset()
ethereum.BlockChain().Reset()
checkEvalJSON(t, repl, `admin.exportChain(`+tmpfileq+`)`, `true`)
if _, err := os.Stat(tmpfile); err != nil {
@ -468,8 +468,7 @@ func processTxs(repl *testjethre, t *testing.T, expTxc int) bool {
t.Errorf("incorrect number of pending transactions, expected %v, got %v", expTxc, txc)
return false
}
err = repl.ethereum.StartMining(runtime.NumCPU())
err = repl.ethereum.StartMining(runtime.NumCPU(), "")
if err != nil {
t.Errorf("unexpected error mining: %v", err)
return false


@ -48,10 +48,10 @@ import (
const (
ClientIdentifier = "Geth"
Version = "1.1.2"
Version = "1.3.4"
VersionMajor = 1
VersionMinor = 1
VersionPatch = 2
VersionMinor = 3
VersionPatch = 4
)
var (
@ -74,7 +74,7 @@ func init() {
{
Action: blockRecovery,
Name: "recover",
Usage: "attempts to recover a corrupted database by setting a new block by number or hash. See help recover.",
Usage: "Attempts to recover a corrupted database by setting a new block by number or hash",
Description: `
The recover command will attempt to read out the last
block based on that.
@ -99,6 +99,22 @@ The makedag command generates an ethash DAG in /tmp/dag.
This command exists to support the system testing project.
Regular users do not need to execute it.
`,
},
{
Action: gpuinfo,
Name: "gpuinfo",
Usage: "gpuinfo",
Description: `
Prints OpenCL device info for all found GPUs.
`,
},
{
Action: gpubench,
Name: "gpubench",
Usage: "benchmark GPU",
Description: `
Runs quick benchmark on first GPU found.
`,
},
{
@ -155,8 +171,12 @@ It is safe to transfer the entire directory or the individual keys therein
between ethereum nodes by simply copying.
Make sure you backup your keys regularly.
In order to use your account to send transactions, you need to unlock them using the
'--unlock' option. The argument is a comma
In order to use your account to send transactions, you need to unlock them using
the '--unlock' option. The argument is a space separated list of addresses or
indexes. If used non-interactively with a passwordfile, the file should contain
the respective passwords one per line. If you unlock n accounts and the password
file contains less than n entries, then the last password is meant to apply to
all remaining accounts.
And finally. DO NOT FORGET YOUR PASSWORD.
`,
@ -206,7 +226,7 @@ format to the newest format or change the password for an account.
For non-interactive use the passphrase can be specified with the --password flag:
ethereum --password <passwordfile> account new
ethereum --password <passwordfile> account update <address>
Since only one password can be given, only format update can be performed,
changing your password is only possible interactively.
@ -283,7 +303,9 @@ JavaScript API. See https://github.com/ethereum/go-ethereum/wiki/Javascipt-Conso
utils.DataDirFlag,
utils.BlockchainVersionFlag,
utils.OlympicFlag,
utils.FastSyncFlag,
utils.CacheFlag,
utils.LightKDFFlag,
utils.JSpathFlag,
utils.ListenPortFlag,
utils.MaxPeersFlag,
@ -292,6 +314,7 @@ JavaScript API. See https://github.com/ethereum/go-ethereum/wiki/Javascipt-Conso
utils.GasPriceFlag,
utils.MinerThreadsFlag,
utils.MiningEnabledFlag,
utils.MiningGPUFlag,
utils.AutoDAGFlag,
utils.NATFlag,
utils.NatspecEnabledFlag,
@ -307,6 +330,8 @@ JavaScript API. See https://github.com/ethereum/go-ethereum/wiki/Javascipt-Conso
utils.IPCPathFlag,
utils.ExecFlag,
utils.WhisperEnabledFlag,
utils.DevModeFlag,
utils.TestNetFlag,
utils.VMDebugFlag,
utils.VMForceJitFlag,
utils.VMJitCacheFlag,
@ -315,10 +340,8 @@ JavaScript API. See https://github.com/ethereum/go-ethereum/wiki/Javascipt-Conso
utils.RPCCORSDomainFlag,
utils.VerbosityFlag,
utils.BacktraceAtFlag,
utils.LogToStdErrFlag,
utils.LogVModuleFlag,
utils.LogFileFlag,
utils.LogJSONFlag,
utils.PProfEanbledFlag,
utils.PProfPortFlag,
utils.MetricsEnabledFlag,
@ -329,9 +352,11 @@ JavaScript API. See https://github.com/ethereum/go-ethereum/wiki/Javascipt-Conso
utils.GpobaseStepDownFlag,
utils.GpobaseStepUpFlag,
utils.GpobaseCorrectionFactorFlag,
utils.ExtraDataFlag,
}
app.Before = func(ctx *cli.Context) error {
utils.SetupLogger(ctx)
utils.SetupNetwork(ctx)
utils.SetupVM(ctx)
if ctx.GlobalBool(utils.PProfEanbledFlag.Name) {
utils.StartPProf(ctx)
@ -351,6 +376,14 @@ func main() {
}
}
// makeExtra resolves extradata for the miner from a flag or returns a default.
func makeExtra(ctx *cli.Context) []byte {
if ctx.GlobalIsSet(utils.ExtraDataFlag.Name) {
return []byte(ctx.GlobalString(utils.ExtraDataFlag.Name))
}
return makeDefaultExtra()
}
func makeDefaultExtra() []byte {
var clientInfo = struct {
Version uint
@ -368,18 +401,12 @@ func makeDefaultExtra() []byte {
glog.V(logger.Debug).Infof("extra: %x\n", extra)
return nil
}
return extra
}
func run(ctx *cli.Context) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
if ctx.GlobalBool(utils.OlympicFlag.Name) {
utils.InitOlympic()
}
cfg := utils.MakeEthConfig(ClientIdentifier, nodeNameVersion, ctx)
cfg.ExtraData = makeDefaultExtra()
cfg.ExtraData = makeExtra(ctx)
ethereum, err := eth.New(cfg)
if err != nil {
@ -392,15 +419,13 @@ func run(ctx *cli.Context) {
}
func attach(ctx *cli.Context) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
var client comms.EthereumClient
var err error
if ctx.Args().Present() {
client, err = comms.ClientFromEndpoint(ctx.Args().First(), codec.JSON)
} else {
cfg := comms.IpcConfig{
Endpoint: ctx.GlobalString(utils.IPCPathFlag.Name),
Endpoint: utils.IpcSocketPath(ctx),
}
client, err = comms.NewIpcClient(cfg, codec.JSON)
}
@ -412,6 +437,7 @@ func attach(ctx *cli.Context) {
repl := newLightweightJSRE(
ctx.GlobalString(utils.JSpathFlag.Name),
client,
ctx.GlobalString(utils.DataDirFlag.Name),
true,
)
@ -424,9 +450,9 @@ func attach(ctx *cli.Context) {
}
func console(ctx *cli.Context) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
cfg := utils.MakeEthConfig(ClientIdentifier, nodeNameVersion, ctx)
cfg.ExtraData = makeExtra(ctx)
ethereum, err := eth.New(cfg)
if err != nil {
utils.Fatalf("%v", err)
@ -456,8 +482,6 @@ func console(ctx *cli.Context) {
}
func execJSFiles(ctx *cli.Context) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
cfg := utils.MakeEthConfig(ClientIdentifier, nodeNameVersion, ctx)
ethereum, err := eth.New(cfg)
if err != nil {
@ -482,42 +506,37 @@ func execJSFiles(ctx *cli.Context) {
ethereum.WaitForShutdown()
}
func unlockAccount(ctx *cli.Context, am *accounts.Manager, addr string, i int) (addrHex, auth string) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
func unlockAccount(ctx *cli.Context, am *accounts.Manager, addr string, i int, inputpassphrases []string) (addrHex, auth string, passphrases []string) {
var err error
passphrases = inputpassphrases
addrHex, err = utils.ParamToAddress(addr, am)
if err == nil {
// Attempt to unlock the account 3 times
attempts := 3
for tries := 0; tries < attempts; tries++ {
msg := fmt.Sprintf("Unlocking account %s | Attempt %d/%d", addr, tries+1, attempts)
auth = getPassPhrase(ctx, msg, false, i)
auth, passphrases = getPassPhrase(ctx, msg, false, i, passphrases)
err = am.Unlock(common.HexToAddress(addrHex), auth)
if err == nil {
if err == nil || passphrases != nil {
break
}
}
}
if err != nil {
utils.Fatalf("Unlock account failed '%v'", err)
utils.Fatalf("Unlock account '%s' (%v) failed: %v", addr, addrHex, err)
}
fmt.Printf("Account '%s' unlocked.\n", addr)
fmt.Printf("Account '%s' (%v) unlocked.\n", addr, addrHex)
return
}
func blockRecovery(ctx *cli.Context) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
arg := ctx.Args().First()
if len(ctx.Args()) < 1 && len(arg) > 0 {
if len(ctx.Args()) < 1 {
glog.Fatal("recover requires block number or hash")
}
arg := ctx.Args().First()
cfg := utils.MakeEthConfig(ClientIdentifier, nodeNameVersion, ctx)
utils.CheckLegalese(cfg.DataDir)
blockDb, err := ethdb.NewLDBDatabase(filepath.Join(cfg.DataDir, "blockchain"), cfg.DatabaseCache)
if err != nil {
glog.Fatalln("could not open db:", err)
@ -525,17 +544,16 @@ func blockRecovery(ctx *cli.Context) {
var block *types.Block
if arg[0] == '#' {
block = core.GetBlockByNumber(blockDb, common.String2Big(arg[1:]).Uint64())
block = core.GetBlock(blockDb, core.GetCanonicalHash(blockDb, common.String2Big(arg[1:]).Uint64()))
} else {
block = core.GetBlockByHash(blockDb, common.HexToHash(arg))
block = core.GetBlock(blockDb, common.HexToHash(arg))
}
if block == nil {
glog.Fatalln("block not found. Recovery failed")
}
err = core.WriteHead(blockDb, block)
if err != nil {
if err = core.WriteHeadBlockHash(blockDb, block.Hash()); err != nil {
glog.Fatalln("block write err", err)
}
glog.Infof("Recovery succesful. New HEAD %x\n", block.Hash())
@ -548,12 +566,13 @@ func startEth(ctx *cli.Context, eth *eth.Ethereum) {
am := eth.AccountManager()
account := ctx.GlobalString(utils.UnlockedAccountFlag.Name)
accounts := strings.Split(account, " ")
var passphrases []string
for i, account := range accounts {
if len(account) > 0 {
if account == "primary" {
utils.Fatalf("the 'primary' keyword is deprecated. You can use integer indexes, but the indexes are not permanent, they can change if you add external keys, export your keys or copy your keystore to another node.")
}
unlockAccount(ctx, am, account, i)
_, _, passphrases = unlockAccount(ctx, am, account, i, passphrases)
}
}
// Start auxiliary services if enabled.
@ -568,15 +587,16 @@ func startEth(ctx *cli.Context, eth *eth.Ethereum) {
}
}
if ctx.GlobalBool(utils.MiningEnabledFlag.Name) {
if err := eth.StartMining(ctx.GlobalInt(utils.MinerThreadsFlag.Name)); err != nil {
err := eth.StartMining(
ctx.GlobalInt(utils.MinerThreadsFlag.Name),
ctx.GlobalString(utils.MiningGPUFlag.Name))
if err != nil {
utils.Fatalf("%v", err)
}
}
}
func accountList(ctx *cli.Context) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
am := utils.MakeAccountManager(ctx)
accts, err := am.Accounts()
if err != nil {
@ -587,7 +607,7 @@ func accountList(ctx *cli.Context) {
}
}
func getPassPhrase(ctx *cli.Context, desc string, confirmation bool, i int) (passphrase string) {
func getPassPhrase(ctx *cli.Context, desc string, confirmation bool, i int, inputpassphrases []string) (passphrase string, passphrases []string) {
passfile := ctx.GlobalString(utils.PasswordFileFlag.Name)
if len(passfile) == 0 {
fmt.Println(desc)
@ -607,14 +627,17 @@ func getPassPhrase(ctx *cli.Context, desc string, confirmation bool, i int) (pas
passphrase = auth
} else {
passbytes, err := ioutil.ReadFile(passfile)
if err != nil {
utils.Fatalf("Unable to read password file '%s': %v", passfile, err)
passphrases = inputpassphrases
if passphrases == nil {
passbytes, err := ioutil.ReadFile(passfile)
if err != nil {
utils.Fatalf("Unable to read password file '%s': %v", passfile, err)
}
// this is backwards compatible if the same password unlocks several accounts
// it also has the consequence that trailing newlines will not count as part
// of the password, so --password <(echo -n 'pass') will now work without -n
passphrases = strings.Split(string(passbytes), "\n")
}
// this is backwards compatible if the same password unlocks several accounts
// it also has the consequence that trailing newlines will not count as part
// of the password, so --password <(echo -n 'pass') will now work without -n
passphrases := strings.Split(string(passbytes), "\n")
if i >= len(passphrases) {
passphrase = passphrases[len(passphrases)-1]
} else {
@ -625,10 +648,8 @@ func getPassPhrase(ctx *cli.Context, desc string, confirmation bool, i int) (pas
}
func accountCreate(ctx *cli.Context) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
am := utils.MakeAccountManager(ctx)
passphrase := getPassPhrase(ctx, "Your new account is locked with a password. Please give a password. Do not forget this password.", true, 0)
passphrase, _ := getPassPhrase(ctx, "Your new account is locked with a password. Please give a password. Do not forget this password.", true, 0, nil)
acct, err := am.NewAccount(passphrase)
if err != nil {
utils.Fatalf("Could not create the account: %v", err)
@ -637,16 +658,14 @@ func accountCreate(ctx *cli.Context) {
}
func accountUpdate(ctx *cli.Context) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
am := utils.MakeAccountManager(ctx)
arg := ctx.Args().First()
if len(arg) == 0 {
utils.Fatalf("account address or index must be given as argument")
}
addr, authFrom := unlockAccount(ctx, am, arg, 0)
authTo := getPassPhrase(ctx, "Please give a new password. Do not forget this password.", true, 0)
addr, authFrom, passphrases := unlockAccount(ctx, am, arg, 0, nil)
authTo, _ := getPassPhrase(ctx, "Please give a new password. Do not forget this password.", true, 0, passphrases)
err := am.Update(common.HexToAddress(addr), authFrom, authTo)
if err != nil {
utils.Fatalf("Could not update the account: %v", err)
@ -654,8 +673,6 @@ func accountUpdate(ctx *cli.Context) {
}
func importWallet(ctx *cli.Context) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
keyfile := ctx.Args().First()
if len(keyfile) == 0 {
utils.Fatalf("keyfile must be given as argument")
@ -666,7 +683,7 @@ func importWallet(ctx *cli.Context) {
}
am := utils.MakeAccountManager(ctx)
passphrase := getPassPhrase(ctx, "", false, 0)
passphrase, _ := getPassPhrase(ctx, "", false, 0, nil)
acct, err := am.ImportPreSaleKey(keyJson, passphrase)
if err != nil {
@ -676,14 +693,12 @@ func importWallet(ctx *cli.Context) {
}
func accountImport(ctx *cli.Context) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
keyfile := ctx.Args().First()
if len(keyfile) == 0 {
utils.Fatalf("keyfile must be given as argument")
}
am := utils.MakeAccountManager(ctx)
passphrase := getPassPhrase(ctx, "Your new account is locked with a password. Please give a password. Do not forget this password.", true, 0)
passphrase, _ := getPassPhrase(ctx, "Your new account is locked with a password. Please give a password. Do not forget this password.", true, 0, nil)
acct, err := am.Import(keyfile, passphrase)
if err != nil {
utils.Fatalf("Could not create the account: %v", err)
@ -692,8 +707,6 @@ func accountImport(ctx *cli.Context) {
}
func makedag(ctx *cli.Context) {
utils.CheckLegalese(ctx.GlobalString(utils.DataDirFlag.Name))
args := ctx.Args()
wrongArgs := func() {
utils.Fatalf(`Usage: geth makedag <block number> <outputdir>`)
@ -722,6 +735,29 @@ func makedag(ctx *cli.Context) {
}
}
func gpuinfo(ctx *cli.Context) {
eth.PrintOpenCLDevices()
}
func gpubench(ctx *cli.Context) {
args := ctx.Args()
wrongArgs := func() {
utils.Fatalf(`Usage: geth gpubench <gpu number>`)
}
switch {
case len(args) == 1:
n, err := strconv.ParseUint(args[0], 0, 64)
if err != nil {
wrongArgs()
}
eth.GPUBench(n)
case len(args) == 0:
eth.GPUBench(0)
default:
wrongArgs()
}
}
func version(c *cli.Context) {
fmt.Println(ClientIdentifier)
fmt.Println("Version:", Version)

View File

@ -289,7 +289,7 @@ func updateChart(metric string, data []float64, base *int, chart *termui.LineCha
}
}
unit, scale := 0, 1.0
for high >= 1000 {
for high >= 1000 && unit+1 < len(dataUnits) {
high, unit, scale = high/1000, unit+1, scale*1000
}
// If the unit changes, re-create the chart (hack to set max height...)
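The change above stops the scaling loop from stepping past the last entry in the unit table. A standalone sketch of the bounded scaling, with an illustrative dataUnits slice (the real one lives in the monitor code):

```go
package main

import "fmt"

// scale reduces a raw sample into a (value, unit) pair without ever indexing
// past the end of dataUnits -- the bound added by the fix above.
func scale(v float64, dataUnits []string) (float64, string) {
	unit := 0
	for v >= 1000 && unit+1 < len(dataUnits) {
		v, unit = v/1000, unit+1
	}
	return v, dataUnits[unit]
}

func main() {
	units := []string{"", "K", "M", "G"} // illustrative only
	fmt.Println(scale(1234567, units))   // 1.234567 M
	fmt.Println(scale(9.9e12, units))    // clamped to G instead of overrunning the table
}
```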

cmd/geth/usage.go (new file, 213 lines)
View File

@ -0,0 +1,213 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// Contains the geth command usage template and generator.
package main
import (
"io"
"github.com/codegangsta/cli"
"github.com/ethereum/go-ethereum/cmd/utils"
)
// AppHelpTemplate is the text template for the default, global app help topic.
var AppHelpTemplate = `NAME:
{{.App.Name}} - {{.App.Usage}}
USAGE:
{{.App.HelpName}} [options]{{if .App.Commands}} command [command options]{{end}} {{if .App.ArgsUsage}}{{.App.ArgsUsage}}{{else}}[arguments...]{{end}}
{{if .App.Version}}
VERSION:
{{.App.Version}}
{{end}}{{if len .App.Authors}}
AUTHOR(S):
{{range .App.Authors}}{{ . }}{{end}}
{{end}}{{if .App.Commands}}
COMMANDS:
{{range .App.Commands}}{{join .Names ", "}}{{ "\t" }}{{.Usage}}
{{end}}{{end}}{{if .FlagGroups}}
{{range .FlagGroups}}{{.Name}} OPTIONS:
{{range .Flags}}{{.}}
{{end}}
{{end}}{{end}}{{if .App.Copyright }}
COPYRIGHT:
{{.App.Copyright}}
{{end}}
`
// flagGroup is a collection of flags belonging to a single topic.
type flagGroup struct {
Name string
Flags []cli.Flag
}
// AppHelpFlagGroups is the application flags, grouped by functionality.
var AppHelpFlagGroups = []flagGroup{
{
Name: "ETHEREUM",
Flags: []cli.Flag{
utils.DataDirFlag,
utils.NetworkIdFlag,
utils.OlympicFlag,
utils.TestNetFlag,
utils.DevModeFlag,
utils.GenesisFileFlag,
utils.IdentityFlag,
utils.FastSyncFlag,
utils.LightKDFFlag,
utils.CacheFlag,
utils.BlockchainVersionFlag,
},
},
{
Name: "ACCOUNT",
Flags: []cli.Flag{
utils.UnlockedAccountFlag,
utils.PasswordFileFlag,
},
},
{
Name: "API AND CONSOLE",
Flags: []cli.Flag{
utils.RPCEnabledFlag,
utils.RPCListenAddrFlag,
utils.RPCPortFlag,
utils.RpcApiFlag,
utils.IPCDisabledFlag,
utils.IPCApiFlag,
utils.IPCPathFlag,
utils.RPCCORSDomainFlag,
utils.JSpathFlag,
utils.ExecFlag,
},
},
{
Name: "NETWORKING",
Flags: []cli.Flag{
utils.BootnodesFlag,
utils.ListenPortFlag,
utils.MaxPeersFlag,
utils.MaxPendingPeersFlag,
utils.NATFlag,
utils.NoDiscoverFlag,
utils.NodeKeyFileFlag,
utils.NodeKeyHexFlag,
},
},
{
Name: "MINER",
Flags: []cli.Flag{
utils.MiningEnabledFlag,
utils.MinerThreadsFlag,
utils.MiningGPUFlag,
utils.AutoDAGFlag,
utils.EtherbaseFlag,
utils.GasPriceFlag,
utils.ExtraDataFlag,
},
},
{
Name: "GAS PRICE ORACLE",
Flags: []cli.Flag{
utils.GpoMinGasPriceFlag,
utils.GpoMaxGasPriceFlag,
utils.GpoFullBlockRatioFlag,
utils.GpobaseStepDownFlag,
utils.GpobaseStepUpFlag,
utils.GpobaseCorrectionFactorFlag,
},
},
{
Name: "VIRTUAL MACHINE",
Flags: []cli.Flag{
utils.VMDebugFlag,
utils.VMEnableJitFlag,
utils.VMForceJitFlag,
utils.VMJitCacheFlag,
},
},
{
Name: "LOGGING AND DEBUGGING",
Flags: []cli.Flag{
utils.VerbosityFlag,
utils.LogVModuleFlag,
utils.BacktraceAtFlag,
utils.LogFileFlag,
utils.PProfEanbledFlag,
utils.PProfPortFlag,
utils.MetricsEnabledFlag,
},
},
{
Name: "EXPERIMENTAL",
Flags: []cli.Flag{
utils.WhisperEnabledFlag,
utils.NatspecEnabledFlag,
},
},
{
Name: "MISCELLANEOUS",
Flags: []cli.Flag{
utils.SolcPathFlag,
},
},
}
func init() {
// Override the default app help template
cli.AppHelpTemplate = AppHelpTemplate
// Define a one shot struct to pass to the usage template
type helpData struct {
App interface{}
FlagGroups []flagGroup
}
// Override the default app help printer, but only for the global app help
originalHelpPrinter := cli.HelpPrinter
cli.HelpPrinter = func(w io.Writer, tmpl string, data interface{}) {
if tmpl == AppHelpTemplate {
// Iterate over all the flags and add any uncategorized ones
categorized := make(map[string]struct{})
for _, group := range AppHelpFlagGroups {
for _, flag := range group.Flags {
categorized[flag.String()] = struct{}{}
}
}
uncategorized := []cli.Flag{}
for _, flag := range data.(*cli.App).Flags {
if _, ok := categorized[flag.String()]; !ok {
uncategorized = append(uncategorized, flag)
}
}
if len(uncategorized) > 0 {
// Append all uncategorized options to the misc group
miscs := len(AppHelpFlagGroups[len(AppHelpFlagGroups)-1].Flags)
AppHelpFlagGroups[len(AppHelpFlagGroups)-1].Flags = append(AppHelpFlagGroups[len(AppHelpFlagGroups)-1].Flags, uncategorized...)
// Make sure they are removed afterwards
defer func() {
AppHelpFlagGroups[len(AppHelpFlagGroups)-1].Flags = AppHelpFlagGroups[len(AppHelpFlagGroups)-1].Flags[:miscs]
}()
}
// Render out custom usage screen
originalHelpPrinter(w, tmpl, helpData{data, AppHelpFlagGroups})
} else {
originalHelpPrinter(w, tmpl, data)
}
}
}
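The printer temporarily appends the uncategorized flags to the last (MISCELLANEOUS) group and restores the original length with a deferred truncation. A small, generic sketch of that append-then-truncate pattern (names are illustrative):

```go
package main

import "fmt"

// appendTemporarily grows a slice and hands back a restore function that
// truncates it to its original length, mirroring the defer used by the
// custom help printer above.
func appendTemporarily(dst *[]string, extras ...string) (restore func()) {
	orig := len(*dst)
	*dst = append(*dst, extras...)
	return func() { *dst = (*dst)[:orig] }
}

func main() {
	misc := []string{"--solc"}
	restore := appendTemporarily(&misc, "--uncategorized1", "--uncategorized2")
	fmt.Println(misc) // grown only for the duration of help rendering
	restore()
	fmt.Println(misc) // back to the original single entry
}
```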

View File

@ -21,8 +21,6 @@ import (
"bufio"
"fmt"
"io"
"math"
"math/big"
"os"
"os/signal"
"regexp"
@ -34,7 +32,6 @@ import (
"github.com/ethereum/go-ethereum/eth"
"github.com/ethereum/go-ethereum/logger"
"github.com/ethereum/go-ethereum/logger/glog"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"github.com/peterh/liner"
)
@ -43,7 +40,9 @@ const (
importBatchSize = 2500
)
var interruptCallbacks = []func(os.Signal){}
var (
interruptCallbacks = []func(os.Signal){}
)
func openLogFile(Datadir string, filename string) *os.File {
path := common.AbsolutePath(Datadir, filename)
@ -96,16 +95,6 @@ func PromptPassword(prompt string, warnTerm bool) (string, error) {
return input, err
}
func CheckLegalese(datadir string) {
// check "first run"
if !common.FileExist(datadir) {
r, _ := PromptConfirm(legalese)
if !r {
Fatalf("Must accept to continue. Shutting down...\n")
}
}
}
// Fatalf formats a message to standard error and exits the program.
// The message is also printed to standard output if standard error
// is redirected to a different file.
@ -146,16 +135,6 @@ func StartEthereum(ethereum *eth.Ethereum) {
}()
}
func InitOlympic() {
params.DurationLimit = big.NewInt(8)
params.GenesisGasLimit = big.NewInt(3141592)
params.MinGasLimit = big.NewInt(125000)
params.MaximumExtraDataSize = big.NewInt(1024)
NetworkIdFlag.Value = 0
core.BlockReward = big.NewInt(1.5e+18)
core.ExpDiffPeriod = big.NewInt(math.MaxInt64)
}
func FormatTransactionData(data string) []byte {
d := common.StringToByteFunc(data, func(s string) (ret []byte) {
slice := regexp.MustCompile("\\n|\\s").Split(s, 1000000000)
@ -169,7 +148,7 @@ func FormatTransactionData(data string) []byte {
return d
}
func ImportChain(chain *core.ChainManager, fn string) error {
func ImportChain(chain *core.BlockChain, fn string) error {
// Watch for Ctrl-C while the import is running.
// If a signal is received, the import will stop at the next batch.
interrupt := make(chan os.Signal, 1)
@ -244,7 +223,7 @@ func ImportChain(chain *core.ChainManager, fn string) error {
return nil
}
func hasAllBlocks(chain *core.ChainManager, bs []*types.Block) bool {
func hasAllBlocks(chain *core.BlockChain, bs []*types.Block) bool {
for _, b := range bs {
if !chain.HasBlock(b.Hash()) {
return false
@ -253,21 +232,21 @@ func hasAllBlocks(chain *core.ChainManager, bs []*types.Block) bool {
return true
}
func ExportChain(chainmgr *core.ChainManager, fn string) error {
func ExportChain(blockchain *core.BlockChain, fn string) error {
glog.Infoln("Exporting blockchain to", fn)
fh, err := os.OpenFile(fn, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.ModePerm)
if err != nil {
return err
}
defer fh.Close()
if err := chainmgr.Export(fh); err != nil {
if err := blockchain.Export(fh); err != nil {
return err
}
glog.Infoln("Exported blockchain to", fn)
return nil
}
func ExportAppendChain(chainmgr *core.ChainManager, fn string, first uint64, last uint64) error {
func ExportAppendChain(blockchain *core.BlockChain, fn string, first uint64, last uint64) error {
glog.Infoln("Exporting blockchain to", fn)
// TODO verify mode perms
fh, err := os.OpenFile(fn, os.O_CREATE|os.O_APPEND|os.O_WRONLY, os.ModePerm)
@ -275,7 +254,7 @@ func ExportAppendChain(chainmgr *core.ChainManager, fn string, first uint64, las
return err
}
defer fh.Close()
if err := chainmgr.ExportN(fh, first, last); err != nil {
if err := blockchain.ExportN(fh, first, last); err != nil {
return err
}
glog.Infoln("Exported blockchain to", fn)

View File

@ -20,6 +20,7 @@ import (
"crypto/ecdsa"
"fmt"
"log"
"math"
"math/big"
"net"
"net/http"
@ -42,6 +43,7 @@ import (
"github.com/ethereum/go-ethereum/logger/glog"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/p2p/nat"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rpc/api"
"github.com/ethereum/go-ethereum/rpc/codec"
"github.com/ethereum/go-ethereum/rpc/comms"
@ -99,27 +101,29 @@ var (
// General settings
DataDirFlag = DirectoryFlag{
Name: "datadir",
Usage: "Data directory to be used",
Usage: "Data directory for the databases and keystore",
Value: DirectoryString{common.DefaultDataDir()},
}
NetworkIdFlag = cli.IntFlag{
Name: "networkid",
Usage: "Network Id (integer)",
Usage: "Network identifier (integer, 0=Olympic, 1=Frontier, 2=Morden)",
Value: eth.NetworkId,
}
BlockchainVersionFlag = cli.IntFlag{
Name: "blockchainversion",
Usage: "Blockchain version (integer)",
Value: core.BlockChainVersion,
OlympicFlag = cli.BoolFlag{
Name: "olympic",
Usage: "Olympic network: pre-configured pre-release test network",
}
GenesisNonceFlag = cli.IntFlag{
Name: "genesisnonce",
Usage: "Sets the genesis nonce",
Value: 42,
TestNetFlag = cli.BoolFlag{
Name: "testnet",
Usage: "Morden network: pre-configured test network with modified starting nonces (replay protection)",
}
DevModeFlag = cli.BoolFlag{
Name: "dev",
Usage: "Developer mode: pre-configured private network with several debugging flags",
}
GenesisFileFlag = cli.StringFlag{
Name: "genesis",
Usage: "Inserts/Overwrites the genesis block (json format)",
Usage: "Insert/overwrite the genesis block (JSON format)",
}
IdentityFlag = cli.StringFlag{
Name: "identity",
@ -129,49 +133,71 @@ var (
Name: "natspec",
Usage: "Enable NatSpec confirmation notice",
}
DocRootFlag = DirectoryFlag{
Name: "docroot",
Usage: "Document Root for HTTPClient file scheme",
Value: DirectoryString{common.HomeDir()},
}
CacheFlag = cli.IntFlag{
Name: "cache",
Usage: "Megabytes of memory allocated to internal caching",
Usage: "Megabytes of memory allocated to internal caching (min 16MB / database forced)",
Value: 0,
}
OlympicFlag = cli.BoolFlag{
Name: "olympic",
Usage: "Use olympic style protocol",
BlockchainVersionFlag = cli.IntFlag{
Name: "blockchainversion",
Usage: "Blockchain version (integer)",
Value: core.BlockChainVersion,
}
// miner settings
MinerThreadsFlag = cli.IntFlag{
Name: "minerthreads",
Usage: "Number of miner threads",
Value: runtime.NumCPU(),
FastSyncFlag = cli.BoolFlag{
Name: "fast",
Usage: "Enable fast syncing through state downloads",
}
LightKDFFlag = cli.BoolFlag{
Name: "lightkdf",
Usage: "Reduce key-derivation RAM & CPU usage at some expense of KDF strength",
}
// Miner settings
// TODO: refactor CPU vs GPU mining flags
MiningEnabledFlag = cli.BoolFlag{
Name: "mine",
Usage: "Enable mining",
}
MinerThreadsFlag = cli.IntFlag{
Name: "minerthreads",
Usage: "Number of CPU threads to use for mining",
Value: runtime.NumCPU(),
}
MiningGPUFlag = cli.StringFlag{
Name: "minergpus",
Usage: "List of GPUs to use for mining (e.g. '0,1' will use the first two GPUs found)",
}
AutoDAGFlag = cli.BoolFlag{
Name: "autodag",
Usage: "Enable automatic DAG pregeneration",
}
EtherbaseFlag = cli.StringFlag{
Name: "etherbase",
Usage: "Public address for block mining rewards. By default the address first created is used",
Usage: "Public address for block mining rewards (default = first account created)",
Value: "0",
}
GasPriceFlag = cli.StringFlag{
Name: "gasprice",
Usage: "Sets the minimal gasprice when mining transactions",
Value: new(big.Int).Mul(big.NewInt(50), common.Shannon).String(),
Usage: "Minimal gas price to accept for mining a transactions",
Value: new(big.Int).Mul(big.NewInt(20), common.Shannon).String(),
}
ExtraDataFlag = cli.StringFlag{
Name: "extradata",
Usage: "Block extra data set by the miner (default = client version)",
}
// Account settings
UnlockedAccountFlag = cli.StringFlag{
Name: "unlock",
Usage: "Unlock the account given until this program exits (prompts for password). '--unlock n' unlocks the n-th account in order or creation.",
Usage: "Unlock an account (may be creation index) until this program exits (prompts for password)",
Value: "",
}
PasswordFileFlag = cli.StringFlag{
Name: "password",
Usage: "Path to password file to use with options and subcommands needing a password",
Usage: "Password file to use with options/subcommands needing a pass phrase",
Value: "",
}
@ -195,32 +221,24 @@ var (
}
// logging and debug settings
LogFileFlag = cli.StringFlag{
Name: "logfile",
Usage: "Send log output to a file",
}
VerbosityFlag = cli.IntFlag{
Name: "verbosity",
Usage: "Logging verbosity: 0-6 (0=silent, 1=error, 2=warn, 3=info, 4=core, 5=debug, 6=debug detail)",
Value: int(logger.InfoLevel),
}
LogJSONFlag = cli.StringFlag{
Name: "logjson",
Usage: "Send json structured log output to a file or '-' for standard output (default: no json output)",
LogFileFlag = cli.StringFlag{
Name: "logfile",
Usage: "Log output file within the data dir (default = no log file generated)",
Value: "",
}
LogToStdErrFlag = cli.BoolFlag{
Name: "logtostderr",
Usage: "Logs are written to standard error instead of to files.",
}
LogVModuleFlag = cli.GenericFlag{
Name: "vmodule",
Usage: "The syntax of the argument is a comma-separated list of pattern=N, where pattern is a literal file name (minus the \".go\" suffix) or \"glob\" pattern and N is a log verbosity level.",
Usage: "Per-module verbosity: comma-separated list of <module>=<level>, where <module> is file literal or a glog pattern",
Value: glog.GetVModule(),
}
BacktraceAtFlag = cli.GenericFlag{
Name: "backtrace_at",
Usage: "If set to a file and line number (e.g., \"block.go:271\") holding a logging statement, a stack trace will be logged",
Name: "backtrace",
Usage: "Request a stack trace at a specific logging statement (e.g. \"block.go:271\")",
Value: glog.GetTraceLocation(),
}
PProfEanbledFlag = cli.BoolFlag{
@ -229,37 +247,37 @@ var (
}
PProfPortFlag = cli.IntFlag{
Name: "pprofport",
Usage: "Port on which the profiler should listen",
Usage: "Profile server listening port",
Value: 6060,
}
MetricsEnabledFlag = cli.BoolFlag{
Name: metrics.MetricsEnabledFlag,
Usage: "Enables metrics collection and reporting",
Usage: "Enable metrics collection and reporting",
}
// RPC settings
RPCEnabledFlag = cli.BoolFlag{
Name: "rpc",
Usage: "Enable the JSON-RPC server",
Usage: "Enable the HTTP-RPC server",
}
RPCListenAddrFlag = cli.StringFlag{
Name: "rpcaddr",
Usage: "Listening address for the JSON-RPC server",
Usage: "HTTP-RPC server listening interface",
Value: "127.0.0.1",
}
RPCPortFlag = cli.IntFlag{
Name: "rpcport",
Usage: "Port on which the JSON-RPC server should listen",
Usage: "HTTP-RPC server listening port",
Value: 8545,
}
RPCCORSDomainFlag = cli.StringFlag{
Name: "rpccorsdomain",
Usage: "Domain on which to send Access-Control-Allow-Origin header",
Usage: "Domains from which to accept cross origin requests (browser enforced)",
Value: "",
}
RpcApiFlag = cli.StringFlag{
Name: "rpcapi",
Usage: "Specify the API's which are offered over the HTTP RPC interface",
Usage: "API's offered over the HTTP-RPC interface",
Value: comms.DefaultHttpRpcApis,
}
IPCDisabledFlag = cli.BoolFlag{
@ -268,7 +286,7 @@ var (
}
IPCApiFlag = cli.StringFlag{
Name: "ipcapi",
Usage: "Specify the API's which are offered over the IPC interface",
Usage: "API's offered over the IPC-RPC interface",
Value: comms.DefaultIpcApis,
}
IPCPathFlag = DirectoryFlag{
@ -278,7 +296,7 @@ var (
}
ExecFlag = cli.StringFlag{
Name: "exec",
Usage: "Execute javascript statement (only in combination with console/attach)",
Usage: "Execute JavaScript statement (only in combination with console/attach)",
}
// Network Settings
MaxPeersFlag = cli.IntFlag{
@ -298,7 +316,7 @@ var (
}
BootnodesFlag = cli.StringFlag{
Name: "bootnodes",
Usage: "Space-separated enode URLs for p2p discovery bootstrap",
Usage: "Space-separated enode URLs for P2P discovery bootstrap",
Value: "",
}
NodeKeyFileFlag = cli.StringFlag{
@ -320,23 +338,25 @@ var (
}
WhisperEnabledFlag = cli.BoolFlag{
Name: "shh",
Usage: "Enable whisper",
Usage: "Enable Whisper",
}
// ATM the url is left to the user and deployment to
JSpathFlag = cli.StringFlag{
Name: "jspath",
Usage: "JS library path to be used with console and js subcommands",
Usage: "JavaSript root path for `loadScript` and document root for `admin.httpGet`",
Value: ".",
}
SolcPathFlag = cli.StringFlag{
Name: "solc",
Usage: "solidity compiler to be used",
Usage: "Solidity compiler command to be used",
Value: "solc",
}
// Gas price oracle settings
GpoMinGasPriceFlag = cli.StringFlag{
Name: "gpomin",
Usage: "Minimum suggested gas price",
Value: new(big.Int).Mul(big.NewInt(50), common.Shannon).String(),
Value: new(big.Int).Mul(big.NewInt(20), common.Shannon).String(),
}
GpoMaxGasPriceFlag = cli.StringFlag{
Name: "gpomax",
@ -404,19 +424,18 @@ func MakeEthConfig(clientID, version string, ctx *cli.Context) *eth.Config {
if err != nil {
glog.V(logger.Error).Infoln("WARNING: No etherbase set and no accounts found as default")
}
return &eth.Config{
// Assemble the entire eth configuration and return
cfg := &eth.Config{
Name: common.MakeName(clientID, version),
DataDir: ctx.GlobalString(DataDirFlag.Name),
GenesisNonce: ctx.GlobalInt(GenesisNonceFlag.Name),
DataDir: MustDataDir(ctx),
GenesisFile: ctx.GlobalString(GenesisFileFlag.Name),
FastSync: ctx.GlobalBool(FastSyncFlag.Name),
BlockChainVersion: ctx.GlobalInt(BlockchainVersionFlag.Name),
DatabaseCache: ctx.GlobalInt(CacheFlag.Name),
SkipBcVersionCheck: false,
NetworkId: ctx.GlobalInt(NetworkIdFlag.Name),
LogFile: ctx.GlobalString(LogFileFlag.Name),
Verbosity: ctx.GlobalInt(VerbosityFlag.Name),
LogJSON: ctx.GlobalString(LogJSONFlag.Name),
Etherbase: common.HexToAddress(etherbase),
MinerThreads: ctx.GlobalInt(MinerThreadsFlag.Name),
AccountManager: am,
@ -427,6 +446,7 @@ func MakeEthConfig(clientID, version string, ctx *cli.Context) *eth.Config {
Olympic: ctx.GlobalBool(OlympicFlag.Name),
NAT: MakeNAT(ctx),
NatSpec: ctx.GlobalBool(NatspecEnabledFlag.Name),
DocRoot: ctx.GlobalString(DocRootFlag.Name),
Discovery: !ctx.GlobalBool(NoDiscoverFlag.Name),
NodeKey: MakeNodeKey(ctx),
Shh: ctx.GlobalBool(WhisperEnabledFlag.Name),
@ -442,6 +462,48 @@ func MakeEthConfig(clientID, version string, ctx *cli.Context) *eth.Config {
SolcPath: ctx.GlobalString(SolcPathFlag.Name),
AutoDAG: ctx.GlobalBool(AutoDAGFlag.Name) || ctx.GlobalBool(MiningEnabledFlag.Name),
}
if ctx.GlobalBool(DevModeFlag.Name) && ctx.GlobalBool(TestNetFlag.Name) {
glog.Fatalf("%s and %s are mutually exclusive\n", DevModeFlag.Name, TestNetFlag.Name)
}
if ctx.GlobalBool(TestNetFlag.Name) {
// testnet is always stored in the testnet folder
cfg.DataDir += "/testnet"
cfg.NetworkId = 2
cfg.TestNet = true
// overwrite homestead block
params.HomesteadBlock = params.TestNetHomesteadBlock
}
if ctx.GlobalBool(VMEnableJitFlag.Name) {
cfg.Name += "/JIT"
}
if ctx.GlobalBool(DevModeFlag.Name) {
if !ctx.GlobalIsSet(VMDebugFlag.Name) {
cfg.VmDebug = true
}
if !ctx.GlobalIsSet(MaxPeersFlag.Name) {
cfg.MaxPeers = 0
}
if !ctx.GlobalIsSet(GasPriceFlag.Name) {
cfg.GasPrice = new(big.Int)
}
if !ctx.GlobalIsSet(ListenPortFlag.Name) {
cfg.Port = "0" // auto port
}
if !ctx.GlobalIsSet(WhisperEnabledFlag.Name) {
cfg.Shh = true
}
if !ctx.GlobalIsSet(DataDirFlag.Name) {
cfg.DataDir = os.TempDir() + "/ethereum_dev_mode"
}
cfg.PowTest = true
cfg.DevMode = true
glog.V(logger.Info).Infoln("dev mode enabled")
}
return cfg
}
// SetupLogger configures glog from the logging-related command line flags.
@ -452,6 +514,20 @@ func SetupLogger(ctx *cli.Context) {
glog.SetLogDir(ctx.GlobalString(LogFileFlag.Name))
}
// SetupNetwork configures the system for either the main net or some test network.
func SetupNetwork(ctx *cli.Context) {
switch {
case ctx.GlobalBool(OlympicFlag.Name):
params.DurationLimit = big.NewInt(8)
params.GenesisGasLimit = big.NewInt(3141592)
params.MinGasLimit = big.NewInt(125000)
params.MaximumExtraDataSize = big.NewInt(1024)
NetworkIdFlag.Value = 0
core.BlockReward = big.NewInt(1.5e+18)
core.ExpDiffPeriod = big.NewInt(math.MaxInt64)
}
}
// SetupVM configures the VM package's global settings
func SetupVM(ctx *cli.Context) {
vm.EnableJit = ctx.GlobalBool(VMEnableJitFlag.Name)
@ -460,8 +536,8 @@ func SetupVM(ctx *cli.Context) {
}
// MakeChain creates a chain manager from set command line flags.
func MakeChain(ctx *cli.Context) (chain *core.ChainManager, chainDb common.Database) {
datadir := ctx.GlobalString(DataDirFlag.Name)
func MakeChain(ctx *cli.Context) (chain *core.BlockChain, chainDb ethdb.Database) {
datadir := MustDataDir(ctx)
cache := ctx.GlobalInt(CacheFlag.Name)
var err error
@ -469,7 +545,6 @@ func MakeChain(ctx *cli.Context) (chain *core.ChainManager, chainDb common.Datab
Fatalf("Could not open database: %v", err)
}
if ctx.GlobalBool(OlympicFlag.Name) {
InitOlympic()
_, err := core.WriteTestNetGenesisBlock(chainDb, 42)
if err != nil {
glog.Fatalln(err)
@ -479,23 +554,40 @@ func MakeChain(ctx *cli.Context) (chain *core.ChainManager, chainDb common.Datab
eventMux := new(event.TypeMux)
pow := ethash.New()
//genesis := core.GenesisBlock(uint64(ctx.GlobalInt(GenesisNonceFlag.Name)), blockDB)
chain, err = core.NewChainManager(chainDb, pow, eventMux)
chain, err = core.NewBlockChain(chainDb, pow, eventMux)
if err != nil {
Fatalf("Could not start chainmanager: %v", err)
}
proc := core.NewBlockProcessor(chainDb, pow, chain, eventMux)
chain.SetProcessor(proc)
return chain, chainDb
}
// MakeAccountManager creates an account manager from set command line flags.
func MakeAccountManager(ctx *cli.Context) *accounts.Manager {
dataDir := ctx.GlobalString(DataDirFlag.Name)
ks := crypto.NewKeyStorePassphrase(filepath.Join(dataDir, "keystore"))
dataDir := MustDataDir(ctx)
if ctx.GlobalBool(TestNetFlag.Name) {
dataDir += "/testnet"
}
scryptN := crypto.StandardScryptN
scryptP := crypto.StandardScryptP
if ctx.GlobalBool(LightKDFFlag.Name) {
scryptN = crypto.LightScryptN
scryptP = crypto.LightScryptP
}
ks := crypto.NewKeyStorePassphrase(filepath.Join(dataDir, "keystore"), scryptN, scryptP)
return accounts.NewManager(ks)
}
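MakeAccountManager above switches the keystore between standard and light scrypt work factors when --lightkdf is set. A hedged sketch of that selection; the numbers are illustrative, not the crypto package's StandardScryptN/P and LightScryptN/P constants:

```go
package main

import "fmt"

// scryptParams picks key-derivation work factors: the light profile lowers
// memory/CPU cost at some expense of KDF strength. Values are illustrative.
func scryptParams(lightKDF bool) (n, p int) {
	if lightKDF {
		return 1 << 12, 6 // light profile
	}
	return 1 << 18, 1 // standard profile
}

func main() {
	fmt.Println(scryptParams(false)) // standard profile
	fmt.Println(scryptParams(true))  // light profile
}
```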
// MustDataDir retrieves the currently requested data directory, terminating if
// none (or the empty string) is specified.
func MustDataDir(ctx *cli.Context) string {
if path := ctx.GlobalString(DataDirFlag.Name); path != "" {
return path
}
Fatalf("Cannot determine default data directory, please set manually (--datadir)")
return ""
}
func IpcSocketPath(ctx *cli.Context) (ipcpath string) {
if runtime.GOOS == "windows" {
ipcpath = common.DefaultIpcPath()
@ -520,17 +612,14 @@ func StartIPC(eth *eth.Ethereum, ctx *cli.Context) error {
Endpoint: IpcSocketPath(ctx),
}
initializer := func(conn net.Conn) (shared.EthereumApi, error) {
initializer := func(conn net.Conn) (comms.Stopper, shared.EthereumApi, error) {
fe := useragent.NewRemoteFrontend(conn, eth.AccountManager())
xeth := xeth.New(eth, fe)
codec := codec.JSON
apis, err := api.ParseApiString(ctx.GlobalString(IPCApiFlag.Name), codec, xeth, eth)
apis, err := api.ParseApiString(ctx.GlobalString(IPCApiFlag.Name), codec.JSON, xeth, eth)
if err != nil {
return nil, err
return nil, nil, err
}
return api.Merge(apis...), nil
return xeth, api.Merge(apis...), nil
}
return comms.StartIpc(config, codec.JSON, initializer)

View File

@ -1,41 +0,0 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package utils
const (
legalese = `
=======================================
Disclaimer of Liabilites and Warranties
=======================================
THE USER EXPRESSLY KNOWS AND AGREES THAT THE USER IS USING THE ETHEREUM PLATFORM AT THE USERS SOLE
RISK. THE USER REPRESENTS THAT THE USER HAS AN ADEQUATE UNDERSTANDING OF THE RISKS, USAGE AND
INTRICACIES OF CRYPTOGRAPHIC TOKENS AND BLOCKCHAIN-BASED OPEN SOURCE SOFTWARE, ETH PLATFORM AND ETH.
THE USER ACKNOWLEDGES AND AGREES THAT, TO THE FULLEST EXTENT PERMITTED BY ANY APPLICABLE LAW, THE
DISCLAIMERS OF LIABILITY CONTAINED HEREIN APPLY TO ANY AND ALL DAMAGES OR INJURY WHATSOEVER CAUSED
BY OR RELATED TO RISKS OF, USE OF, OR INABILITY TO USE, ETH OR THE ETHEREUM PLATFORM UNDER ANY CAUSE
OR ACTION WHATSOEVER OF ANY KIND IN ANY JURISDICTION, INCLUDING, WITHOUT LIMITATION, ACTIONS FOR
BREACH OF WARRANTY, BREACH OF CONTRACT OR TORT (INCLUDING NEGLIGENCE) AND THAT NEITHER STIFTUNG
ETHEREUM NOR ETHEREUM TEAM SHALL BE LIABLE FOR ANY INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY OR
CONSEQUENTIAL DAMAGES, INCLUDING FOR LOSS OF PROFITS, GOODWILL OR DATA. SOME JURISDICTIONS DO NOT
ALLOW THE EXCLUSION OF CERTAIN WARRANTIES OR THE LIMITATION OR EXCLUSION OF LIABILITY FOR CERTAIN
TYPES OF DAMAGES. THEREFORE, SOME OF THE ABOVE LIMITATIONS IN THIS SECTION MAY NOT APPLY TO A USER.
IN PARTICULAR, NOTHING IN THESE TERMS SHALL AFFECT THE STATUTORY RIGHTS OF ANY USER OR EXCLUDE
INJURY ARISING FROM ANY WILLFUL MISCONDUCT OR FRAUD OF STIFTUNG ETHEREUM.
Do you accept this agreement?`
)

View File

@ -1,49 +1,50 @@
# ethutil
# common
[![Build
Status](https://travis-ci.org/ethereum/go-ethereum.png?branch=master)](https://travis-ci.org/ethereum/go-ethereum)
The ethutil package contains the ethereum utility library.
The common package contains the ethereum utility library.
# Installation
`go get github.com/ethereum/ethutil-go`
As a subdirectory of the main go-ethereum repository, you get it with
`go get github.com/ethereum/go-ethereum`.
# Usage
## RLP (Recursive Linear Prefix) Encoding
RLP Encoding is an encoding scheme utilized by the Ethereum project. It
encodes any native value or list to string.
RLP Encoding is an encoding scheme used by the Ethereum project. It
encodes any native value or list to a string.
More in depth information about the Encoding scheme see the [Wiki](http://wiki.ethereum.org/index.php/RLP)
article.
For more in-depth information about the encoding scheme, see the
[Wiki](http://wiki.ethereum.org/index.php/RLP) article.
```go
rlp := ethutil.Encode("doge")
rlp := common.Encode("doge")
fmt.Printf("%q\n", rlp) // => "\0x83dog"
rlp = ethutil.Encode([]interface{}{"dog", "cat"})
rlp = common.Encode([]interface{}{"dog", "cat"})
fmt.Printf("%q\n", rlp) // => "\0xc8\0x83dog\0x83cat"
decoded := ethutil.Decode(rlp)
decoded := common.Decode(rlp)
fmt.Println(decoded) // => ["dog" "cat"]
```
## Patricia Trie
Patricie Tree is a merkle trie utilized by the Ethereum project.
Patricia Tree is a merkle trie used by the Ethereum project.
More in depth information about the (modified) Patricia Trie can be
found on the [Wiki](http://wiki.ethereum.org/index.php/Patricia_Tree).
The patricia trie uses a db as backend and could be anything as long as
it satisfies the Database interface found in `ethutil/db.go`.
it satisfies the Database interface found in `common/db.go`.
```go
db := NewDatabase()
// db, root
trie := ethutil.NewTrie(db, "")
trie := common.NewTrie(db, "")
trie.Put("puppy", "dog")
trie.Put("horse", "stallion")
@ -65,7 +66,7 @@ all (key, value) bindings.
// ... Create db/trie
// Note that RLP uses interface slices as list
value := ethutil.Encode([]interface{}{"one", 2, "three", []interface{}{42}})
value := common.Encode([]interface{}{"one", 2, "three", []interface{}{42}})
// Store the RLP encoded value of the list
trie.Put("mykey", value)
```
@ -89,7 +90,7 @@ type (e.g. `Slice()` returns []interface{}, `Uint()` return 0, etc).
`Append(v)` appends the value (v) to the current value/list.
```go
val := ethutil.NewEmptyValue().Append(1).Append("2")
val := common.NewEmptyValue().Append(1).Append("2")
val.AppendList().Append(3)
```
@ -110,7 +111,7 @@ val.AppendList().Append(3)
`Byte()` returns the value as a single byte.
```go
val := ethutil.NewValue([]interface{}{1,"2",[]interface{}{3}})
val := common.NewValue([]interface{}{1,"2",[]interface{}{3}})
val.Get(0).Uint() // => 1
val.Get(1).Str() // => "2"
s := val.Get(2) // => Value([]interface{}{3})
@ -122,7 +123,7 @@ s.Get(0).Uint() // => 3
Decoding streams of RLP data is simplified
```go
val := ethutil.NewValueFromBytes(rlpData)
val := common.NewValueFromBytes(rlpData)
val.Get(0).Uint()
```
@ -132,7 +133,7 @@ Encoding from Value to RLP is done with the `Encode` method. The
underlying value can be anything RLP can encode (int, str, lists, bytes)
```go
val := ethutil.NewValue([]interface{}{1,"2",[]interface{}{3}})
val := common.NewValue([]interface{}{1,"2",[]interface{}{3}})
rlp := val.Encode()
// Store the rlp data
Store(rlp)

View File

@ -27,6 +27,9 @@ var (
BigTrue = Big1
BigFalse = Big0
Big32 = big.NewInt(32)
Big36 = big.NewInt(36)
Big97 = big.NewInt(97)
Big98 = big.NewInt(98)
Big256 = big.NewInt(0xff)
Big257 = big.NewInt(257)
MaxBig = String2Big("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff")

Some files were not shown because too many files have changed in this diff.