Compare commits


66 Commits

Author SHA1 Message Date
10a45cb59b params: 1.6.6 stable 2017-06-23 11:26:54 +02:00
cd88f69715 Merge pull request #14596 from markya0616/valid_clique_vote
consensus/clique: choose valid votes
2017-06-23 11:21:38 +03:00
46d0d04f97 Merge pull request #14685 from karalabe/ethdb-metrics-fail-fix
eth: gracefully error if database cannot be opened
2017-06-23 11:10:29 +03:00
514659a023 consensus/clique: minor cleanups 2017-06-23 11:06:38 +03:00
01c9cf1cb5 node: don't return non-nil database on error 2017-06-23 09:56:30 +02:00
b751cf3901 Merge pull request #14681 from fjl/build-fixup
travis.yml, cmd/swarm: fix Travis CI build
2017-06-23 10:30:27 +03:00
d40179f882 eth: gracefully error if database cannot be opened 2017-06-23 10:12:41 +03:00
f2c5b2cc1c travis.yml: add fakeroot to launchpad builder 2017-06-22 22:21:59 +02:00
a4277450b2 cmd/swarm: disable TestCLISwarmUp because it's flaky 2017-06-22 22:17:53 +02:00
b664bedcf2 Merge pull request #14673 from holiman/txfix
core: add testcase for txpool
2017-06-22 23:01:43 +03:00
eebde1a2e2 core: ensure transactions correctly drop on pool limiting 2017-06-22 21:03:54 +03:00
b0b3cf2eeb core: add testcase for txpool 2017-06-22 20:36:07 +03:00
c98d9b49bf Merge pull request #14677 from karalabe/miner-cli-gasprice
cmd/geth: correctly init gas price for CLI CPU mining
2017-06-22 15:54:24 +03:00
0042f13d47 eth/downloader: separate state sync from queue (#14460)
* eth/downloader: separate state sync from queue

Scheduling of state node downloads hogged the downloader queue lock when
new requests were scheduled. This caused timeouts for other requests.
With this change, state sync is fully independent of all other downloads
and doesn't involve the queue at all.

State sync is started and checked on in processContent. This is slightly
awkward because processContent doesn't have a select loop. Instead, the
queue is closed by an auxiliary goroutine when state sync fails. We
tried several alternatives to this but settled on the current approach
because it's the least amount of change overall.

Handling of the pivot block has changed slightly: the queue previously
prevented import of pivot block receipts before the state of the pivot
block was available. In this commit, the receipt will be imported before
the state. This causes an annoyance where the pivot block is committed
as fast block head even when state downloads fail. Stay tuned for more
updates in this area ;)

* eth/downloader: remove cancelTimeout channel

* eth/downloader: retry state requests on timeout

* eth/downloader: improve comment

* eth/downloader: mark peers idle when state sync is done

* eth/downloader: move pivot block splitting to processContent

This change also ensures that pivot block receipts aren't imported
before the pivot block itself.

* eth/downloader: limit state node retries

* eth/downloader: improve state node error handling and retry check

* eth/downloader: remove maxStateNodeRetries

It fails the sync too much.

* eth/downloader: remove last use of cancelCh in statesync.go

Fixes TestDeliverHeadersHang*Fast and (hopefully)
the weird cancellation behaviour at the end of fast sync.

* eth/downloader: fix leak in runStateSync

* eth/downloader: don't run processFullSyncContent in LightSync mode

* eth/downloader: improve comments

* eth/downloader: fix vet, megacheck

* eth/downloader: remove unrequested tasks anyway

* eth/downloader, trie: various polishes around duplicate items

This commit explicitly tracks duplicate and unexpected state
deliveries done against a trie Sync structure, also adding
them to the import info logs.

The commit moves the db batch used to commit trie changes one
level deeper so it's flushed after every node insertion. This
is needed to avoid a lot of duplicate retrievals caused by
inconsistencies between Sync internals and the database. A better
approach is to track not-yet-written states in trie.Sync and
flush on commit, but I'm focusing on correctness first for now.

The commit fixes a regression around the pivot block fail count.
The counter previously was reset to 1 if and only if a sync
cycle progressed (inserted at least 1 entry into the database).
The current code already reset it when a node was delivered,
which is not strong enough, because unless the node ends up written
to disk, an attacker can just loop and attack ad infinitum.

The commit also fixes a regression around state deliveries
and timeouts. The old downloader tracked whether a delivery was
stale (none of the deliveries were requested), in which
case it didn't mark the node idle and did not send further
requests, since it signals a past timeout. The current code
did mark it idle even on stale deliveries, which eventually
caused two requests to be in flight at the same time, making
the deliveries always stale and mass-duplicating retrievals
between multiple peers.

* eth/downloader: fix state request leak

This commit fixes the hang seen sometimes while doing the state
sync. The cause of the hang was a rare combination of events:
request state data from a peer, the peer drops and reconnects almost
immediately. This caused a new download task to be assigned to
the peer, overwriting the old one still waiting for a timeout,
which in turn leaked the requests out, never to be retried.
The fix is to ensure that a task assignment moves any pending
one back into the retry queue.

The commit also fixes a regression with peer dropping due to
stalls. The current code considered a peer to be stalling if it
timed out delivering 1 item. However, the downloader never
requests only one; the minimum is 2 (an attempt to fine-tune
estimated latency/bandwidth). The fix is simply to drop the peer
if a timeout is detected at 2 items.

Apart from the above bugfixes, the commit contains some code
polishes I made while debugging the hang.

* core, eth, trie: support batched trie sync db writes

* trie: rename SyncMemCache to syncMemBatch
2017-06-22 15:26:03 +03:00
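
The awkward shutdown path this commit message describes (processContent has no select loop, so an auxiliary goroutine closes the queue when state sync fails) can be illustrated with a minimal sketch. The names stateSync, queue, and watchStateSync below are hypothetical stand-ins, not the downloader's actual types:

```go
package main

import (
	"errors"
	"fmt"
)

// stateSync and queue are toy stand-ins for the downloader's types.
type stateSync struct{ done chan error }
type queue struct{ closed chan struct{} }

func (q *queue) Close() { close(q.closed) }

// processContent has no select loop of its own, so an auxiliary
// goroutine bridges the two: when state sync fails, it closes the
// queue, which unblocks whatever processContent is waiting on.
func watchStateSync(s *stateSync, q *queue) {
	go func() {
		if err := <-s.done; err != nil {
			q.Close()
		}
	}()
}

func main() {
	s := &stateSync{done: make(chan error, 1)}
	q := &queue{closed: make(chan struct{})}
	watchStateSync(s, q)

	s.done <- errors.New("state sync failed")
	<-q.closed // a blocking queue read returns here
	fmt.Println("queue closed, processContent unblocked")
}
```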
d432688886 cmd/geth: correctly init gas price for CLI CPU mining 2017-06-22 14:58:07 +03:00
58a1e13e6d Merge pull request #14657 from markya0616/refactor_clique
consensus/clique: fix typo and don't add snapshot into recents again
2017-06-21 17:58:15 +03:00
a1f3878ec5 swarm/test: add integration test for 'swarm up' (#14353) 2017-06-21 14:54:23 +02:00
a20a02ce0b README: document new config file option (#14348) 2017-06-21 14:53:08 +02:00
9a44e1035e cmd/evm, core/vm: add --nomemory, --nostack to evm (#14617) 2017-06-21 14:52:31 +02:00
9012863ad7 Merge pull request #14667 from fjl/swarm-fuse-cleanup
swarm/fuse: simplify externalUnmount, use subtests
2017-06-21 13:51:00 +03:00
a5d08c893d les: code refactoring (#14416)
This commit does various code refactorings:

- generalizes and moves the request retrieval/timeout/resend logic out of LesOdr
  (will be used by a subsequent PR)
- reworks the peer management logic so that all services can register with
  peerSet to get notified about added/dropped peers (also gets rid of the ugly
  getAllPeers callback in requestDistributor)
- moves peerSet, LesOdr, requestDistributor and retrieveManager initialization
  out of ProtocolManager because I believe they do not really belong there and the
  whole init process was ugly and ad-hoc
2017-06-21 12:27:38 +02:00
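
A rough sketch of the notification pattern this PR describes: services register with a peerSet and receive callbacks on peer churn, replacing the getAllPeers callback. The names here (peerSet, notifee, logService) are illustrative, not the les package's actual types:

```go
package main

import (
	"fmt"
	"sync"
)

// notifee is anything that wants to hear about peer churn.
type notifee interface {
	registerPeer(id string)
	unregisterPeer(id string)
}

type peerSet struct {
	mu       sync.Mutex
	peers    map[string]struct{}
	notifees []notifee
}

func newPeerSet() *peerSet {
	return &peerSet{peers: make(map[string]struct{})}
}

// notify subscribes a service and replays already-known peers to it.
func (ps *peerSet) notify(n notifee) {
	ps.mu.Lock()
	defer ps.mu.Unlock()
	ps.notifees = append(ps.notifees, n)
	for id := range ps.peers {
		n.registerPeer(id)
	}
}

func (ps *peerSet) register(id string) {
	ps.mu.Lock()
	defer ps.mu.Unlock()
	ps.peers[id] = struct{}{}
	for _, n := range ps.notifees {
		n.registerPeer(id)
	}
}

// logService is a trivial subscriber standing in for LesOdr etc.
type logService struct{}

func (logService) registerPeer(id string)   { fmt.Println("peer added:", id) }
func (logService) unregisterPeer(id string) { fmt.Println("peer dropped:", id) }

func main() {
	ps := newPeerSet()
	ps.notify(logService{})
	ps.register("peer-1")
}
```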
98e101ef8e swarm/fuse: use subtests 2017-06-21 09:56:58 +02:00
50c18e6eb8 swarm/fuse: simplify externalUnmount
The code looked for /usr/bin/diskutil on Darwin, but it's actually
located in /usr/sbin. Fix that by not specifying the absolute path.
Also remove weird timeout construction and extra whitespace.
2017-06-21 09:56:58 +02:00
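
The fix amounts to letting exec resolve the binary through $PATH (which on macOS includes /usr/sbin, where diskutil lives) instead of hard-coding /usr/bin/diskutil. A minimal sketch, with a made-up mount point:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// No absolute path: exec.Command looks the binary up via $PATH,
	// so diskutil is found in /usr/sbin on macOS.
	cmd := exec.Command("diskutil", "umount", "force", "/tmp/example-mountpoint")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("unmount failed: %v\n%s", err, out)
	}
}
```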
60e27b51bc ethclient: fix TransactionByHash pending return value. (#14663)
As per #14661, TransactionByHash always returns false for pending.
This uses blockNumber rather than blockHash to ensure that it returns
the correct value for pending and will not suffer side effects if
eth_getTransactionByHash is fixed in the future.
2017-06-21 09:53:50 +02:00
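
The underlying idea: in the RPC response a pending transaction has a null blockNumber, so pending can be derived from that field's presence. A hedged sketch against a hand-rolled response struct, not ethclient's real internal type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rpcTransaction mimics the relevant slice of an eth_getTransactionByHash
// response; the struct is illustrative only.
type rpcTransaction struct {
	BlockNumber *string `json:"blockNumber"`
}

func main() {
	for _, raw := range []string{
		`{"blockNumber": null}`,    // still pending
		`{"blockNumber": "0x1b4"}`, // mined
	} {
		var tx rpcTransaction
		if err := json.Unmarshal([]byte(raw), &tx); err != nil {
			panic(err)
		}
		fmt.Println("pending:", tx.BlockNumber == nil)
	}
}
```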
693d9ccbfb trie: more node iterator improvements (#14615)
* ethdb: remove Set

Set deadlocks immediately and isn't part of the Database interface.

* trie: add Err to Iterator

This is useful for testing because the underlying NodeIterator doesn't
need to be kept in a separate variable just to get the error.

* trie: add LeafKey to iterator, panic when not at leaf

LeafKey is useful for callers that can't interpret Path.

* trie: retry failed seek/peek in iterator Next

Instead of failing iteration irrecoverably, make it so Next retries the
pending seek or peek every time.

Smaller changes in this commit make this easier to test:

* The iterator previously returned from Next on encountering a hash
  node. This caused it to visit the same path twice.
* Path returned nibbles with terminator symbol for valueNode attached
  to fullNode, but removed it for valueNode attached to shortNode. Now
  the terminator is always present. This makes Path unique to each node
  and simplifies Leaf.

* trie: add Path to MissingNodeError

The light client trie iterator needs to know the path of the node that's
missing so it can retrieve a proof for it. NodeIterator.Path is not
sufficient because it is updated when the node is resolved and actually
visited by the iterator.

Also remove unused fields. They were added a long time ago before we
knew which fields would be needed for the light client.
2017-06-20 18:26:09 +02:00
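
The retry behaviour described above ("Next retries the pending seek or peek every time") can be sketched with a toy iterator; errMissing stands in for trie.MissingNodeError, and none of these names are the trie package's real code:

```go
package main

import (
	"errors"
	"fmt"
)

var errMissing = errors.New("missing node")

type iterator struct {
	pendingSeek bool // a failed seek is remembered, not fatal
	attempts    int
	err         error
}

func (it *iterator) seek() error {
	it.attempts++
	if it.attempts < 3 {
		return errMissing // e.g. the light client hasn't fetched it yet
	}
	return nil
}

// Next retries the pending seek instead of failing iteration
// irrecoverably: once the node becomes available, iteration resumes.
func (it *iterator) Next() bool {
	if it.pendingSeek || it.attempts == 0 {
		if err := it.seek(); err != nil {
			it.pendingSeek, it.err = true, err
			return false
		}
		it.pendingSeek, it.err = false, nil
	}
	return true
}

func main() {
	it := &iterator{}
	for i := 0; i < 4; i++ {
		fmt.Printf("Next() = %v (err: %v)\n", it.Next(), it.err)
	}
}
```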
5c53a5be66 consensus/clique: fix typo and don't add snapshot into recents again 2017-06-20 10:20:45 +08:00
431cf2a1e4 Merge pull request #14635 from necaremus/patch-1
cmd/geth: fixed a minor typo in the comments
2017-06-16 17:11:54 +03:00
4f77857f74 cmd/geth: fixed a minor typo in the comments 2017-06-16 15:03:19 +02:00
fade09a7ff eth: remove les server from protocol manager (#14625) 2017-06-15 15:28:57 +02:00
db6e695002 consensus/clique: choose valid votes 2017-06-14 16:49:33 +08:00
335abdceb1 Merge pull request #14581 from holiman/byte_opt
core/vm: improve opByte
2017-06-13 14:44:19 +03:00
732273094c Merge pull request #14604 from bas-vk/mobile-getfrom
mobile: use EIP155 signer for determining sender
2017-06-13 14:09:25 +03:00
b8793edd83 mobile: add a regression test for signer recovery 2017-06-13 13:39:39 +03:00
eb92522278 mobile: use EIP155 signer for determining sender 2017-06-13 09:13:59 +02:00
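
A minimal sketch of sender recovery with the EIP155 signer, using go-ethereum's core/types API roughly as it stood in the v1.6 era (gas limit was still a *big.Int then; the zero recipient and values are made up):

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, _ := crypto.GenerateKey()
	signer := types.NewEIP155Signer(big.NewInt(1)) // chain ID 1 = mainnet

	tx := types.NewTransaction(0, common.Address{}, big.NewInt(0),
		big.NewInt(21000), big.NewInt(1), nil)
	tx, err := types.SignTx(tx, signer, key)
	if err != nil {
		panic(err)
	}

	// Recover the sender with the same replay-protected signer.
	from, err := types.Sender(signer, tx)
	if err != nil {
		panic(err)
	}
	fmt.Println("sender recovered:", from == crypto.PubkeyToAddress(key.PublicKey))
}
```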
061889d4ea rlp, trie, contracts, compression, consensus: improve comments (#14580) 2017-06-12 14:45:17 +02:00
e3dfd55820 Merge pull request #14598 from konradkonrad/fix_makedag
consensus/ethash, cmd/geth: Fix `makedag` epoch
2017-06-12 13:34:26 +03:00
2fefe4baa0 consensus: Fix makedag epoch
`geth makedag <blocknumber> <path>` was creating DAGs for
`<blocknumber>/<epoch_length> + 1`, hence
it was impossible to create an epoch 0 DAG.

This fixes the calculations in `consensus/ethash/ethash.go` for
`MakeDataset` and `MakeCache`, and applies `gofmt`.
2017-06-12 11:15:16 +02:00
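
The arithmetic at issue: an ethash epoch spans 30000 blocks, so the epoch for a block is the plain quotient, not the quotient plus one. A tiny illustration:

```go
package main

import "fmt"

const epochLength = 30000 // ethash epoch length in blocks

func main() {
	for _, block := range []uint64{0, 29999, 30000, 60000} {
		buggy := block/epochLength + 1 // what makedag used to compute
		fixed := block / epochLength   // epoch 0 is reachable again
		fmt.Printf("block %6d: buggy epoch %d, fixed epoch %d\n", block, buggy, fixed)
	}
}
```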
ac9865791a core/vm, common/math: Add doc about Byte, fix format 2017-06-08 23:16:05 +02:00
80f7c6c299 cmd/evm: add --prestate, --sender, --json flags for fuzzing (#14476) 2017-06-07 17:09:08 +02:00
bc24b7a912 core/types: use Header.Hash for block hashes (#14587)
Fixes #14586
2017-06-07 12:06:25 +02:00
1496b3aff6 common/math, core/vm: Un-expose bigEndianByteAt, use correct terms for endianness 2017-06-06 18:38:38 +02:00
1e9f86b49e cmd/swarm: fix error handling in 'swarm up' (#14557)
The error returned by client.Upload was previously being ignored because
it fell out of scope outside the if statement. This has been fixed by
instead defining a function which returns the hash and error (rather than
trying to set the hash in each branch of the if statement).
2017-06-06 09:39:10 +02:00
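
The Go gotcha behind this fix, in miniature: assigning with := inside a branch creates new variables that vanish at the closing brace, so the outer err never sees the failure. Returning both values through one function sidesteps it. The names below are illustrative, not the swarm code:

```go
package main

import (
	"errors"
	"fmt"
)

func upload() (string, error) { return "", errors.New("upload failed") }

func main() {
	// Buggy shape: := inside the branch shadows the outer variables;
	// the check after the block sees only their zero values.
	var hash string
	var err error
	if true {
		hash, err := upload() // new hash and err, scoped to this block
		_, _ = hash, err
	}
	fmt.Println("outer err after branch:", err) // <nil>, failure lost

	// Fixed shape: one function returns both values to the caller.
	doUpload := func() (string, error) { return upload() }
	hash, err = doUpload()
	fmt.Println("hash:", hash, "err:", err) // error is visible now
}
```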
65ea913e29 Merge pull request #14583 from ethersphere/core-log-fixes
core: Fix VM error logging
2017-06-06 10:31:27 +03:00
9a0e433b13 accounts: fix spelling error (#14567) 2017-06-06 09:28:47 +02:00
04d2de9119 core: Fix VM error logging
Signed-off-by: Lewis Marshall <lewis@lmars.net>
2017-06-05 23:51:32 +01:00
3285a0fda3 core/vm, common/math: Add fast getByte for bigints, improve opByte 2017-06-05 08:44:11 +02:00
6171d01b11 VERSION, params: begin Geth 1.6.6 release cycle 2017-06-01 22:08:19 +03:00
cf87713dd4 params: mark Geth v1.6.5 stable (Hat Trick) 2017-06-01 21:48:47 +03:00
ac92d7c411 Merge pull request #14570 from Arachnid/jumpdestanalysis
core/vm: Use a bitmap instead of a map for jumpdest analysis
2017-06-01 21:44:50 +03:00
d5a79934dc core/vm: Use a bitmap instead of a map for jumpdest analysis
2017-06-01 19:14:05 +01:00
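
The idea behind this change: JUMPDEST analysis needs one bit per byte of code, so a []byte used as a bitmap is far cheaper than a map keyed by program counter. A simplified sketch with inlined opcode constants (the real analysis lives in core/vm and differs in detail):

```go
package main

import "fmt"

const (
	opJumpdest = 0x5b
	opPush1    = 0x60
	opPush32   = 0x7f
)

// jumpdests builds a bitmap with one bit per code byte, set where a
// JUMPDEST sits outside PUSH immediate data.
func jumpdests(code []byte) []byte {
	bits := make([]byte, len(code)/8+1)
	for pc := 0; pc < len(code); pc++ {
		op := code[pc]
		if op == opJumpdest {
			bits[pc/8] |= 1 << uint(pc%8)
		} else if op >= opPush1 && op <= opPush32 {
			pc += int(op-opPush1) + 1 // skip the immediate bytes
		}
	}
	return bits
}

func valid(bits []byte, pc uint64) bool {
	return bits[pc/8]&(1<<uint(pc%8)) != 0
}

func main() {
	// PUSH1 0x5b; JUMPDEST: the 0x5b at pc=1 is data, not a target.
	code := []byte{opPush1, 0x5b, opJumpdest}
	bits := jumpdests(code)
	fmt.Println(valid(bits, 1), valid(bits, 2)) // false true
}
```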
0424192e61 VERSION, params: begin geth 1.6.5 cycle 2017-06-01 17:37:44 +03:00
9c2882b2e5 params: Geth 1.6.4 stable (hotfix) 2017-06-01 17:33:17 +03:00
1a0eb903f1 internal/ethapi: initialize account mutex in lock properly 2017-06-01 17:16:12 +03:00
0036e2a747 swarm/dev: add development environment (#14332)
This PR adds a Swarm development environment which can be run in a
Docker container and provides scripts for building binaries and running
Swarm clusters.
2017-06-01 12:52:18 +02:00
727eadacca VERSION, params: begin Geth 1.6.4 release cycle 2017-06-01 11:43:57 +03:00
99cba96f26 params: release Geth 1.6.3 - Covfefe 2017-06-01 11:41:48 +03:00
f272879e5a Merge pull request #14565 from karalabe/relax-privkey-checks
accounts/keystore, crypto: don't enforce key checks on existing keyfiles
2017-06-01 11:14:11 +03:00
72dd51e25a accounts/keystore, crypto: don't enforce key checks on existing keyfiles 2017-06-01 11:11:06 +03:00
799a469000 Merge pull request #14561 from karalabe/txpool-perf-fix
core: reduce transaction reorganization overhead
2017-06-01 10:33:47 +03:00
f4d81178d8 Merge pull request #14563 from karalabe/ethstats-reduce-traffic-2
ethstats: reduce ethstats traffic by throttling reports
2017-06-01 10:33:21 +03:00
310d2e7ef4 Merge pull request #14564 from karalabe/fix-1.6-docker
containers/docker: fix the legacy alpine image before dropping
2017-06-01 10:33:03 +03:00
3ecde4e2aa containers/docker: fix the legacy alpine image before dropping 2017-06-01 00:21:47 +03:00
a355b401db ethstats: reduce ethstats traffic by throttling reports 2017-06-01 00:16:19 +03:00
cba33029a8 core: only reorg changed account, not all 2017-05-31 23:26:24 +03:00
9702badd83 core: don't uselessly recheck transactions on dump 2017-05-31 21:29:50 +03:00
067dc2cbf5 params, VERSION: 1.6.3 unstable 2017-05-31 05:47:35 +02:00
122 changed files with 4661 additions and 2001 deletions

View File

@ -53,6 +53,7 @@ matrix:
- debhelper
- dput
- gcc-multilib
- fakeroot
script:
# Build for the primary platforms that Trusty can manage
- go run build/ci.go debsrc -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>" -upload ppa:ethereum/ethereum

View File

@ -102,6 +102,22 @@ over between the main network and test network, you should make sure to always use separate accounts
for play-money and real-money. Unless you manually move accounts, Geth will by default correctly
separate the two networks and will not make any accounts available between them.*
### Configuration
As an alternative to passing the numerous flags to the `geth` binary, you can also pass a configuration file via:
```
$ geth --config /path/to/your_config.toml
```
To get an idea of how the file should look, you can use the `dumpconfig` subcommand to export your existing configuration:
```
$ geth --your-favourite-flags dumpconfig
```
*Note: This works only with geth v1.6.0 and above*
#### Docker quick start
One of the quickest ways to get Ethereum up and running on your machine is by using Docker:

View File

@ -1 +1 @@
1.6.2
1.6.6

View File

@ -38,7 +38,7 @@ var ErrNotSupported = errors.New("not supported")
var ErrInvalidPassphrase = errors.New("invalid passphrase")
// ErrWalletAlreadyOpen is returned if a wallet is attempted to be opened the
// secodn time.
// second time.
var ErrWalletAlreadyOpen = errors.New("wallet already open")
// ErrWalletClosed is returned if a wallet is attempted to be opened the

View File

@ -182,10 +182,8 @@ func DecryptKey(keyjson []byte, auth string) (*Key, error) {
if err != nil {
return nil, err
}
key, err := crypto.ToECDSA(keyBytes)
if err != nil {
return nil, err
}
key := crypto.ToECDSAUnsafe(keyBytes)
return &Key{
Id: uuid.UUID(keyId),
Address: crypto.PubkeyToAddress(key.PublicKey),

View File

@ -74,10 +74,8 @@ func decryptPreSaleKey(fileContent []byte, password string) (key *Key, err error
return nil, err
}
ethPriv := crypto.Keccak256(plainText)
ecKey, err := crypto.ToECDSA(ethPriv)
if err != nil {
return nil, err
}
ecKey := crypto.ToECDSAUnsafe(ethPriv)
key = &Key{
Id: nil,
Address: crypto.PubkeyToAddress(ecKey.PublicKey),
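
Both hunks above swap the strict parser for the lenient one: ToECDSA rejects malformed scalars, while ToECDSAUnsafe blindly converts whatever bytes it gets, so keys that were already stored keep loading. A small comparison, assuming go-ethereum's crypto package as introduced by this commit:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// 32 bytes of 0xff is above the secp256k1 group order, so the
	// strict parser refuses it...
	bad := make([]byte, 32)
	for i := range bad {
		bad[i] = 0xff
	}
	if _, err := crypto.ToECDSA(bad); err != nil {
		fmt.Println("strict ToECDSA:", err)
	}

	// ...while the unsafe variant still yields a usable key, which is
	// what existing keyfiles rely on.
	key := crypto.ToECDSAUnsafe(bad)
	fmt.Println("lenient ToECDSAUnsafe:", crypto.PubkeyToAddress(key.PublicKey).Hex())
}
```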

cmd/evm/json_logger.go Normal file (67 lines)
View File

@ -0,0 +1,67 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"encoding/json"
"io"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/core/vm"
)
type JSONLogger struct {
encoder *json.Encoder
cfg *vm.LogConfig
}
func NewJSONLogger(cfg *vm.LogConfig, writer io.Writer) *JSONLogger {
return &JSONLogger{json.NewEncoder(writer), cfg}
}
// CaptureState outputs state information on the logger.
func (l *JSONLogger) CaptureState(env *vm.EVM, pc uint64, op vm.OpCode, gas, cost uint64, memory *vm.Memory, stack *vm.Stack, contract *vm.Contract, depth int, err error) error {
log := vm.StructLog{
Pc: pc,
Op: op,
Gas: gas + cost,
GasCost: cost,
MemorySize: memory.Len(),
Storage: nil,
Depth: depth,
Err: err,
}
if !l.cfg.DisableMemory {
log.Memory = memory.Data()
}
if !l.cfg.DisableStack {
log.Stack = stack.Data()
}
return l.encoder.Encode(log)
}
// CaptureEnd is triggered at end of execution.
func (l *JSONLogger) CaptureEnd(output []byte, gasUsed uint64, t time.Duration) error {
type endLog struct {
Output string `json:"output"`
GasUsed math.HexOrDecimal64 `json:"gasUsed"`
Time time.Duration `json:"time"`
}
return l.encoder.Encode(endLog{common.Bytes2Hex(output), math.HexOrDecimal64(gasUsed), t})
}

View File

@ -90,6 +90,26 @@ var (
Name: "nogasmetering",
Usage: "disable gas metering",
}
GenesisFlag = cli.StringFlag{
Name: "prestate",
Usage: "JSON file with prestate (genesis) config",
}
MachineFlag = cli.BoolFlag{
Name: "json",
Usage: "output trace logs in machine readable format (json)",
}
SenderFlag = cli.StringFlag{
Name: "sender",
Usage: "The transaction origin",
}
DisableMemoryFlag = cli.BoolFlag{
Name: "nomemory",
Usage: "disable memory output",
}
DisableStackFlag = cli.BoolFlag{
Name: "nostack",
Usage: "disable stack output",
}
)
func init() {
@ -108,6 +128,11 @@ func init() {
MemProfileFlag,
CPUProfileFlag,
StatDumpFlag,
GenesisFlag,
MachineFlag,
SenderFlag,
DisableMemoryFlag,
DisableStackFlag,
}
app.Commands = []cli.Command{
compileCommand,

View File

@ -18,6 +18,7 @@ package main
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"os"
@ -29,11 +30,13 @@ import (
"github.com/ethereum/go-ethereum/cmd/evm/internal/compiler"
"github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/core/vm/runtime"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
cli "gopkg.in/urfave/cli.v1"
)
@ -45,17 +48,63 @@ var runCommand = cli.Command{
Description: `The run command runs arbitrary EVM code.`,
}
// readGenesis will read the given JSON format genesis file and return
// the initialized Genesis structure
func readGenesis(genesisPath string) *core.Genesis {
// Make sure we have a valid genesis JSON
//genesisPath := ctx.Args().First()
if len(genesisPath) == 0 {
utils.Fatalf("Must supply path to genesis JSON file")
}
file, err := os.Open(genesisPath)
if err != nil {
utils.Fatalf("Failed to read genesis file: %v", err)
}
defer file.Close()
genesis := new(core.Genesis)
if err := json.NewDecoder(file).Decode(genesis); err != nil {
utils.Fatalf("invalid genesis file: %v", err)
}
return genesis
}
func runCmd(ctx *cli.Context) error {
glogger := log.NewGlogHandler(log.StreamHandler(os.Stderr, log.TerminalFormat(false)))
glogger.Verbosity(log.Lvl(ctx.GlobalInt(VerbosityFlag.Name)))
log.Root().SetHandler(glogger)
logconfig := &vm.LogConfig{
DisableMemory: ctx.GlobalBool(DisableMemoryFlag.Name),
DisableStack: ctx.GlobalBool(DisableStackFlag.Name),
}
var (
db, _ = ethdb.NewMemDatabase()
statedb, _ = state.New(common.Hash{}, db)
sender = common.StringToAddress("sender")
logger = vm.NewStructLogger(nil)
tracer vm.Tracer
debugLogger *vm.StructLogger
statedb *state.StateDB
chainConfig *params.ChainConfig
sender = common.StringToAddress("sender")
)
if ctx.GlobalBool(MachineFlag.Name) {
tracer = NewJSONLogger(logconfig, os.Stdout)
} else if ctx.GlobalBool(DebugFlag.Name) {
debugLogger = vm.NewStructLogger(logconfig)
tracer = debugLogger
} else {
debugLogger = vm.NewStructLogger(logconfig)
}
if ctx.GlobalString(GenesisFlag.Name) != "" {
gen := readGenesis(ctx.GlobalString(GenesisFlag.Name))
_, statedb = gen.ToBlock()
chainConfig = gen.Config
} else {
var db, _ = ethdb.NewMemDatabase()
statedb, _ = state.New(common.Hash{}, db)
}
if ctx.GlobalString(SenderFlag.Name) != "" {
sender = common.HexToAddress(ctx.GlobalString(SenderFlag.Name))
}
statedb.CreateAccount(sender)
var (
@ -95,16 +144,16 @@ func runCmd(ctx *cli.Context) error {
}
code = common.Hex2Bytes(string(bytes.TrimRight(hexcode, "\n")))
}
initialGas := ctx.GlobalUint64(GasFlag.Name)
runtimeConfig := runtime.Config{
Origin: sender,
State: statedb,
GasLimit: ctx.GlobalUint64(GasFlag.Name),
GasLimit: initialGas,
GasPrice: utils.GlobalBig(ctx, PriceFlag.Name),
Value: utils.GlobalBig(ctx, ValueFlag.Name),
EVMConfig: vm.Config{
Tracer: logger,
Debug: ctx.GlobalBool(DebugFlag.Name),
Tracer: tracer,
Debug: ctx.GlobalBool(DebugFlag.Name) || ctx.GlobalBool(MachineFlag.Name),
DisableGasMetering: ctx.GlobalBool(DisableGasMeteringFlag.Name),
},
}
@ -122,15 +171,19 @@ func runCmd(ctx *cli.Context) error {
defer pprof.StopCPUProfile()
}
if chainConfig != nil {
runtimeConfig.ChainConfig = chainConfig
}
tstart := time.Now()
var leftOverGas uint64
if ctx.GlobalBool(CreateFlag.Name) {
input := append(code, common.Hex2Bytes(ctx.GlobalString(InputFlag.Name))...)
ret, _, err = runtime.Create(input, &runtimeConfig)
ret, _, leftOverGas, err = runtime.Create(input, &runtimeConfig)
} else {
receiver := common.StringToAddress("receiver")
statedb.SetCode(receiver, code)
ret, err = runtime.Call(receiver, common.Hex2Bytes(ctx.GlobalString(InputFlag.Name)), &runtimeConfig)
ret, leftOverGas, err = runtime.Call(receiver, common.Hex2Bytes(ctx.GlobalString(InputFlag.Name)), &runtimeConfig)
}
execTime := time.Since(tstart)
@ -153,8 +206,10 @@ func runCmd(ctx *cli.Context) error {
}
if ctx.GlobalBool(DebugFlag.Name) {
fmt.Fprintln(os.Stderr, "#### TRACE ####")
vm.WriteTrace(os.Stderr, logger.StructLogs())
if debugLogger != nil {
fmt.Fprintln(os.Stderr, "#### TRACE ####")
vm.WriteTrace(os.Stderr, debugLogger.StructLogs())
}
fmt.Fprintln(os.Stderr, "#### LOGS ####")
vm.WriteLogs(os.Stderr, statedb.Logs())
}
@ -167,14 +222,18 @@ heap objects: %d
allocations: %d
total allocations: %d
GC calls: %d
Gas used: %d
`, execTime, mem.HeapObjects, mem.Alloc, mem.TotalAlloc, mem.NumGC)
`, execTime, mem.HeapObjects, mem.Alloc, mem.TotalAlloc, mem.NumGC, initialGas-leftOverGas)
}
if tracer != nil {
tracer.CaptureEnd(ret, initialGas-leftOverGas, execTime)
} else {
fmt.Printf("0x%x\n", ret)
}
fmt.Printf("0x%x", ret)
if err != nil {
fmt.Printf(" error: %v", err)
fmt.Printf(" error: %v\n", err)
}
fmt.Println()
return nil
}

View File

@ -233,7 +233,7 @@ func unlockAccount(ctx *cli.Context, ks *keystore.KeyStore, address string, i in
return accounts.Account{}, ""
}
// getPassPhrase retrieves the passwor associated with an account, either fetched
// getPassPhrase retrieves the password associated with an account, either fetched
// from a list of preloaded passphrases, or requested interactively from the user.
func getPassPhrase(prompt string, confirmation bool, i int, passwords []string) string {
// If a list of passwords was supplied, retrieve from them

View File

@ -44,21 +44,21 @@ func tmpDatadirWithKeystore(t *testing.T) string {
func TestAccountListEmpty(t *testing.T) {
geth := runGeth(t, "account", "list")
geth.expectExit()
geth.ExpectExit()
}
func TestAccountList(t *testing.T) {
datadir := tmpDatadirWithKeystore(t)
geth := runGeth(t, "account", "list", "--datadir", datadir)
defer geth.expectExit()
defer geth.ExpectExit()
if runtime.GOOS == "windows" {
geth.expect(`
geth.Expect(`
Account #0: {7ef5a6135f1fd6a02593eedc869c6d41d934aef8} keystore://{{.Datadir}}\keystore\UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8
Account #1: {f466859ead1932d743d622cb74fc058882e8648a} keystore://{{.Datadir}}\keystore\aaa
Account #2: {289d485d9771714cce91d3393d764e1311907acc} keystore://{{.Datadir}}\keystore\zzz
`)
} else {
geth.expect(`
geth.Expect(`
Account #0: {7ef5a6135f1fd6a02593eedc869c6d41d934aef8} keystore://{{.Datadir}}/keystore/UTC--2016-03-22T12-57-55.920751759Z--7ef5a6135f1fd6a02593eedc869c6d41d934aef8
Account #1: {f466859ead1932d743d622cb74fc058882e8648a} keystore://{{.Datadir}}/keystore/aaa
Account #2: {289d485d9771714cce91d3393d764e1311907acc} keystore://{{.Datadir}}/keystore/zzz
@ -68,20 +68,20 @@ Account #2: {289d485d9771714cce91d3393d764e1311907acc} keystore://{{.Datadir}}/k
func TestAccountNew(t *testing.T) {
geth := runGeth(t, "account", "new", "--lightkdf")
defer geth.expectExit()
geth.expect(`
defer geth.ExpectExit()
geth.Expect(`
Your new account is locked with a password. Please give a password. Do not forget this password.
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "foobar"}}
Repeat passphrase: {{.InputLine "foobar"}}
`)
geth.expectRegexp(`Address: \{[0-9a-f]{40}\}\n`)
geth.ExpectRegexp(`Address: \{[0-9a-f]{40}\}\n`)
}
func TestAccountNewBadRepeat(t *testing.T) {
geth := runGeth(t, "account", "new", "--lightkdf")
defer geth.expectExit()
geth.expect(`
defer geth.ExpectExit()
geth.Expect(`
Your new account is locked with a password. Please give a password. Do not forget this password.
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "something"}}
@ -95,8 +95,8 @@ func TestAccountUpdate(t *testing.T) {
geth := runGeth(t, "account", "update",
"--datadir", datadir, "--lightkdf",
"f466859ead1932d743d622cb74fc058882e8648a")
defer geth.expectExit()
geth.expect(`
defer geth.ExpectExit()
geth.Expect(`
Unlocking account f466859ead1932d743d622cb74fc058882e8648a | Attempt 1/3
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "foobar"}}
@ -108,8 +108,8 @@ Repeat passphrase: {{.InputLine "foobar2"}}
func TestWalletImport(t *testing.T) {
geth := runGeth(t, "wallet", "import", "--lightkdf", "testdata/guswallet.json")
defer geth.expectExit()
geth.expect(`
defer geth.ExpectExit()
geth.Expect(`
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "foo"}}
Address: {d4584b5f6229b7be90727b0fc8c6b91bb427821f}
@ -123,8 +123,8 @@ Address: {d4584b5f6229b7be90727b0fc8c6b91bb427821f}
func TestWalletImportBadPassword(t *testing.T) {
geth := runGeth(t, "wallet", "import", "--lightkdf", "testdata/guswallet.json")
defer geth.expectExit()
geth.expect(`
defer geth.ExpectExit()
geth.Expect(`
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "wrong"}}
Fatal: could not decrypt key with given passphrase
@ -137,19 +137,19 @@ func TestUnlockFlag(t *testing.T) {
"--datadir", datadir, "--nat", "none", "--nodiscover", "--dev",
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a",
"js", "testdata/empty.js")
geth.expect(`
geth.Expect(`
Unlocking account f466859ead1932d743d622cb74fc058882e8648a | Attempt 1/3
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "foobar"}}
`)
geth.expectExit()
geth.ExpectExit()
wantMessages := []string{
"Unlocked account",
"=0xf466859ead1932d743d622cb74fc058882e8648a",
}
for _, m := range wantMessages {
if !strings.Contains(geth.stderrText(), m) {
if !strings.Contains(geth.StderrText(), m) {
t.Errorf("stderr text does not contain %q", m)
}
}
@ -160,8 +160,8 @@ func TestUnlockFlagWrongPassword(t *testing.T) {
geth := runGeth(t,
"--datadir", datadir, "--nat", "none", "--nodiscover", "--dev",
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a")
defer geth.expectExit()
geth.expect(`
defer geth.ExpectExit()
geth.Expect(`
Unlocking account f466859ead1932d743d622cb74fc058882e8648a | Attempt 1/3
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "wrong1"}}
@ -180,14 +180,14 @@ func TestUnlockFlagMultiIndex(t *testing.T) {
"--datadir", datadir, "--nat", "none", "--nodiscover", "--dev",
"--unlock", "0,2",
"js", "testdata/empty.js")
geth.expect(`
geth.Expect(`
Unlocking account 0 | Attempt 1/3
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "foobar"}}
Unlocking account 2 | Attempt 1/3
Passphrase: {{.InputLine "foobar"}}
`)
geth.expectExit()
geth.ExpectExit()
wantMessages := []string{
"Unlocked account",
@ -195,7 +195,7 @@ Passphrase: {{.InputLine "foobar"}}
"=0x289d485d9771714cce91d3393d764e1311907acc",
}
for _, m := range wantMessages {
if !strings.Contains(geth.stderrText(), m) {
if !strings.Contains(geth.StderrText(), m) {
t.Errorf("stderr text does not contain %q", m)
}
}
@ -207,7 +207,7 @@ func TestUnlockFlagPasswordFile(t *testing.T) {
"--datadir", datadir, "--nat", "none", "--nodiscover", "--dev",
"--password", "testdata/passwords.txt", "--unlock", "0,2",
"js", "testdata/empty.js")
geth.expectExit()
geth.ExpectExit()
wantMessages := []string{
"Unlocked account",
@ -215,7 +215,7 @@ func TestUnlockFlagPasswordFile(t *testing.T) {
"=0x289d485d9771714cce91d3393d764e1311907acc",
}
for _, m := range wantMessages {
if !strings.Contains(geth.stderrText(), m) {
if !strings.Contains(geth.StderrText(), m) {
t.Errorf("stderr text does not contain %q", m)
}
}
@ -226,8 +226,8 @@ func TestUnlockFlagPasswordFileWrongPassword(t *testing.T) {
geth := runGeth(t,
"--datadir", datadir, "--nat", "none", "--nodiscover", "--dev",
"--password", "testdata/wrong-passwords.txt", "--unlock", "0,2")
defer geth.expectExit()
geth.expect(`
defer geth.ExpectExit()
geth.Expect(`
Fatal: Failed to unlock account 0 (could not decrypt key with given passphrase)
`)
}
@ -238,14 +238,14 @@ func TestUnlockFlagAmbiguous(t *testing.T) {
"--keystore", store, "--nat", "none", "--nodiscover", "--dev",
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a",
"js", "testdata/empty.js")
defer geth.expectExit()
defer geth.ExpectExit()
// Helper for the expect template, returns absolute keystore path.
geth.setTemplateFunc("keypath", func(file string) string {
geth.SetTemplateFunc("keypath", func(file string) string {
abs, _ := filepath.Abs(filepath.Join(store, file))
return abs
})
geth.expect(`
geth.Expect(`
Unlocking account f466859ead1932d743d622cb74fc058882e8648a | Attempt 1/3
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "foobar"}}
@ -257,14 +257,14 @@ Your passphrase unlocked keystore://{{keypath "1"}}
In order to avoid this warning, you need to remove the following duplicate key files:
keystore://{{keypath "2"}}
`)
geth.expectExit()
geth.ExpectExit()
wantMessages := []string{
"Unlocked account",
"=0xf466859ead1932d743d622cb74fc058882e8648a",
}
for _, m := range wantMessages {
if !strings.Contains(geth.stderrText(), m) {
if !strings.Contains(geth.StderrText(), m) {
t.Errorf("stderr text does not contain %q", m)
}
}
@ -275,14 +275,14 @@ func TestUnlockFlagAmbiguousWrongPassword(t *testing.T) {
geth := runGeth(t,
"--keystore", store, "--nat", "none", "--nodiscover", "--dev",
"--unlock", "f466859ead1932d743d622cb74fc058882e8648a")
defer geth.expectExit()
defer geth.ExpectExit()
// Helper for the expect template, returns absolute keystore path.
geth.setTemplateFunc("keypath", func(file string) string {
geth.SetTemplateFunc("keypath", func(file string) string {
abs, _ := filepath.Abs(filepath.Join(store, file))
return abs
})
geth.expect(`
geth.Expect(`
Unlocking account f466859ead1932d743d622cb74fc058882e8648a | Attempt 1/3
!! Unsupported terminal, password will be echoed.
Passphrase: {{.InputLine "wrong"}}
@ -292,5 +292,5 @@ Multiple key files exist for address f466859ead1932d743d622cb74fc058882e8648a:
Testing your passphrase against all of them...
Fatal: None of the listed files could be unlocked.
`)
geth.expectExit()
geth.ExpectExit()
}

View File

@ -47,15 +47,15 @@ func TestConsoleWelcome(t *testing.T) {
"console")
// Gather all the infos the welcome message needs to contain
geth.setTemplateFunc("goos", func() string { return runtime.GOOS })
geth.setTemplateFunc("goarch", func() string { return runtime.GOARCH })
geth.setTemplateFunc("gover", runtime.Version)
geth.setTemplateFunc("gethver", func() string { return params.Version })
geth.setTemplateFunc("niltime", func() string { return time.Unix(0, 0).Format(time.RFC1123) })
geth.setTemplateFunc("apis", func() string { return ipcAPIs })
geth.SetTemplateFunc("goos", func() string { return runtime.GOOS })
geth.SetTemplateFunc("goarch", func() string { return runtime.GOARCH })
geth.SetTemplateFunc("gover", runtime.Version)
geth.SetTemplateFunc("gethver", func() string { return params.Version })
geth.SetTemplateFunc("niltime", func() string { return time.Unix(0, 0).Format(time.RFC1123) })
geth.SetTemplateFunc("apis", func() string { return ipcAPIs })
// Verify the actual welcome message to the required template
geth.expect(`
geth.Expect(`
Welcome to the Geth JavaScript console!
instance: Geth/v{{gethver}}/{{goos}}-{{goarch}}/{{gover}}
@ -66,7 +66,7 @@ at block: 0 ({{niltime}})
> {{.InputLine "exit"}}
`)
geth.expectExit()
geth.ExpectExit()
}
// Tests that a console can be attached to a running node via various means.
@ -90,8 +90,8 @@ func TestIPCAttachWelcome(t *testing.T) {
time.Sleep(2 * time.Second) // Simple way to wait for the RPC endpoint to open
testAttachWelcome(t, geth, "ipc:"+ipc, ipcAPIs)
geth.interrupt()
geth.expectExit()
geth.Interrupt()
geth.ExpectExit()
}
func TestHTTPAttachWelcome(t *testing.T) {
@ -104,8 +104,8 @@ func TestHTTPAttachWelcome(t *testing.T) {
time.Sleep(2 * time.Second) // Simple way to wait for the RPC endpoint to open
testAttachWelcome(t, geth, "http://localhost:"+port, httpAPIs)
geth.interrupt()
geth.expectExit()
geth.Interrupt()
geth.ExpectExit()
}
func TestWSAttachWelcome(t *testing.T) {
@ -119,29 +119,29 @@ func TestWSAttachWelcome(t *testing.T) {
time.Sleep(2 * time.Second) // Simple way to wait for the RPC endpoint to open
testAttachWelcome(t, geth, "ws://localhost:"+port, httpAPIs)
geth.interrupt()
geth.expectExit()
geth.Interrupt()
geth.ExpectExit()
}
func testAttachWelcome(t *testing.T, geth *testgeth, endpoint, apis string) {
// Attach to a running geth node and terminate immediately
attach := runGeth(t, "attach", endpoint)
defer attach.expectExit()
attach.stdin.Close()
defer attach.ExpectExit()
attach.CloseStdin()
// Gather all the infos the welcome message needs to contain
attach.setTemplateFunc("goos", func() string { return runtime.GOOS })
attach.setTemplateFunc("goarch", func() string { return runtime.GOARCH })
attach.setTemplateFunc("gover", runtime.Version)
attach.setTemplateFunc("gethver", func() string { return params.Version })
attach.setTemplateFunc("etherbase", func() string { return geth.Etherbase })
attach.setTemplateFunc("niltime", func() string { return time.Unix(0, 0).Format(time.RFC1123) })
attach.setTemplateFunc("ipc", func() bool { return strings.HasPrefix(endpoint, "ipc") })
attach.setTemplateFunc("datadir", func() string { return geth.Datadir })
attach.setTemplateFunc("apis", func() string { return apis })
attach.SetTemplateFunc("goos", func() string { return runtime.GOOS })
attach.SetTemplateFunc("goarch", func() string { return runtime.GOARCH })
attach.SetTemplateFunc("gover", runtime.Version)
attach.SetTemplateFunc("gethver", func() string { return params.Version })
attach.SetTemplateFunc("etherbase", func() string { return geth.Etherbase })
attach.SetTemplateFunc("niltime", func() string { return time.Unix(0, 0).Format(time.RFC1123) })
attach.SetTemplateFunc("ipc", func() bool { return strings.HasPrefix(endpoint, "ipc") })
attach.SetTemplateFunc("datadir", func() string { return geth.Datadir })
attach.SetTemplateFunc("apis", func() string { return apis })
// Verify the actual welcome message to the required template
attach.expect(`
attach.Expect(`
Welcome to the Geth JavaScript console!
instance: Geth/v{{gethver}}/{{goos}}-{{goarch}}/{{gover}}
@ -152,7 +152,7 @@ at block: 0 ({{niltime}}){{if ipc}}
> {{.InputLine "exit" }}
`)
attach.expectExit()
attach.ExpectExit()
}
// trulyRandInt generates a crypto random integer used by the console tests to

View File

@ -112,12 +112,12 @@ func testDAOForkBlockNewChain(t *testing.T, test int, genesis string, expectBloc
if err := ioutil.WriteFile(json, []byte(genesis), 0600); err != nil {
t.Fatalf("test %d: failed to write genesis file: %v", test, err)
}
runGeth(t, "--datadir", datadir, "init", json).cmd.Wait()
runGeth(t, "--datadir", datadir, "init", json).WaitExit()
} else {
// Force chain initialization
args := []string{"--port", "0", "--maxpeers", "0", "--nodiscover", "--nat", "none", "--ipcdisable", "--datadir", datadir}
geth := runGeth(t, append(args, []string{"--exec", "2+2", "console"}...)...)
geth.cmd.Wait()
geth.WaitExit()
}
// Retrieve the DAO config flag from the database
path := filepath.Join(datadir, "geth", "chaindata")

View File

@ -97,14 +97,14 @@ func TestCustomGenesis(t *testing.T) {
if err := ioutil.WriteFile(json, []byte(tt.genesis), 0600); err != nil {
t.Fatalf("test %d: failed to write genesis file: %v", i, err)
}
runGeth(t, "--datadir", datadir, "init", json).cmd.Wait()
runGeth(t, "--datadir", datadir, "init", json).WaitExit()
// Query the custom genesis block
geth := runGeth(t,
"--datadir", datadir, "--maxpeers", "0", "--port", "0",
"--nodiscover", "--nat", "none", "--ipcdisable",
"--exec", tt.query, "console")
geth.expectRegexp(tt.result)
geth.expectExit()
geth.ExpectRegexp(tt.result)
geth.ExpectExit()
}
}

View File

@ -252,10 +252,12 @@ func startNode(ctx *cli.Context, stack *node.Node) {
}()
// Start auxiliary services if enabled
if ctx.GlobalBool(utils.MiningEnabledFlag.Name) {
// Mining only makes sense if a full Ethereum node is running
var ethereum *eth.Ethereum
if err := stack.Service(&ethereum); err != nil {
utils.Fatalf("ethereum service not running: %v", err)
}
// Use a reduced number of threads if requested
if threads := ctx.GlobalInt(utils.MinerThreadsFlag.Name); threads > 0 {
type threaded interface {
SetThreads(threads int)
@ -264,6 +266,8 @@ func startNode(ctx *cli.Context, stack *node.Node) {
th.SetThreads(threads)
}
}
// Set the gas price to the limits from the CLI and start mining
ethereum.TxPool().SetGasPrice(utils.GlobalBig(ctx, utils.GasPriceFlag.Name))
if err := ethereum.StartMining(true); err != nil {
utils.Fatalf("Failed to start mining: %v", err)
}

View File

@ -17,18 +17,13 @@
package main
import (
"bufio"
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"regexp"
"sync"
"testing"
"text/template"
"time"
"github.com/docker/docker/pkg/reexec"
"github.com/ethereum/go-ethereum/internal/cmdtest"
)
func tmpdir(t *testing.T) string {
@ -40,36 +35,37 @@ func tmpdir(t *testing.T) string {
}
type testgeth struct {
// For total convenience, all testing methods are available.
*testing.T
// template variables for expect
Datadir string
Executable string
Etherbase string
Func template.FuncMap
*cmdtest.TestCmd
removeDatadir bool
cmd *exec.Cmd
stdout *bufio.Reader
stdin io.WriteCloser
stderr *testlogger
// template variables for expect
Datadir string
Etherbase string
}
func init() {
// Run the app if we're the child process for runGeth.
if os.Getenv("GETH_TEST_CHILD") != "" {
// Run the app if we've been exec'd as "geth-test" in runGeth.
reexec.Register("geth-test", func() {
if err := app.Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
os.Exit(0)
})
}
func TestMain(m *testing.M) {
// check if we have been reexec'd
if reexec.Init() {
return
}
os.Exit(m.Run())
}
// spawns geth with the given command line args. If the args don't set --datadir, the
// child g gets a temporary data directory.
func runGeth(t *testing.T, args ...string) *testgeth {
tt := &testgeth{T: t, Executable: os.Args[0]}
tt := &testgeth{}
tt.TestCmd = cmdtest.NewTestCmd(t, tt)
for i, arg := range args {
switch {
case arg == "-datadir" || arg == "--datadir":
@ -84,215 +80,19 @@ func runGeth(t *testing.T, args ...string) *testgeth {
}
if tt.Datadir == "" {
tt.Datadir = tmpdir(t)
tt.removeDatadir = true
tt.Cleanup = func() { os.RemoveAll(tt.Datadir) }
args = append([]string{"-datadir", tt.Datadir}, args...)
// Remove the temporary datadir if something fails below.
defer func() {
if t.Failed() {
os.RemoveAll(tt.Datadir)
tt.Cleanup()
}
}()
}
// Boot "geth". This actually runs the test binary but the init function
// will prevent any tests from running.
tt.stderr = &testlogger{t: t}
tt.cmd = exec.Command(os.Args[0], args...)
tt.cmd.Env = append(os.Environ(), "GETH_TEST_CHILD=1")
tt.cmd.Stderr = tt.stderr
stdout, err := tt.cmd.StdoutPipe()
if err != nil {
t.Fatal(err)
}
tt.stdout = bufio.NewReader(stdout)
if tt.stdin, err = tt.cmd.StdinPipe(); err != nil {
t.Fatal(err)
}
if err := tt.cmd.Start(); err != nil {
t.Fatal(err)
}
// Boot "geth". This actually runs the test binary but the TestMain
// function will prevent any tests from running.
tt.Run("geth-test", args...)
return tt
}
// InputLine writes the given text to the childs stdin.
// This method can also be called from an expect template, e.g.:
//
// geth.expect(`Passphrase: {{.InputLine "password"}}`)
func (tt *testgeth) InputLine(s string) string {
io.WriteString(tt.stdin, s+"\n")
return ""
}
func (tt *testgeth) setTemplateFunc(name string, fn interface{}) {
if tt.Func == nil {
tt.Func = make(map[string]interface{})
}
tt.Func[name] = fn
}
// expect runs its argument as a template, then expects the
// child process to output the result of the template within 5s.
//
// If the template starts with a newline, the newline is removed
// before matching.
func (tt *testgeth) expect(tplsource string) {
// Generate the expected output by running the template.
tpl := template.Must(template.New("").Funcs(tt.Func).Parse(tplsource))
wantbuf := new(bytes.Buffer)
if err := tpl.Execute(wantbuf, tt); err != nil {
panic(err)
}
// Trim exactly one newline at the beginning. This makes tests look
// much nicer because all expect strings are at column 0.
want := bytes.TrimPrefix(wantbuf.Bytes(), []byte("\n"))
if err := tt.matchExactOutput(want); err != nil {
tt.Fatal(err)
}
tt.Logf("Matched stdout text:\n%s", want)
}
func (tt *testgeth) matchExactOutput(want []byte) error {
buf := make([]byte, len(want))
n := 0
tt.withKillTimeout(func() { n, _ = io.ReadFull(tt.stdout, buf) })
buf = buf[:n]
if n < len(want) || !bytes.Equal(buf, want) {
// Grab any additional buffered output in case of mismatch
// because it might help with debugging.
buf = append(buf, make([]byte, tt.stdout.Buffered())...)
tt.stdout.Read(buf[n:])
// Find the mismatch position.
for i := 0; i < n; i++ {
if want[i] != buf[i] {
return fmt.Errorf("Output mismatch at ◊:\n---------------- (stdout text)\n%s◊%s\n---------------- (expected text)\n%s",
buf[:i], buf[i:n], want)
}
}
if n < len(want) {
return fmt.Errorf("Not enough output, got until ◊:\n---------------- (stdout text)\n%s\n---------------- (expected text)\n%s◊%s",
buf, want[:n], want[n:])
}
}
return nil
}
// expectRegexp expects the child process to output text matching the
// given regular expression within 5s.
//
// Note that an arbitrary amount of output may be consumed by the
// regular expression. This usually means that expect cannot be used
// after expectRegexp.
func (tt *testgeth) expectRegexp(resource string) (*regexp.Regexp, []string) {
var (
re = regexp.MustCompile(resource)
rtee = &runeTee{in: tt.stdout}
matches []int
)
tt.withKillTimeout(func() { matches = re.FindReaderSubmatchIndex(rtee) })
output := rtee.buf.Bytes()
if matches == nil {
tt.Fatalf("Output did not match:\n---------------- (stdout text)\n%s\n---------------- (regular expression)\n%s",
output, resource)
return re, nil
}
tt.Logf("Matched stdout text:\n%s", output)
var submatch []string
for i := 0; i < len(matches); i += 2 {
submatch = append(submatch, string(output[i:i+1]))
}
return re, submatch
}
// expectExit expects the child process to exit within 5s without
// printing any additional text on stdout.
func (tt *testgeth) expectExit() {
var output []byte
tt.withKillTimeout(func() {
output, _ = ioutil.ReadAll(tt.stdout)
})
tt.cmd.Wait()
if tt.removeDatadir {
os.RemoveAll(tt.Datadir)
}
if len(output) > 0 {
tt.Errorf("Unmatched stdout text:\n%s", output)
}
}
func (tt *testgeth) interrupt() {
tt.cmd.Process.Signal(os.Interrupt)
}
// stderrText returns any stderr output written so far.
// The returned text holds all log lines after expectExit has
// returned.
func (tt *testgeth) stderrText() string {
tt.stderr.mu.Lock()
defer tt.stderr.mu.Unlock()
return tt.stderr.buf.String()
}
func (tt *testgeth) withKillTimeout(fn func()) {
timeout := time.AfterFunc(5*time.Second, func() {
tt.Log("killing the child process (timeout)")
tt.cmd.Process.Kill()
if tt.removeDatadir {
os.RemoveAll(tt.Datadir)
}
})
defer timeout.Stop()
fn()
}
// testlogger logs all written lines via t.Log and also
// collects them for later inspection.
type testlogger struct {
t *testing.T
mu sync.Mutex
buf bytes.Buffer
}
func (tl *testlogger) Write(b []byte) (n int, err error) {
lines := bytes.Split(b, []byte("\n"))
for _, line := range lines {
if len(line) > 0 {
tl.t.Logf("(stderr) %s", line)
}
}
tl.mu.Lock()
tl.buf.Write(b)
tl.mu.Unlock()
return len(b), err
}
// runeTee collects text read through it into buf.
type runeTee struct {
in interface {
io.Reader
io.ByteReader
io.RuneReader
}
buf bytes.Buffer
}
func (rtee *runeTee) Read(b []byte) (n int, err error) {
n, err = rtee.in.Read(b)
rtee.buf.Write(b[:n])
return n, err
}
func (rtee *runeTee) ReadRune() (r rune, size int, err error) {
r, size, err = rtee.in.ReadRune()
if err == nil {
rtee.buf.WriteRune(r)
}
return r, size, err
}
func (rtee *runeTee) ReadByte() (b byte, err error) {
b, err = rtee.in.ReadByte()
if err == nil {
rtee.buf.WriteByte(b)
}
return b, err
}

cmd/swarm/run_test.go Normal file (255 lines)
View File

@ -0,0 +1,255 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"io/ioutil"
"net"
"os"
"path/filepath"
"runtime"
"testing"
"time"
"github.com/docker/docker/pkg/reexec"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/internal/cmdtest"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/swarm"
)
func init() {
// Run the app if we've been exec'd as "swarm-test" in runSwarm.
reexec.Register("swarm-test", func() {
if err := app.Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
os.Exit(0)
})
}
func TestMain(m *testing.M) {
// check if we have been reexec'd
if reexec.Init() {
return
}
os.Exit(m.Run())
}
func runSwarm(t *testing.T, args ...string) *cmdtest.TestCmd {
tt := cmdtest.NewTestCmd(t, nil)
// Boot "swarm". This actually runs the test binary but the TestMain
// function will prevent any tests from running.
tt.Run("swarm-test", args...)
return tt
}
type testCluster struct {
Nodes []*testNode
TmpDir string
}
// newTestCluster starts a test swarm cluster of the given size.
//
// A temporary directory is created and each node gets a data directory inside
// it.
//
// Each node listens on 127.0.0.1 with random ports for both the HTTP and p2p
// ports (assigned by first listening on 127.0.0.1:0 and then passing the ports
// as flags).
//
// When starting more than one node, they are connected together using the
// admin SetPeer RPC method.
func newTestCluster(t *testing.T, size int) *testCluster {
cluster := &testCluster{}
defer func() {
if t.Failed() {
cluster.Shutdown()
}
}()
tmpdir, err := ioutil.TempDir("", "swarm-test")
if err != nil {
t.Fatal(err)
}
cluster.TmpDir = tmpdir
// start the nodes
cluster.Nodes = make([]*testNode, 0, size)
for i := 0; i < size; i++ {
dir := filepath.Join(cluster.TmpDir, fmt.Sprintf("swarm%02d", i))
if err := os.Mkdir(dir, 0700); err != nil {
t.Fatal(err)
}
node := newTestNode(t, dir)
node.Name = fmt.Sprintf("swarm%02d", i)
cluster.Nodes = append(cluster.Nodes, node)
}
if size == 1 {
return cluster
}
// connect the nodes together
for _, node := range cluster.Nodes {
if err := node.Client.Call(nil, "admin_addPeer", cluster.Nodes[0].Enode); err != nil {
t.Fatal(err)
}
}
// wait until all nodes have the correct number of peers
outer:
for _, node := range cluster.Nodes {
var peers []*p2p.PeerInfo
for start := time.Now(); time.Since(start) < time.Minute; time.Sleep(50 * time.Millisecond) {
if err := node.Client.Call(&peers, "admin_peers"); err != nil {
t.Fatal(err)
}
if len(peers) == len(cluster.Nodes)-1 {
continue outer
}
}
t.Fatalf("%s only has %d / %d peers", node.Name, len(peers), len(cluster.Nodes)-1)
}
return cluster
}
func (c *testCluster) Shutdown() {
for _, node := range c.Nodes {
node.Shutdown()
}
os.RemoveAll(c.TmpDir)
}
type testNode struct {
Name string
Addr string
URL string
Enode string
Dir string
Client *rpc.Client
Cmd *cmdtest.TestCmd
}
const testPassphrase = "swarm-test-passphrase"
func newTestNode(t *testing.T, dir string) *testNode {
// create key
conf := &node.Config{
DataDir: dir,
IPCPath: "bzzd.ipc",
}
n, err := node.New(conf)
if err != nil {
t.Fatal(err)
}
account, err := n.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore).NewAccount(testPassphrase)
if err != nil {
t.Fatal(err)
}
node := &testNode{Dir: dir}
// use a unique IPCPath when running tests on Windows
if runtime.GOOS == "windows" {
conf.IPCPath = fmt.Sprintf("bzzd-%s.ipc", account.Address.String())
}
// assign ports
httpPort, err := assignTCPPort()
if err != nil {
t.Fatal(err)
}
p2pPort, err := assignTCPPort()
if err != nil {
t.Fatal(err)
}
// start the node
node.Cmd = runSwarm(t,
"--port", p2pPort,
"--nodiscover",
"--datadir", dir,
"--ipcpath", conf.IPCPath,
"--ethapi", "",
"--bzzaccount", account.Address.String(),
"--bzznetworkid", "321",
"--bzzport", httpPort,
"--verbosity", "6",
)
node.Cmd.InputLine(testPassphrase)
defer func() {
if t.Failed() {
node.Shutdown()
}
}()
// wait for the node to start
for start := time.Now(); time.Since(start) < 10*time.Second; time.Sleep(50 * time.Millisecond) {
node.Client, err = rpc.Dial(conf.IPCEndpoint())
if err == nil {
break
}
}
if node.Client == nil {
t.Fatal(err)
}
// load info
var info swarm.Info
if err := node.Client.Call(&info, "bzz_info"); err != nil {
t.Fatal(err)
}
node.Addr = net.JoinHostPort("127.0.0.1", info.Port)
node.URL = "http://" + node.Addr
var nodeInfo p2p.NodeInfo
if err := node.Client.Call(&nodeInfo, "admin_nodeInfo"); err != nil {
t.Fatal(err)
}
node.Enode = fmt.Sprintf("enode://%s@127.0.0.1:%s", nodeInfo.ID, p2pPort)
return node
}
func (n *testNode) Shutdown() {
if n.Cmd != nil {
n.Cmd.Kill()
}
}
func assignTCPPort() (string, error) {
l, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
return "", err
}
l.Close()
_, port, err := net.SplitHostPort(l.Addr().String())
if err != nil {
return "", err
}
return port, nil
}

View File

@ -18,6 +18,7 @@
package main
import (
"errors"
"fmt"
"io"
"io/ioutil"
@ -87,24 +88,32 @@ func upload(ctx *cli.Context) {
if err != nil {
utils.Fatalf("Error opening file: %s", err)
}
var hash string
// define a function which either uploads a directory or single file
// based on the type of the file being uploaded
var doUpload func() (hash string, err error)
if stat.IsDir() {
if !recursive {
utils.Fatalf("Argument is a directory and recursive upload is disabled")
doUpload = func() (string, error) {
if !recursive {
return "", errors.New("Argument is a directory and recursive upload is disabled")
}
return client.UploadDirectory(file, defaultPath, "")
}
hash, err = client.UploadDirectory(file, defaultPath, "")
} else {
if mimeType == "" {
mimeType = detectMimeType(file)
doUpload = func() (string, error) {
f, err := swarm.Open(file)
if err != nil {
return "", fmt.Errorf("error opening file: %s", err)
}
defer f.Close()
if mimeType == "" {
mimeType = detectMimeType(file)
}
f.ContentType = mimeType
return client.Upload(f, "")
}
f, err := swarm.Open(file)
if err != nil {
utils.Fatalf("Error opening file: %s", err)
}
defer f.Close()
f.ContentType = mimeType
hash, err = client.Upload(f, "")
}
hash, err := doUpload()
if err != nil {
utils.Fatalf("Upload failed: %s", err)
}

cmd/swarm/upload_test.go Normal file (78 lines)
View File

@ -0,0 +1,78 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"io"
"io/ioutil"
"net/http"
"os"
"testing"
)
// TestCLISwarmUp tests that running 'swarm up' makes the resulting file
// available from all nodes via the HTTP API
func TestCLISwarmUp(t *testing.T) {
t.Skip("flaky test")
// start 3 node cluster
t.Log("starting 3 node cluster")
cluster := newTestCluster(t, 3)
defer cluster.Shutdown()
// create a tmp file
tmp, err := ioutil.TempFile("", "swarm-test")
assertNil(t, err)
defer tmp.Close()
defer os.Remove(tmp.Name())
_, err = io.WriteString(tmp, "data")
assertNil(t, err)
// upload the file with 'swarm up' and expect a hash
t.Log("uploading file with 'swarm up'")
up := runSwarm(t, "--bzzapi", cluster.Nodes[0].URL, "up", tmp.Name())
_, matches := up.ExpectRegexp(`[a-f\d]{64}`)
up.ExpectExit()
hash := matches[0]
t.Logf("file uploaded with hash %s", hash)
// get the file from the HTTP API of each node
for _, node := range cluster.Nodes {
t.Logf("getting file from %s", node.Name)
res, err := http.Get(node.URL + "/bzz:/" + hash)
assertNil(t, err)
assertHTTPResponse(t, res, http.StatusOK, "data")
}
}
func assertNil(t *testing.T, err error) {
if err != nil {
t.Fatal(err)
}
}
func assertHTTPResponse(t *testing.T, res *http.Response, expectedStatus int, expectedBody string) {
defer res.Body.Close()
if res.StatusCode != expectedStatus {
t.Fatalf("expected HTTP status %d, got %s", expectedStatus, res.Status)
}
data, err := ioutil.ReadAll(res.Body)
assertNil(t, err)
if string(data) != expectedBody {
t.Fatalf("expected HTTP body %q, got %q", expectedBody, data)
}
}

View File

@ -130,6 +130,34 @@ func PaddedBigBytes(bigint *big.Int, n int) []byte {
return ret
}
// bigEndianByteAt returns the byte at position n,
// in Big-Endian encoding
// So n==0 returns the least significant byte
func bigEndianByteAt(bigint *big.Int, n int) byte {
words := bigint.Bits()
// Check word-bucket the byte will reside in
i := n / wordBytes
if i >= len(words) {
return byte(0)
}
word := words[i]
// Offset of the byte
shift := 8 * uint(n%wordBytes)
return byte(word >> shift)
}
// Byte returns the byte at position n,
// with the supplied padlength in Little-Endian encoding.
// n==0 returns the MSB
// Example: bigint '5', padlength 32, n=31 => 5
func Byte(bigint *big.Int, padlength, n int) byte {
if n >= padlength {
return byte(0)
}
return bigEndianByteAt(bigint, padlength-1-n)
}
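A minimal usage sketch of the two accessors above, assuming it compiles inside the common/math package itself (bigEndianByteAt is unexported); the expected values line up with the test vectors further down:

package math

import (
	"fmt"
	"math/big"
)

// ExampleByteAccessors contrasts the two indexing conventions:
// bigEndianByteAt counts from the least significant byte, while Byte
// counts from the most significant byte of the zero-padded value.
func ExampleByteAccessors() {
	v := big.NewInt(0x102030)
	fmt.Printf("%#x\n", bigEndianByteAt(v, 0)) // 0x30: least significant byte
	fmt.Printf("%#x\n", bigEndianByteAt(v, 1)) // 0x20
	fmt.Printf("%#x\n", Byte(v, 32, 31))       // 0x30: last byte of the 32-byte form
	fmt.Printf("%#x\n", Byte(v, 32, 0))        // 0x0: leading pad byte
}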
// ReadBits encodes the absolute value of bigint as big-endian bytes. Callers must ensure
// that buf has enough space. If buf is too short the result will be incomplete.
func ReadBits(bigint *big.Int, buf []byte) {


@ -21,6 +21,8 @@ import (
"encoding/hex"
"math/big"
"testing"
"github.com/ethereum/go-ethereum/common"
)
func TestHexOrDecimal256(t *testing.T) {
@ -133,8 +135,44 @@ func TestPaddedBigBytes(t *testing.T) {
}
}
func BenchmarkPaddedBigBytes(b *testing.B) {
func BenchmarkPaddedBigBytesLargePadding(b *testing.B) {
bigint := MustParseBig256("123456789123456789123456789123456789")
for i := 0; i < b.N; i++ {
PaddedBigBytes(bigint, 200)
}
}
func BenchmarkPaddedBigBytesSmallPadding(b *testing.B) {
bigint := MustParseBig256("0x18F8F8F1000111000110011100222004330052300000000000000000FEFCF3CC")
for i := 0; i < b.N; i++ {
PaddedBigBytes(bigint, 5)
}
}
func BenchmarkPaddedBigBytesSmallOnePadding(b *testing.B) {
bigint := MustParseBig256("0x18F8F8F1000111000110011100222004330052300000000000000000FEFCF3CC")
for i := 0; i < b.N; i++ {
PaddedBigBytes(bigint, 32)
}
}
func BenchmarkByteAtBrandNew(b *testing.B) {
bigint := MustParseBig256("0x18F8F8F1000111000110011100222004330052300000000000000000FEFCF3CC")
for i := 0; i < b.N; i++ {
bigEndianByteAt(bigint, 15)
}
}
func BenchmarkByteAt(b *testing.B) {
bigint := MustParseBig256("0x18F8F8F1000111000110011100222004330052300000000000000000FEFCF3CC")
for i := 0; i < b.N; i++ {
bigEndianByteAt(bigint, 15)
}
}
func BenchmarkByteAtOld(b *testing.B) {
bigint := MustParseBig256("0x18F8F8F1000111000110011100222004330052300000000000000000FEFCF3CC")
for i := 0; i < b.N; i++ {
PaddedBigBytes(bigint, 32)
}
@ -174,6 +212,65 @@ func TestU256(t *testing.T) {
}
}
func TestBigEndianByteAt(t *testing.T) {
tests := []struct {
x string
y int
exp byte
}{
{"00", 0, 0x00},
{"01", 1, 0x00},
{"00", 1, 0x00},
{"01", 0, 0x01},
{"0000000000000000000000000000000000000000000000000000000000102030", 0, 0x30},
{"0000000000000000000000000000000000000000000000000000000000102030", 1, 0x20},
{"ABCDEF0908070605040302010000000000000000000000000000000000000000", 31, 0xAB},
{"ABCDEF0908070605040302010000000000000000000000000000000000000000", 32, 0x00},
{"ABCDEF0908070605040302010000000000000000000000000000000000000000", 500, 0x00},
}
for _, test := range tests {
v := new(big.Int).SetBytes(common.Hex2Bytes(test.x))
actual := bigEndianByteAt(v, test.y)
if actual != test.exp {
t.Fatalf("Expected [%v] %v:th byte to be %v, was %v.", test.x, test.y, test.exp, actual)
}
}
}
func TestLittleEndianByteAt(t *testing.T) {
tests := []struct {
x string
y int
exp byte
}{
{"00", 0, 0x00},
{"01", 1, 0x00},
{"00", 1, 0x00},
{"01", 0, 0x00},
{"0000000000000000000000000000000000000000000000000000000000102030", 0, 0x00},
{"0000000000000000000000000000000000000000000000000000000000102030", 1, 0x00},
{"ABCDEF0908070605040302010000000000000000000000000000000000000000", 31, 0x00},
{"ABCDEF0908070605040302010000000000000000000000000000000000000000", 32, 0x00},
{"ABCDEF0908070605040302010000000000000000000000000000000000000000", 0, 0xAB},
{"ABCDEF0908070605040302010000000000000000000000000000000000000000", 1, 0xCD},
{"00CDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff", 0, 0x00},
{"00CDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff", 1, 0xCD},
{"0000000000000000000000000000000000000000000000000000000000102030", 31, 0x30},
{"0000000000000000000000000000000000000000000000000000000000102030", 30, 0x20},
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 32, 0x0},
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 31, 0xFF},
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 0xFFFF, 0x0},
}
for _, test := range tests {
v := new(big.Int).SetBytes(common.Hex2Bytes(test.x))
actual := Byte(v, 32, test.y)
if actual != test.exp {
t.Fatalf("Expected [%v] %v:th byte to be %v, was %v.", test.x, test.y, test.exp, actual)
}
}
}
func TestS256(t *testing.T) {
tests := []struct{ x, y *big.Int }{
{x: big.NewInt(0), y: big.NewInt(0)},


@ -33,102 +33,18 @@ func (s *CompressionRleSuite) TestDecompressSimple(c *checker.C) {
res, err := Decompress([]byte{token, 0xfd})
c.Assert(err, checker.IsNil)
c.Assert(res, checker.DeepEquals, exp)
// if bytes.Compare(res, exp) != 0 {
// t.Error("empty sha3", res)
// }
exp = []byte{0x56, 0xe8, 0x1f, 0x17, 0x1b, 0xcc, 0x55, 0xa6, 0xff, 0x83, 0x45, 0xe6, 0x92, 0xc0, 0xf8, 0x6e, 0x5b, 0x48, 0xe0, 0x1b, 0x99, 0x6c, 0xad, 0xc0, 0x1, 0x62, 0x2f, 0xb5, 0xe3, 0x63, 0xb4, 0x21}
res, err = Decompress([]byte{token, 0xfe})
c.Assert(err, checker.IsNil)
c.Assert(res, checker.DeepEquals, exp)
// if bytes.Compare(res, exp) != 0 {
// t.Error("0x80 sha3", res)
// }
res, err = Decompress([]byte{token, 0xff})
c.Assert(err, checker.IsNil)
c.Assert(res, checker.DeepEquals, []byte{token})
// if bytes.Compare(res, []byte{token}) != 0 {
// t.Error("token", res)
// }
res, err = Decompress([]byte{token, 12})
c.Assert(err, checker.IsNil)
c.Assert(res, checker.DeepEquals, make([]byte, 10))
// if bytes.Compare(res, make([]byte, 10)) != 0 {
// t.Error("10 * zero", res)
// }
}
// func TestDecompressMulti(t *testing.T) {
// res, err := Decompress([]byte{token, 0xfd, token, 0xfe, token, 12})
// if err != nil {
// t.Error(err)
// }
// var exp []byte
// exp = append(exp, crypto.Keccak256([]byte(""))...)
// exp = append(exp, crypto.Keccak256([]byte{0x80})...)
// exp = append(exp, make([]byte, 10)...)
// if bytes.Compare(res, res) != 0 {
// t.Error("Expected", exp, "result", res)
// }
// }
// func TestCompressSimple(t *testing.T) {
// res := Compress([]byte{0, 0, 0, 0, 0})
// if bytes.Compare(res, []byte{token, 7}) != 0 {
// t.Error("5 * zero", res)
// }
// res = Compress(crypto.Keccak256([]byte("")))
// if bytes.Compare(res, []byte{token, emptyShaToken}) != 0 {
// t.Error("empty sha", res)
// }
// res = Compress(crypto.Keccak256([]byte{0x80}))
// if bytes.Compare(res, []byte{token, emptyListShaToken}) != 0 {
// t.Error("empty list sha", res)
// }
// res = Compress([]byte{token})
// if bytes.Compare(res, []byte{token, tokenToken}) != 0 {
// t.Error("token", res)
// }
// }
// func TestCompressMulti(t *testing.T) {
// in := []byte{0, 0, 0, 0, 0}
// in = append(in, crypto.Keccak256([]byte(""))...)
// in = append(in, crypto.Keccak256([]byte{0x80})...)
// in = append(in, token)
// res := Compress(in)
// exp := []byte{token, 7, token, emptyShaToken, token, emptyListShaToken, token, tokenToken}
// if bytes.Compare(res, exp) != 0 {
// t.Error("expected", exp, "got", res)
// }
// }
// func TestCompressDecompress(t *testing.T) {
// var in []byte
// for i := 0; i < 20; i++ {
// in = append(in, []byte{0, 0, 0, 0, 0}...)
// in = append(in, crypto.Keccak256([]byte(""))...)
// in = append(in, crypto.Keccak256([]byte{0x80})...)
// in = append(in, []byte{123, 2, 19, 89, 245, 254, 255, token, 98, 233}...)
// in = append(in, token)
// }
// c := Compress(in)
// d, err := Decompress(c)
// if err != nil {
// t.Error(err)
// }
// if bytes.Compare(d, in) != 0 {
// t.Error("multi failed\n", d, "\n", in)
// }
// }


@ -76,7 +76,7 @@ var (
errUnknownBlock = errors.New("unknown block")
// errInvalidCheckpointBeneficiary is returned if a checkpoint/epoch transition
// block has a beneficiary set to non zeroes.
// block has a beneficiary set to non-zeroes.
errInvalidCheckpointBeneficiary = errors.New("beneficiary in checkpoint block non-zero")
// errInvalidVote is returned if a nonce value is something else than the two
@ -84,7 +84,7 @@ var (
errInvalidVote = errors.New("vote nonce not 0x00..0 or 0xff..f")
// errInvalidCheckpointVote is returned if a checkpoint/epoch transition block
// has a vote nonce set to non zeroes.
// has a vote nonce set to non-zeroes.
errInvalidCheckpointVote = errors.New("vote nonce in checkpoint block non-zero")
// errMissingVanity is returned if a block's extra-data section is shorter than
@ -99,12 +99,12 @@ var (
// their extra-data fields.
errExtraSigners = errors.New("non-checkpoint block contains extra signer list")
// drrInvalidCheckpointSigners is returned if a checkpoint block contains an
// errInvalidCheckpointSigners is returned if a checkpoint block contains an
// invalid list of signers (i.e. non divisible by 20 bytes, or not the correct
// ones).
drrInvalidCheckpointSigners = errors.New("invalid signer list on checkpoint block")
errInvalidCheckpointSigners = errors.New("invalid signer list on checkpoint block")
// errInvalidMixDigest is returned if a block's mix digest is non zero.
// errInvalidMixDigest is returned if a block's mix digest is non-zero.
errInvalidMixDigest = errors.New("non-zero mix digest")
// errInvalidUncleHash is returned if a block contains an non-empty uncle list.
@ -122,7 +122,7 @@ var (
// be modified via out-of-range or non-contiguous headers.
errInvalidVotingChain = errors.New("invalid voting chain")
// errUnauthorized is returned if a header is signed by a non authorized entity.
// errUnauthorized is returned if a header is signed by a non-authorized entity.
errUnauthorized = errors.New("unauthorized")
)
@ -297,7 +297,7 @@ func (c *Clique) verifyHeader(chain consensus.ChainReader, header *types.Header,
return errExtraSigners
}
if checkpoint && signersBytes%common.AddressLength != 0 {
return drrInvalidCheckpointSigners
return errInvalidCheckpointSigners
}
// Ensure that the mix digest is zero as we don't have fork protection currently
if header.MixDigest != (common.Hash{}) {
@ -353,7 +353,7 @@ func (c *Clique) verifyCascadingFields(chain consensus.ChainReader, header *type
}
extraSuffix := len(header.Extra) - extraSeal
if !bytes.Equal(header.Extra[extraVanity:extraSuffix], signers) {
return drrInvalidCheckpointSigners
return errInvalidCheckpointSigners
}
}
// All basic checks passed, verify the seal and return
@ -467,7 +467,6 @@ func (c *Clique) verifySeal(chain consensus.ChainReader, header *types.Header, p
if err != nil {
return err
}
c.recents.Add(snap.Hash, snap)
// Resolve the authorization key and check against signers
signer, err := ecrecover(header, c.signatures)
@ -479,13 +478,13 @@ func (c *Clique) verifySeal(chain consensus.ChainReader, header *types.Header, p
}
for seen, recent := range snap.Recents {
if recent == signer {
// Signer is among recents, only fail if the current block doens't shift it out
// Signer is among recents, only fail if the current block doesn't shift it out
if limit := uint64(len(snap.Signers)/2 + 1); seen > number-limit {
return errUnauthorized
}
}
}
// Ensure that the difficulty corresponts to the turn-ness of the signer
// Ensure that the difficulty corresponds to the turn-ness of the signer
inturn := snap.inturn(header.Number.Uint64(), signer)
if inturn && header.Difficulty.Cmp(diffInTurn) != 0 {
return errInvalidDifficulty
@ -499,18 +498,29 @@ func (c *Clique) verifySeal(chain consensus.ChainReader, header *types.Header, p
// Prepare implements consensus.Engine, preparing all the consensus fields of the
// header for running the transactions on top.
func (c *Clique) Prepare(chain consensus.ChainReader, header *types.Header) error {
// If the block isn't a checkpoint, cast a random vote (good enough fror now)
// If the block isn't a checkpoint, cast a random vote (good enough for now)
header.Coinbase = common.Address{}
header.Nonce = types.BlockNonce{}
number := header.Number.Uint64()
// Assemble the voting snapshot to check which votes make sense
snap, err := c.snapshot(chain, number-1, header.ParentHash, nil)
if err != nil {
return err
}
if number%c.config.Epoch != 0 {
c.lock.RLock()
if len(c.proposals) > 0 {
addresses := make([]common.Address, 0, len(c.proposals))
for address := range c.proposals {
// Gather all the proposals that make sense voting on
addresses := make([]common.Address, 0, len(c.proposals))
for address, authorize := range c.proposals {
if snap.validVote(address, authorize) {
addresses = append(addresses, address)
}
}
// If there's pending proposals, cast a vote on them
if len(addresses) > 0 {
header.Coinbase = addresses[rand.Intn(len(addresses))]
if c.proposals[header.Coinbase] {
copy(header.Nonce[:], nonceAuthVote)
@ -520,11 +530,7 @@ func (c *Clique) Prepare(chain consensus.ChainReader, header *types.Header) erro
}
c.lock.RUnlock()
}
// Assemble the voting snapshot and set the correct difficulty
snap, err := c.snapshot(chain, number-1, header.ParentHash, nil)
if err != nil {
return err
}
// Set the correct difficulty
header.Difficulty = diffNoTurn
if snap.inturn(header.Number.Uint64(), c.signer) {
header.Difficulty = diffInTurn
@ -601,10 +607,10 @@ func (c *Clique) Seal(chain consensus.ChainReader, block *types.Block, stop <-ch
if _, authorized := snap.Signers[signer]; !authorized {
return nil, errUnauthorized
}
// If we're amongs the recent signers, wait for the next block
// If we're amongst the recent signers, wait for the next block
for seen, recent := range snap.Recents {
if recent == signer {
// Signer is among recents, only wait if the current block doens't shift it out
// Signer is among recents, only wait if the current block doesn't shift it out
if limit := uint64(len(snap.Signers)/2 + 1); number < limit || seen > number-limit {
log.Info("Signed recently, must wait for others")
<-stop


@ -39,7 +39,7 @@ type Vote struct {
// Tally is a simple vote tally to keep the current score of votes. Votes that
// go against the proposal aren't counted since it's equivalent to not voting.
type Tally struct {
Authorize bool `json:"authorize"` // Whether the vote it about authorizing or kicking someone
Authorize bool `json:"authorize"` // Whether the vote is about authorizing or kicking someone
Votes int `json:"votes"` // Number of votes until now wanting to pass the proposal
}
@ -56,7 +56,7 @@ type Snapshot struct {
Tally map[common.Address]Tally `json:"tally"` // Current vote tally to avoid recalculating
}
// newSnapshot create a new snapshot with the specified startup parameters. This
// newSnapshot creates a new snapshot with the specified startup parameters. This
// method does not initialize the set of recent signers, so only ever use it for
// the genesis block.
func newSnapshot(config *params.CliqueConfig, sigcache *lru.ARCCache, number uint64, hash common.Hash, signers []common.Address) *Snapshot {
@ -126,11 +126,17 @@ func (s *Snapshot) copy() *Snapshot {
return cpy
}
// validVote returns whether it makes sense to cast the specified vote in the
// given snapshot context (e.g. don't try to add an already authorized signer).
func (s *Snapshot) validVote(address common.Address, authorize bool) bool {
_, signer := s.Signers[address]
return (signer && !authorize) || (!signer && authorize)
}
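The four cases of validVote, spelled out as a small truth table (illustration only, not part of the diff):

// signer already?  authorize?  validVote  reason
// yes              yes         false      already authorized, nothing to add
// yes              no          true       meaningful: vote to kick an existing signer
// no               yes         true       meaningful: vote to add a new signer
// no               no          false      not a signer, nothing to kick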
// cast adds a new vote into the tally.
func (s *Snapshot) cast(address common.Address, authorize bool) bool {
// Ensure the vote is meaningful
_, signer := s.Signers[address]
if (signer && authorize) || (!signer && !authorize) {
if !s.validVote(address, authorize) {
return false
}
// Cast the vote into an existing or new tally


@ -243,7 +243,7 @@ func TestVoting(t *testing.T) {
},
results: []string{"A", "B"},
}, {
// Cascading changes are not allowed, only the the account being voted on may change
// Cascading changes are not allowed, only the account being voted on may change
signers: []string{"A", "B", "C", "D"},
votes: []testerVote{
{signer: "A", voted: "C", auth: false},
@ -293,7 +293,7 @@ func TestVoting(t *testing.T) {
results: []string{"A", "B", "C"},
}, {
// Ensure that pending votes don't survive authorization status changes. This
// corner case can only appear if a signer is quickly added, remove and then
// corner case can only appear if a signer is quickly added, removed and then
// readded (or the inverse), while one of the original voters dropped. If a
// past vote is left cached in the system somewhere, this will interfere with
// the final signer outcome.


@ -79,8 +79,7 @@ type Engine interface {
// Finalize runs any post-transaction state modifications (e.g. block rewards)
// and assembles the final block.
//
// Note, the block header and state database might be updated to reflect any
// Note: The block header and state database might be updated to reflect any
// consensus rules that happen at finalization (e.g. block rewards).
Finalize(chain ChainReader, header *types.Header, state *state.StateDB, txs []*types.Transaction,
uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error)


@ -53,7 +53,6 @@ type hasher func(dest []byte, data []byte)
// makeHasher creates a repetitive hasher, allowing the same hash data structures
// to be reused between hash runs instead of requiring new ones to be created.
//
// The returned function is not thread safe!
func makeHasher(h hash.Hash) hasher {
return func(dest []byte, data []byte) {
@ -82,7 +81,6 @@ func seedHash(block uint64) []byte {
// memory, then performing two passes of Sergio Demian Lerner's RandMemoHash
// algorithm from Strict Memory Hard Hashing Functions (2014). The output is a
// set of 524288 64-byte values.
//
// This method places the result into dest in machine byte order.
func generateCache(dest []uint32, epoch uint64, seed []byte) {
// Print some debug logs to allow analysis on low end devices
@ -220,7 +218,6 @@ func generateDatasetItem(cache []uint32, index uint32, keccak512 hasher) []byte
}
// generateDataset generates the entire ethash dataset for mining.
//
// This method places the result into dest in machine byte order.
func generateDataset(dest []uint32, epoch uint64, cache []uint32) {
// Print some debug logs to allow analysis on low end devices


@ -20,7 +20,7 @@ package ethash
import "testing"
// Tests whether the dataset size calculator work correctly by cross checking the
// Tests whether the dataset size calculator works correctly by cross checking the
// hard coded lookup table with the value generated by it.
func TestSizeCalculations(t *testing.T) {
var tests []uint64


@ -218,7 +218,6 @@ func (ethash *Ethash) VerifyUncles(chain consensus.ChainReader, block *types.Blo
// verifyHeader checks whether a header conforms to the consensus rules of the
// stock Ethereum ethash engine.
//
// See YP section 4.3.4. "Block Header Validity"
func (ethash *Ethash) verifyHeader(chain consensus.ChainReader, header, parent *types.Header, uncle bool, seal bool) error {
// Ensure that the header's extra-data section is of a reasonable size
@ -286,7 +285,6 @@ func (ethash *Ethash) verifyHeader(chain consensus.ChainReader, header, parent *
// CalcDifficulty is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time
// given the parent block's time and difficulty.
//
// TODO (karalabe): Move the chain maker into this package and make this private!
func CalcDifficulty(config *params.ChainConfig, time uint64, parent *types.Header) *big.Int {
next := new(big.Int).Add(parent.Number, common.Big1)
@ -462,7 +460,6 @@ var (
// AccumulateRewards credits the coinbase of the given block with the mining
// reward. The total reward consists of the static block reward and rewards for
// included uncles. The coinbase of each uncle block is also rewarded.
//
// TODO (karalabe): Move the chain maker into this package and make this private!
func AccumulateRewards(state *state.StateDB, header *types.Header, uncles []*types.Header) {
reward := new(big.Int).Set(blockReward)


@ -308,14 +308,14 @@ func (d *dataset) release() {
// MakeCache generates a new ethash cache and optionally stores it to disk.
func MakeCache(block uint64, dir string) {
c := cache{epoch: block/epochLength + 1}
c := cache{epoch: block / epochLength}
c.generate(dir, math.MaxInt32, false)
c.release()
}
// MakeDataset generates a new ethash dataset and optionally stores it to disk.
func MakeDataset(block uint64, dir string) {
d := dataset{epoch: block/epochLength + 1}
d := dataset{epoch: block / epochLength}
d.generate(dir, math.MaxInt32, false)
d.release()
}
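The off-by-one fixed here is easy to check by hand: ethash epochs are 30000 blocks long, so block 29999 belongs to epoch 29999/30000 = 0 and block 30000 to epoch 1, while the old expression block/epochLength + 1 mapped block 29999 to epoch 1 and generated the cache and dataset for the following epoch rather than the current one.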
@ -355,7 +355,7 @@ type Ethash struct {
// New creates a full sized ethash PoW scheme.
func New(cachedir string, cachesinmem, cachesondisk int, dagdir string, dagsinmem, dagsondisk int) *Ethash {
if cachesinmem <= 0 {
log.Warn("One ethash cache must alwast be in memory", "requested", cachesinmem)
log.Warn("One ethash cache must always be in memory", "requested", cachesinmem)
cachesinmem = 1
}
if cachedir != "" && cachesondisk > 0 {
@ -412,7 +412,7 @@ func NewFakeDelayer(delay time.Duration) *Ethash {
return &Ethash{fakeMode: true, fakeDelay: delay}
}
// NewFullFaker creates a ethash consensus engine with a full fake scheme that
// NewFullFaker creates an ethash consensus engine with a full fake scheme that
// accepts all blocks as valid, without checking any consensus rules whatsoever.
func NewFullFaker() *Ethash {
return &Ethash{fakeMode: true, fakeFull: true}


@ -54,7 +54,7 @@ func VerifyDAOHeaderExtraData(config *params.ChainConfig, header *types.Header)
if header.Number.Cmp(config.DAOForkBlock) < 0 || header.Number.Cmp(limit) >= 0 {
return nil
}
// Depending whether we support or oppose the fork, validate the extra-data contents
// Depending on whether we support or oppose the fork, validate the extra-data contents
if config.DAOForkSupport {
if !bytes.Equal(header.Extra, params.DAOForkBlockExtra) {
return ErrBadProDAOExtra


@ -2,7 +2,7 @@ FROM alpine:3.5
RUN \
apk add --update go git make gcc musl-dev linux-headers ca-certificates && \
git clone --depth 1 --branch release/1.5 https://github.com/ethereum/go-ethereum && \
git clone --depth 1 --branch release/1.6 https://github.com/ethereum/go-ethereum && \
(cd go-ethereum && make geth) && \
cp go-ethereum/build/bin/geth /geth && \
apk del go git make gcc musl-dev linux-headers && \


@ -49,7 +49,7 @@ import (
// TODO(zelig): watch peer solvency and notify of bouncing cheques
// TODO(zelig): enable paying with cheque by signing off
// Some functionality require interacting with the blockchain:
// Some functionality requires interacting with the blockchain:
// * setting current balance on peer's chequebook
// * sending the transaction to cash the cheque
// * depositing ether to the chequebook
@ -100,13 +100,13 @@ type Chequebook struct {
// persisted fields
balance *big.Int // not synced with blockchain
contractAddr common.Address // contract address
sent map[common.Address]*big.Int //tallies for beneficiarys
sent map[common.Address]*big.Int //tallies for beneficiaries
txhash string // tx hash of last deposit tx
threshold *big.Int // threshold that triggers autodeposit if not nil
buffer *big.Int // buffer to keep on top of balance for fork protection
log log.Logger // contextual logger with the contrac address embedded
log log.Logger // contextual logger with the contract address embedded
}
func (self *Chequebook) String() string {
@ -442,7 +442,7 @@ type Inbox struct {
maxUncashed *big.Int // threshold that triggers autocashing
cashed *big.Int // cumulative amount cashed
cheque *Cheque // last cheque, nil if none yet received
log log.Logger // contextual logger with the contrac address embedded
log log.Logger // contextual logger with the contract address embedded
}
// NewInbox creates an Inbox. An Inbox is not persisted; the cumulative sum is updated
@ -509,9 +509,8 @@ func (self *Inbox) AutoCash(cashInterval time.Duration, maxUncashed *big.Int) {
self.autoCash(cashInterval)
}
// autoCash starts a loop that periodically clears the last check
// autoCash starts a loop that periodically clears the last cheque
// if the peer is trusted. Clearing period could be 24h or a week.
//
// The caller must hold self.lock.
func (self *Inbox) autoCash(cashInterval time.Duration) {
if self.quit != nil {
@ -557,10 +556,10 @@ func (self *Inbox) Receive(promise swap.Promise) (*big.Int, error) {
var sum *big.Int
if self.cheque == nil {
// the sum is checked against the blockchain once a check is received
// the sum is checked against the blockchain once a cheque is received
tally, err := self.session.Sent(self.beneficiary)
if err != nil {
return nil, fmt.Errorf("inbox: error calling backend to set amount: %v", err)
return nil, fmt.Errorf("inbox: error calling backend to set amount: %v", err)
}
sum = tally
} else {


@ -414,21 +414,10 @@ func TestCash(t *testing.T) {
t.Fatalf("expected no error, got %v", err)
}
backend.Commit()
// expBalance := big.NewInt(2)
// gotBalance := backend.BalanceAt(addr1)
// if gotBalance.Cmp(expBalance) != 0 {
// t.Fatalf("expected beneficiary balance %v, got %v", expBalance, gotBalance)
// }
// after 3x interval time and 2 cheques received, exactly one cashing tx is sent
time.Sleep(4 * interval)
backend.Commit()
// expBalance = big.NewInt(4)
// gotBalance = backend.BalanceAt(addr1)
// if gotBalance.Cmp(expBalance) != 0 {
// t.Fatalf("expected beneficiary balance %v, got %v", expBalance, gotBalance)
// }
// after stopping autocash no more tx are sent
ch2, err := chbook.Issue(addr1, amount)
if err != nil {
@ -441,11 +430,6 @@ func TestCash(t *testing.T) {
}
time.Sleep(2 * interval)
backend.Commit()
// expBalance = big.NewInt(4)
// gotBalance = backend.BalanceAt(addr1)
// if gotBalance.Cmp(expBalance) != 0 {
// t.Fatalf("expected beneficiary balance %v, got %v", expBalance, gotBalance)
// }
// autocash below 1
chbook.balance = big.NewInt(2)
@ -456,11 +440,6 @@ func TestCash(t *testing.T) {
t.Fatalf("expected no error, got %v", err)
}
backend.Commit()
// expBalance = big.NewInt(4)
// gotBalance = backend.BalanceAt(addr1)
// if gotBalance.Cmp(expBalance) != 0 {
// t.Fatalf("expected beneficiary balance %v, got %v", expBalance, gotBalance)
// }
ch4, err := chbook.Issue(addr1, amount)
if err != nil {
@ -479,13 +458,6 @@ func TestCash(t *testing.T) {
}
backend.Commit()
// 2 cheques of amount 1 received, exactly 1 tx is sent
// expBalance = big.NewInt(6)
// gotBalance = backend.BalanceAt(addr1)
// if gotBalance.Cmp(expBalance) != 0 {
// t.Fatalf("expected beneficiary balance %v, got %v", expBalance, gotBalance)
// }
// autocash on receipt when maxUncashed is 0
chbook.balance = new(big.Int).Set(common.Big2)
chbox.AutoCash(0, common.Big0)
@ -495,11 +467,6 @@ func TestCash(t *testing.T) {
t.Fatalf("expected no error, got %v", err)
}
backend.Commit()
// expBalance = big.NewInt(5)
// gotBalance = backend.BalanceAt(addr1)
// if gotBalance.Cmp(expBalance) != 0 {
// t.Fatalf("expected beneficiary balance %v, got %v", expBalance, gotBalance)
// }
ch6, err := chbook.Issue(addr1, amount)
if err != nil {
@ -511,21 +478,11 @@ func TestCash(t *testing.T) {
t.Fatalf("expected no error, got %v", err)
}
backend.Commit()
// expBalance = big.NewInt(4)
// gotBalance = backend.BalanceAt(addr1)
// if gotBalance.Cmp(expBalance) != 0 {
// t.Fatalf("expected beneficiary balance %v, got %v", expBalance, gotBalance)
// }
_, err = chbox.Receive(ch6)
if err != nil {
t.Fatalf("expected no error, got %v", err)
}
backend.Commit()
// expBalance = big.NewInt(6)
// gotBalance = backend.BalanceAt(addr1)
// if gotBalance.Cmp(expBalance) != 0 {
// t.Fatalf("expected beneficiary balance %v, got %v", expBalance, gotBalance)
// }
}


@ -3,12 +3,12 @@
## Usage
Full documentation for the Ethereum Name Service [can be found as EIP 137](https://github.com/ethereum/EIPs/issues/137).
This package offers a simple binding that streamlines the registration arbitrary utf8 domain names to swarm content hashes.
This package offers a simple binding that streamlines the registration of arbitrary UTF8 domain names to swarm content hashes.
## Development
The SOL file in the contract subdirectory implements the ENS root registry, a simple
first-in-first-served registrar for the root namespace, and a simple resolver contract;
first-in, first-served registrar for the root namespace, and a simple resolver contract;
they're used in tests, and can be used to deploy these contracts for your own purposes.
The solidity source code can be found at [github.com/arachnid/ens/](https://github.com/arachnid/ens/).


@ -52,7 +52,7 @@ func NewENS(transactOpts *bind.TransactOpts, contractAddr common.Address, contra
}, nil
}
// DeployENS deploys an instance of the ENS nameservice, with a 'first in first served' root registrar.
// DeployENS deploys an instance of the ENS nameservice, with a 'first-in, first-served' root registrar.
func DeployENS(transactOpts *bind.TransactOpts, contractBackend bind.ContractBackend) (*ENS, error) {
// Deploy the ENS registry
ensAddr, _, _, err := contract.DeployENS(transactOpts, contractBackend, transactOpts.From)


@ -79,7 +79,7 @@ func TestSignerPromotion(t *testing.T) {
// Gradually promote the keys, until all are authorized
keys = append([]*ecdsa.PrivateKey{key}, keys...)
for i := 1; i < len(keys); i++ {
// Check that no votes are accepted from the not yet authed user
// Check that no votes are accepted from the not yet authorized user
if _, err := oracle.Promote(bind.NewKeyedTransactor(keys[i]), common.Address{}); err != nil {
t.Fatalf("Iter #%d: failed invalid promotion attempt: %v", i, err)
}
@ -216,7 +216,7 @@ func TestVersionRelease(t *testing.T) {
// Gradually push releases, always requiring more signers than previously
keys = append([]*ecdsa.PrivateKey{key}, keys...)
for i := 1; i < len(keys); i++ {
// Check that no votes are accepted from the not yet authed user
// Check that no votes are accepted from the not yet authorized user
if _, err := oracle.Release(bind.NewKeyedTransactor(keys[i]), 0, 0, 0, [20]byte{0}); err != nil {
t.Fatalf("Iter #%d: failed invalid release attempt: %v", i, err)
}


@ -59,10 +59,16 @@ func (s *StateSync) Missing(max int) []common.Hash {
}
// Process injects a batch of retrieved trie nodes data, returning if something
// was committed to the database and also the index of an entry if processing of
// was committed to the memcache and also the index of an entry if processing of
// it failed.
func (s *StateSync) Process(list []trie.SyncResult, dbw trie.DatabaseWriter) (bool, int, error) {
return (*trie.TrieSync)(s).Process(list, dbw)
func (s *StateSync) Process(list []trie.SyncResult) (bool, int, error) {
return (*trie.TrieSync)(s).Process(list)
}
// Commit flushes the data stored in the internal memcache out to persistent
// storage, returning the number of items written and any error that occurred.
func (s *StateSync) Commit(dbw trie.DatabaseWriter) (int, error) {
return (*trie.TrieSync)(s).Commit(dbw)
}
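With Process and Commit split, callers now drive a two-phase loop; a hedged sketch of the pattern (fetchNodes stands in for whatever retrieves the trie nodes from the network, and the batch size of 256 is arbitrary):

package example

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/state"
	"github.com/ethereum/go-ethereum/trie"
)

// fetchNodes is a hypothetical helper that downloads the trie nodes for
// the given hashes from remote peers.
func fetchNodes(hashes []common.Hash) []trie.SyncResult { /* network retrieval elided */ return nil }

// syncState drives the new two-phase flow: Process fills the in-memory
// cache, Commit flushes it out to persistent storage.
func syncState(sched *state.StateSync, db trie.DatabaseWriter) error {
	for queue := sched.Missing(256); len(queue) > 0; queue = sched.Missing(256) {
		if _, index, err := sched.Process(fetchNodes(queue)); err != nil {
			return fmt.Errorf("failed to process result #%d: %v", index, err)
		}
		if _, err := sched.Commit(db); err != nil {
			return err
		}
	}
	return nil
}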
// Pending returns the number of state entries currently pending for download.


@ -138,9 +138,12 @@ func testIterativeStateSync(t *testing.T, batch int) {
}
results[i] = trie.SyncResult{Hash: hash, Data: data}
}
if _, index, err := sched.Process(results, dstDb); err != nil {
if _, index, err := sched.Process(results); err != nil {
t.Fatalf("failed to process result #%d: %v", index, err)
}
if index, err := sched.Commit(dstDb); err != nil {
t.Fatalf("failed to commit data #%d: %v", index, err)
}
queue = append(queue[:0], sched.Missing(batch)...)
}
// Cross check that the two states are in sync
@ -168,9 +171,12 @@ func TestIterativeDelayedStateSync(t *testing.T) {
}
results[i] = trie.SyncResult{Hash: hash, Data: data}
}
if _, index, err := sched.Process(results, dstDb); err != nil {
if _, index, err := sched.Process(results); err != nil {
t.Fatalf("failed to process result #%d: %v", index, err)
}
if index, err := sched.Commit(dstDb); err != nil {
t.Fatalf("failed to commit data #%d: %v", index, err)
}
queue = append(queue[len(results):], sched.Missing(0)...)
}
// Cross check that the two states are in sync
@ -206,9 +212,12 @@ func testIterativeRandomStateSync(t *testing.T, batch int) {
results = append(results, trie.SyncResult{Hash: hash, Data: data})
}
// Feed the retrieved results back and queue new tasks
if _, index, err := sched.Process(results, dstDb); err != nil {
if _, index, err := sched.Process(results); err != nil {
t.Fatalf("failed to process result #%d: %v", index, err)
}
if index, err := sched.Commit(dstDb); err != nil {
t.Fatalf("failed to commit data #%d: %v", index, err)
}
queue = make(map[common.Hash]struct{})
for _, hash := range sched.Missing(batch) {
queue[hash] = struct{}{}
@ -249,9 +258,12 @@ func TestIterativeRandomDelayedStateSync(t *testing.T) {
}
}
// Feed the retrieved results back and queue new tasks
if _, index, err := sched.Process(results, dstDb); err != nil {
if _, index, err := sched.Process(results); err != nil {
t.Fatalf("failed to process result #%d: %v", index, err)
}
if index, err := sched.Commit(dstDb); err != nil {
t.Fatalf("failed to commit data #%d: %v", index, err)
}
for _, hash := range sched.Missing(0) {
queue[hash] = struct{}{}
}
@ -283,9 +295,12 @@ func TestIncompleteStateSync(t *testing.T) {
results[i] = trie.SyncResult{Hash: hash, Data: data}
}
// Process each of the state nodes
if _, index, err := sched.Process(results, dstDb); err != nil {
if _, index, err := sched.Process(results); err != nil {
t.Fatalf("failed to process result #%d: %v", index, err)
}
if index, err := sched.Commit(dstDb); err != nil {
t.Fatalf("failed to commit data #%d: %v", index, err)
}
for _, result := range results {
added = append(added, result.Hash)
}


@ -243,7 +243,7 @@ func (st *StateTransition) TransitionDb() (ret []byte, requiredGas, usedGas *big
ret, st.gas, vmerr = evm.Call(sender, st.to().Address(), st.data, st.gas, st.value)
}
if vmerr != nil {
log.Debug("VM returned with error", "err", err)
log.Debug("VM returned with error", "err", vmerr)
// The only possible consensus-error would be if there wasn't
// sufficient balance to make the transfer happen. The first
// balance transfer may never fail.


@ -35,16 +35,41 @@ import (
)
var (
// Transaction Pool Errors
ErrInvalidSender = errors.New("invalid sender")
ErrNonce = errors.New("nonce too low")
ErrUnderpriced = errors.New("transaction underpriced")
// ErrInvalidSender is returned if the transaction contains an invalid signature.
ErrInvalidSender = errors.New("invalid sender")
// ErrNonceTooLow is returned if the nonce of a transaction is lower than the
// one present in the local chain.
ErrNonceTooLow = errors.New("nonce too low")
// ErrUnderpriced is returned if a transaction's gas price is below the minimum
// configured for the transaction pool.
ErrUnderpriced = errors.New("transaction underpriced")
// ErrReplaceUnderpriced is returned if a transaction is attempted to be replaced
// with a different one without the required price bump.
ErrReplaceUnderpriced = errors.New("replacement transaction underpriced")
ErrBalance = errors.New("insufficient balance")
ErrInsufficientFunds = errors.New("insufficient funds for gas * price + value")
ErrIntrinsicGas = errors.New("intrinsic gas too low")
ErrGasLimit = errors.New("exceeds block gas limit")
ErrNegativeValue = errors.New("negative value")
// ErrInsufficientFunds is returned if the total cost of executing a transaction
// is higher than the balance of the user's account.
ErrInsufficientFunds = errors.New("insufficient funds for gas * price + value")
// ErrIntrinsicGas is returned if the transaction is specified to use less gas
// than required to start the invocation.
ErrIntrinsicGas = errors.New("intrinsic gas too low")
// ErrGasLimit is returned if a transaction's requested gas limit exceeds the
// maximum allowance of the current block.
ErrGasLimit = errors.New("exceeds block gas limit")
// ErrNegativeValue is a sanity error to ensure no one is able to specify a
// transaction with a negative value.
ErrNegativeValue = errors.New("negative value")
// ErrOversizedData is returned if the input data of a transaction is greater
// than some meaningful limit a user might use. This is not a consensus error
// that makes the transaction invalid, but rather a DoS protection.
ErrOversizedData = errors.New("oversized data")
)
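A short sketch of how a caller can branch on the documented sentinels (assumed to run in package core's scope, with pool and tx in scope; note that comparisons against the old ErrNonce must become ErrNonceTooLow after this rename):

switch err := pool.Add(tx); err {
case nil:
	// accepted into pending or queued
case ErrNonceTooLow:
	// stale: a transaction with this nonce was already mined
case ErrUnderpriced, ErrReplaceUnderpriced:
	// below the pool's price requirements
default:
	// other validation failure (funds, intrinsic gas, size, ...)
}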
var (
@ -54,16 +79,16 @@ var (
var (
// Metrics for the pending pool
pendingDiscardCounter = metrics.NewCounter("txpool/pending/discard")
pendingReplaceCounter = metrics.NewCounter("txpool/pending/replace")
pendingRLCounter = metrics.NewCounter("txpool/pending/ratelimit") // Dropped due to rate limiting
pendingNofundsCounter = metrics.NewCounter("txpool/pending/nofunds") // Dropped due to out-of-funds
pendingDiscardCounter = metrics.NewCounter("txpool/pending/discard")
pendingReplaceCounter = metrics.NewCounter("txpool/pending/replace")
pendingRateLimitCounter = metrics.NewCounter("txpool/pending/ratelimit") // Dropped due to rate limiting
pendingNofundsCounter = metrics.NewCounter("txpool/pending/nofunds") // Dropped due to out-of-funds
// Metrics for the queued pool
queuedDiscardCounter = metrics.NewCounter("txpool/queued/discard")
queuedReplaceCounter = metrics.NewCounter("txpool/queued/replace")
queuedRLCounter = metrics.NewCounter("txpool/queued/ratelimit") // Dropped due to rate limiting
queuedNofundsCounter = metrics.NewCounter("txpool/queued/nofunds") // Dropped due to out-of-funds
queuedDiscardCounter = metrics.NewCounter("txpool/queued/discard")
queuedReplaceCounter = metrics.NewCounter("txpool/queued/replace")
queuedRateLimitCounter = metrics.NewCounter("txpool/queued/ratelimit") // Dropped due to rate limiting
queuedNofundsCounter = metrics.NewCounter("txpool/queued/nofunds") // Dropped due to out-of-funds
// General tx metrics
invalidTxCounter = metrics.NewCounter("txpool/invalid")
@ -251,7 +276,7 @@ func (pool *TxPool) resetState() {
}
// Check the queue and move transactions over to the pending if possible
// or remove those that have become invalid
pool.promoteExecutables(currentState)
pool.promoteExecutables(currentState, nil)
}
// Stop terminates the transaction pool.
@ -339,17 +364,6 @@ func (pool *TxPool) Pending() (map[common.Address]types.Transactions, error) {
pool.mu.Lock()
defer pool.mu.Unlock()
state, err := pool.currentState()
if err != nil {
return nil, err
}
// check queue first
pool.promoteExecutables(state)
// invalidate any txs
pool.demoteUnexecutables(state)
pending := make(map[common.Address]types.Transactions)
for addr, list := range pool.pending {
pending[addr] = list.Flatten()
@ -385,7 +399,7 @@ func (pool *TxPool) validateTx(tx *types.Transaction) error {
}
// Last but not least check for nonce errors
if currentState.GetNonce(from) > tx.Nonce() {
return ErrNonce
return ErrNonceTooLow
}
// Check the transaction doesn't exceed the current
@ -406,12 +420,15 @@ func (pool *TxPool) validateTx(tx *types.Transaction) error {
if currentState.GetBalance(from).Cmp(tx.Cost()) < 0 {
return ErrInsufficientFunds
}
intrGas := IntrinsicGas(tx.Data(), tx.To() == nil, pool.homestead)
if tx.Gas().Cmp(intrGas) < 0 {
return ErrIntrinsicGas
}
// Heuristic limit, reject transactions over 32KB to prevent DOS attacks
if tx.Size() > 32*1024 {
return ErrOversizedData
}
return nil
}
@ -551,13 +568,14 @@ func (pool *TxPool) Add(tx *types.Transaction) error {
if err != nil {
return err
}
state, err := pool.currentState()
if err != nil {
return err
}
// If we added a new transaction, run promotion checks and return
if !replace {
pool.promoteExecutables(state)
state, err := pool.currentState()
if err != nil {
return err
}
from, _ := types.Sender(pool.signer, tx) // already validated
pool.promoteExecutables(state, []common.Address{from})
}
return nil
}
@ -568,24 +586,26 @@ func (pool *TxPool) AddBatch(txs []*types.Transaction) error {
defer pool.mu.Unlock()
// Add the batch of transaction, tracking the accepted ones
replaced, added := true, 0
dirty := make(map[common.Address]struct{})
for _, tx := range txs {
if replace, err := pool.add(tx); err == nil {
added++
if !replace {
replaced = false
from, _ := types.Sender(pool.signer, tx) // already validated
dirty[from] = struct{}{}
}
}
}
// Only reprocess the internal state if something was actually added
if added > 0 {
if len(dirty) > 0 {
state, err := pool.currentState()
if err != nil {
return err
}
if !replaced {
pool.promoteExecutables(state)
addrs := make([]common.Address, 0, len(dirty))
for addr, _ := range dirty {
addrs = append(addrs, addr)
}
pool.promoteExecutables(state, addrs)
}
return nil
}
@ -646,8 +666,9 @@ func (pool *TxPool) removeTx(hash common.Hash) {
}
// Update the account nonce if needed
if nonce := tx.Nonce(); pool.pendingState.GetNonce(addr) > nonce {
pool.pendingState.SetNonce(addr, tx.Nonce())
pool.pendingState.SetNonce(addr, nonce)
}
return
}
}
// Transaction is in the future queue
@ -662,12 +683,23 @@ func (pool *TxPool) removeTx(hash common.Hash) {
// promoteExecutables moves transactions that have become processable from the
// future queue to the set of pending transactions. During this process, all
// invalidated transactions (low nonce, low balance) are deleted.
func (pool *TxPool) promoteExecutables(state *state.StateDB) {
func (pool *TxPool) promoteExecutables(state *state.StateDB, accounts []common.Address) {
gaslimit := pool.gasLimit()
// Gather all the accounts potentially needing updates
if accounts == nil {
accounts = make([]common.Address, 0, len(pool.queue))
for addr, _ := range pool.queue {
accounts = append(accounts, addr)
}
}
// Iterate over all accounts and promote any executable transactions
queued := uint64(0)
for addr, list := range pool.queue {
for _, addr := range accounts {
list := pool.queue[addr]
if list == nil {
continue // Just in case someone calls with a non-existent account
}
// Drop all transactions that are deemed too old (low nonce)
for _, tx := range list.Forward(state.GetNonce(addr)) {
hash := tx.Hash()
@ -693,10 +725,10 @@ func (pool *TxPool) promoteExecutables(state *state.StateDB) {
// Drop all transactions over the allowed limit
for _, tx := range list.Cap(int(pool.config.AccountQueue)) {
hash := tx.Hash()
log.Trace("Removed cap-exceeding queued transaction", "hash", hash)
delete(pool.all, hash)
pool.priced.Removed()
queuedRLCounter.Inc(1)
queuedRateLimitCounter.Inc(1)
log.Trace("Removed cap-exceeding queued transaction", "hash", hash)
}
queued += uint64(list.Len())
@ -742,7 +774,18 @@ func (pool *TxPool) promoteExecutables(state *state.StateDB) {
for pending > pool.config.GlobalSlots && pool.pending[offenders[len(offenders)-2]].Len() > threshold {
for i := 0; i < len(offenders)-1; i++ {
list := pool.pending[offenders[i]]
list.Cap(list.Len() - 1)
for _, tx := range list.Cap(list.Len() - 1) {
// Drop the transaction from the global pools too
hash := tx.Hash()
delete(pool.all, hash)
pool.priced.Removed()
// Update the account nonce to the dropped transaction
if nonce := tx.Nonce(); pool.pendingState.GetNonce(offenders[i]) > nonce {
pool.pendingState.SetNonce(offenders[i], nonce)
}
log.Trace("Removed fairness-exceeding pending transaction", "hash", hash)
}
pending--
}
}
@ -753,12 +796,23 @@ func (pool *TxPool) promoteExecutables(state *state.StateDB) {
for pending > pool.config.GlobalSlots && uint64(pool.pending[offenders[len(offenders)-1]].Len()) > pool.config.AccountSlots {
for _, addr := range offenders {
list := pool.pending[addr]
list.Cap(list.Len() - 1)
for _, tx := range list.Cap(list.Len() - 1) {
// Drop the transaction from the global pools too
hash := tx.Hash()
delete(pool.all, hash)
pool.priced.Removed()
// Update the account nonce to the dropped transaction
if nonce := tx.Nonce(); pool.pendingState.GetNonce(addr) > nonce {
pool.pendingState.SetNonce(addr, nonce)
}
log.Trace("Removed fairness-exceeding pending transaction", "hash", hash)
}
pending--
}
}
}
pendingRLCounter.Inc(int64(pendingBeforeCap - pending))
pendingRateLimitCounter.Inc(int64(pendingBeforeCap - pending))
}
// If we've queued more transactions than the hard limit, drop oldest ones
if queued > pool.config.GlobalQueue {
@ -782,7 +836,7 @@ func (pool *TxPool) promoteExecutables(state *state.StateDB) {
pool.removeTx(tx.Hash())
}
drop -= size
queuedRLCounter.Inc(int64(size))
queuedRateLimitCounter.Inc(int64(size))
continue
}
// Otherwise drop only last few transactions
@ -790,7 +844,7 @@ func (pool *TxPool) promoteExecutables(state *state.StateDB) {
for i := len(txs) - 1; i >= 0 && drop > 0; i-- {
pool.removeTx(txs[i].Hash())
drop--
queuedRLCounter.Inc(1)
queuedRateLimitCounter.Inc(1)
}
}
}


@ -18,6 +18,7 @@ package core
import (
"crypto/ecdsa"
"fmt"
"math/big"
"math/rand"
"testing"
@ -52,6 +53,35 @@ func setupTxPool() (*TxPool, *ecdsa.PrivateKey) {
return newPool, key
}
// validateTxPoolInternals checks various consistency invariants within the pool.
func validateTxPoolInternals(pool *TxPool) error {
pool.mu.RLock()
defer pool.mu.RUnlock()
// Ensure the total transaction set is consistent with pending + queued
pending, queued := pool.stats()
if total := len(pool.all); total != pending+queued {
return fmt.Errorf("total transaction count %d != %d pending + %d queued", total, pending, queued)
}
if priced := pool.priced.items.Len() - pool.priced.stales; priced != pending+queued {
return fmt.Errorf("total priced transaction count %d != %d pending + %d queued", priced, pending, queued)
}
// Ensure the next nonce to assign is the correct one
for addr, txs := range pool.pending {
// Find the last transaction
var last uint64
for nonce, _ := range txs.txs.items {
if last < nonce {
last = nonce
}
}
if nonce := pool.pendingState.GetNonce(addr); nonce != last+1 {
return fmt.Errorf("pending nonce mismatch: have %v, want %v", nonce, last+1)
}
}
return nil
}
func deriveSender(tx *types.Transaction) (common.Address, error) {
return types.Sender(types.HomesteadSigner{}, tx)
}
@ -150,8 +180,8 @@ func TestInvalidTransactions(t *testing.T) {
currentState.SetNonce(from, 1)
currentState.AddBalance(from, big.NewInt(0xffffffffffffff))
tx = transaction(0, big.NewInt(100000), key)
if err := pool.Add(tx); err != ErrNonce {
t.Error("expected", ErrNonce)
if err := pool.Add(tx); err != ErrNonceTooLow {
t.Error("expected", ErrNonceTooLow)
}
tx = transaction(1, big.NewInt(100000), key)
@ -175,7 +205,7 @@ func TestTransactionQueue(t *testing.T) {
pool.resetState()
pool.enqueueTx(tx.Hash(), tx)
pool.promoteExecutables(currentState)
pool.promoteExecutables(currentState, []common.Address{from})
if len(pool.pending) != 1 {
t.Error("expected valid txs to be 1 is", len(pool.pending))
}
@ -184,7 +214,7 @@ func TestTransactionQueue(t *testing.T) {
from, _ = deriveSender(tx)
currentState.SetNonce(from, 2)
pool.enqueueTx(tx.Hash(), tx)
pool.promoteExecutables(currentState)
pool.promoteExecutables(currentState, []common.Address{from})
if _, ok := pool.pending[from].txs.items[tx.Nonce()]; ok {
t.Error("expected transaction to be in tx pool")
}
@ -206,7 +236,7 @@ func TestTransactionQueue(t *testing.T) {
pool.enqueueTx(tx2.Hash(), tx2)
pool.enqueueTx(tx3.Hash(), tx3)
pool.promoteExecutables(currentState)
pool.promoteExecutables(currentState, []common.Address{from})
if len(pool.pending) != 1 {
t.Error("expected tx pool to be 1, got", len(pool.pending))
@ -218,20 +248,25 @@ func TestTransactionQueue(t *testing.T) {
func TestRemoveTx(t *testing.T) {
pool, key := setupTxPool()
tx := transaction(0, big.NewInt(100), key)
from, _ := deriveSender(tx)
addr := crypto.PubkeyToAddress(key.PublicKey)
currentState, _ := pool.currentState()
currentState.AddBalance(from, big.NewInt(1))
currentState.AddBalance(addr, big.NewInt(1))
tx1 := transaction(0, big.NewInt(100), key)
tx2 := transaction(2, big.NewInt(100), key)
pool.promoteTx(addr, tx1.Hash(), tx1)
pool.enqueueTx(tx2.Hash(), tx2)
pool.enqueueTx(tx.Hash(), tx)
pool.promoteTx(from, tx.Hash(), tx)
if len(pool.queue) != 1 {
t.Error("expected queue to be 1, got", len(pool.queue))
}
if len(pool.pending) != 1 {
t.Error("expected pending to be 1, got", len(pool.pending))
}
pool.Remove(tx.Hash())
pool.Remove(tx1.Hash())
pool.Remove(tx2.Hash())
if len(pool.queue) > 0 {
t.Error("expected queue to be 0, got", len(pool.queue))
}
@ -304,16 +339,16 @@ func TestTransactionDoubleNonce(t *testing.T) {
t.Errorf("second transaction insert failed (%v) or not reported replacement (%v)", err, replace)
}
state, _ := pool.currentState()
pool.promoteExecutables(state)
pool.promoteExecutables(state, []common.Address{addr})
if pool.pending[addr].Len() != 1 {
t.Error("expected 1 pending transactions, got", pool.pending[addr].Len())
}
if tx := pool.pending[addr].txs.items[0]; tx.Hash() != tx2.Hash() {
t.Errorf("transaction mismatch: have %x, want %x", tx.Hash(), tx2.Hash())
}
// Add the thid transaction and ensure it's not saved (smaller price)
// Add the third transaction and ensure it's not saved (smaller price)
pool.add(tx3)
pool.promoteExecutables(state)
pool.promoteExecutables(state, []common.Address{addr})
if pool.pending[addr].Len() != 1 {
t.Error("expected 1 pending transactions, got", pool.pending[addr].Len())
}
@ -404,10 +439,10 @@ func TestTransactionDropping(t *testing.T) {
)
pool.promoteTx(account, tx0.Hash(), tx0)
pool.promoteTx(account, tx1.Hash(), tx1)
pool.promoteTx(account, tx1.Hash(), tx2)
pool.promoteTx(account, tx2.Hash(), tx2)
pool.enqueueTx(tx10.Hash(), tx10)
pool.enqueueTx(tx11.Hash(), tx11)
pool.enqueueTx(tx11.Hash(), tx12)
pool.enqueueTx(tx12.Hash(), tx12)
// Check that pre and post validations leave the pool as is
if pool.pending[account].Len() != 3 {
@ -416,8 +451,8 @@ func TestTransactionDropping(t *testing.T) {
if pool.queue[account].Len() != 3 {
t.Errorf("queued transaction mismatch: have %d, want %d", pool.queue[account].Len(), 3)
}
if len(pool.all) != 4 {
t.Errorf("total transaction mismatch: have %d, want %d", len(pool.all), 4)
if len(pool.all) != 6 {
t.Errorf("total transaction mismatch: have %d, want %d", len(pool.all), 6)
}
pool.resetState()
if pool.pending[account].Len() != 3 {
@ -426,8 +461,8 @@ func TestTransactionDropping(t *testing.T) {
if pool.queue[account].Len() != 3 {
t.Errorf("queued transaction mismatch: have %d, want %d", pool.queue[account].Len(), 3)
}
if len(pool.all) != 4 {
t.Errorf("total transaction mismatch: have %d, want %d", len(pool.all), 4)
if len(pool.all) != 6 {
t.Errorf("total transaction mismatch: have %d, want %d", len(pool.all), 6)
}
// Reduce the balance of the account, and check that invalidated transactions are dropped
state.AddBalance(account, big.NewInt(-650))
@ -730,6 +765,12 @@ func testTransactionLimitingEquivalency(t *testing.T, origin uint64) {
if len(pool1.all) != len(pool2.all) {
t.Errorf("total transaction count mismatch: one-by-one algo %d, batch algo %d", len(pool1.all), len(pool2.all))
}
if err := validateTxPoolInternals(pool1); err != nil {
t.Errorf("pool 1 internal state corrupted: %v", err)
}
if err := validateTxPoolInternals(pool2); err != nil {
t.Errorf("pool 2 internal state corrupted: %v", err)
}
}
// Tests that if the transaction count belonging to multiple accounts go above
@ -776,6 +817,45 @@ func TestTransactionPendingGlobalLimiting(t *testing.T) {
if pending > int(DefaultTxPoolConfig.GlobalSlots) {
t.Fatalf("total pending transactions overflow allowance: %d > %d", pending, DefaultTxPoolConfig.GlobalSlots)
}
if err := validateTxPoolInternals(pool); err != nil {
t.Fatalf("pool internal state corrupted: %v", err)
}
}
// Tests that if transactions start being capped, transactions are also removed from 'all'
func TestTransactionCapClearsFromAll(t *testing.T) {
// Reduce the queue limits to shorten test time
defer func(old uint64) { DefaultTxPoolConfig.AccountSlots = old }(DefaultTxPoolConfig.AccountSlots)
defer func(old uint64) { DefaultTxPoolConfig.AccountQueue = old }(DefaultTxPoolConfig.AccountQueue)
defer func(old uint64) { DefaultTxPoolConfig.GlobalSlots = old }(DefaultTxPoolConfig.GlobalSlots)
DefaultTxPoolConfig.AccountSlots = 2
DefaultTxPoolConfig.AccountQueue = 2
DefaultTxPoolConfig.GlobalSlots = 8
// Create the pool to test the limit enforcement with
db, _ := ethdb.NewMemDatabase()
statedb, _ := state.New(common.Hash{}, db)
pool := NewTxPool(DefaultTxPoolConfig, params.TestChainConfig, new(event.TypeMux), func() (*state.StateDB, error) { return statedb, nil }, func() *big.Int { return big.NewInt(1000000) })
pool.resetState()
// Create a number of test accounts and fund them
state, _ := pool.currentState()
key, _ := crypto.GenerateKey()
addr := crypto.PubkeyToAddress(key.PublicKey)
state.AddBalance(addr, big.NewInt(1000000))
txs := types.Transactions{}
for j := 0; j < int(DefaultTxPoolConfig.GlobalSlots)*2; j++ {
txs = append(txs, transaction(uint64(j), big.NewInt(100000), key))
}
// Import the batch and verify that limits have been enforced
pool.AddBatch(txs)
if err := validateTxPoolInternals(pool); err != nil {
t.Fatalf("pool internal state corrupted: %v", err)
}
}
// Tests that if the transaction count belonging to multiple accounts go above
@ -820,6 +900,9 @@ func TestTransactionPendingMinimumAllowance(t *testing.T) {
t.Errorf("addr %x: total pending transactions mismatch: have %d, want %d", addr, list.Len(), DefaultTxPoolConfig.AccountSlots)
}
}
if err := validateTxPoolInternals(pool); err != nil {
t.Fatalf("pool internal state corrupted: %v", err)
}
}
// Tests that setting the transaction pool gas price to a higher value correctly
@ -867,6 +950,9 @@ func TestTransactionPoolRepricing(t *testing.T) {
if queued != 3 {
t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 3)
}
if err := validateTxPoolInternals(pool); err != nil {
t.Fatalf("pool internal state corrupted: %v", err)
}
// Reprice the pool and check that underpriced transactions get dropped
pool.SetGasPrice(big.NewInt(2))
@ -877,6 +963,9 @@ func TestTransactionPoolRepricing(t *testing.T) {
if queued != 3 {
t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 3)
}
if err := validateTxPoolInternals(pool); err != nil {
t.Fatalf("pool internal state corrupted: %v", err)
}
// Check that we can't add the old transactions back
if err := pool.Add(pricedTransaction(1, big.NewInt(100000), big.NewInt(1), keys[0])); err != ErrUnderpriced {
t.Fatalf("adding underpriced pending transaction error mismatch: have %v, want %v", err, ErrUnderpriced)
@ -884,6 +973,9 @@ func TestTransactionPoolRepricing(t *testing.T) {
if err := pool.Add(pricedTransaction(2, big.NewInt(100000), big.NewInt(1), keys[1])); err != ErrUnderpriced {
t.Fatalf("adding underpriced queued transaction error mismatch: have %v, want %v", err, ErrUnderpriced)
}
if err := validateTxPoolInternals(pool); err != nil {
t.Fatalf("pool internal state corrupted: %v", err)
}
// However we can add local underpriced transactions
tx := pricedTransaction(1, big.NewInt(100000), big.NewInt(1), keys[2])
@ -894,6 +986,9 @@ func TestTransactionPoolRepricing(t *testing.T) {
if pending, _ = pool.stats(); pending != 3 {
t.Fatalf("pending transactions mismatched: have %d, want %d", pending, 3)
}
if err := validateTxPoolInternals(pool); err != nil {
t.Fatalf("pool internal state corrupted: %v", err)
}
}
// Tests that when the pool reaches its global transaction limit, underpriced
@ -945,6 +1040,9 @@ func TestTransactionPoolUnderpricing(t *testing.T) {
if queued != 1 {
t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 1)
}
if err := validateTxPoolInternals(pool); err != nil {
t.Fatalf("pool internal state corrupted: %v", err)
}
// Ensure that adding an underpriced transaction on block limit fails
if err := pool.Add(pricedTransaction(0, big.NewInt(100000), big.NewInt(1), keys[1])); err != ErrUnderpriced {
t.Fatalf("adding underpriced pending transaction error mismatch: have %v, want %v", err, ErrUnderpriced)
@ -966,6 +1064,9 @@ func TestTransactionPoolUnderpricing(t *testing.T) {
if queued != 2 {
t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 2)
}
if err := validateTxPoolInternals(pool); err != nil {
t.Fatalf("pool internal state corrupted: %v", err)
}
// Ensure that adding local transactions can push out even higher priced ones
tx := pricedTransaction(1, big.NewInt(100000), big.NewInt(0), keys[2])
@ -980,6 +1081,9 @@ func TestTransactionPoolUnderpricing(t *testing.T) {
if queued != 2 {
t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 2)
}
if err := validateTxPoolInternals(pool); err != nil {
t.Fatalf("pool internal state corrupted: %v", err)
}
}
// Tests that the pool rejects replacement transactions that don't meet the minimum
@ -1041,6 +1145,9 @@ func TestTransactionReplacement(t *testing.T) {
if err := pool.Add(pricedTransaction(2, big.NewInt(100000), big.NewInt(threshold+1), key)); err != nil {
t.Fatalf("failed to replace original queued transaction: %v", err)
}
if err := validateTxPoolInternals(pool); err != nil {
t.Fatalf("pool internal state corrupted: %v", err)
}
}
// Benchmarks the speed of validating the contents of the pending queue of the
@ -1087,7 +1194,7 @@ func benchmarkFuturePromotion(b *testing.B, size int) {
// Benchmark the speed of pool validation
b.ResetTimer()
for i := 0; i < b.N; i++ {
pool.promoteExecutables(state)
pool.promoteExecutables(state, nil)
}
}


@ -381,7 +381,7 @@ func (b *Block) Hash() common.Hash {
if hash := b.hash.Load(); hash != nil {
return hash.(common.Hash)
}
v := rlpHash(b.header)
v := b.header.Hash()
b.hash.Store(v)
return v
}
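The caching idiom here is worth calling out: the hash is computed once and then published through an atomic.Value, so concurrent readers either see the cached digest or compute it themselves without racing. A minimal sketch of the same pattern outside go-ethereum, with a hypothetical expensiveHash standing in for header hashing:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync/atomic"
)

type block struct {
	payload []byte
	hash    atomic.Value // caches a [32]byte once computed
}

// expensiveHash stands in for the real header hashing (illustrative only).
func expensiveHash(b []byte) [32]byte { return sha256.Sum256(b) }

// Hash returns the cached digest, computing and publishing it on first use.
func (b *block) Hash() [32]byte {
	if h := b.hash.Load(); h != nil {
		return h.([32]byte)
	}
	h := expensiveHash(b.payload)
	b.hash.Store(h)
	return h
}

func main() {
	b := &block{payload: []byte("hello")}
	fmt.Printf("%x\n", b.Hash()) // a second call returns the cached value
}
```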


@ -22,41 +22,39 @@ import (
"github.com/ethereum/go-ethereum/common"
)
var bigMaxUint64 = new(big.Int).SetUint64(^uint64(0))
// destinations stores one map per contract (keyed by hash of code).
// The maps contain an entry for each location of a JUMPDEST
// instruction.
type destinations map[common.Hash]map[uint64]struct{}
type destinations map[common.Hash][]byte
// has checks whether code has a JUMPDEST at dest.
func (d destinations) has(codehash common.Hash, code []byte, dest *big.Int) bool {
// PC cannot go beyond len(code) and certainly can't be bigger than 64bits.
// PC cannot go beyond len(code) and certainly can't be bigger than 63bits.
// Don't bother checking for JUMPDEST in that case.
if dest.Cmp(bigMaxUint64) > 0 {
udest := dest.Uint64()
if dest.BitLen() >= 63 || udest >= uint64(len(code)) {
return false
}
m, analysed := d[codehash]
if !analysed {
m = jumpdests(code)
d[codehash] = m
}
_, ok := m[dest.Uint64()]
return ok
return (m[udest/8] & (1 << (udest % 8))) != 0
}
// jumpdests creates a map that contains an entry for each
// PC location that is a JUMPDEST instruction.
func jumpdests(code []byte) map[uint64]struct{} {
m := make(map[uint64]struct{})
func jumpdests(code []byte) []byte {
m := make([]byte, len(code)/8+1)
for pc := uint64(0); pc < uint64(len(code)); pc++ {
var op OpCode = OpCode(code[pc])
switch op {
case PUSH1, PUSH2, PUSH3, PUSH4, PUSH5, PUSH6, PUSH7, PUSH8, PUSH9, PUSH10, PUSH11, PUSH12, PUSH13, PUSH14, PUSH15, PUSH16, PUSH17, PUSH18, PUSH19, PUSH20, PUSH21, PUSH22, PUSH23, PUSH24, PUSH25, PUSH26, PUSH27, PUSH28, PUSH29, PUSH30, PUSH31, PUSH32:
op := OpCode(code[pc])
if op == JUMPDEST {
m[pc/8] |= 1 << (pc % 8)
} else if op >= PUSH1 && op <= PUSH32 {
a := uint64(op) - uint64(PUSH1) + 1
pc += a
case JUMPDEST:
m[pc] = struct{}{}
}
}
return m
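
The rewritten analysis packs valid jump destinations into a bit vector: bit pc is set only when code[pc] is a JUMPDEST opcode that does not sit inside PUSH immediate data, and has then tests that bit with a mask. A self-contained sketch of the encoding, using illustrative opcode constants rather than the vm package's own:

```go
package main

import "fmt"

const (
	opJumpdest = 0x5b
	opPush1    = 0x60
	opPush32   = 0x7f
)

// analyse builds a bit vector with one bit per code offset; a set bit
// marks a JUMPDEST that is not buried inside PUSH immediate data.
func analyse(code []byte) []byte {
	m := make([]byte, len(code)/8+1)
	for pc := uint64(0); pc < uint64(len(code)); pc++ {
		op := code[pc]
		if op == opJumpdest {
			m[pc/8] |= 1 << (pc % 8)
		} else if op >= opPush1 && op <= opPush32 {
			pc += uint64(op) - opPush1 + 1 // skip the immediate bytes
		}
	}
	return m
}

// has tests the bit for a given destination, mirroring the lookup above.
func has(m []byte, dest uint64) bool {
	return m[dest/8]&(1<<(dest%8)) != 0
}

func main() {
	// PUSH1 0x5b; JUMPDEST — offset 1 is PUSH data, offset 2 is a real JUMPDEST.
	code := []byte{opPush1, 0x5b, opJumpdest}
	m := analyse(code)
	fmt.Println(has(m, 1), has(m, 2)) // false true
}
```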

core/vm/gen_structlog.go (new file, 99 lines)

@ -0,0 +1,99 @@
// Code generated by github.com/fjl/gencodec. DO NOT EDIT.
package vm
import (
"encoding/json"
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/common/math"
)
func (s StructLog) MarshalJSON() ([]byte, error) {
type StructLog struct {
Pc uint64 `json:"pc"`
Op OpCode `json:"op"`
Gas math.HexOrDecimal64 `json:"gas"`
GasCost math.HexOrDecimal64 `json:"gasCost"`
Memory hexutil.Bytes `json:"memory"`
MemorySize int `json:"memSize"`
Stack []*math.HexOrDecimal256 `json:"stack"`
Storage map[common.Hash]common.Hash `json:"-"`
Depth int `json:"depth"`
Err error `json:"error"`
OpName string `json:"opName"`
}
var enc StructLog
enc.Pc = s.Pc
enc.Op = s.Op
enc.Gas = math.HexOrDecimal64(s.Gas)
enc.GasCost = math.HexOrDecimal64(s.GasCost)
enc.Memory = s.Memory
enc.MemorySize = s.MemorySize
if s.Stack != nil {
enc.Stack = make([]*math.HexOrDecimal256, len(s.Stack))
for k, v := range s.Stack {
enc.Stack[k] = (*math.HexOrDecimal256)(v)
}
}
enc.Storage = s.Storage
enc.Depth = s.Depth
enc.Err = s.Err
enc.OpName = s.OpName()
return json.Marshal(&enc)
}
func (s *StructLog) UnmarshalJSON(input []byte) error {
type StructLog struct {
Pc *uint64 `json:"pc"`
Op *OpCode `json:"op"`
Gas *math.HexOrDecimal64 `json:"gas"`
GasCost *math.HexOrDecimal64 `json:"gasCost"`
Memory hexutil.Bytes `json:"memory"`
MemorySize *int `json:"memSize"`
Stack []*math.HexOrDecimal256 `json:"stack"`
Storage map[common.Hash]common.Hash `json:"-"`
Depth *int `json:"depth"`
Err *error `json:"error"`
}
var dec StructLog
if err := json.Unmarshal(input, &dec); err != nil {
return err
}
if dec.Pc != nil {
s.Pc = *dec.Pc
}
if dec.Op != nil {
s.Op = *dec.Op
}
if dec.Gas != nil {
s.Gas = uint64(*dec.Gas)
}
if dec.GasCost != nil {
s.GasCost = uint64(*dec.GasCost)
}
if dec.Memory != nil {
s.Memory = dec.Memory
}
if dec.MemorySize != nil {
s.MemorySize = *dec.MemorySize
}
if dec.Stack != nil {
s.Stack = make([]*big.Int, len(dec.Stack))
for k, v := range dec.Stack {
s.Stack[k] = (*big.Int)(v)
}
}
if dec.Storage != nil {
s.Storage = dec.Storage
}
if dec.Depth != nil {
s.Depth = *dec.Depth
}
if dec.Err != nil {
s.Err = *dec.Err
}
return nil
}


@ -256,15 +256,14 @@ func opXor(pc *uint64, evm *EVM, contract *Contract, memory *Memory, stack *Stac
}
func opByte(pc *uint64, evm *EVM, contract *Contract, memory *Memory, stack *Stack) ([]byte, error) {
th, val := stack.pop(), stack.pop()
if th.Cmp(big.NewInt(32)) < 0 {
byte := evm.interpreter.intPool.get().SetInt64(int64(math.PaddedBigBytes(val, 32)[th.Int64()]))
stack.push(byte)
th, val := stack.pop(), stack.peek()
if th.Cmp(common.Big32) < 0 {
b := math.Byte(val, 32, int(th.Int64()))
val.SetUint64(uint64(b))
} else {
stack.push(new(big.Int))
val.SetUint64(0)
}
evm.interpreter.intPool.put(th, val)
evm.interpreter.intPool.put(th)
return nil, nil
}
func opAddmod(pc *uint64, evm *EVM, contract *Contract, memory *Memory, stack *Stack) ([]byte, error) {


@ -0,0 +1,42 @@
package vm
import (
"math/big"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/params"
)
func TestByteOp(t *testing.T) {
var (
env = NewEVM(Context{}, nil, params.TestChainConfig, Config{EnableJit: false, ForceJit: false})
stack = newstack()
)
tests := []struct {
v string
th uint64
expected *big.Int
}{
{"ABCDEF0908070605040302010000000000000000000000000000000000000000", 0, big.NewInt(0xAB)},
{"ABCDEF0908070605040302010000000000000000000000000000000000000000", 1, big.NewInt(0xCD)},
{"00CDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff", 0, big.NewInt(0x00)},
{"00CDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff", 1, big.NewInt(0xCD)},
{"0000000000000000000000000000000000000000000000000000000000102030", 31, big.NewInt(0x30)},
{"0000000000000000000000000000000000000000000000000000000000102030", 30, big.NewInt(0x20)},
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 32, big.NewInt(0x0)},
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 0xFFFFFFFFFFFFFFFF, big.NewInt(0x0)},
}
pc := uint64(0)
for _, test := range tests {
val := new(big.Int).SetBytes(common.Hex2Bytes(test.v))
th := new(big.Int).SetUint64(test.th)
stack.push(val)
stack.push(th)
opByte(&pc, env, nil, nil, stack)
actual := stack.pop()
if actual.Cmp(test.expected) != 0 {
t.Fatalf("Expected [%v] %v:th byte to be %v, was %v.", test.v, test.th, test.expected, actual)
}
}
}


@ -21,8 +21,10 @@ import (
"fmt"
"io"
"math/big"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/core/types"
)
@ -47,18 +49,34 @@ type LogConfig struct {
Limit int // maximum length of output, but zero means unlimited
}
//go:generate gencodec -type StructLog -field-override structLogMarshaling -out gen_structlog.go
// StructLog is emitted to the EVM each cycle and lists information about the current internal state
// prior to the execution of the statement.
type StructLog struct {
Pc uint64
Op OpCode
Gas uint64
GasCost uint64
Memory []byte
Stack []*big.Int
Storage map[common.Hash]common.Hash
Depth int
Err error
Pc uint64 `json:"pc"`
Op OpCode `json:"op"`
Gas uint64 `json:"gas"`
GasCost uint64 `json:"gasCost"`
Memory []byte `json:"memory"`
MemorySize int `json:"memSize"`
Stack []*big.Int `json:"stack"`
Storage map[common.Hash]common.Hash `json:"-"`
Depth int `json:"depth"`
Err error `json:"error"`
}
// overrides for gencodec
type structLogMarshaling struct {
Stack []*math.HexOrDecimal256
Gas math.HexOrDecimal64
GasCost math.HexOrDecimal64
Memory hexutil.Bytes
OpName string `json:"opName"`
}
func (s *StructLog) OpName() string {
return s.Op.String()
}
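With these overrides in place, the generated marshaller encodes Gas and GasCost as hex quantities and emits the extra opName member. A rough usage sketch, assuming the go-ethereum import path at this revision:

```go
package main

import (
	"encoding/json"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/vm"
)

func main() {
	entry := vm.StructLog{
		Pc:      0,
		Op:      vm.PUSH1,
		Gas:     100000,
		GasCost: 3,
		Stack:   []*big.Int{big.NewInt(0x60)},
		Depth:   1,
	}
	out, err := json.Marshal(entry)
	if err != nil {
		panic(err)
	}
	// Gas and GasCost are rendered as hex quantities by the generated code,
	// and opName carries the stringified opcode.
	fmt.Println(string(out))
}
```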
// Tracer is used to collect execution traces from an EVM transaction
@ -68,6 +86,7 @@ type StructLog struct {
// if you need to retain them beyond the current call.
type Tracer interface {
CaptureState(env *EVM, pc uint64, op OpCode, gas, cost uint64, memory *Memory, stack *Stack, contract *Contract, depth int, err error) error
CaptureEnd(output []byte, gasUsed uint64, t time.Duration) error
}
// StructLogger is an EVM state logger and implements Tracer.
@ -82,7 +101,7 @@ type StructLogger struct {
changedValues map[common.Address]Storage
}
// NewLogger returns a new logger
// NewStructLogger returns a new logger
func NewStructLogger(cfg *LogConfig) *StructLogger {
logger := &StructLogger{
changedValues: make(map[common.Address]Storage),
@ -93,9 +112,9 @@ func NewStructLogger(cfg *LogConfig) *StructLogger {
return logger
}
// captureState logs a new structured log message and pushes it out to the environment
// CaptureState logs a new structured log message and pushes it out to the environment
//
// captureState also tracks SSTORE ops to track dirty values.
// CaptureState also tracks SSTORE ops to track dirty values.
func (l *StructLogger) CaptureState(env *EVM, pc uint64, op OpCode, gas, cost uint64, memory *Memory, stack *Stack, contract *Contract, depth int, err error) error {
// check if already accumulated the specified number of logs
if l.cfg.Limit != 0 && l.cfg.Limit <= len(l.logs) {
@ -158,12 +177,17 @@ func (l *StructLogger) CaptureState(env *EVM, pc uint64, op OpCode, gas, cost ui
}
}
// create a new snapshot of the EVM.
log := StructLog{pc, op, gas, cost, mem, stck, storage, env.depth, err}
log := StructLog{pc, op, gas, cost, mem, memory.Len(), stck, storage, depth, err}
l.logs = append(l.logs, log)
return nil
}
func (l *StructLogger) CaptureEnd(output []byte, gasUsed uint64, t time.Duration) error {
fmt.Printf("0x%x", output)
return nil
}
// StructLogs returns a list of captured log entries
func (l *StructLogger) StructLogs() []StructLog {
return l.logs


@ -125,7 +125,7 @@ func Execute(code, input []byte, cfg *Config) ([]byte, *state.StateDB, error) {
}
// Create executes the code using the EVM create method
func Create(input []byte, cfg *Config) ([]byte, common.Address, error) {
func Create(input []byte, cfg *Config) ([]byte, common.Address, uint64, error) {
if cfg == nil {
cfg = new(Config)
}
@ -141,13 +141,13 @@ func Create(input []byte, cfg *Config) ([]byte, common.Address, error) {
)
// Call the code with the given configuration.
code, address, _, err := vmenv.Create(
code, address, leftOverGas, err := vmenv.Create(
sender,
input,
cfg.GasLimit,
cfg.Value,
)
return code, address, err
return code, address, leftOverGas, err
}
// Call executes the code given by the contract's address. It will return the
@ -155,14 +155,14 @@ func Create(input []byte, cfg *Config) ([]byte, common.Address, error) {
//
// Call, unlike Execute, requires a config and also requires the State field to
// be set.
func Call(address common.Address, input []byte, cfg *Config) ([]byte, error) {
func Call(address common.Address, input []byte, cfg *Config) ([]byte, uint64, error) {
setDefaults(cfg)
vmenv := NewEnv(cfg, cfg.State)
sender := cfg.State.GetOrNewStateObject(cfg.Origin)
// Call the code with the given configuration.
ret, _, err := vmenv.Call(
ret, leftOverGas, err := vmenv.Call(
sender,
address,
input,
@ -170,5 +170,5 @@ func Call(address common.Address, input []byte, cfg *Config) ([]byte, error) {
cfg.Value,
)
return ret, err
return ret, leftOverGas, err
}
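
Both entry points now surface the leftover gas of the underlying EVM call instead of discarding it. A minimal sketch against the updated Create signature, with deliberately trivial init code (a single STOP) and relying on the defaults the runtime package fills in:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/vm/runtime"
)

func main() {
	// Deploy a contract whose init code is a single STOP; it produces an
	// empty runtime code but still reports the gas left over.
	code, addr, leftOverGas, err := runtime.Create([]byte{0x00}, nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime code: %x addr: %x gas left: %d\n", code, addr, leftOverGas)
}
```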


@ -106,7 +106,7 @@ func TestCall(t *testing.T) {
byte(vm.RETURN),
})
ret, err := Call(address, nil, &Config{State: state})
ret, _, err := Call(address, nil, &Config{State: state})
if err != nil {
t.Fatal("didn't expect error", err)
}


@ -68,9 +68,6 @@ func Keccak512(data ...[]byte) []byte {
return d.Sum(nil)
}
// Deprecated: For backward compatibility as other packages depend on these
func Sha3Hash(data ...[]byte) common.Hash { return Keccak256Hash(data...) }
// Creates an ethereum address given the bytes and the nonce
func CreateAddress(b common.Address, nonce uint64) common.Address {
data, _ := rlp.EncodeToBytes([]interface{}{b, nonce})
@ -79,9 +76,24 @@ func CreateAddress(b common.Address, nonce uint64) common.Address {
// ToECDSA creates a private key with the given D value.
func ToECDSA(d []byte) (*ecdsa.PrivateKey, error) {
return toECDSA(d, true)
}
// ToECDSAUnsafe blindly converts a binary blob to a private key. It should almost
// never be used unless you are sure the input is valid and want to avoid hitting
// errors due to bad origin encoding (0 prefixes cut off).
func ToECDSAUnsafe(d []byte) *ecdsa.PrivateKey {
priv, _ := toECDSA(d, false)
return priv
}
// toECDSA creates a private key with the given D value. The strict parameter
// controls whether the key's length is enforced at the curve size, or whether
// legacy encodings (0 prefixes) are also accepted.
func toECDSA(d []byte, strict bool) (*ecdsa.PrivateKey, error) {
priv := new(ecdsa.PrivateKey)
priv.PublicKey.Curve = S256()
if 8*len(d) != priv.Params().BitSize {
if strict && 8*len(d) != priv.Params().BitSize {
return nil, fmt.Errorf("invalid length, need %d bits", priv.Params().BitSize)
}
priv.D = new(big.Int).SetBytes(d)
@ -89,11 +101,12 @@ func ToECDSA(d []byte) (*ecdsa.PrivateKey, error) {
return priv, nil
}
func FromECDSA(prv *ecdsa.PrivateKey) []byte {
if prv == nil {
// FromECDSA exports a private key into a binary dump.
func FromECDSA(priv *ecdsa.PrivateKey) []byte {
if priv == nil {
return nil
}
return math.PaddedBigBytes(prv.D, 32)
return math.PaddedBigBytes(priv.D, priv.Params().BitSize/8)
}
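
The strict/lenient split matters for keys whose serialized form dropped leading zero bytes: ToECDSA rejects anything that is not exactly curve-sized, while ToECDSAUnsafe takes the bytes as-is. A hedged sketch of the difference; the 31-byte scalar is contrived:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// A 31-byte scalar, as might result from an encoding that trimmed a leading zero.
	short := make([]byte, 31)
	short[30] = 1

	if _, err := crypto.ToECDSA(short); err != nil {
		fmt.Println("strict parse rejects it:", err)
	}
	// The unsafe variant accepts it, interpreting the bytes as a big-endian integer.
	key := crypto.ToECDSAUnsafe(short)
	fmt.Printf("lenient parse yields D = %v\n", key.D)
}
```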
func ToECDSAPub(pub []byte) *ecdsa.PublicKey {
@ -121,7 +134,6 @@ func HexToECDSA(hexkey string) (*ecdsa.PrivateKey, error) {
}
// LoadECDSA loads a secp256k1 private key from the given file.
// The key data is expected to be hex-encoded.
func LoadECDSA(file string) (*ecdsa.PrivateKey, error) {
buf := make([]byte, 64)
fd, err := os.Open(file)


@ -36,7 +36,7 @@ var testPrivHex = "289c2857d4598e37fb9647507e47a309d6133539bf21a8b9cb6df88fd5232
// These tests are sanity checks.
// They should ensure that we don't e.g. use Sha3-224 instead of Sha3-256
// and that the sha3 library uses keccak-f permutation.
func TestSha3Hash(t *testing.T) {
func TestKeccak256Hash(t *testing.T) {
msg := []byte("abc")
exp, _ := hex.DecodeString("4e03657aea45a94fc7d47ba826c8d667c0d1e6e33a64a036ec44f58fa12d6c45")
checkhash(t, "Sha3-256-array", func(in []byte) []byte { h := Keccak256Hash(in); return h[:] }, msg, exp)


@ -88,7 +88,6 @@ type Ethereum struct {
func (s *Ethereum) AddLesServer(ls LesServer) {
s.lesServer = ls
s.protocolManager.lesServer = ls
}
// New creates a new Ethereum object (including the
@ -201,10 +200,13 @@ func makeExtraData(extra []byte) []byte {
// CreateDB creates the chain database.
func CreateDB(ctx *node.ServiceContext, config *Config, name string) (ethdb.Database, error) {
db, err := ctx.OpenDatabase(name, config.DatabaseCache, config.DatabaseHandles)
if err != nil {
return nil, err
}
if db, ok := db.(*ethdb.LDBDatabase); ok {
db.Meter("eth/db/chaindata/")
}
return db, err
return db, nil
}
// CreateConsensusEngine creates the required type of consensus engine instance for an Ethereum service


@ -34,7 +34,6 @@ import (
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/trie"
"github.com/rcrowley/go-metrics"
)
@ -99,8 +98,9 @@ type Downloader struct {
mode SyncMode // Synchronisation mode defining the strategy used (per sync cycle)
mux *event.TypeMux // Event multiplexer to announce sync operation events
queue *queue // Scheduler for selecting the hashes to download
peers *peerSet // Set of active peers from which download can proceed
queue *queue // Scheduler for selecting the hashes to download
peers *peerSet // Set of active peers from which download can proceed
stateDB ethdb.Database
fsPivotLock *types.Header // Pivot header on critical section entry (cannot change between retries)
fsPivotFails uint32 // Number of subsequent fast sync failures in the critical section
@ -109,9 +109,9 @@ type Downloader struct {
rttConfidence uint64 // Confidence in the estimated RTT (unit: millionths to allow atomic ops)
// Statistics
syncStatsChainOrigin uint64 // Origin block number where syncing started at
syncStatsChainHeight uint64 // Highest block number known when syncing started
syncStatsStateDone uint64 // Number of state trie entries already pulled
syncStatsChainOrigin uint64 // Origin block number where syncing started at
syncStatsChainHeight uint64 // Highest block number known when syncing started
syncStatsState stateSyncStats
syncStatsLock sync.RWMutex // Lock protecting the sync stats fields
// Callbacks
@ -136,16 +136,18 @@ type Downloader struct {
notified int32
// Channels
newPeerCh chan *peer
headerCh chan dataPack // [eth/62] Channel receiving inbound block headers
bodyCh chan dataPack // [eth/62] Channel receiving inbound block bodies
receiptCh chan dataPack // [eth/63] Channel receiving inbound receipts
stateCh chan dataPack // [eth/63] Channel receiving inbound node state data
bodyWakeCh chan bool // [eth/62] Channel to signal the block body fetcher of new tasks
receiptWakeCh chan bool // [eth/63] Channel to signal the receipt fetcher of new tasks
stateWakeCh chan bool // [eth/63] Channel to signal the state fetcher of new tasks
headerProcCh chan []*types.Header // [eth/62] Channel to feed the header processor new tasks
// for stateFetcher
stateSyncStart chan *stateSync
trackStateReq chan *stateReq
stateCh chan dataPack // [eth/63] Channel receiving inbound node state data
// Cancellation and termination
cancelPeer string // Identifier of the peer currently being used as the master (cancel on drop)
cancelCh chan struct{} // Channel to cancel mid-flight syncs
@ -170,8 +172,9 @@ func New(mode SyncMode, stateDb ethdb.Database, mux *event.TypeMux, hasHeader he
dl := &Downloader{
mode: mode,
mux: mux,
queue: newQueue(stateDb),
queue: newQueue(),
peers: newPeerSet(),
stateDB: stateDb,
rttEstimate: uint64(rttMaxEstimate),
rttConfidence: uint64(1000000),
hasHeader: hasHeader,
@ -188,18 +191,20 @@ func New(mode SyncMode, stateDb ethdb.Database, mux *event.TypeMux, hasHeader he
insertReceipts: insertReceipts,
rollback: rollback,
dropPeer: dropPeer,
newPeerCh: make(chan *peer, 1),
headerCh: make(chan dataPack, 1),
bodyCh: make(chan dataPack, 1),
receiptCh: make(chan dataPack, 1),
stateCh: make(chan dataPack, 1),
bodyWakeCh: make(chan bool, 1),
receiptWakeCh: make(chan bool, 1),
stateWakeCh: make(chan bool, 1),
headerProcCh: make(chan []*types.Header, 1),
quitCh: make(chan struct{}),
// for stateFetcher
stateSyncStart: make(chan *stateSync),
trackStateReq: make(chan *stateReq),
stateCh: make(chan dataPack),
}
go dl.qosTuner()
go dl.stateFetcher()
return dl
}
@ -211,9 +216,6 @@ func New(mode SyncMode, stateDb ethdb.Database, mux *event.TypeMux, hasHeader he
// of processed and the total number of known states are also returned. Otherwise
// these are zero.
func (d *Downloader) Progress() ethereum.SyncProgress {
// Fetch the pending state count outside of the lock to prevent unforeseen deadlocks
pendingStates := uint64(d.queue.PendingNodeData())
// Lock the current stats and return the progress
d.syncStatsLock.RLock()
defer d.syncStatsLock.RUnlock()
@ -231,8 +233,8 @@ func (d *Downloader) Progress() ethereum.SyncProgress {
StartingBlock: d.syncStatsChainOrigin,
CurrentBlock: current,
HighestBlock: d.syncStatsChainHeight,
PulledStates: d.syncStatsStateDone,
KnownStates: d.syncStatsStateDone + pendingStates,
PulledStates: d.syncStatsState.processed,
KnownStates: d.syncStatsState.processed + d.syncStatsState.pending,
}
}
@ -324,13 +326,13 @@ func (d *Downloader) synchronise(id string, hash common.Hash, td *big.Int, mode
d.queue.Reset()
d.peers.Reset()
for _, ch := range []chan bool{d.bodyWakeCh, d.receiptWakeCh, d.stateWakeCh} {
for _, ch := range []chan bool{d.bodyWakeCh, d.receiptWakeCh} {
select {
case <-ch:
default:
}
}
for _, ch := range []chan dataPack{d.headerCh, d.bodyCh, d.receiptCh, d.stateCh} {
for _, ch := range []chan dataPack{d.headerCh, d.bodyCh, d.receiptCh} {
for empty := false; !empty; {
select {
case <-ch:
@ -439,30 +441,40 @@ func (d *Downloader) syncWithPeer(p *peer, hash common.Hash, td *big.Int) (err e
if d.syncInitHook != nil {
d.syncInitHook(origin, height)
}
return d.spawnSync(origin+1,
func() error { return d.fetchHeaders(p, origin+1) }, // Headers are always retrieved
func() error { return d.processHeaders(origin+1, td) }, // Headers are always retrieved
func() error { return d.fetchBodies(origin + 1) }, // Bodies are retrieved during normal and fast sync
func() error { return d.fetchReceipts(origin + 1) }, // Receipts are retrieved during fast sync
func() error { return d.fetchNodeData() }, // Node state data is retrieved during fast sync
)
fetchers := []func() error{
func() error { return d.fetchHeaders(p, origin+1) }, // Headers are always retrieved
func() error { return d.fetchBodies(origin + 1) }, // Bodies are retrieved during normal and fast sync
func() error { return d.fetchReceipts(origin + 1) }, // Receipts are retrieved during fast sync
func() error { return d.processHeaders(origin+1, td) },
}
if d.mode == FastSync {
fetchers = append(fetchers, func() error { return d.processFastSyncContent(latest) })
} else if d.mode == FullSync {
fetchers = append(fetchers, d.processFullSyncContent)
}
err = d.spawnSync(fetchers)
if err != nil && d.mode == FastSync && d.fsPivotLock != nil {
// If sync failed in the critical section, bump the fail counter.
atomic.AddUint32(&d.fsPivotFails, 1)
}
return err
}
// spawnSync runs d.process and all given fetcher functions to completion in
// separate goroutines, returning the first error that appears.
func (d *Downloader) spawnSync(origin uint64, fetchers ...func() error) error {
func (d *Downloader) spawnSync(fetchers []func() error) error {
var wg sync.WaitGroup
errc := make(chan error, len(fetchers)+1)
wg.Add(len(fetchers) + 1)
go func() { defer wg.Done(); errc <- d.processContent() }()
errc := make(chan error, len(fetchers))
wg.Add(len(fetchers))
for _, fn := range fetchers {
fn := fn
go func() { defer wg.Done(); errc <- fn() }()
}
// Wait for the first error, then terminate the others.
var err error
for i := 0; i < len(fetchers)+1; i++ {
if i == len(fetchers) {
for i := 0; i < len(fetchers); i++ {
if i == len(fetchers)-1 {
// Close the queue when all fetchers have exited.
// This will cause the block processor to end when
// it has processed the queue.
@ -475,11 +487,6 @@ func (d *Downloader) spawnSync(origin uint64, fetchers ...func() error) error {
d.queue.Close()
d.Cancel()
wg.Wait()
// If sync failed in the critical section, bump the fail counter
if err != nil && d.mode == FastSync && d.fsPivotLock != nil {
atomic.AddUint32(&d.fsPivotFails, 1)
}
return err
}
@ -552,7 +559,6 @@ func (d *Downloader) fetchHeight(p *peer) (*types.Header, error) {
return nil, errTimeout
case <-d.bodyCh:
case <-d.stateCh:
case <-d.receiptCh:
// Out of bounds delivery, ignore
}
@ -649,7 +655,6 @@ func (d *Downloader) findAncestor(p *peer, height uint64) (uint64, error) {
return 0, errTimeout
case <-d.bodyCh:
case <-d.stateCh:
case <-d.receiptCh:
// Out of bounds delivery, ignore
}
@ -714,7 +719,6 @@ func (d *Downloader) findAncestor(p *peer, height uint64) (uint64, error) {
return 0, errTimeout
case <-d.bodyCh:
case <-d.stateCh:
case <-d.receiptCh:
// Out of bounds delivery, ignore
}
@ -827,7 +831,7 @@ func (d *Downloader) fetchHeaders(p *peer, from uint64) error {
d.dropPeer(p.id)
// Finish the sync gracefully instead of dumping the gathered data though
for _, ch := range []chan bool{d.bodyWakeCh, d.receiptWakeCh, d.stateWakeCh} {
for _, ch := range []chan bool{d.bodyWakeCh, d.receiptWakeCh} {
select {
case ch <- false:
case <-d.cancelCh:
@ -927,68 +931,6 @@ func (d *Downloader) fetchReceipts(from uint64) error {
return err
}
// fetchNodeData iteratively downloads the scheduled state trie nodes, taking any
// available peers, reserving a chunk of nodes for each, waiting for delivery and
// also periodically checking for timeouts.
func (d *Downloader) fetchNodeData() error {
log.Debug("Downloading node state data")
var (
deliver = func(packet dataPack) (int, error) {
start := time.Now()
return d.queue.DeliverNodeData(packet.PeerId(), packet.(*statePack).states, func(delivered int, progressed bool, err error) {
// If the peer returned old-requested data, forgive
if err == trie.ErrNotRequested {
log.Debug("Forgiving reply to stale state request", "peer", packet.PeerId())
return
}
if err != nil {
// If the node data processing failed, the root hash is very wrong, abort
log.Error("State processing failed", "peer", packet.PeerId(), "err", err)
d.Cancel()
return
}
// Processing succeeded, notify state fetcher of continuation
pending := d.queue.PendingNodeData()
if pending > 0 {
select {
case d.stateWakeCh <- true:
default:
}
}
d.syncStatsLock.Lock()
d.syncStatsStateDone += uint64(delivered)
syncStatsStateDone := d.syncStatsStateDone // Thread safe copy for the log below
d.syncStatsLock.Unlock()
// If real database progress was made, reset any fast-sync pivot failure
if progressed && atomic.LoadUint32(&d.fsPivotFails) > 1 {
log.Debug("Fast-sync progressed, resetting fail counter", "previous", atomic.LoadUint32(&d.fsPivotFails))
atomic.StoreUint32(&d.fsPivotFails, 1) // Don't ever reset to 0, as that will unlock the pivot block
}
// Log a message to the user and return
if delivered > 0 {
log.Info("Imported new state entries", "count", delivered, "elapsed", common.PrettyDuration(time.Since(start)), "processed", syncStatsStateDone, "pending", pending)
}
})
}
expire = func() map[string]int { return d.queue.ExpireNodeData(d.requestTTL()) }
throttle = func() bool { return false }
reserve = func(p *peer, count int) (*fetchRequest, bool, error) {
return d.queue.ReserveNodeData(p, count), false, nil
}
fetch = func(p *peer, req *fetchRequest) error { return p.FetchNodeData(req) }
capacity = func(p *peer) int { return p.NodeDataCapacity(d.requestRTT()) }
setIdle = func(p *peer, accepted int) { p.SetNodeDataIdle(accepted) }
)
err := d.fetchParts(errCancelStateFetch, d.stateCh, deliver, d.stateWakeCh, expire,
d.queue.PendingNodeData, d.queue.InFlightNodeData, throttle, reserve, nil, fetch,
d.queue.CancelNodeData, capacity, d.peers.NodeDataIdlePeers, setIdle, "states")
log.Debug("Node state data download terminated", "err", err)
return err
}
// fetchParts iteratively downloads scheduled block parts, taking any available
// peers, reserving a chunk of fetch requests for each, waiting for delivery and
// also periodically checking for timeouts.
@ -1229,7 +1171,7 @@ func (d *Downloader) processHeaders(origin uint64, td *big.Int) error {
// Terminate header processing if we synced up
if len(headers) == 0 {
// Notify everyone that headers are fully processed
for _, ch := range []chan bool{d.bodyWakeCh, d.receiptWakeCh, d.stateWakeCh} {
for _, ch := range []chan bool{d.bodyWakeCh, d.receiptWakeCh} {
select {
case ch <- false:
case <-d.cancelCh:
@ -1341,7 +1283,7 @@ func (d *Downloader) processHeaders(origin uint64, td *big.Int) error {
origin += uint64(limit)
}
// Signal the content downloaders of the availability of new tasks
for _, ch := range []chan bool{d.bodyWakeCh, d.receiptWakeCh, d.stateWakeCh} {
for _, ch := range []chan bool{d.bodyWakeCh, d.receiptWakeCh} {
select {
case ch <- true:
default:
@ -1351,73 +1293,153 @@ func (d *Downloader) processHeaders(origin uint64, td *big.Int) error {
}
}
// processContent takes fetch results from the queue and tries to import them
// into the chain. The type of import operation will depend on the result contents.
func (d *Downloader) processContent() error {
pivot := d.queue.FastSyncPivot()
// processFullSyncContent takes fetch results from the queue and imports them into the chain.
func (d *Downloader) processFullSyncContent() error {
for {
results := d.queue.WaitResults()
if len(results) == 0 {
return nil // queue empty
return nil
}
if d.chainInsertHook != nil {
d.chainInsertHook(results)
}
// Actually import the blocks
first, last := results[0].Header, results[len(results)-1].Header
if err := d.importBlockResults(results); err != nil {
return err
}
}
}
func (d *Downloader) importBlockResults(results []*fetchResult) error {
for len(results) != 0 {
// Check for any termination requests. This makes clean shutdown faster.
select {
case <-d.quitCh:
return errCancelContentProcessing
default:
}
// Retrieve a batch of results to import
items := int(math.Min(float64(len(results)), float64(maxResultsProcess)))
first, last := results[0].Header, results[items-1].Header
log.Debug("Inserting downloaded chain", "items", len(results),
"firstnum", first.Number, "firsthash", first.Hash(),
"lastnum", last.Number, "lasthash", last.Hash(),
)
for len(results) != 0 {
// Check for any termination requests
select {
case <-d.quitCh:
return errCancelContentProcessing
default:
blocks := make([]*types.Block, items)
for i, result := range results[:items] {
blocks[i] = types.NewBlockWithHeader(result.Header).WithBody(result.Transactions, result.Uncles)
}
if index, err := d.insertBlocks(blocks); err != nil {
log.Debug("Downloaded item processing failed", "number", results[index].Header.Number, "hash", results[index].Header.Hash(), "err", err)
return errInvalidChain
}
// Shift the results to the next batch
results = results[items:]
}
return nil
}
// processFastSyncContent takes fetch results from the queue and writes them to the
// database. It also controls the synchronisation of state nodes of the pivot block.
func (d *Downloader) processFastSyncContent(latest *types.Header) error {
// Start syncing state of the reported head block.
// This should get us most of the state of the pivot block.
stateSync := d.syncState(latest.Root)
defer stateSync.Cancel()
go func() {
if err := stateSync.Wait(); err != nil {
d.queue.Close() // wake up WaitResults
}
}()
pivot := d.queue.FastSyncPivot()
for {
results := d.queue.WaitResults()
if len(results) == 0 {
return stateSync.Cancel()
}
if d.chainInsertHook != nil {
d.chainInsertHook(results)
}
P, beforeP, afterP := splitAroundPivot(pivot, results)
if err := d.commitFastSyncData(beforeP, stateSync); err != nil {
return err
}
if P != nil {
stateSync.Cancel()
if err := d.commitPivotBlock(P); err != nil {
return err
}
// Retrieve a batch of results to import
var (
blocks = make([]*types.Block, 0, maxResultsProcess)
receipts = make([]types.Receipts, 0, maxResultsProcess)
)
items := int(math.Min(float64(len(results)), float64(maxResultsProcess)))
for _, result := range results[:items] {
switch {
case d.mode == FullSync:
blocks = append(blocks, types.NewBlockWithHeader(result.Header).WithBody(result.Transactions, result.Uncles))
case d.mode == FastSync:
blocks = append(blocks, types.NewBlockWithHeader(result.Header).WithBody(result.Transactions, result.Uncles))
if result.Header.Number.Uint64() <= pivot {
receipts = append(receipts, result.Receipts)
}
}
}
// Try to process the results, aborting if there's an error
var (
err error
index int
)
switch {
case len(receipts) > 0:
index, err = d.insertReceipts(blocks, receipts)
if err == nil && blocks[len(blocks)-1].NumberU64() == pivot {
log.Debug("Committing block as new head", "number", blocks[len(blocks)-1].Number(), "hash", blocks[len(blocks)-1].Hash())
index, err = len(blocks)-1, d.commitHeadBlock(blocks[len(blocks)-1].Hash())
}
default:
index, err = d.insertBlocks(blocks)
}
if err != nil {
log.Debug("Downloaded item processing failed", "number", results[index].Header.Number, "hash", results[index].Header.Hash(), "err", err)
return errInvalidChain
}
// Shift the results to the next batch
results = results[items:]
}
if err := d.importBlockResults(afterP); err != nil {
return err
}
}
}
func splitAroundPivot(pivot uint64, results []*fetchResult) (p *fetchResult, before, after []*fetchResult) {
for _, result := range results {
num := result.Header.Number.Uint64()
switch {
case num < pivot:
before = append(before, result)
case num == pivot:
p = result
default:
after = append(after, result)
}
}
return p, before, after
}
func (d *Downloader) commitFastSyncData(results []*fetchResult, stateSync *stateSync) error {
for len(results) != 0 {
// Check for any termination requests.
select {
case <-d.quitCh:
return errCancelContentProcessing
case <-stateSync.done:
if err := stateSync.Wait(); err != nil {
return err
}
default:
}
// Retrieve a batch of results to import
items := int(math.Min(float64(len(results)), float64(maxResultsProcess)))
first, last := results[0].Header, results[items-1].Header
log.Debug("Inserting fast-sync blocks", "items", len(results),
"firstnum", first.Number, "firsthash", first.Hash(),
"lastnumn", last.Number, "lasthash", last.Hash(),
)
blocks := make([]*types.Block, items)
receipts := make([]types.Receipts, items)
for i, result := range results[:items] {
blocks[i] = types.NewBlockWithHeader(result.Header).WithBody(result.Transactions, result.Uncles)
receipts[i] = result.Receipts
}
if index, err := d.insertReceipts(blocks, receipts); err != nil {
log.Debug("Downloaded item processing failed", "number", results[index].Header.Number, "hash", results[index].Header.Hash(), "err", err)
return errInvalidChain
}
// Shift the results to the next batch
results = results[items:]
}
return nil
}
func (d *Downloader) commitPivotBlock(result *fetchResult) error {
b := types.NewBlockWithHeader(result.Header).WithBody(result.Transactions, result.Uncles)
// Sync the pivot block state. This should complete reasonably quickly because
// we've already synced up to the reported head block state earlier.
if err := d.syncState(b.Root()).Wait(); err != nil {
return err
}
log.Debug("Committing fast sync pivot as new head", "number", b.Number(), "hash", b.Hash())
if _, err := d.insertReceipts([]*types.Block{b}, []types.Receipts{result.Receipts}); err != nil {
return err
}
return d.commitHeadBlock(b.Hash())
}
// DeliverHeaders injects a new batch of block headers received from a remote
// node into the download schedule.
func (d *Downloader) DeliverHeaders(id string, headers []*types.Header) (err error) {


@ -38,8 +38,6 @@ var (
receiptDropMeter = metrics.NewMeter("eth/downloader/receipts/drop")
receiptTimeoutMeter = metrics.NewMeter("eth/downloader/receipts/timeout")
stateInMeter = metrics.NewMeter("eth/downloader/states/in")
stateReqTimer = metrics.NewTimer("eth/downloader/states/req")
stateDropMeter = metrics.NewMeter("eth/downloader/states/drop")
stateTimeoutMeter = metrics.NewMeter("eth/downloader/states/timeout")
stateInMeter = metrics.NewMeter("eth/downloader/states/in")
stateDropMeter = metrics.NewMeter("eth/downloader/states/drop")
)


@ -30,6 +30,7 @@ import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
)
@ -195,7 +196,7 @@ func (p *peer) FetchReceipts(request *fetchRequest) error {
}
// FetchNodeData sends a node state data retrieval request to the remote peer.
func (p *peer) FetchNodeData(request *fetchRequest) error {
func (p *peer) FetchNodeData(hashes []common.Hash) error {
// Sanity check the protocol version
if p.version < 63 {
panic(fmt.Sprintf("node data fetch [eth/63+] requested on eth/%d", p.version))
@ -205,14 +206,7 @@ func (p *peer) FetchNodeData(request *fetchRequest) error {
return errAlreadyFetching
}
p.stateStarted = time.Now()
// Convert the hash set to a retrievable slice
hashes := make([]common.Hash, 0, len(request.Hashes))
for hash := range request.Hashes {
hashes = append(hashes, hash)
}
go p.getNodeData(hashes)
return nil
}
@ -343,8 +337,9 @@ func (p *peer) Lacks(hash common.Hash) bool {
// peerSet represents the collection of active peer participating in the chain
// download procedure.
type peerSet struct {
peers map[string]*peer
lock sync.RWMutex
peers map[string]*peer
newPeerFeed event.Feed
lock sync.RWMutex
}
// newPeerSet creates a new peer set to track the active download sources.
@ -354,6 +349,10 @@ func newPeerSet() *peerSet {
}
}
func (ps *peerSet) SubscribeNewPeers(ch chan<- *peer) event.Subscription {
return ps.newPeerFeed.Subscribe(ch)
}
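
SubscribeNewPeers builds on event.Feed, go-ethereum's one-to-many broadcast primitive: Send delivers a value to every subscribed channel and reports how many subscribers received it. A minimal, self-contained sketch of the pattern:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/event"
)

func main() {
	var feed event.Feed
	ch := make(chan int, 1)

	sub := feed.Subscribe(ch) // the channel's element type fixes the feed's type
	defer sub.Unsubscribe()

	sent := feed.Send(42) // blocks until every subscriber has received the value
	fmt.Println("delivered to", sent, "subscriber(s):", <-ch)
}
```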
// Reset iterates over the current peer set, and resets each of the known peers
// to prepare for a next batch of block retrieval.
func (ps *peerSet) Reset() {
@ -377,9 +376,8 @@ func (ps *peerSet) Register(p *peer) error {
// Register the new peer with some meaningful defaults
ps.lock.Lock()
defer ps.lock.Unlock()
if _, ok := ps.peers[p.id]; ok {
ps.lock.Unlock()
return errAlreadyRegistered
}
if len(ps.peers) > 0 {
@ -399,6 +397,9 @@ func (ps *peerSet) Register(p *peer) error {
p.stateThroughput /= float64(len(ps.peers))
}
ps.peers[p.id] = p
ps.lock.Unlock()
ps.newPeerFeed.Send(p)
return nil
}


@ -26,20 +26,13 @@ import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/trie"
"github.com/rcrowley/go-metrics"
"gopkg.in/karalabe/cookiejar.v2/collections/prque"
)
var (
blockCacheLimit = 8192 // Maximum number of blocks to cache before throttling the download
maxInFlightStates = 8192 // Maximum number of state downloads to allow concurrently
)
var blockCacheLimit = 8192 // Maximum number of blocks to cache before throttling the download
var (
errNoFetchesPending = errors.New("no fetches pending")
@ -94,15 +87,6 @@ type queue struct {
receiptPendPool map[string]*fetchRequest // [eth/63] Currently pending receipt retrieval operations
receiptDonePool map[common.Hash]struct{} // [eth/63] Set of the completed receipt fetches
stateTaskIndex int // [eth/63] Counter indexing the added hashes to ensure prioritised retrieval order
stateTaskPool map[common.Hash]int // [eth/63] Pending node data retrieval tasks, mapping to their priority
stateTaskQueue *prque.Prque // [eth/63] Priority queue of the hashes to fetch the node data for
statePendPool map[string]*fetchRequest // [eth/63] Currently pending node data retrieval operations
stateDatabase ethdb.Database // [eth/63] Trie database to populate during state reassembly
stateScheduler *state.StateSync // [eth/63] State trie synchronisation scheduler and integrator
stateWriters int // [eth/63] Number of running state DB writer goroutines
resultCache []*fetchResult // Downloaded but not yet delivered fetch results
resultOffset uint64 // Offset of the first cached fetch result in the block chain
@ -112,7 +96,7 @@ type queue struct {
}
// newQueue creates a new download queue for scheduling block retrieval.
func newQueue(stateDb ethdb.Database) *queue {
func newQueue() *queue {
lock := new(sync.Mutex)
return &queue{
headerPendPool: make(map[string]*fetchRequest),
@ -125,10 +109,6 @@ func newQueue(stateDb ethdb.Database) *queue {
receiptTaskQueue: prque.New(),
receiptPendPool: make(map[string]*fetchRequest),
receiptDonePool: make(map[common.Hash]struct{}),
stateTaskPool: make(map[common.Hash]int),
stateTaskQueue: prque.New(),
statePendPool: make(map[string]*fetchRequest),
stateDatabase: stateDb,
resultCache: make([]*fetchResult, blockCacheLimit),
active: sync.NewCond(lock),
lock: lock,
@ -158,12 +138,6 @@ func (q *queue) Reset() {
q.receiptPendPool = make(map[string]*fetchRequest)
q.receiptDonePool = make(map[common.Hash]struct{})
q.stateTaskIndex = 0
q.stateTaskPool = make(map[common.Hash]int)
q.stateTaskQueue.Reset()
q.statePendPool = make(map[string]*fetchRequest)
q.stateScheduler = nil
q.resultCache = make([]*fetchResult, blockCacheLimit)
q.resultOffset = 0
}
@ -201,28 +175,6 @@ func (q *queue) PendingReceipts() int {
return q.receiptTaskQueue.Size()
}
// PendingNodeData retrieves the number of node data entries pending for retrieval.
func (q *queue) PendingNodeData() int {
q.lock.Lock()
defer q.lock.Unlock()
return q.pendingNodeDataLocked()
}
// pendingNodeDataLocked retrieves the number of node data entries pending for retrieval.
// The caller must hold q.lock.
func (q *queue) pendingNodeDataLocked() int {
var n int
if q.stateScheduler != nil {
n = q.stateScheduler.Pending()
}
// Ensure that PendingNodeData doesn't return 0 until all state is written.
if q.stateWriters > 0 {
n++
}
return n
}
// InFlightHeaders retrieves whether there are header fetch requests currently
// in flight.
func (q *queue) InFlightHeaders() bool {
@ -250,28 +202,15 @@ func (q *queue) InFlightReceipts() bool {
return len(q.receiptPendPool) > 0
}
// InFlightNodeData retrieves whether there are node data entry fetch requests
// currently in flight.
func (q *queue) InFlightNodeData() bool {
q.lock.Lock()
defer q.lock.Unlock()
return len(q.statePendPool)+q.stateWriters > 0
}
// Idle returns if the queue is fully idle or has some data still inside. This
// method is used by the tester to detect termination events.
// Idle returns if the queue is fully idle or has some data still inside.
func (q *queue) Idle() bool {
q.lock.Lock()
defer q.lock.Unlock()
queued := q.blockTaskQueue.Size() + q.receiptTaskQueue.Size() + q.stateTaskQueue.Size()
pending := len(q.blockPendPool) + len(q.receiptPendPool) + len(q.statePendPool)
queued := q.blockTaskQueue.Size() + q.receiptTaskQueue.Size()
pending := len(q.blockPendPool) + len(q.receiptPendPool)
cached := len(q.blockDonePool) + len(q.receiptDonePool)
if q.stateScheduler != nil {
queued += q.stateScheduler.Pending()
}
return (queued + pending + cached) == 0
}
@ -389,19 +328,6 @@ func (q *queue) Schedule(headers []*types.Header, from uint64) []*types.Header {
q.receiptTaskPool[hash] = header
q.receiptTaskQueue.Push(header, -float32(header.Number.Uint64()))
}
if q.mode == FastSync && header.Number.Uint64() == q.fastSyncPivot {
// Pivoting point of the fast sync, switch the state retrieval to this
log.Debug("Switching state downloads to new block", "number", header.Number, "hash", hash)
q.stateTaskIndex = 0
q.stateTaskPool = make(map[common.Hash]int)
q.stateTaskQueue.Reset()
for _, req := range q.statePendPool {
req.Hashes = make(map[common.Hash]int) // Make sure executing requests fail, but don't disappear
}
q.stateScheduler = state.NewStateSync(header.Root, q.stateDatabase)
}
inserts = append(inserts, header)
q.headerHead = hash
from++
@ -448,31 +374,15 @@ func (q *queue) countProcessableItems() int {
if result == nil || result.Pending > 0 {
return i
}
// Special handling for the fast-sync pivot block:
if q.mode == FastSync {
bnum := result.Header.Number.Uint64()
if bnum == q.fastSyncPivot {
// If the state of the pivot block is not
// available yet, we cannot proceed and return 0.
//
// Stop before processing the pivot block to ensure that
// resultCache has space for fsHeaderForceVerify items. Not
// doing this could leave us unable to download the required
// amount of headers.
if i > 0 || len(q.stateTaskPool) > 0 || q.pendingNodeDataLocked() > 0 {
// Stop before processing the pivot block to ensure that
// resultCache has space for fsHeaderForceVerify items. Not
// doing this could leave us unable to download the required
// amount of headers.
if q.mode == FastSync && result.Header.Number.Uint64() == q.fastSyncPivot {
for j := 0; j < fsHeaderForceVerify; j++ {
if i+j+1 >= len(q.resultCache) || q.resultCache[i+j+1] == nil {
return i
}
for j := 0; j < fsHeaderForceVerify; j++ {
if i+j+1 >= len(q.resultCache) || q.resultCache[i+j+1] == nil {
return i
}
}
}
// If we're just the fast sync pivot, stop as well
// because the following batch needs different insertion.
// This simplifies handling the switchover in d.process.
if bnum == q.fastSyncPivot+1 && i > 0 {
return i
}
}
}
@ -519,81 +429,6 @@ func (q *queue) ReserveHeaders(p *peer, count int) *fetchRequest {
return request
}
// ReserveNodeData reserves a set of node data hashes for the given peer, skipping
// any previously failed download.
func (q *queue) ReserveNodeData(p *peer, count int) *fetchRequest {
// Create a task generator to fetch state-fetch tasks if all scheduled ones are done
generator := func(max int) {
if q.stateScheduler != nil {
for _, hash := range q.stateScheduler.Missing(max) {
q.stateTaskPool[hash] = q.stateTaskIndex
q.stateTaskQueue.Push(hash, -float32(q.stateTaskIndex))
q.stateTaskIndex++
}
}
}
q.lock.Lock()
defer q.lock.Unlock()
return q.reserveHashes(p, count, q.stateTaskQueue, generator, q.statePendPool, maxInFlightStates)
}
// reserveHashes reserves a set of hashes for the given peer, skipping previously
// failed ones.
//
// Note, this method expects the queue lock to be already held for writing. The
// reason the lock is not obtained in here is because the parameters already need
// to access the queue, so they already need a lock anyway.
func (q *queue) reserveHashes(p *peer, count int, taskQueue *prque.Prque, taskGen func(int), pendPool map[string]*fetchRequest, maxPending int) *fetchRequest {
// Short circuit if the peer's already downloading something (sanity check to
// not corrupt state)
if _, ok := pendPool[p.id]; ok {
return nil
}
// Calculate an upper limit on the hashes we might fetch (i.e. throttling)
allowance := maxPending
if allowance > 0 {
for _, request := range pendPool {
allowance -= len(request.Hashes)
}
}
// If there's a task generator, ask it to fill our task queue
if taskGen != nil && taskQueue.Size() < allowance {
taskGen(allowance - taskQueue.Size())
}
if taskQueue.Empty() {
return nil
}
// Retrieve a batch of hashes, skipping previously failed ones
send := make(map[common.Hash]int)
skip := make(map[common.Hash]int)
for proc := 0; (allowance == 0 || proc < allowance) && len(send) < count && !taskQueue.Empty(); proc++ {
hash, priority := taskQueue.Pop()
if p.Lacks(hash.(common.Hash)) {
skip[hash.(common.Hash)] = int(priority)
} else {
send[hash.(common.Hash)] = int(priority)
}
}
// Merge all the skipped hashes back
for hash, index := range skip {
taskQueue.Push(hash, float32(index))
}
// Assemble and return the block download request
if len(send) == 0 {
return nil
}
request := &fetchRequest{
Peer: p,
Hashes: send,
Time: time.Now(),
}
pendPool[p.id] = request
return request
}
// ReserveBodies reserves a set of body fetches for the given peer, skipping any
// previously failed downloads. Beside the next batch of needed fetches, it also
// returns a flag whether empty blocks were queued requiring processing.
@ -722,12 +557,6 @@ func (q *queue) CancelReceipts(request *fetchRequest) {
q.cancel(request, q.receiptTaskQueue, q.receiptPendPool)
}
// CancelNodeData aborts a node state data fetch request, returning all pending
// hashes to the task queue.
func (q *queue) CancelNodeData(request *fetchRequest) {
q.cancel(request, q.stateTaskQueue, q.statePendPool)
}
// Cancel aborts a fetch request, returning all pending hashes to the task queue.
func (q *queue) cancel(request *fetchRequest, taskQueue *prque.Prque, pendPool map[string]*fetchRequest) {
q.lock.Lock()
@ -764,12 +593,6 @@ func (q *queue) Revoke(peerId string) {
}
delete(q.receiptPendPool, peerId)
}
if request, ok := q.statePendPool[peerId]; ok {
for hash, index := range request.Hashes {
q.stateTaskQueue.Push(hash, float32(index))
}
delete(q.statePendPool, peerId)
}
}
// ExpireHeaders checks for in flight requests that exceeded a timeout allowance,
@ -799,15 +622,6 @@ func (q *queue) ExpireReceipts(timeout time.Duration) map[string]int {
return q.expire(timeout, q.receiptPendPool, q.receiptTaskQueue, receiptTimeoutMeter)
}
// ExpireNodeData checks for in flight node data requests that exceeded a timeout
// allowance, canceling them and returning the responsible peers for penalisation.
func (q *queue) ExpireNodeData(timeout time.Duration) map[string]int {
q.lock.Lock()
defer q.lock.Unlock()
return q.expire(timeout, q.statePendPool, q.stateTaskQueue, stateTimeoutMeter)
}
// expire is the generic check that move expired tasks from a pending pool back
// into a task pool, returning all entities caught with expired tasks.
//
@ -1044,84 +858,6 @@ func (q *queue) deliver(id string, taskPool map[common.Hash]*types.Header, taskQ
}
}
// DeliverNodeData injects a node state data retrieval response into the queue.
// The method returns the number of node state accepted from the delivery.
func (q *queue) DeliverNodeData(id string, data [][]byte, callback func(int, bool, error)) (int, error) {
q.lock.Lock()
defer q.lock.Unlock()
// Short circuit if the data was never requested
request := q.statePendPool[id]
if request == nil {
return 0, errNoFetchesPending
}
stateReqTimer.UpdateSince(request.Time)
delete(q.statePendPool, id)
// If no data was retrieved, mark their hashes as unavailable for the origin peer
if len(data) == 0 {
for hash := range request.Hashes {
request.Peer.MarkLacking(hash)
}
}
// Iterate over the downloaded data and verify each of them
errs := make([]error, 0)
process := []trie.SyncResult{}
for _, blob := range data {
// Skip any state trie entries that were not requested
hash := common.BytesToHash(crypto.Keccak256(blob))
if _, ok := request.Hashes[hash]; !ok {
errs = append(errs, fmt.Errorf("non-requested state data %x", hash))
continue
}
// Inject the next state trie item into the processing queue
process = append(process, trie.SyncResult{Hash: hash, Data: blob})
delete(request.Hashes, hash)
delete(q.stateTaskPool, hash)
}
// Return all failed or missing fetches to the queue
for hash, index := range request.Hashes {
q.stateTaskQueue.Push(hash, float32(index))
}
if q.stateScheduler == nil {
return 0, errNoFetchesPending
}
// Run valid nodes through the trie download scheduler. It writes completed nodes to a
// batch, which is committed asynchronously. This may lead to over-fetches because the
// scheduler treats everything as written after Process has returned, but it's
// unlikely to be an issue in practice.
batch := q.stateDatabase.NewBatch()
progressed, nproc, procerr := q.stateScheduler.Process(process, batch)
q.stateWriters += 1
go func() {
if procerr == nil {
nproc = len(process)
procerr = batch.Write()
}
// Return processing errors through the callback so the sync gets canceled. The
// number of writers is decremented prior to the call so PendingNodeData will
// return zero when the callback runs.
q.lock.Lock()
q.stateWriters -= 1
q.lock.Unlock()
callback(nproc, progressed, procerr)
// Wake up WaitResults after the state has been written because it might be
// waiting for completion of the pivot block's state download.
q.active.Signal()
}()
// If none of the data items were good, it's a stale delivery
switch {
case len(errs) == 0:
return len(process), nil
case len(errs) == len(request.Hashes):
return len(process), errStaleDelivery
default:
return len(process), fmt.Errorf("multiple failures: %v", errs)
}
}
// Prepare configures the result cache to allow accepting and caching inbound
// fetch results.
func (q *queue) Prepare(offset uint64, mode SyncMode, pivot uint64, head *types.Header) {
@ -1134,9 +870,4 @@ func (q *queue) Prepare(offset uint64, mode SyncMode, pivot uint64, head *types.
}
q.fastSyncPivot = pivot
q.mode = mode
// If long running fast sync, also start up a head state retrieval immediately
if mode == FastSync && pivot > 0 {
q.stateScheduler = state.NewStateSync(head.Root, q.stateDatabase)
}
}

eth/downloader/statesync.go (new file, 449 lines)

@ -0,0 +1,449 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package downloader
import (
"fmt"
"hash"
"sync"
"sync/atomic"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/crypto/sha3"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/trie"
)
// stateReq represents a batch of state fetch requests grouped together into
// a single data retrieval network packet.
type stateReq struct {
items []common.Hash // Hashes of the state items to download
tasks map[common.Hash]*stateTask // Download tasks to track previous attempts
timeout time.Duration // Maximum round trip time for this to complete
timer *time.Timer // Timer to fire when the RTT timeout expires
peer *peer // Peer that we're requesting from
response [][]byte // Response data of the peer (nil for timeouts)
}
// timedOut returns if this request timed out.
func (req *stateReq) timedOut() bool {
return req.response == nil
}
// stateSyncStats is a collection of progress stats to report during a state trie
// sync to RPC requests as well as to display in user logs.
type stateSyncStats struct {
processed uint64 // Number of state entries processed
duplicate uint64 // Number of state entries downloaded twice
unexpected uint64 // Number of non-requested state entries received
pending uint64 // Number of still pending state entries
}
// syncState starts downloading state with the given root hash.
func (d *Downloader) syncState(root common.Hash) *stateSync {
s := newStateSync(d, root)
select {
case d.stateSyncStart <- s:
case <-d.quitCh:
s.err = errCancelStateFetch
close(s.done)
}
return s
}
// stateFetcher manages the active state sync and accepts requests
// on its behalf.
func (d *Downloader) stateFetcher() {
for {
select {
case s := <-d.stateSyncStart:
for next := s; next != nil; {
next = d.runStateSync(next)
}
case <-d.stateCh:
// Ignore state responses while no sync is running.
case <-d.quitCh:
return
}
}
}
// runStateSync runs a state synchronisation until it completes or another root
// hash is requested to be switched over to.
func (d *Downloader) runStateSync(s *stateSync) *stateSync {
var (
active = make(map[string]*stateReq) // Currently in-flight requests
finished []*stateReq // Completed or failed requests
timeout = make(chan *stateReq) // Timed out active requests
)
defer func() {
// Cancel active request timers on exit. Also set peers to idle so they're
// available for the next sync.
for _, req := range active {
req.timer.Stop()
req.peer.SetNodeDataIdle(len(req.items))
}
}()
// Run the state sync.
go s.run()
defer s.Cancel()
for {
// Enable sending of the first buffered element if there is one.
var (
deliverReq *stateReq
deliverReqCh chan *stateReq
)
if len(finished) > 0 {
deliverReq = finished[0]
deliverReqCh = s.deliver
}
select {
// The stateSync lifecycle:
case next := <-d.stateSyncStart:
return next
case <-s.done:
return nil
// Send the next finished request to the current sync:
case deliverReqCh <- deliverReq:
finished = append(finished[:0], finished[1:]...)
// Handle incoming state packs:
case pack := <-d.stateCh:
// Discard any data not requested (or previously timed out)
req := active[pack.PeerId()]
if req == nil {
log.Debug("Unrequested node data", "peer", pack.PeerId(), "len", pack.Items())
continue
}
// Finalize the request and queue up for processing
req.timer.Stop()
req.response = pack.(*statePack).states
finished = append(finished, req)
delete(active, pack.PeerId())
// Handle timed-out requests:
case req := <-timeout:
// If the peer is already requesting something else, ignore the stale timeout.
// This can happen when the timeout and the delivery happens simultaneously,
// causing both pathways to trigger.
if active[req.peer.id] != req {
continue
}
// Move the timed out data back into the download queue
finished = append(finished, req)
delete(active, req.peer.id)
// Track outgoing state requests:
case req := <-d.trackStateReq:
// If an active request already exists for this peer, we have a problem. In
// theory the trie node schedule must never assign two requests to the same
// peer. In practice however, a peer might receive a request, disconnect and
// immediately reconnect before the previous times out. In this case the first
// request is never honored, alas we must not silently overwrite it, as that
// causes valid requests to go missing and sync to get stuck.
if old := active[req.peer.id]; old != nil {
log.Warn("Busy peer assigned new state fetch", "peer", old.peer.id)
// Make sure the previous one doesn't get silently lost
finished = append(finished, old)
}
// Start a timer to notify the sync loop if the peer stalled.
req.timer = time.AfterFunc(req.timeout, func() {
select {
case timeout <- req:
case <-s.done:
// Prevent leaking of timer goroutines in the unlikely case where a
// timer is fired just before exiting runStateSync.
}
})
active[req.peer.id] = req
}
}
}
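The deliverReq/deliverReqCh pair above leans on a standard Go idiom: a send on a nil channel blocks forever, so the send case of the select is disabled whenever the finished buffer is empty. A self-contained sketch of the same pattern, with made-up names (not from this diff):

package main

import "fmt"

func main() {
	in := make(chan int)
	out := make(chan int)
	done := make(chan struct{})

	go func() { // producer, standing in for incoming state packs
		for i := 1; i <= 3; i++ {
			in <- i
		}
		close(in)
	}()
	go func() { // consumer, standing in for the sync's deliver channel
		for v := range out {
			fmt.Println("delivered", v)
		}
		close(done)
	}()

	var finished []int
	for {
		// A send on a nil channel blocks forever, so the send case below
		// is disabled whenever the finished buffer is empty.
		var sendCh chan int
		var next int
		if len(finished) > 0 {
			sendCh, next = out, finished[0]
		}
		select {
		case v, ok := <-in:
			if !ok { // producer done: flush the buffer and stop
				for _, v := range finished {
					out <- v
				}
				close(out)
				<-done
				return
			}
			finished = append(finished, v)
		case sendCh <- next:
			finished = finished[1:]
		}
	}
}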
// stateSync schedules requests for downloading a particular state trie defined
// by a given state root.
type stateSync struct {
d *Downloader // Downloader instance to access and manage current peerset
sched *state.StateSync // State trie sync scheduler defining the tasks
keccak hash.Hash // Keccak256 hasher to verify deliveries with
tasks map[common.Hash]*stateTask // Set of tasks currently queued for retrieval
deliver chan *stateReq // Delivery channel multiplexing peer responses
cancel chan struct{} // Channel to signal a termination request
cancelOnce sync.Once // Ensures cancel only ever gets called once
done chan struct{} // Channel to signal termination completion
err error // Any error hit during sync (set before completion)
}
// stateTask represents a single trie node download task, containing the set of
// peers it has already been tried against, used to detect stalled syncs and abort.
type stateTask struct {
attempts map[string]struct{}
}
// newStateSync creates a new state trie download scheduler. This method does not
// yet start the sync. The user needs to call run to initiate.
func newStateSync(d *Downloader, root common.Hash) *stateSync {
return &stateSync{
d: d,
sched: state.NewStateSync(root, d.stateDB),
keccak: sha3.NewKeccak256(),
tasks: make(map[common.Hash]*stateTask),
deliver: make(chan *stateReq),
cancel: make(chan struct{}),
done: make(chan struct{}),
}
}
// run starts the task assignment and response processing loop, blocking until
// it finishes, and finally notifying any goroutines waiting for the loop to
// finish.
func (s *stateSync) run() {
s.err = s.loop()
close(s.done)
}
// Wait blocks until the sync is done or canceled.
func (s *stateSync) Wait() error {
<-s.done
return s.err
}
// Cancel cancels the sync and waits until it has shut down.
func (s *stateSync) Cancel() error {
s.cancelOnce.Do(func() { close(s.cancel) })
return s.Wait()
}
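Together, syncState, run, Wait and Cancel give callers a small lifecycle API. A hedged sketch of the caller side (the surrounding Downloader plumbing and the pivot variable are assumed for illustration only):

// sync := d.syncState(pivot.Root) // start state sync at the pivot's root
// defer sync.Cancel()             // always tear the sync down when the caller exits
// if err := sync.Wait(); err != nil {
// 	return err // sync failed or was cancelled
// }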
// loop is the main event loop of a state trie sync. It is responsible for the
// assignment of new tasks to peers (including sending the requests to them) as
// well as for the processing of inbound data. Note that the loop does not
// directly receive data from peers; rather, responses are buffered up in the
// downloader and pushed here asynchronously. The reason is to decouple
// processing from data receipt and timeouts.
func (s *stateSync) loop() error {
// Listen for new peer events to assign tasks to them
newPeer := make(chan *peer, 1024)
peerSub := s.d.peers.SubscribeNewPeers(newPeer)
defer peerSub.Unsubscribe()
// Keep assigning new tasks until the sync completes or aborts
for s.sched.Pending() > 0 {
if err := s.assignTasks(); err != nil {
return err
}
// Tasks assigned, wait for something to happen
select {
case <-newPeer:
// New peer arrived, try to assign it download tasks
case <-s.cancel:
return errCancelStateFetch
case req := <-s.deliver:
// Response or timeout triggered, drop the peer if stalling
log.Trace("Received node data response", "peer", req.peer.id, "count", len(req.response), "timeout", req.timedOut())
if len(req.items) <= 2 && req.timedOut() {
// Two items are the minimum requested; if even that times out, we have no
// use for this peer at the moment.
log.Warn("Stalling state sync, dropping peer", "peer", req.peer.id)
s.d.dropPeer(req.peer.id)
}
// Process all the received blobs and check for stale delivery
stale, err := s.process(req)
if err != nil {
log.Warn("Node data write error", "err", err)
return err
}
// If the delivery contained requested data, mark the peer idle (otherwise it was a timed-out delivery)
if !stale {
req.peer.SetNodeDataIdle(len(req.response))
}
}
}
return nil
}
// assignTasks attempts to assign new tasks to all idle peers, either from the
// batch currently being retried, or fetching new data from the trie sync itself.
func (s *stateSync) assignTasks() error {
// Iterate over all idle peers and try to assign them state fetches
peers, _ := s.d.peers.NodeDataIdlePeers()
for _, p := range peers {
// Assign a batch of fetches proportional to the estimated latency/bandwidth
cap := p.NodeDataCapacity(s.d.requestRTT())
req := &stateReq{peer: p, timeout: s.d.requestTTL()}
s.fillTasks(cap, req)
// If the peer was assigned tasks to fetch, send the network request
if len(req.items) > 0 {
req.peer.log.Trace("Requesting new batch of data", "type", "state", "count", len(req.items))
select {
case s.d.trackStateReq <- req:
req.peer.FetchNodeData(req.items)
case <-s.cancel:
}
}
}
return nil
}
// fillTasks fills the given request object with a maximum of n state download
// tasks to send to the remote peer.
func (s *stateSync) fillTasks(n int, req *stateReq) {
// Refill available tasks from the scheduler.
if len(s.tasks) < n {
new := s.sched.Missing(n - len(s.tasks))
for _, hash := range new {
s.tasks[hash] = &stateTask{make(map[string]struct{})}
}
}
// Find tasks that haven't been tried with the request's peer.
req.items = make([]common.Hash, 0, n)
req.tasks = make(map[common.Hash]*stateTask, n)
for hash, t := range s.tasks {
// Stop when we've gathered enough requests
if len(req.items) == n {
break
}
// Skip any requests we've already tried from this peer
if _, ok := t.attempts[req.peer.id]; ok {
continue
}
// Assign the request to this peer
t.attempts[req.peer.id] = struct{}{}
req.items = append(req.items, hash)
req.tasks[hash] = t
delete(s.tasks, hash)
}
}
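The attempts set is the only thing preventing a hash from being re-assigned to a peer that already failed to deliver it. A toy illustration of that bookkeeping, independent of the downloader's types:

package main

import "fmt"

func main() {
	// hash -> set of peer ids that were already asked for it
	attempts := map[string]map[string]struct{}{}
	assign := func(hash, peer string) bool {
		tried := attempts[hash]
		if tried == nil {
			tried = make(map[string]struct{})
			attempts[hash] = tried
		}
		if _, ok := tried[peer]; ok {
			return false // this peer already failed to deliver the hash: skip
		}
		tried[peer] = struct{}{}
		return true
	}
	fmt.Println(assign("0xabc", "peer1")) // true: fresh peer gets the task
	fmt.Println(assign("0xabc", "peer1")) // false: retried peer is skipped
	fmt.Println(assign("0xabc", "peer2")) // true: a different peer may try
}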
// process iterates over a batch of delivered state data, injecting each item
// into a running state sync, re-queuing any items that were requested but not
// delivered.
func (s *stateSync) process(req *stateReq) (bool, error) {
// Collect processing stats and update progress if valid data was received
processed, written, duplicate, unexpected := 0, 0, 0, 0
defer func(start time.Time) {
if processed+written+duplicate+unexpected > 0 {
s.updateStats(processed, written, duplicate, unexpected, time.Since(start))
}
}(time.Now())
// Iterate over all the delivered data and inject one-by-one into the trie
progress, stale := false, len(req.response) > 0
for _, blob := range req.response {
prog, hash, err := s.processNodeData(blob)
switch err {
case nil:
processed++
case trie.ErrNotRequested:
unexpected++
case trie.ErrAlreadyProcessed:
duplicate++
default:
return stale, fmt.Errorf("invalid state node %s: %v", hash.TerminalString(), err)
}
if prog {
progress = true
}
// If the node delivered a requested item, mark the delivery non-stale
if _, ok := req.tasks[hash]; ok {
delete(req.tasks, hash)
stale = false
}
}
// If some data managed to hit the database, flush and reset failure counters
if progress {
// Flush any accumulated data out to disk
batch := s.d.stateDB.NewBatch()
count, err := s.sched.Commit(batch)
if err != nil {
return stale, err
}
if err := batch.Write(); err != nil {
return stale, err
}
written = count
// If we're inside the critical section, reset fail counter since we progressed
if atomic.LoadUint32(&s.d.fsPivotFails) > 1 {
log.Trace("Fast-sync progressed, resetting fail counter", "previous", atomic.LoadUint32(&s.d.fsPivotFails))
atomic.StoreUint32(&s.d.fsPivotFails, 1) // Don't ever reset to 0, as that will unlock the pivot block
}
}
// Put unfulfilled tasks back into the retry queue
npeers := s.d.peers.Len()
for hash, task := range req.tasks {
// If the node did deliver something, missing items may be due to a protocol
// limit or a previous timeout + delayed delivery. Both cases should permit
// the node to retry the missing items (to avoid single-peer stalls).
if len(req.response) > 0 || req.timedOut() {
delete(task.attempts, req.peer.id)
}
// If we've requested the node too many times already, it may be a malicious
// sync where nobody has the right data. Abort.
if len(task.attempts) >= npeers {
return stale, fmt.Errorf("state node %s failed with all peers (%d tries, %d peers)", hash.TerminalString(), len(task.attempts), npeers)
}
// Missing item, place into the retry queue.
s.tasks[hash] = task
}
return stale, nil
}
// processNodeData tries to inject a trie node data blob delivered from a remote
// peer into the state trie, returning whether anything useful was written or any
// error occurred.
func (s *stateSync) processNodeData(blob []byte) (bool, common.Hash, error) {
res := trie.SyncResult{Data: blob}
s.keccak.Reset()
s.keccak.Write(blob)
s.keccak.Sum(res.Hash[:0])
committed, _, err := s.sched.Process([]trie.SyncResult{res})
return committed, res.Hash, err
}
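The digest is written straight into res.Hash via Sum(res.Hash[:0]), reusing both the hasher and the target array instead of allocating per node. The same idiom in isolation, using the standard library's sha256 purely for illustration (the real code hashes with Keccak256):

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	var digest [32]byte
	h := sha256.New() // one hasher reused across blobs, like s.keccak
	for _, blob := range [][]byte{[]byte("node-1"), []byte("node-2")} {
		h.Reset()
		h.Write(blob)
		h.Sum(digest[:0]) // append the sum into the existing array: no fresh allocation
		fmt.Printf("%x\n", digest)
	}
}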
// updateStats bumps the various state sync progress counters and displays a log
// message for the user to see.
func (s *stateSync) updateStats(processed, written, duplicate, unexpected int, duration time.Duration) {
s.d.syncStatsLock.Lock()
defer s.d.syncStatsLock.Unlock()
s.d.syncStatsState.pending = uint64(s.sched.Pending())
s.d.syncStatsState.processed += uint64(processed)
s.d.syncStatsState.duplicate += uint64(duplicate)
s.d.syncStatsState.unexpected += uint64(unexpected)
log.Info("Imported new state entries", "count", processed, "flushed", written, "elapsed", common.PrettyDuration(duration), "processed", s.d.syncStatsState.processed, "pending", s.d.syncStatsState.pending, "retry", len(s.tasks), "duplicate", s.d.syncStatsState.duplicate, "unexpected", s.d.syncStatsState.unexpected)
}

View File

@ -87,8 +87,6 @@ type ProtocolManager struct {
quitSync chan struct{}
noMorePeers chan struct{}
lesServer LesServer
// wait group is used for graceful shutdowns during downloading
// and processing
wg sync.WaitGroup

View File

@ -167,11 +167,11 @@ func (ec *Client) TransactionByHash(ctx context.Context, hash common.Hash) (tx *
} else if _, r, _ := tx.RawSignatureValues(); r == nil {
return nil, false, fmt.Errorf("server returned transaction without signature")
}
var block struct{ BlockHash *common.Hash }
var block struct{ BlockNumber *string }
if err := json.Unmarshal(raw, &block); err != nil {
return nil, false, err
}
return tx, block.BlockHash == nil, nil
return tx, block.BlockNumber == nil, nil
}
// TransactionCount returns the total number of transactions in the given block.

View File

@ -45,13 +45,6 @@ func (db *MemDatabase) Put(key []byte, value []byte) error {
return nil
}
func (db *MemDatabase) Set(key []byte, value []byte) {
db.lock.Lock()
defer db.lock.Unlock()
db.Put(key, value)
}
func (db *MemDatabase) Get(key []byte) ([]byte, error) {
db.lock.RLock()
defer db.lock.RUnlock()

View File

@ -31,6 +31,7 @@ import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/mclock"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/types"
@ -119,7 +120,7 @@ func (s *Service) Stop() error {
// loop keeps trying to connect to the netstats server, reporting chain events
// until termination.
func (s *Service) loop() {
// Subscribe tso chain events to execute updates on
// Subscribe to chain events to execute updates on
var emux *event.TypeMux
if s.eth != nil {
emux = s.eth.EventMux()
@ -132,6 +133,46 @@ func (s *Service) loop() {
txSub := emux.Subscribe(core.TxPreEvent{})
defer txSub.Unsubscribe()
// Start a goroutine that exhausts the subscriptions to avoid events piling up
var (
quitCh = make(chan struct{})
headCh = make(chan *types.Block, 1)
txCh = make(chan struct{}, 1)
)
go func() {
var lastTx mclock.AbsTime
for {
select {
// Notify of chain head events, but drop if too frequent
case head, ok := <-headSub.Chan():
if !ok { // node stopped
close(quitCh)
return
}
select {
case headCh <- head.Data.(core.ChainHeadEvent).Block:
default:
}
// Notify of new transaction events, but drop if too frequent
case _, ok := <-txSub.Chan():
if !ok { // node stopped
close(quitCh)
return
}
if time.Duration(mclock.Now()-lastTx) < time.Second {
continue
}
lastTx = mclock.Now()
select {
case txCh <- struct{}{}:
default:
}
}
}
}()
// Loop reporting until termination
for {
// Resolve the URL, defaulting to TLS, but falling back to none too
@ -151,7 +192,7 @@ func (s *Service) loop() {
if conf, err = websocket.NewConfig(url, "http://localhost/"); err != nil {
continue
}
conf.Dialer = &net.Dialer{Timeout: 3 * time.Second}
conf.Dialer = &net.Dialer{Timeout: 5 * time.Second}
if conn, err = websocket.DialConfig(conf); err == nil {
break
}
@ -181,6 +222,10 @@ func (s *Service) loop() {
for err == nil {
select {
case <-quitCh:
conn.Close()
return
case <-fullReport.C:
if err = s.report(conn); err != nil {
log.Warn("Full stats report failed", "err", err)
@ -189,30 +234,14 @@ func (s *Service) loop() {
if err = s.reportHistory(conn, list); err != nil {
log.Warn("Requested history report failed", "err", err)
}
case head, ok := <-headSub.Chan():
if !ok { // node stopped
conn.Close()
return
}
if err = s.reportBlock(conn, head.Data.(core.ChainHeadEvent).Block); err != nil {
case head := <-headCh:
if err = s.reportBlock(conn, head); err != nil {
log.Warn("Block stats report failed", "err", err)
}
if err = s.reportPending(conn); err != nil {
log.Warn("Post-block transaction stats report failed", "err", err)
}
case _, ok := <-txSub.Chan():
if !ok { // node stopped
conn.Close()
return
}
// Exhaust events to avoid reporting too frequently
for exhausted := false; !exhausted; {
select {
case <-headSub.Chan():
default:
exhausted = true
}
}
case <-txCh:
if err = s.reportPending(conn); err != nil {
log.Warn("Transaction stats report failed", "err", err)
}
@ -398,7 +427,7 @@ func (s *Service) reportLatency(conn *websocket.Conn) error {
select {
case <-s.pongCh:
// Pong delivered, report the latency
case <-time.After(3 * time.Second):
case <-time.After(5 * time.Second):
// Ping timeout, abort
return errors.New("ping timed out")
}

View File

@ -0,0 +1,270 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
package cmdtest
import (
"bufio"
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"regexp"
"sync"
"testing"
"text/template"
"time"
"github.com/docker/docker/pkg/reexec"
)
func NewTestCmd(t *testing.T, data interface{}) *TestCmd {
return &TestCmd{T: t, Data: data}
}
type TestCmd struct {
// For total convenience, all testing methods are available.
*testing.T
Func template.FuncMap
Data interface{}
Cleanup func()
cmd *exec.Cmd
stdout *bufio.Reader
stdin io.WriteCloser
stderr *testlogger
}
// Run exec's the current binary using name as argv[0] which will trigger the
// reexec init function for that name (e.g. "geth-test" in cmd/geth/run_test.go)
func (tt *TestCmd) Run(name string, args ...string) {
tt.stderr = &testlogger{t: tt.T}
tt.cmd = &exec.Cmd{
Path: reexec.Self(),
Args: append([]string{name}, args...),
Stderr: tt.stderr,
}
stdout, err := tt.cmd.StdoutPipe()
if err != nil {
tt.Fatal(err)
}
tt.stdout = bufio.NewReader(stdout)
if tt.stdin, err = tt.cmd.StdinPipe(); err != nil {
tt.Fatal(err)
}
if err := tt.cmd.Start(); err != nil {
tt.Fatal(err)
}
}
// InputLine writes the given text to the child's stdin.
// This method can also be called from an expect template, e.g.:
//
// geth.expect(`Passphrase: {{.InputLine "password"}}`)
func (tt *TestCmd) InputLine(s string) string {
io.WriteString(tt.stdin, s+"\n")
return ""
}
func (tt *TestCmd) SetTemplateFunc(name string, fn interface{}) {
if tt.Func == nil {
tt.Func = make(map[string]interface{})
}
tt.Func[name] = fn
}
// Expect runs its argument as a template, then expects the
// child process to output the result of the template within 5s.
//
// If the template starts with a newline, the newline is removed
// before matching.
func (tt *TestCmd) Expect(tplsource string) {
// Generate the expected output by running the template.
tpl := template.Must(template.New("").Funcs(tt.Func).Parse(tplsource))
wantbuf := new(bytes.Buffer)
if err := tpl.Execute(wantbuf, tt.Data); err != nil {
panic(err)
}
// Trim exactly one newline at the beginning. This makes tests look
// much nicer because all expect strings are at column 0.
want := bytes.TrimPrefix(wantbuf.Bytes(), []byte("\n"))
if err := tt.matchExactOutput(want); err != nil {
tt.Fatal(err)
}
tt.Logf("Matched stdout text:\n%s", want)
}
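For orientation, a hypothetical test built on these helpers might read as follows; the subcommand and passphrase are illustrative, and only the TestCmd methods defined in this file are assumed:

// tt := NewTestCmd(t, nil)
// tt.Run("geth-test", "account", "new")            // re-exec the test binary as argv[0] "geth-test"
// tt.Expect(`Passphrase: {{.InputLine "secret"}}`) // the template feeds stdin while matching stdout
// tt.ExpectExit()                                  // expect no further output, then exit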
func (tt *TestCmd) matchExactOutput(want []byte) error {
buf := make([]byte, len(want))
n := 0
tt.withKillTimeout(func() { n, _ = io.ReadFull(tt.stdout, buf) })
buf = buf[:n]
if n < len(want) || !bytes.Equal(buf, want) {
// Grab any additional buffered output in case of mismatch
// because it might help with debugging.
buf = append(buf, make([]byte, tt.stdout.Buffered())...)
tt.stdout.Read(buf[n:])
// Find the mismatch position.
for i := 0; i < n; i++ {
if want[i] != buf[i] {
return fmt.Errorf("Output mismatch at ◊:\n---------------- (stdout text)\n%s◊%s\n---------------- (expected text)\n%s",
buf[:i], buf[i:n], want)
}
}
if n < len(want) {
return fmt.Errorf("Not enough output, got until ◊:\n---------------- (stdout text)\n%s\n---------------- (expected text)\n%s◊%s",
buf, want[:n], want[n:])
}
}
return nil
}
// ExpectRegexp expects the child process to output text matching the
// given regular expression within 5s.
//
// Note that an arbitrary amount of output may be consumed by the
// regular expression. This usually means that Expect cannot be used
// after ExpectRegexp.
func (tt *TestCmd) ExpectRegexp(resource string) (*regexp.Regexp, []string) {
var (
re = regexp.MustCompile(resource)
rtee = &runeTee{in: tt.stdout}
matches []int
)
tt.withKillTimeout(func() { matches = re.FindReaderSubmatchIndex(rtee) })
output := rtee.buf.Bytes()
if matches == nil {
tt.Fatalf("Output did not match:\n---------------- (stdout text)\n%s\n---------------- (regular expression)\n%s",
output, resource)
return re, nil
}
tt.Logf("Matched stdout text:\n%s", output)
var submatches []string
for i := 0; i < len(matches); i += 2 {
submatch := string(output[matches[i]:matches[i+1]])
submatches = append(submatches, submatch)
}
return re, submatches
}
// ExpectExit expects the child process to exit within 5s without
// printing any additional text on stdout.
func (tt *TestCmd) ExpectExit() {
var output []byte
tt.withKillTimeout(func() {
output, _ = ioutil.ReadAll(tt.stdout)
})
tt.WaitExit()
if tt.Cleanup != nil {
tt.Cleanup()
}
if len(output) > 0 {
tt.Errorf("Unmatched stdout text:\n%s", output)
}
}
func (tt *TestCmd) WaitExit() {
tt.cmd.Wait()
}
func (tt *TestCmd) Interrupt() {
tt.cmd.Process.Signal(os.Interrupt)
}
// StderrText returns any stderr output written so far.
// The returned text holds all log lines after ExpectExit has
// returned.
func (tt *TestCmd) StderrText() string {
tt.stderr.mu.Lock()
defer tt.stderr.mu.Unlock()
return tt.stderr.buf.String()
}
func (tt *TestCmd) CloseStdin() {
tt.stdin.Close()
}
func (tt *TestCmd) Kill() {
tt.cmd.Process.Kill()
if tt.Cleanup != nil {
tt.Cleanup()
}
}
func (tt *TestCmd) withKillTimeout(fn func()) {
timeout := time.AfterFunc(5*time.Second, func() {
tt.Log("killing the child process (timeout)")
tt.Kill()
})
defer timeout.Stop()
fn()
}
// testlogger logs all written lines via t.Log and also
// collects them for later inspection.
type testlogger struct {
t *testing.T
mu sync.Mutex
buf bytes.Buffer
}
func (tl *testlogger) Write(b []byte) (n int, err error) {
lines := bytes.Split(b, []byte("\n"))
for _, line := range lines {
if len(line) > 0 {
tl.t.Logf("(stderr) %s", line)
}
}
tl.mu.Lock()
tl.buf.Write(b)
tl.mu.Unlock()
return len(b), err
}
// runeTee collects text read through it into buf.
type runeTee struct {
in interface {
io.Reader
io.ByteReader
io.RuneReader
}
buf bytes.Buffer
}
func (rtee *runeTee) Read(b []byte) (n int, err error) {
n, err = rtee.in.Read(b)
rtee.buf.Write(b[:n])
return n, err
}
func (rtee *runeTee) ReadRune() (r rune, size int, err error) {
r, size, err = rtee.in.ReadRune()
if err == nil {
rtee.buf.WriteRune(r)
}
return r, size, err
}
func (rtee *runeTee) ReadByte() (b byte, err error) {
b, err = rtee.in.ReadByte()
if err == nil {
rtee.buf.WriteByte(b)
}
return b, err
}
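The tee is needed because regexp's FindReaderSubmatchIndex consumes its reader, so without buffering there would be nothing left to show on a failed match or to slice submatches out of. A standalone sketch of the same trick (illustrative types, standard library only):

package main

import (
	"bytes"
	"fmt"
	"io"
	"regexp"
	"strings"
)

// teeRuneReader mirrors the runeTee idea: remember everything the regexp
// engine reads so it can be reported afterwards. Illustrative only.
type teeRuneReader struct {
	in  io.RuneReader
	buf bytes.Buffer
}

func (t *teeRuneReader) ReadRune() (r rune, size int, err error) {
	r, size, err = t.in.ReadRune()
	if err == nil {
		t.buf.WriteRune(r)
	}
	return r, size, err
}

func main() {
	tee := &teeRuneReader{in: strings.NewReader("block number=42 imported\n")}
	re := regexp.MustCompile(`number=(\d+)`)
	m := re.FindReaderSubmatchIndex(tee)
	// The reader has been consumed, but buf still holds what was read.
	fmt.Printf("consumed: %q\n", tee.buf.String())
	fmt.Println("submatch:", string(tee.buf.Bytes()[m[2]:m[3]])) // "42"
}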

View File

@ -211,8 +211,9 @@ type PrivateAccountAPI struct {
// NewPrivateAccountAPI creates a new PrivateAccountAPI.
func NewPrivateAccountAPI(b Backend, nonceLock *AddrLocker) *PrivateAccountAPI {
return &PrivateAccountAPI{
am: b.AccountManager(),
b: b,
am: b.AccountManager(),
nonceLock: nonceLock,
b: b,
}
}

View File

@ -21,6 +21,7 @@ import (
"errors"
"fmt"
"math/big"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
@ -344,6 +345,12 @@ func (jst *JavascriptTracer) CaptureState(env *vm.EVM, pc uint64, op vm.OpCode,
return nil
}
// CaptureEnd is called after the call finishes
func (jst *JavascriptTracer) CaptureEnd(output []byte, gasUsed uint64, t time.Duration) error {
//TODO! @Arachnid please figure out if there's anything we can use this method for
return nil
}
// GetResult calls the Javascript 'result' function and returns its value, or any accumulated error
func (jst *JavascriptTracer) GetResult() (result interface{}, err error) {
if jst.err != nil {

View File

@ -19,6 +19,7 @@ package les
import (
"fmt"
"sync"
"time"
"github.com/ethereum/go-ethereum/accounts"
@ -38,6 +39,7 @@ import (
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/discv5"
"github.com/ethereum/go-ethereum/params"
rpc "github.com/ethereum/go-ethereum/rpc"
)
@ -49,9 +51,13 @@ type LightEthereum struct {
// Channel for shutting down the service
shutdownChan chan bool
// Handlers
peers *peerSet
txPool *light.TxPool
blockchain *light.LightChain
protocolManager *ProtocolManager
serverPool *serverPool
reqDist *requestDistributor
retriever *retrieveManager
// DB interfaces
chainDb ethdb.Database // Block chain database
@ -63,6 +69,9 @@ type LightEthereum struct {
networkId uint64
netRPCService *ethapi.PublicNetAPI
quitSync chan struct{}
wg sync.WaitGroup
}
func New(ctx *node.ServiceContext, config *eth.Config) (*LightEthereum, error) {
@ -76,20 +85,26 @@ func New(ctx *node.ServiceContext, config *eth.Config) (*LightEthereum, error) {
}
log.Info("Initialised chain configuration", "config", chainConfig)
odr := NewLesOdr(chainDb)
relay := NewLesTxRelay()
peers := newPeerSet()
quitSync := make(chan struct{})
eth := &LightEthereum{
odr: odr,
relay: relay,
chainDb: chainDb,
chainConfig: chainConfig,
chainDb: chainDb,
eventMux: ctx.EventMux,
peers: peers,
reqDist: newRequestDistributor(peers, quitSync),
accountManager: ctx.AccountManager,
engine: eth.CreateConsensusEngine(ctx, config, chainConfig, chainDb),
shutdownChan: make(chan bool),
networkId: config.NetworkId,
}
if eth.blockchain, err = light.NewLightChain(odr, eth.chainConfig, eth.engine, eth.eventMux); err != nil {
eth.relay = NewLesTxRelay(peers, eth.reqDist)
eth.serverPool = newServerPool(chainDb, quitSync, &eth.wg)
eth.retriever = newRetrieveManager(peers, eth.reqDist, eth.serverPool)
eth.odr = NewLesOdr(chainDb, eth.retriever)
if eth.blockchain, err = light.NewLightChain(eth.odr, eth.chainConfig, eth.engine, eth.eventMux); err != nil {
return nil, err
}
// Rewind the chain in case of an incompatible config upgrade.
@ -100,13 +115,9 @@ func New(ctx *node.ServiceContext, config *eth.Config) (*LightEthereum, error) {
}
eth.txPool = light.NewTxPool(eth.chainConfig, eth.eventMux, eth.blockchain, eth.relay)
lightSync := config.SyncMode == downloader.LightSync
if eth.protocolManager, err = NewProtocolManager(eth.chainConfig, lightSync, config.NetworkId, eth.eventMux, eth.engine, eth.blockchain, nil, chainDb, odr, relay); err != nil {
if eth.protocolManager, err = NewProtocolManager(eth.chainConfig, true, config.NetworkId, eth.eventMux, eth.engine, eth.peers, eth.blockchain, nil, chainDb, eth.odr, eth.relay, quitSync, &eth.wg); err != nil {
return nil, err
}
relay.ps = eth.protocolManager.peers
relay.reqDist = eth.protocolManager.reqDist
eth.ApiBackend = &LesApiBackend{eth, nil}
gpoParams := config.GPO
if gpoParams.Default == nil {
@ -116,6 +127,10 @@ func New(ctx *node.ServiceContext, config *eth.Config) (*LightEthereum, error) {
return eth, nil
}
func lesTopic(genesisHash common.Hash) discv5.Topic {
return discv5.Topic("LES@" + common.Bytes2Hex(genesisHash.Bytes()[0:8]))
}
type LightDummyAPI struct{}
// Etherbase is the address that mining rewards will be sent to
@ -188,7 +203,8 @@ func (s *LightEthereum) Protocols() []p2p.Protocol {
func (s *LightEthereum) Start(srvr *p2p.Server) error {
log.Warn("Light client mode is an experimental feature")
s.netRPCService = ethapi.NewPublicNetAPI(srvr, s.networkId)
s.protocolManager.Start(srvr)
s.serverPool.start(srvr, lesTopic(s.blockchain.Genesis().Hash()))
s.protocolManager.Start()
return nil
}

View File

@ -34,11 +34,11 @@ var ErrNoPeers = errors.New("no suitable peers available")
type requestDistributor struct {
reqQueue *list.List
lastReqOrder uint64
peers map[distPeer]struct{}
peerLock sync.RWMutex
stopChn, loopChn chan struct{}
loopNextSent bool
lock sync.Mutex
getAllPeers func() map[distPeer]struct{}
}
// distPeer is an LES server peer interface for the request distributor.
@ -71,15 +71,39 @@ type distReq struct {
}
// newRequestDistributor creates a new request distributor
func newRequestDistributor(getAllPeers func() map[distPeer]struct{}, stopChn chan struct{}) *requestDistributor {
r := &requestDistributor{
reqQueue: list.New(),
loopChn: make(chan struct{}, 2),
stopChn: stopChn,
getAllPeers: getAllPeers,
func newRequestDistributor(peers *peerSet, stopChn chan struct{}) *requestDistributor {
d := &requestDistributor{
reqQueue: list.New(),
loopChn: make(chan struct{}, 2),
stopChn: stopChn,
peers: make(map[distPeer]struct{}),
}
go r.loop()
return r
if peers != nil {
peers.notify(d)
}
go d.loop()
return d
}
// registerPeer implements peerSetNotify
func (d *requestDistributor) registerPeer(p *peer) {
d.peerLock.Lock()
d.peers[p] = struct{}{}
d.peerLock.Unlock()
}
// unregisterPeer implements peerSetNotify
func (d *requestDistributor) unregisterPeer(p *peer) {
d.peerLock.Lock()
delete(d.peers, p)
d.peerLock.Unlock()
}
// registerTestPeer adds a new test peer
func (d *requestDistributor) registerTestPeer(p distPeer) {
d.peerLock.Lock()
d.peers[p] = struct{}{}
d.peerLock.Unlock()
}
// distMaxWait is the maximum waiting time after which further necessary waiting
@ -152,8 +176,7 @@ func (sp selectPeerItem) Weight() int64 {
// nextRequest returns the next possible request from any peer, along with the
// associated peer and necessary waiting time
func (d *requestDistributor) nextRequest() (distPeer, *distReq, time.Duration) {
peers := d.getAllPeers()
checkedPeers := make(map[distPeer]struct{})
elem := d.reqQueue.Front()
var (
bestPeer distPeer
@ -162,11 +185,14 @@ func (d *requestDistributor) nextRequest() (distPeer, *distReq, time.Duration) {
sel *weightedRandomSelect
)
for (len(peers) > 0 || elem == d.reqQueue.Front()) && elem != nil {
d.peerLock.RLock()
defer d.peerLock.RUnlock()
for (len(d.peers) > 0 || elem == d.reqQueue.Front()) && elem != nil {
req := elem.Value.(*distReq)
canSend := false
for peer, _ := range peers {
if peer.canQueue() && req.canSend(peer) {
for peer, _ := range d.peers {
if _, ok := checkedPeers[peer]; !ok && peer.canQueue() && req.canSend(peer) {
canSend = true
cost := req.getCost(peer)
wait, bufRemain := peer.waitBefore(cost)
@ -182,7 +208,7 @@ func (d *requestDistributor) nextRequest() (distPeer, *distReq, time.Duration) {
bestWait = wait
}
}
delete(peers, peer)
checkedPeers[peer] = struct{}{}
}
}
next := elem.Next()

View File

@ -122,20 +122,14 @@ func testRequestDistributor(t *testing.T, resend bool) {
stop := make(chan struct{})
defer close(stop)
dist := newRequestDistributor(nil, stop)
var peers [testDistPeerCount]*testDistPeer
for i, _ := range peers {
peers[i] = &testDistPeer{}
go peers[i].worker(t, !resend, stop)
dist.registerTestPeer(peers[i])
}
dist := newRequestDistributor(func() map[distPeer]struct{} {
m := make(map[distPeer]struct{})
for _, peer := range peers {
m[peer] = struct{}{}
}
return m
}, stop)
var wg sync.WaitGroup
for i := 1; i <= testDistReqCount; i++ {

View File

@ -116,6 +116,7 @@ func newLightFetcher(pm *ProtocolManager) *lightFetcher {
syncDone: make(chan *peer),
maxConfirmedTd: big.NewInt(0),
}
pm.peers.notify(f)
go f.syncLoop()
return f
}
@ -209,8 +210,8 @@ func (f *lightFetcher) syncLoop() {
}
}
// addPeer adds a new peer to the fetcher's peer set
func (f *lightFetcher) addPeer(p *peer) {
// registerPeer adds a new peer to the fetcher's peer set
func (f *lightFetcher) registerPeer(p *peer) {
p.lock.Lock()
p.hasBlock = func(hash common.Hash, number uint64) bool {
return f.peerHasBlock(p, hash, number)
@ -223,8 +224,8 @@ func (f *lightFetcher) addPeer(p *peer) {
f.peers[p] = &fetcherPeerInfo{nodeByHash: make(map[common.Hash]*fetcherTreeNode)}
}
// removePeer removes a new peer from the fetcher's peer set
func (f *lightFetcher) removePeer(p *peer) {
// unregisterPeer removes a peer from the fetcher's peer set
func (f *lightFetcher) unregisterPeer(p *peer) {
p.lock.Lock()
p.hasBlock = nil
p.lock.Unlock()
@ -416,7 +417,7 @@ func (f *lightFetcher) nextRequest() (*distReq, uint64) {
f.syncing = bestSyncing
var rq *distReq
reqID := getNextReqID()
reqID := genReqID()
if f.syncing {
rq = &distReq{
getCost: func(dp distPeer) uint64 {

View File

@ -102,7 +102,9 @@ type ProtocolManager struct {
odr *LesOdr
server *LesServer
serverPool *serverPool
lesTopic discv5.Topic
reqDist *requestDistributor
retriever *retrieveManager
downloader *downloader.Downloader
fetcher *lightFetcher
@ -123,12 +125,12 @@ type ProtocolManager struct {
// wait group is used for graceful shutdowns during downloading
// and processing
wg sync.WaitGroup
wg *sync.WaitGroup
}
// NewProtocolManager returns a new ethereum sub protocol manager. The Ethereum sub protocol manages peers capable
// of communicating with the ethereum network.
func NewProtocolManager(chainConfig *params.ChainConfig, lightSync bool, networkId uint64, mux *event.TypeMux, engine consensus.Engine, blockchain BlockChain, txpool txPool, chainDb ethdb.Database, odr *LesOdr, txrelay *LesTxRelay) (*ProtocolManager, error) {
func NewProtocolManager(chainConfig *params.ChainConfig, lightSync bool, networkId uint64, mux *event.TypeMux, engine consensus.Engine, peers *peerSet, blockchain BlockChain, txpool txPool, chainDb ethdb.Database, odr *LesOdr, txrelay *LesTxRelay, quitSync chan struct{}, wg *sync.WaitGroup) (*ProtocolManager, error) {
// Create the protocol manager with the base fields
manager := &ProtocolManager{
lightSync: lightSync,
@ -136,15 +138,20 @@ func NewProtocolManager(chainConfig *params.ChainConfig, lightSync bool, network
blockchain: blockchain,
chainConfig: chainConfig,
chainDb: chainDb,
odr: odr,
networkId: networkId,
txpool: txpool,
txrelay: txrelay,
odr: odr,
peers: newPeerSet(),
peers: peers,
newPeerCh: make(chan *peer),
quitSync: make(chan struct{}),
quitSync: quitSync,
wg: wg,
noMorePeers: make(chan struct{}),
}
if odr != nil {
manager.retriever = odr.retriever
manager.reqDist = odr.retriever.dist
}
// Initiate a sub-protocol for every implemented version we can handle
manager.SubProtocols = make([]p2p.Protocol, 0, len(ProtocolVersions))
for i, version := range ProtocolVersions {
@ -202,84 +209,22 @@ func NewProtocolManager(chainConfig *params.ChainConfig, lightSync bool, network
manager.downloader = downloader.New(downloader.LightSync, chainDb, manager.eventMux, blockchain.HasHeader, nil, blockchain.GetHeaderByHash,
nil, blockchain.CurrentHeader, nil, nil, nil, blockchain.GetTdByHash,
blockchain.InsertHeaderChain, nil, nil, blockchain.Rollback, removePeer)
manager.peers.notify((*downloaderPeerNotify)(manager))
manager.fetcher = newLightFetcher(manager)
}
manager.reqDist = newRequestDistributor(func() map[distPeer]struct{} {
m := make(map[distPeer]struct{})
peers := manager.peers.AllPeers()
for _, peer := range peers {
m[peer] = struct{}{}
}
return m
}, manager.quitSync)
if odr != nil {
odr.removePeer = removePeer
odr.reqDist = manager.reqDist
}
/*validator := func(block *types.Block, parent *types.Block) error {
return core.ValidateHeader(pow, block.Header(), parent.Header(), true, false)
}
heighter := func() uint64 {
return chainman.LastBlockNumberU64()
}
manager.fetcher = fetcher.New(chainman.GetBlockNoOdr, validator, nil, heighter, chainman.InsertChain, manager.removePeer)
*/
return manager, nil
}
// removePeer initiates disconnection from a peer by removing it from the peer set
func (pm *ProtocolManager) removePeer(id string) {
// Short circuit if the peer was already removed
peer := pm.peers.Peer(id)
if peer == nil {
return
}
log.Debug("Removing light Ethereum peer", "peer", id)
if err := pm.peers.Unregister(id); err != nil {
if err == errNotRegistered {
return
}
}
// Unregister the peer from the downloader and Ethereum peer set
if pm.lightSync {
pm.downloader.UnregisterPeer(id)
if pm.txrelay != nil {
pm.txrelay.removePeer(id)
}
if pm.fetcher != nil {
pm.fetcher.removePeer(peer)
}
}
// Hard disconnect at the networking layer
if peer != nil {
peer.Peer.Disconnect(p2p.DiscUselessPeer)
}
pm.peers.Unregister(id)
}
func (pm *ProtocolManager) Start(srvr *p2p.Server) {
var topicDisc *discv5.Network
if srvr != nil {
topicDisc = srvr.DiscV5
}
lesTopic := discv5.Topic("LES@" + common.Bytes2Hex(pm.blockchain.Genesis().Hash().Bytes()[0:8]))
func (pm *ProtocolManager) Start() {
if pm.lightSync {
// start sync handler
if srvr != nil { // srvr is nil during testing
pm.serverPool = newServerPool(pm.chainDb, []byte("serverPool/"), srvr, lesTopic, pm.quitSync, &pm.wg)
pm.odr.serverPool = pm.serverPool
pm.fetcher = newLightFetcher(pm)
}
go pm.syncer()
} else {
if topicDisc != nil {
go func() {
logger := log.New("topic", lesTopic)
logger.Info("Starting topic registration")
defer logger.Info("Terminated topic registration")
topicDisc.RegisterTopic(lesTopic, pm.quitSync)
}()
}
go func() {
for range pm.newPeerCh {
}
@ -342,65 +287,10 @@ func (pm *ProtocolManager) handle(p *peer) error {
}()
// Register the peer in the downloader. If the downloader considers it banned, we disconnect
if pm.lightSync {
requestHeadersByHash := func(origin common.Hash, amount int, skip int, reverse bool) error {
reqID := getNextReqID()
rq := &distReq{
getCost: func(dp distPeer) uint64 {
peer := dp.(*peer)
return peer.GetRequestCost(GetBlockHeadersMsg, amount)
},
canSend: func(dp distPeer) bool {
return dp.(*peer) == p
},
request: func(dp distPeer) func() {
peer := dp.(*peer)
cost := peer.GetRequestCost(GetBlockHeadersMsg, amount)
peer.fcServer.QueueRequest(reqID, cost)
return func() { peer.RequestHeadersByHash(reqID, cost, origin, amount, skip, reverse) }
},
}
_, ok := <-pm.reqDist.queue(rq)
if !ok {
return ErrNoPeers
}
return nil
}
requestHeadersByNumber := func(origin uint64, amount int, skip int, reverse bool) error {
reqID := getNextReqID()
rq := &distReq{
getCost: func(dp distPeer) uint64 {
peer := dp.(*peer)
return peer.GetRequestCost(GetBlockHeadersMsg, amount)
},
canSend: func(dp distPeer) bool {
return dp.(*peer) == p
},
request: func(dp distPeer) func() {
peer := dp.(*peer)
cost := peer.GetRequestCost(GetBlockHeadersMsg, amount)
peer.fcServer.QueueRequest(reqID, cost)
return func() { peer.RequestHeadersByNumber(reqID, cost, origin, amount, skip, reverse) }
},
}
_, ok := <-pm.reqDist.queue(rq)
if !ok {
return ErrNoPeers
}
return nil
}
if err := pm.downloader.RegisterPeer(p.id, ethVersion, p.HeadAndTd,
requestHeadersByHash, requestHeadersByNumber, nil, nil, nil); err != nil {
return err
}
if pm.txrelay != nil {
pm.txrelay.addPeer(p)
}
p.lock.Lock()
head := p.headInfo
p.lock.Unlock()
if pm.fetcher != nil {
pm.fetcher.addPeer(p)
pm.fetcher.announce(p, head)
}
@ -926,7 +816,7 @@ func (pm *ProtocolManager) handleMsg(p *peer) error {
}
if deliverMsg != nil {
err := pm.odr.Deliver(p, deliverMsg)
err := pm.retriever.deliver(p, deliverMsg)
if err != nil {
p.responseErrors++
if p.responseErrors > maxResponseErrors {
@ -946,3 +836,64 @@ func (self *ProtocolManager) NodeInfo() *eth.EthNodeInfo {
Head: self.blockchain.LastBlockHash(),
}
}
// downloaderPeerNotify implements peerSetNotify
type downloaderPeerNotify ProtocolManager
func (d *downloaderPeerNotify) registerPeer(p *peer) {
pm := (*ProtocolManager)(d)
requestHeadersByHash := func(origin common.Hash, amount int, skip int, reverse bool) error {
reqID := genReqID()
rq := &distReq{
getCost: func(dp distPeer) uint64 {
peer := dp.(*peer)
return peer.GetRequestCost(GetBlockHeadersMsg, amount)
},
canSend: func(dp distPeer) bool {
return dp.(*peer) == p
},
request: func(dp distPeer) func() {
peer := dp.(*peer)
cost := peer.GetRequestCost(GetBlockHeadersMsg, amount)
peer.fcServer.QueueRequest(reqID, cost)
return func() { peer.RequestHeadersByHash(reqID, cost, origin, amount, skip, reverse) }
},
}
_, ok := <-pm.reqDist.queue(rq)
if !ok {
return ErrNoPeers
}
return nil
}
requestHeadersByNumber := func(origin uint64, amount int, skip int, reverse bool) error {
reqID := genReqID()
rq := &distReq{
getCost: func(dp distPeer) uint64 {
peer := dp.(*peer)
return peer.GetRequestCost(GetBlockHeadersMsg, amount)
},
canSend: func(dp distPeer) bool {
return dp.(*peer) == p
},
request: func(dp distPeer) func() {
peer := dp.(*peer)
cost := peer.GetRequestCost(GetBlockHeadersMsg, amount)
peer.fcServer.QueueRequest(reqID, cost)
return func() { peer.RequestHeadersByNumber(reqID, cost, origin, amount, skip, reverse) }
},
}
_, ok := <-pm.reqDist.queue(rq)
if !ok {
return ErrNoPeers
}
return nil
}
pm.downloader.RegisterPeer(p.id, ethVersion, p.HeadAndTd, requestHeadersByHash, requestHeadersByNumber, nil, nil, nil)
}
func (d *downloaderPeerNotify) unregisterPeer(p *peer) {
pm := (*ProtocolManager)(d)
pm.downloader.UnregisterPeer(p.id)
}

View File

@ -25,6 +25,7 @@ import (
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth/downloader"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
@ -42,7 +43,8 @@ func expectResponse(r p2p.MsgReader, msgcode, reqID, bv uint64, data interface{}
func TestGetBlockHeadersLes1(t *testing.T) { testGetBlockHeaders(t, 1) }
func testGetBlockHeaders(t *testing.T, protocol int) {
pm, _, _ := newTestProtocolManagerMust(t, false, downloader.MaxHashFetch+15, nil)
db, _ := ethdb.NewMemDatabase()
pm := newTestProtocolManagerMust(t, false, downloader.MaxHashFetch+15, nil, nil, nil, db)
bc := pm.blockchain.(*core.BlockChain)
peer, _ := newTestPeer(t, "peer", protocol, pm, true)
defer peer.close()
@ -170,7 +172,8 @@ func testGetBlockHeaders(t *testing.T, protocol int) {
func TestGetBlockBodiesLes1(t *testing.T) { testGetBlockBodies(t, 1) }
func testGetBlockBodies(t *testing.T, protocol int) {
pm, _, _ := newTestProtocolManagerMust(t, false, downloader.MaxBlockFetch+15, nil)
db, _ := ethdb.NewMemDatabase()
pm := newTestProtocolManagerMust(t, false, downloader.MaxBlockFetch+15, nil, nil, nil, db)
bc := pm.blockchain.(*core.BlockChain)
peer, _ := newTestPeer(t, "peer", protocol, pm, true)
defer peer.close()
@ -246,7 +249,8 @@ func TestGetCodeLes1(t *testing.T) { testGetCode(t, 1) }
func testGetCode(t *testing.T, protocol int) {
// Assemble the test environment
pm, _, _ := newTestProtocolManagerMust(t, false, 4, testChainGen)
db, _ := ethdb.NewMemDatabase()
pm := newTestProtocolManagerMust(t, false, 4, testChainGen, nil, nil, db)
bc := pm.blockchain.(*core.BlockChain)
peer, _ := newTestPeer(t, "peer", protocol, pm, true)
defer peer.close()
@ -278,7 +282,8 @@ func TestGetReceiptLes1(t *testing.T) { testGetReceipt(t, 1) }
func testGetReceipt(t *testing.T, protocol int) {
// Assemble the test environment
pm, db, _ := newTestProtocolManagerMust(t, false, 4, testChainGen)
db, _ := ethdb.NewMemDatabase()
pm := newTestProtocolManagerMust(t, false, 4, testChainGen, nil, nil, db)
bc := pm.blockchain.(*core.BlockChain)
peer, _ := newTestPeer(t, "peer", protocol, pm, true)
defer peer.close()
@ -304,7 +309,8 @@ func TestGetProofsLes1(t *testing.T) { testGetReceipt(t, 1) }
func testGetProofs(t *testing.T, protocol int) {
// Assemble the test environment
pm, db, _ := newTestProtocolManagerMust(t, false, 4, testChainGen)
db, _ := ethdb.NewMemDatabase()
pm := newTestProtocolManagerMust(t, false, 4, testChainGen, nil, nil, db)
bc := pm.blockchain.(*core.BlockChain)
peer, _ := newTestPeer(t, "peer", protocol, pm, true)
defer peer.close()

View File

@ -25,7 +25,6 @@ import (
"math/big"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/consensus/ethash"
@ -132,22 +131,22 @@ func testRCL() RequestCostList {
// newTestProtocolManager creates a new protocol manager for testing purposes,
// with the given number of blocks already known, and potential notification
// channels for different events.
func newTestProtocolManager(lightSync bool, blocks int, generator func(int, *core.BlockGen)) (*ProtocolManager, ethdb.Database, *LesOdr, error) {
func newTestProtocolManager(lightSync bool, blocks int, generator func(int, *core.BlockGen), peers *peerSet, odr *LesOdr, db ethdb.Database) (*ProtocolManager, error) {
var (
evmux = new(event.TypeMux)
engine = ethash.NewFaker()
db, _ = ethdb.NewMemDatabase()
gspec = core.Genesis{
Config: params.TestChainConfig,
Alloc: core.GenesisAlloc{testBankAddress: {Balance: testBankFunds}},
}
genesis = gspec.MustCommit(db)
odr *LesOdr
chain BlockChain
chain BlockChain
)
if peers == nil {
peers = newPeerSet()
}
if lightSync {
odr = NewLesOdr(db)
chain, _ = light.NewLightChain(odr, gspec.Config, engine, evmux)
} else {
blockchain, _ := core.NewBlockChain(db, gspec.Config, engine, evmux, vm.Config{})
@ -158,9 +157,9 @@ func newTestProtocolManager(lightSync bool, blocks int, generator func(int, *cor
chain = blockchain
}
pm, err := NewProtocolManager(gspec.Config, lightSync, NetworkId, evmux, engine, chain, nil, db, odr, nil)
pm, err := NewProtocolManager(gspec.Config, lightSync, NetworkId, evmux, engine, peers, chain, nil, db, odr, nil, make(chan struct{}), new(sync.WaitGroup))
if err != nil {
return nil, nil, nil, err
return nil, err
}
if !lightSync {
srv := &LesServer{protocolManager: pm}
@ -174,20 +173,20 @@ func newTestProtocolManager(lightSync bool, blocks int, generator func(int, *cor
srv.fcManager = flowcontrol.NewClientManager(50, 10, 1000000000)
srv.fcCostStats = newCostStats(nil)
}
pm.Start(nil)
return pm, db, odr, nil
pm.Start()
return pm, nil
}
// newTestProtocolManagerMust creates a new protocol manager for testing purposes,
// with the given number of blocks already known, and potential notification
// channels for different events. In case of an error, the constructor force-
// fails the test.
func newTestProtocolManagerMust(t *testing.T, lightSync bool, blocks int, generator func(int, *core.BlockGen)) (*ProtocolManager, ethdb.Database, *LesOdr) {
pm, db, odr, err := newTestProtocolManager(lightSync, blocks, generator)
func newTestProtocolManagerMust(t *testing.T, lightSync bool, blocks int, generator func(int, *core.BlockGen), peers *peerSet, odr *LesOdr, db ethdb.Database) *ProtocolManager {
pm, err := newTestProtocolManager(lightSync, blocks, generator, peers, odr, db)
if err != nil {
t.Fatalf("Failed to create protocol manager: %v", err)
}
return pm, db, odr
return pm
}
// testTxPool is a fake, helper transaction pool for testing purposes
@ -342,30 +341,3 @@ func (p *testPeer) handshake(t *testing.T, td *big.Int, head common.Hash, headNu
func (p *testPeer) close() {
p.app.Close()
}
type testServerPool struct {
peer *peer
lock sync.RWMutex
}
func (p *testServerPool) setPeer(peer *peer) {
p.lock.Lock()
defer p.lock.Unlock()
p.peer = peer
}
func (p *testServerPool) getAllPeers() map[distPeer]struct{} {
p.lock.RLock()
defer p.lock.RUnlock()
m := make(map[distPeer]struct{})
if p.peer != nil {
m[p.peer] = struct{}{}
}
return m
}
func (p *testServerPool) adjustResponseTime(*poolEntry, time.Duration, bool) {
}

View File

@ -18,45 +18,24 @@ package les
import (
"context"
"crypto/rand"
"encoding/binary"
"sync"
"time"
"github.com/ethereum/go-ethereum/common/mclock"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/light"
"github.com/ethereum/go-ethereum/log"
)
var (
softRequestTimeout = time.Millisecond * 500
hardRequestTimeout = time.Second * 10
)
// peerDropFn is a callback type for dropping a peer detected as malicious.
type peerDropFn func(id string)
type odrPeerSelector interface {
adjustResponseTime(*poolEntry, time.Duration, bool)
}
// LesOdr implements light.OdrBackend
type LesOdr struct {
light.OdrBackend
db ethdb.Database
stop chan struct{}
removePeer peerDropFn
mlock, clock sync.Mutex
sentReqs map[uint64]*sentReq
serverPool odrPeerSelector
reqDist *requestDistributor
db ethdb.Database
stop chan struct{}
retriever *retrieveManager
}
func NewLesOdr(db ethdb.Database) *LesOdr {
func NewLesOdr(db ethdb.Database, retriever *retrieveManager) *LesOdr {
return &LesOdr{
db: db,
stop: make(chan struct{}),
sentReqs: make(map[uint64]*sentReq),
db: db,
retriever: retriever,
stop: make(chan struct{}),
}
}
@ -68,17 +47,6 @@ func (odr *LesOdr) Database() ethdb.Database {
return odr.db
}
// validatorFunc is a function that processes a message.
type validatorFunc func(ethdb.Database, *Msg) error
// sentReq is a request waiting for an answer that satisfies its valFunc
type sentReq struct {
valFunc validatorFunc
sentTo map[*peer]chan struct{}
lock sync.RWMutex // protects access to sentTo
answered chan struct{} // closed and set to nil when any peer answers it
}
const (
MsgBlockBodies = iota
MsgCode
@ -94,156 +62,29 @@ type Msg struct {
Obj interface{}
}
// Deliver is called by the LES protocol manager to deliver ODR reply messages to waiting requests
func (self *LesOdr) Deliver(peer *peer, msg *Msg) error {
var delivered chan struct{}
self.mlock.Lock()
req, ok := self.sentReqs[msg.ReqID]
self.mlock.Unlock()
if ok {
req.lock.Lock()
delivered, ok = req.sentTo[peer]
req.lock.Unlock()
}
// Retrieve tries to fetch an object from the LES network.
// If the network retrieval was successful, it stores the object in local db.
func (self *LesOdr) Retrieve(ctx context.Context, req light.OdrRequest) (err error) {
lreq := LesRequest(req)
if !ok {
return errResp(ErrUnexpectedResponse, "reqID = %v", msg.ReqID)
}
if err := req.valFunc(self.db, msg); err != nil {
peer.Log().Warn("Invalid odr response", "err", err)
return errResp(ErrInvalidResponse, "reqID = %v", msg.ReqID)
}
close(delivered)
req.lock.Lock()
delete(req.sentTo, peer)
if req.answered != nil {
close(req.answered)
req.answered = nil
}
req.lock.Unlock()
return nil
}
func (self *LesOdr) requestPeer(req *sentReq, peer *peer, delivered, timeout chan struct{}, reqWg *sync.WaitGroup) {
stime := mclock.Now()
defer func() {
req.lock.Lock()
delete(req.sentTo, peer)
req.lock.Unlock()
reqWg.Done()
}()
select {
case <-delivered:
if self.serverPool != nil {
self.serverPool.adjustResponseTime(peer.poolEntry, time.Duration(mclock.Now()-stime), false)
}
return
case <-time.After(softRequestTimeout):
close(timeout)
case <-self.stop:
return
}
select {
case <-delivered:
case <-time.After(hardRequestTimeout):
peer.Log().Debug("Request timed out hard")
go self.removePeer(peer.id)
case <-self.stop:
return
}
if self.serverPool != nil {
self.serverPool.adjustResponseTime(peer.poolEntry, time.Duration(mclock.Now()-stime), true)
}
}
// networkRequest sends a request to known peers until an answer is received
// or the context is cancelled
func (self *LesOdr) networkRequest(ctx context.Context, lreq LesOdrRequest) error {
answered := make(chan struct{})
req := &sentReq{
valFunc: lreq.Validate,
sentTo: make(map[*peer]chan struct{}),
answered: answered, // reply delivered by any peer
}
exclude := make(map[*peer]struct{})
reqWg := new(sync.WaitGroup)
reqWg.Add(1)
defer reqWg.Done()
var timeout chan struct{}
reqID := getNextReqID()
reqID := genReqID()
rq := &distReq{
getCost: func(dp distPeer) uint64 {
return lreq.GetCost(dp.(*peer))
},
canSend: func(dp distPeer) bool {
p := dp.(*peer)
_, ok := exclude[p]
return !ok && lreq.CanSend(p)
return lreq.CanSend(p)
},
request: func(dp distPeer) func() {
p := dp.(*peer)
exclude[p] = struct{}{}
delivered := make(chan struct{})
timeout = make(chan struct{})
req.lock.Lock()
req.sentTo[p] = delivered
req.lock.Unlock()
reqWg.Add(1)
cost := lreq.GetCost(p)
p.fcServer.QueueRequest(reqID, cost)
go self.requestPeer(req, p, delivered, timeout, reqWg)
return func() { lreq.Request(reqID, p) }
},
}
self.mlock.Lock()
self.sentReqs[reqID] = req
self.mlock.Unlock()
go func() {
reqWg.Wait()
self.mlock.Lock()
delete(self.sentReqs, reqID)
self.mlock.Unlock()
}()
for {
peerChn := self.reqDist.queue(rq)
select {
case <-ctx.Done():
self.reqDist.cancel(rq)
return ctx.Err()
case <-answered:
self.reqDist.cancel(rq)
return nil
case _, ok := <-peerChn:
if !ok {
return ErrNoPeers
}
}
select {
case <-ctx.Done():
return ctx.Err()
case <-answered:
return nil
case <-timeout:
}
}
}
// Retrieve tries to fetch an object from the LES network.
// If the network retrieval was successful, it stores the object in local db.
func (self *LesOdr) Retrieve(ctx context.Context, req light.OdrRequest) (err error) {
lreq := LesRequest(req)
err = self.networkRequest(ctx, lreq)
if err == nil {
if err = self.retriever.retrieve(ctx, reqID, rq, func(p distPeer, msg *Msg) error { return lreq.Validate(self.db, msg) }); err == nil {
// retrieved from network, store in db
req.StoreResult(self.db)
} else {
@ -251,9 +92,3 @@ func (self *LesOdr) Retrieve(ctx context.Context, req light.OdrRequest) (err err
}
return
}
func getNextReqID() uint64 {
var rnd [8]byte
rand.Read(rnd[:])
return binary.BigEndian.Uint64(rnd[:])
}

View File

@ -158,15 +158,15 @@ func odrContractCall(ctx context.Context, db ethdb.Database, config *params.Chai
func testOdr(t *testing.T, protocol int, expFail uint64, fn odrTestFn) {
// Assemble the test environment
pm, db, odr := newTestProtocolManagerMust(t, false, 4, testChainGen)
lpm, ldb, odr := newTestProtocolManagerMust(t, true, 0, nil)
peers := newPeerSet()
dist := newRequestDistributor(peers, make(chan struct{}))
rm := newRetrieveManager(peers, dist, nil)
db, _ := ethdb.NewMemDatabase()
ldb, _ := ethdb.NewMemDatabase()
odr := NewLesOdr(ldb, rm)
pm := newTestProtocolManagerMust(t, false, 4, testChainGen, nil, nil, db)
lpm := newTestProtocolManagerMust(t, true, 0, nil, peers, odr, ldb)
_, err1, lpeer, err2 := newTestPeerPair("peer", protocol, pm, lpm)
pool := &testServerPool{}
lpm.reqDist = newRequestDistributor(pool.getAllPeers, lpm.quitSync)
odr.reqDist = lpm.reqDist
pool.setPeer(lpeer)
odr.serverPool = pool
lpeer.hasBlock = func(common.Hash, uint64) bool { return true }
select {
case <-time.After(time.Millisecond * 100):
case err := <-err1:
@ -198,13 +198,19 @@ func testOdr(t *testing.T, protocol int, expFail uint64, fn odrTestFn) {
}
// temporarily remove peer to test odr fails
pool.setPeer(nil)
// expect retrievals to fail (except genesis block) without a les peer
peers.Unregister(lpeer.id)
time.Sleep(time.Millisecond * 10) // ensure that all peerSetNotify callbacks are executed
test(expFail)
pool.setPeer(lpeer)
// expect all retrievals to pass
peers.Register(lpeer)
time.Sleep(time.Millisecond * 10) // ensure that all peerSetNotify callbacks are executed
lpeer.lock.Lock()
lpeer.hasBlock = func(common.Hash, uint64) bool { return true }
lpeer.lock.Unlock()
test(5)
pool.setPeer(nil)
// still expect all retrievals to pass, now data should be cached locally
peers.Unregister(lpeer.id)
time.Sleep(time.Millisecond * 10) // ensure that all peerSetNotify callbacks are executed
test(5)
}

View File

@ -166,9 +166,9 @@ func (p *peer) GetRequestCost(msgcode uint64, amount int) uint64 {
// HasBlock checks if the peer has a given block
func (p *peer) HasBlock(hash common.Hash, number uint64) bool {
p.lock.RLock()
hashBlock := p.hasBlock
hasBlock := p.hasBlock
p.lock.RUnlock()
return hashBlock != nil && hashBlock(hash, number)
return hasBlock != nil && hasBlock(hash, number)
}
// SendAnnounce announces the availability of a number of blocks through
@ -433,12 +433,20 @@ func (p *peer) String() string {
)
}
// peerSetNotify is a callback interface to notify services about added or
// removed peers
type peerSetNotify interface {
registerPeer(*peer)
unregisterPeer(*peer)
}
// peerSet represents the collection of active peers currently participating in
// the Light Ethereum sub-protocol.
type peerSet struct {
peers map[string]*peer
lock sync.RWMutex
closed bool
peers map[string]*peer
lock sync.RWMutex
notifyList []peerSetNotify
closed bool
}
// newPeerSet creates a new peer set to track the active participants.
@ -448,6 +456,17 @@ func newPeerSet() *peerSet {
}
}
// notify adds a service to be notified about added or removed peers
func (ps *peerSet) notify(n peerSetNotify) {
ps.lock.Lock()
defer ps.lock.Unlock()
ps.notifyList = append(ps.notifyList, n)
for _, p := range ps.peers {
go n.registerPeer(p)
}
}
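Any service can subscribe this way; the fetcher, the request distributor and the downloader adapter in this diff all implement the two callbacks. A minimal, purely illustrative implementation (not one of the real services) could look like:

// type peerCounter struct{ n int32 }
//
// func (c *peerCounter) registerPeer(p *peer)   { atomic.AddInt32(&c.n, 1) }
// func (c *peerCounter) unregisterPeer(p *peer) { atomic.AddInt32(&c.n, -1) }
//
// Wired up with ps.notify(&peerCounter{}). Since the callbacks are launched on
// fresh goroutines (note the go statements above), implementations must be
// safe for concurrent use.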
// Register injects a new peer into the working set, or returns an error if the
// peer is already known.
func (ps *peerSet) Register(p *peer) error {
@ -462,11 +481,14 @@ func (ps *peerSet) Register(p *peer) error {
}
ps.peers[p.id] = p
p.sendQueue = newExecQueue(100)
for _, n := range ps.notifyList {
go n.registerPeer(p)
}
return nil
}
// Unregister removes a remote peer from the active set, disabling any further
// actions to/from that particular entity.
// actions to/from that particular entity. It also initiates disconnection at the networking layer.
func (ps *peerSet) Unregister(id string) error {
ps.lock.Lock()
defer ps.lock.Unlock()
@ -474,7 +496,11 @@ func (ps *peerSet) Unregister(id string) error {
if p, ok := ps.peers[id]; !ok {
return errNotRegistered
} else {
for _, n := range ps.notifyList {
go n.unregisterPeer(p)
}
p.sendQueue.quit()
p.Peer.Disconnect(p2p.DiscUselessPeer)
}
delete(ps.peers, id)
return nil

View File

@ -68,15 +68,16 @@ func tfCodeAccess(db ethdb.Database, bhash common.Hash, number uint64) light.Odr
func testAccess(t *testing.T, protocol int, fn accessTestFn) {
// Assemble the test environment
pm, db, _ := newTestProtocolManagerMust(t, false, 4, testChainGen)
lpm, ldb, odr := newTestProtocolManagerMust(t, true, 0, nil)
peers := newPeerSet()
dist := newRequestDistributor(peers, make(chan struct{}))
rm := newRetrieveManager(peers, dist, nil)
db, _ := ethdb.NewMemDatabase()
ldb, _ := ethdb.NewMemDatabase()
odr := NewLesOdr(ldb, rm)
pm := newTestProtocolManagerMust(t, false, 4, testChainGen, nil, nil, db)
lpm := newTestProtocolManagerMust(t, true, 0, nil, peers, odr, ldb)
_, err1, lpeer, err2 := newTestPeerPair("peer", protocol, pm, lpm)
pool := &testServerPool{}
lpm.reqDist = newRequestDistributor(pool.getAllPeers, lpm.quitSync)
odr.reqDist = lpm.reqDist
pool.setPeer(lpeer)
odr.serverPool = pool
lpeer.hasBlock = func(common.Hash, uint64) bool { return true }
select {
case <-time.After(time.Millisecond * 100):
case err := <-err1:
@ -108,10 +109,16 @@ func testAccess(t *testing.T, protocol int, fn accessTestFn) {
}
// temporarily remove peer to test odr fails
pool.setPeer(nil)
peers.Unregister(lpeer.id)
time.Sleep(time.Millisecond * 10) // ensure that all peerSetNotify callbacks are executed
// expect retrievals to fail (except genesis block) without a les peer
test(0)
pool.setPeer(lpeer)
peers.Register(lpeer)
time.Sleep(time.Millisecond * 10) // ensure that all peerSetNotify callbacks are executed
lpeer.lock.Lock()
lpeer.hasBlock = func(common.Hash, uint64) bool { return true }
lpeer.lock.Unlock()
// expect all retrievals to pass
test(5)
}

les/retrieve.go Normal file (395 lines)
View File

@ -0,0 +1,395 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package les implements on-demand retrieval capable state and chain objects
// for the Ethereum Light Client.
package les
import (
"context"
"crypto/rand"
"encoding/binary"
"sync"
"time"
"github.com/ethereum/go-ethereum/common/mclock"
)
var (
retryQueue = time.Millisecond * 100
softRequestTimeout = time.Millisecond * 500
hardRequestTimeout = time.Second * 10
)
// retrieveManager is a layer on top of requestDistributor which takes care of
// matching replies by request ID and handles timeouts and resends if necessary.
type retrieveManager struct {
dist *requestDistributor
peers *peerSet
serverPool peerSelector
lock sync.RWMutex
sentReqs map[uint64]*sentReq
}
// validatorFunc is a function that processes a reply message
type validatorFunc func(distPeer, *Msg) error
// peerSelector receives feedback info about response times and timeouts
type peerSelector interface {
adjustResponseTime(*poolEntry, time.Duration, bool)
}
// sentReq represents a request sent and tracked by retrieveManager
type sentReq struct {
rm *retrieveManager
req *distReq
id uint64
validate validatorFunc
eventsCh chan reqPeerEvent
stopCh chan struct{}
stopped bool
err error
lock sync.RWMutex // protect access to sentTo map
sentTo map[distPeer]sentReqToPeer
reqQueued bool // a request has been queued but not sent
reqSent bool // a request has been sent but not timed out
reqSrtoCount int // number of requests that reached soft (but not hard) timeout
}
// sentReqToPeer notifies the request-from-peer goroutine (tryRequest) about a response
// delivered by the given peer. Only one delivery is allowed per request per peer,
// after which delivered is set to true, the validity of the response is sent on the
// valid channel and no more responses are accepted.
type sentReqToPeer struct {
delivered bool
valid chan bool
}
// reqPeerEvent is sent by the request-from-peer goroutine (tryRequest) to the
// request state machine (retrieveLoop) through the eventsCh channel.
type reqPeerEvent struct {
event int
peer distPeer
}
const (
rpSent = iota // if peer == nil, not sent (no suitable peers)
rpSoftTimeout
rpHardTimeout
rpDeliveredValid
rpDeliveredInvalid
)
// newRetrieveManager creates the retrieve manager
func newRetrieveManager(peers *peerSet, dist *requestDistributor, serverPool peerSelector) *retrieveManager {
return &retrieveManager{
peers: peers,
dist: dist,
serverPool: serverPool,
sentReqs: make(map[uint64]*sentReq),
}
}
// retrieve sends a request (to multiple peers if necessary) and waits for an answer
// that is delivered through the deliver function and successfully validated by the
// validator callback. It returns when a valid answer is delivered or the context is
// cancelled.
func (rm *retrieveManager) retrieve(ctx context.Context, reqID uint64, req *distReq, val validatorFunc) error {
sentReq := rm.sendReq(reqID, req, val)
select {
case <-sentReq.stopCh:
case <-ctx.Done():
sentReq.stop(ctx.Err())
}
return sentReq.getError()
}
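A hedged usage sketch of retrieve: the distReq closures follow the getCost/canSend/request shapes visible elsewhere in this diff, but the cost constant and the message send are placeholders, and rm is assumed to come from newRetrieveManager (imports: context, time):

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

rq := &distReq{
	getCost: func(dp distPeer) uint64 { return 1 }, // placeholder flow-control cost
	canSend: func(dp distPeer) bool { return true },
	request: func(dp distPeer) func() {
		return func() { /* send the actual protocol message to dp here */ }
	},
}
err := rm.retrieve(ctx, genReqID(), rq, func(p distPeer, msg *Msg) error {
	return nil // nil marks the reply valid; an error counts as an invalid delivery
})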
// sendReq starts a process that keeps trying to retrieve a valid answer for a
// request from any suitable peers until it is stopped or succeeds.
func (rm *retrieveManager) sendReq(reqID uint64, req *distReq, val validatorFunc) *sentReq {
r := &sentReq{
rm: rm,
req: req,
id: reqID,
sentTo: make(map[distPeer]sentReqToPeer),
stopCh: make(chan struct{}),
eventsCh: make(chan reqPeerEvent, 10),
validate: val,
}
canSend := req.canSend
req.canSend = func(p distPeer) bool {
// add an extra check to canSend: the request has not been sent to the same peer before
r.lock.RLock()
_, sent := r.sentTo[p]
r.lock.RUnlock()
return !sent && canSend(p)
}
request := req.request
req.request = func(p distPeer) func() {
// before actually sending the request, put an entry into the sentTo map
r.lock.Lock()
r.sentTo[p] = sentReqToPeer{false, make(chan bool, 1)}
r.lock.Unlock()
return request(p)
}
rm.lock.Lock()
rm.sentReqs[reqID] = r
rm.lock.Unlock()
go r.retrieveLoop()
return r
}
// deliver is called by the LES protocol manager to deliver reply messages to waiting requests
func (rm *retrieveManager) deliver(peer distPeer, msg *Msg) error {
rm.lock.RLock()
req, ok := rm.sentReqs[msg.ReqID]
rm.lock.RUnlock()
if ok {
return req.deliver(peer, msg)
}
return errResp(ErrUnexpectedResponse, "reqID = %v", msg.ReqID)
}
// reqStateFn represents a state of the retrieve loop state machine
type reqStateFn func() reqStateFn
// retrieveLoop is the retrieval state machine event loop
func (r *sentReq) retrieveLoop() {
go r.tryRequest()
r.reqQueued = true
state := r.stateRequesting
for state != nil {
state = state()
}
r.rm.lock.Lock()
delete(r.rm.sentReqs, r.id)
r.rm.lock.Unlock()
}
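retrieveLoop drives a function-typed state machine: each reqStateFn does one wait-and-transition step and returns the next state, and a nil state ends the loop. A minimal standalone sketch of the pattern, independent of les:

package main

import "fmt"

type stateFn func() stateFn

func main() {
	n := 0
	var counting stateFn
	counting = func() stateFn {
		n++
		if n == 3 {
			return nil // a nil state terminates the loop
		}
		return counting
	}
	for state := counting; state != nil; {
		state = state()
	}
	fmt.Println(n) // prints 3
}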
// stateRequesting: a request has been queued or sent recently; when it reaches soft timeout,
// a new request is sent to a new peer
func (r *sentReq) stateRequesting() reqStateFn {
select {
case ev := <-r.eventsCh:
r.update(ev)
switch ev.event {
case rpSent:
if ev.peer == nil {
// request send failed, no more suitable peers
if r.waiting() {
// we are already waiting for sent requests which may succeed so keep waiting
return r.stateNoMorePeers
}
// nothing to wait for, no more peers to ask, return with error
r.stop(ErrNoPeers)
// no need to go to stopped state because waiting() already returned false
return nil
}
case rpSoftTimeout:
// last request timed out, try asking a new peer
go r.tryRequest()
r.reqQueued = true
return r.stateRequesting
case rpDeliveredValid:
r.stop(nil)
return r.stateStopped
}
return r.stateRequesting
case <-r.stopCh:
return r.stateStopped
}
}
// stateNoMorePeers: could not send more requests because no suitable peers are available.
// Peers may become suitable for a certain request later or new peers may appear so we
// keep trying.
func (r *sentReq) stateNoMorePeers() reqStateFn {
select {
case <-time.After(retryQueue):
go r.tryRequest()
r.reqQueued = true
return r.stateRequesting
case ev := <-r.eventsCh:
r.update(ev)
if ev.event == rpDeliveredValid {
r.stop(nil)
return r.stateStopped
}
return r.stateNoMorePeers
case <-r.stopCh:
return r.stateStopped
}
}
// stateStopped: request succeeded or cancelled, just waiting for some peers
// to either answer or time out hard
func (r *sentReq) stateStopped() reqStateFn {
for r.waiting() {
r.update(<-r.eventsCh)
}
return nil
}
// update updates the queued/sent flags and timed out peers counter according to the event
func (r *sentReq) update(ev reqPeerEvent) {
switch ev.event {
case rpSent:
r.reqQueued = false
if ev.peer != nil {
r.reqSent = true
}
case rpSoftTimeout:
r.reqSent = false
r.reqSrtoCount++
case rpHardTimeout, rpDeliveredValid, rpDeliveredInvalid:
r.reqSrtoCount--
}
}
// waiting returns true if the retrieval mechanism is waiting for an answer from
// any peer
func (r *sentReq) waiting() bool {
return r.reqQueued || r.reqSent || r.reqSrtoCount > 0
}
// tryRequest tries to send the request to a new peer and waits for it to either
// succeed or time out if it has been sent. It also sends the appropriate reqPeerEvent
// messages to the request's event channel.
func (r *sentReq) tryRequest() {
sent := r.rm.dist.queue(r.req)
var p distPeer
select {
case p = <-sent:
case <-r.stopCh:
if r.rm.dist.cancel(r.req) {
p = nil
} else {
p = <-sent
}
}
r.eventsCh <- reqPeerEvent{rpSent, p}
if p == nil {
return
}
reqSent := mclock.Now()
srto, hrto := false, false
r.lock.RLock()
s, ok := r.sentTo[p]
r.lock.RUnlock()
if !ok {
panic("sentReq: request reported as sent to an unknown peer")
}
defer func() {
// send feedback to server pool and remove peer if hard timeout happened
pp, ok := p.(*peer)
if ok && r.rm.serverPool != nil {
respTime := time.Duration(mclock.Now() - reqSent)
r.rm.serverPool.adjustResponseTime(pp.poolEntry, respTime, srto)
}
if hrto && ok {
pp.Log().Debug("Request timed out hard")
if r.rm.peers != nil {
r.rm.peers.Unregister(pp.id)
}
}
r.lock.Lock()
delete(r.sentTo, p)
r.lock.Unlock()
}()
select {
case ok := <-s.valid:
if ok {
r.eventsCh <- reqPeerEvent{rpDeliveredValid, p}
} else {
r.eventsCh <- reqPeerEvent{rpDeliveredInvalid, p}
}
return
case <-time.After(softRequestTimeout):
srto = true
r.eventsCh <- reqPeerEvent{rpSoftTimeout, p}
}
select {
case ok := <-s.valid:
if ok {
r.eventsCh <- reqPeerEvent{rpDeliveredValid, p}
} else {
r.eventsCh <- reqPeerEvent{rpDeliveredInvalid, p}
}
case <-time.After(hardRequestTimeout):
hrto = true
r.eventsCh <- reqPeerEvent{rpHardTimeout, p}
}
}
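tryRequest waits in two stages: after softRequestTimeout it reports a soft timeout (letting the state machine ask another peer) but keeps listening for a late answer, and only after a further hardRequestTimeout does it give up on the peer. A minimal sketch of that two-stage select; awaitReply is hypothetical and a plain bool channel stands in for s.valid (import: time):

func awaitReply(result <-chan bool, soft, hard time.Duration) string {
	select {
	case ok := <-result:
		if ok {
			return "delivered valid"
		}
		return "delivered invalid"
	case <-time.After(soft):
		// soft timeout: report it, but keep waiting for the reply
	}
	select {
	case ok := <-result:
		if ok {
			return "delivered valid"
		}
		return "delivered invalid"
	case <-time.After(hard):
		return "hard timeout"
	}
}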
// deliver a reply belonging to this request
func (r *sentReq) deliver(peer distPeer, msg *Msg) error {
r.lock.Lock()
defer r.lock.Unlock()
s, ok := r.sentTo[peer]
if !ok || s.delivered {
return errResp(ErrUnexpectedResponse, "reqID = %v", msg.ReqID)
}
valid := r.validate(peer, msg) == nil
r.sentTo[peer] = sentReqToPeer{true, s.valid}
s.valid <- valid
if !valid {
return errResp(ErrInvalidResponse, "reqID = %v", msg.ReqID)
}
return nil
}
// stop stops the retrieval process and sets an error code that will be returned
// by getError
func (r *sentReq) stop(err error) {
r.lock.Lock()
if !r.stopped {
r.stopped = true
r.err = err
close(r.stopCh)
}
r.lock.Unlock()
}
// getError returns any retrieval error (either internally generated or set by the
// stop function) after stopCh has been closed
func (r *sentReq) getError() error {
return r.err
}
// genReqID generates a new random request ID
func genReqID() uint64 {
var rnd [8]byte
rand.Read(rnd[:])
return binary.BigEndian.Uint64(rnd[:])
}


@ -32,6 +32,7 @@ import (
"github.com/ethereum/go-ethereum/light"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/discv5"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
)
@ -41,17 +42,24 @@ type LesServer struct {
fcManager *flowcontrol.ClientManager // nil if our node is client only
fcCostStats *requestCostStats
defParams *flowcontrol.ServerParams
lesTopic discv5.Topic
quitSync chan struct{}
stopped bool
}
func NewLesServer(eth *eth.Ethereum, config *eth.Config) (*LesServer, error) {
pm, err := NewProtocolManager(eth.BlockChain().Config(), false, config.NetworkId, eth.EventMux(), eth.Engine(), eth.BlockChain(), eth.TxPool(), eth.ChainDb(), nil, nil)
quitSync := make(chan struct{})
pm, err := NewProtocolManager(eth.BlockChain().Config(), false, config.NetworkId, eth.EventMux(), eth.Engine(), newPeerSet(), eth.BlockChain(), eth.TxPool(), eth.ChainDb(), nil, nil, quitSync, new(sync.WaitGroup))
if err != nil {
return nil, err
}
pm.blockLoop()
srv := &LesServer{protocolManager: pm}
srv := &LesServer{
protocolManager: pm,
quitSync: quitSync,
lesTopic: lesTopic(eth.BlockChain().Genesis().Hash()),
}
pm.server = srv
srv.defParams = &flowcontrol.ServerParams{
@ -69,7 +77,14 @@ func (s *LesServer) Protocols() []p2p.Protocol {
// Start starts the LES server
func (s *LesServer) Start(srvr *p2p.Server) {
s.protocolManager.Start(srvr)
s.protocolManager.Start()
go func() {
logger := log.New("topic", s.lesTopic)
logger.Info("Starting topic registration")
defer logger.Info("Terminated topic registration")
srvr.DiscV5.RegisterTopic(s.lesTopic, s.quitSync)
}()
}
// Stop stops the LES service


@ -102,6 +102,8 @@ type serverPool struct {
wg *sync.WaitGroup
connWg sync.WaitGroup
topic discv5.Topic
discSetPeriod chan time.Duration
discNodes chan *discv5.Node
discLookups chan bool
@ -118,11 +120,9 @@ type serverPool struct {
}
// newServerPool creates a new serverPool instance
func newServerPool(db ethdb.Database, dbPrefix []byte, server *p2p.Server, topic discv5.Topic, quit chan struct{}, wg *sync.WaitGroup) *serverPool {
func newServerPool(db ethdb.Database, quit chan struct{}, wg *sync.WaitGroup) *serverPool {
pool := &serverPool{
db: db,
dbKey: append(dbPrefix, []byte(topic)...),
server: server,
quit: quit,
wg: wg,
entries: make(map[discover.NodeID]*poolEntry),
@ -135,19 +135,25 @@ func newServerPool(db ethdb.Database, dbPrefix []byte, server *p2p.Server, topic
}
pool.knownQueue = newPoolEntryQueue(maxKnownEntries, pool.removeEntry)
pool.newQueue = newPoolEntryQueue(maxNewEntries, pool.removeEntry)
wg.Add(1)
pool.loadNodes()
pool.checkDial()
return pool
}
func (pool *serverPool) start(server *p2p.Server, topic discv5.Topic) {
pool.server = server
pool.topic = topic
pool.dbKey = append([]byte("serverPool/"), []byte(topic)...)
pool.wg.Add(1)
pool.loadNodes()
go pool.eventLoop()
pool.checkDial()
if pool.server.DiscV5 != nil {
pool.discSetPeriod = make(chan time.Duration, 1)
pool.discNodes = make(chan *discv5.Node, 100)
pool.discLookups = make(chan bool, 100)
go pool.server.DiscV5.SearchTopic(topic, pool.discSetPeriod, pool.discNodes, pool.discLookups)
go pool.server.DiscV5.SearchTopic(pool.topic, pool.discSetPeriod, pool.discNodes, pool.discLookups)
}
go pool.eventLoop()
return pool
}
// connect should be called upon any incoming connection. If the connection has been
@ -485,7 +491,7 @@ func (pool *serverPool) checkDial() {
// dial initiates a new connection
func (pool *serverPool) dial(entry *poolEntry, knownSelected bool) {
if entry.state != psNotConnected {
if pool.server == nil || entry.state != psNotConnected {
return
}
entry.state = psDialed


@ -39,26 +39,28 @@ type LesTxRelay struct {
reqDist *requestDistributor
}
func NewLesTxRelay() *LesTxRelay {
return &LesTxRelay{
func NewLesTxRelay(ps *peerSet, reqDist *requestDistributor) *LesTxRelay {
r := &LesTxRelay{
txSent: make(map[common.Hash]*ltrInfo),
txPending: make(map[common.Hash]struct{}),
ps: ps,
reqDist: reqDist,
}
ps.notify(r)
return r
}
func (self *LesTxRelay) addPeer(p *peer) {
func (self *LesTxRelay) registerPeer(p *peer) {
self.lock.Lock()
defer self.lock.Unlock()
self.ps.Register(p)
self.peerList = self.ps.AllPeers()
}
func (self *LesTxRelay) removePeer(id string) {
func (self *LesTxRelay) unregisterPeer(p *peer) {
self.lock.Lock()
defer self.lock.Unlock()
self.ps.Unregister(id)
self.peerList = self.ps.AllPeers()
}
@ -112,7 +114,7 @@ func (self *LesTxRelay) send(txs types.Transactions, count int) {
pp := p
ll := list
reqID := getNextReqID()
reqID := genReqID()
rq := &distReq{
getCost: func(dp distPeer) uint64 {
peer := dp.(*peer)


@ -360,7 +360,7 @@ func (pool *TxPool) validateTx(ctx context.Context, tx *types.Transaction) error
currentState := pool.currentState()
if n, err := currentState.GetNonce(ctx, from); err == nil {
if n > tx.Nonce() {
return core.ErrNonce
return core.ErrNonceTooLow
}
} else {
return err


@ -37,6 +37,9 @@ package go;
import android.test.InstrumentationTestCase;
import android.test.MoreAsserts;
import java.math.BigInteger;
import java.util.Arrays;
import org.ethereum.geth.*;
public class AndroidTest extends InstrumentationTestCase {
@ -115,6 +118,32 @@ public class AndroidTest extends InstrumentationTestCase {
fail(e.toString());
}
}
// Tests that recovering transaction signers works for both Homestead and EIP155
// signatures. Regression test for go-ethereum issue #14599.
public void testIssue14599() {
try {
byte[] preEIP155RLP = new BigInteger("f901fc8032830138808080b901ae60056013565b6101918061001d6000396000f35b3360008190555056006001600060e060020a6000350480630a874df61461003a57806341c0e1b514610058578063a02b161e14610066578063dbbdf0831461007757005b610045600435610149565b80600160a060020a031660005260206000f35b610060610161565b60006000f35b6100716004356100d4565b60006000f35b61008560043560243561008b565b60006000f35b600054600160a060020a031632600160a060020a031614156100ac576100b1565b6100d0565b8060018360005260205260406000208190555081600060005260206000a15b5050565b600054600160a060020a031633600160a060020a031614158015610118575033600160a060020a0316600182600052602052604060002054600160a060020a031614155b61012157610126565b610146565b600060018260005260205260406000208190555080600060005260206000a15b50565b60006001826000526020526040600020549050919050565b600054600160a060020a031633600160a060020a0316146101815761018f565b600054600160a060020a0316ff5b561ca0c5689ed1ad124753d54576dfb4b571465a41900a1dff4058d8adf16f752013d0a01221cbd70ec28c94a3b55ec771bcbc70778d6ee0b51ca7ea9514594c861b1884", 16).toByteArray();
preEIP155RLP = Arrays.copyOfRange(preEIP155RLP, 1, preEIP155RLP.length);
byte[] postEIP155RLP = new BigInteger("f86b80847735940082520894ef5bbb9bba2e1ca69ef81b23a8727d889f3ef0a1880de0b6b3a7640000802ba06fef16c44726a102e6d55a651740636ef8aec6df3ebf009e7b0c1f29e4ac114aa057e7fbc69760b522a78bb568cfc37a58bfdcf6ea86cb8f9b550263f58074b9cc", 16).toByteArray();
postEIP155RLP = Arrays.copyOfRange(postEIP155RLP, 1, postEIP155RLP.length);
Transaction preEIP155 = new Transaction(preEIP155RLP);
Transaction postEIP155 = new Transaction(postEIP155RLP);
preEIP155.getFrom(null); // Homestead should accept homestead
preEIP155.getFrom(new BigInt(4)); // EIP155 should accept homestead (missing chain ID)
postEIP155.getFrom(new BigInt(4)); // EIP155 should accept EIP 155
try {
postEIP155.getFrom(null);
fail("EIP155 transaction accepted by Homestead");
} catch (Exception e) {}
} catch (Exception e) {
fail(e.toString());
}
}
}


@ -264,8 +264,11 @@ func (tx *Transaction) GetHash() *Hash { return &Hash{tx.tx.Hash()} }
func (tx *Transaction) GetSigHash() *Hash { return &Hash{tx.tx.SigHash(types.HomesteadSigner{})} }
func (tx *Transaction) GetCost() *BigInt { return &BigInt{tx.tx.Cost()} }
func (tx *Transaction) GetFrom() (address *Address, _ error) {
from, err := types.Sender(types.HomesteadSigner{}, tx.tx)
func (tx *Transaction) GetFrom(chainID *BigInt) (address *Address, _ error) {
if chainID == nil { // Null passed from mobile app
chainID = new(BigInt)
}
from, err := types.Sender(types.NewEIP155Signer(chainID.bigint), tx.tx)
return &Address{from}, err
}
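The wrapper above folds both cases into a single EIP155 signer; this works because the EIP155 signer falls back to Homestead rules for unprotected transactions, which the testIssue14599 assertions above rely on. A hedged sketch of the same recovery written against the library API shown in this hunk; senderOf is a hypothetical helper:

import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// senderOf recovers a transaction's signer; a nil chainID means a
// pre-EIP155 (Homestead) signature is expected.
func senderOf(tx *types.Transaction, chainID *big.Int) (common.Address, error) {
	if chainID == nil {
		return types.Sender(types.HomesteadSigner{}, tx)
	}
	return types.Sender(types.NewEIP155Signer(chainID), tx)
}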


@ -43,7 +43,11 @@ func (ctx *ServiceContext) OpenDatabase(name string, cache int, handles int) (et
if ctx.config.DataDir == "" {
return ethdb.NewMemDatabase()
}
return ethdb.NewLDBDatabase(ctx.config.resolvePath(name), cache, handles)
db, err := ethdb.NewLDBDatabase(ctx.config.resolvePath(name), cache, handles)
if err != nil {
return nil, err
}
return db, nil
}
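The hunk above avoids a classic Go pitfall: returning the concrete *LDBDatabase straight through the ethdb.Database interface return wraps a typed nil on failure, so the result compares non-nil even though the error is set. A self-contained sketch of the pitfall with hypothetical stand-in types:

package main

import "fmt"

type DB interface{ Close() }

type ldb struct{}

func (*ldb) Close() {}

func open(fail bool) (*ldb, error) {
	if fail {
		return nil, fmt.Errorf("open failed")
	}
	return &ldb{}, nil
}

// openIface forwards the concrete results straight through, reproducing
// the bug fixed above: the interface wraps a typed nil on error.
func openIface(fail bool) (DB, error) {
	return open(fail)
}

func main() {
	db, err := openIface(true)
	fmt.Println(db == nil, err) // prints "false open failed": db looks non-nil despite the error
}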
// ResolvePath resolves a user path into the data directory if that was relative


@ -23,7 +23,7 @@ import (
const (
VersionMajor = 1 // Major version component of the current release
VersionMinor = 6 // Minor version component of the current release
VersionPatch = 2 // Patch version component of the current release
VersionPatch = 6 // Patch version component of the current release
VersionMeta = "stable" // Version metadata to append to the version string
)


@ -55,7 +55,7 @@ type Decoder interface {
// To decode into a pointer, Decode will decode into the value pointed
// to. If the pointer is nil, a new value of the pointer's element
// type is allocated. If the pointer is non-nil, the existing value
// will reused.
// will be reused.
//
// To decode into a struct, Decode expects the input to be an RLP
// list. The decoded elements of the list are assigned to each public
@ -290,7 +290,7 @@ func makeListDecoder(typ reflect.Type, tag tags) (decoder, error) {
}
case tag.tail:
// A slice with "tail" tag can occur as the last field
// of a struct and is upposed to swallow all remaining
// of a struct and is supposed to swallow all remaining
// list elements. The struct decoder already called s.List,
// proceed directly to decoding the elements.
dec = func(s *Stream, val reflect.Value) error {
@ -741,7 +741,7 @@ func (s *Stream) uint(maxbits int) (uint64, error) {
}
// Bool reads an RLP string of up to 1 byte and returns its contents
// as an boolean. If the input does not contain an RLP string, the
// as a boolean. If the input does not contain an RLP string, the
// returned error will be ErrExpectedString.
func (s *Stream) Bool() (bool, error) {
num, err := s.uint(8)

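The doc hunks above describe decoding into pointers and structs; a small runnable sketch of the struct case, using the package's public API (the item type is illustrative):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

type item struct {
	Name  string
	Count uint
}

func main() {
	enc, _ := rlp.EncodeToBytes(item{Name: "widget", Count: 3})

	var dec item
	if err := rlp.DecodeBytes(enc, &dec); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("%+v\n", dec) // {Name:widget Count:3}
}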

@ -17,13 +17,13 @@
/*
Package rlp implements the RLP serialization format.
The purpose of RLP (Recursive Linear Prefix) qis to encode arbitrarily
The purpose of RLP (Recursive Linear Prefix) is to encode arbitrarily
nested arrays of binary data, and RLP is the main encoding method used
to serialize objects in Ethereum. The only purpose of RLP is to encode
structure; encoding specific atomic data types (eg. strings, ints,
floats) is left up to higher-order protocols; in Ethereum integers
must be represented in big endian binary form with no leading zeroes
(thus making the integer value zero be equivalent to the empty byte
(thus making the integer value zero equivalent to the empty byte
array).
RLP values are distinguished by a type tag. The type tag precedes the


@ -478,7 +478,7 @@ func writeEncoder(val reflect.Value, w *encbuf) error {
// with a pointer receiver.
func writeEncoderNoPtr(val reflect.Value, w *encbuf) error {
if !val.CanAddr() {
// We can't get the address. It would be possible make the
// We can't get the address. It would be possible to make the
// value addressable by creating a shallow copy, but this
// creates other problems so we're not doing it (yet).
//
@ -583,7 +583,7 @@ func makePtrWriter(typ reflect.Type) (writer, error) {
return writer, err
}
// putint writes i to the beginning of b in with big endian byte
// putint writes i to the beginning of b in big endian byte
// order, using the least number of bytes needed to represent i.
func putint(b []byte, i uint64) (size int) {
switch {


@ -22,7 +22,7 @@ import (
)
// RawValue represents an encoded RLP value and can be used to delay
// RLP decoding or precompute an encoding. Note that the decoder does
// RLP decoding or to precompute an encoding. Note that the decoder does
// not verify whether the content of RawValues is valid RLP.
type RawValue []byte
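A hedged sketch of what the comment describes: keeping one field encoded until a tag has been inspected (the envelope type is illustrative):

type envelope struct {
	Kind    uint
	Payload rlp.RawValue // stays encoded until Kind is known
}

// After rlp.DecodeBytes(enc, &env), switch on env.Kind and then
// rlp.DecodeBytes(env.Payload, &concreteValue) to finish decoding.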

swarm/dev/.dockerignore Normal file

@ -0,0 +1,2 @@
bin/*
cluster/*

swarm/dev/.gitignore vendored Normal file

@ -0,0 +1,2 @@
bin/*
cluster/*

swarm/dev/Dockerfile Normal file

@ -0,0 +1,42 @@
FROM ubuntu:xenial
# install build + test dependencies
RUN apt-get update && \
apt-get install --yes --no-install-recommends \
ca-certificates \
curl \
fuse \
g++ \
gcc \
git \
iproute2 \
iputils-ping \
less \
libc6-dev \
make \
pkg-config \
&& \
apt-get clean
# install Go
ENV GO_VERSION 1.8.1
RUN curl -fSLo golang.tar.gz "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" && \
tar -xzf golang.tar.gz -C /usr/local && \
rm golang.tar.gz
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
# install docker CLI
RUN curl -fSLo docker.tar.gz https://get.docker.com/builds/Linux/x86_64/docker-17.04.0-ce.tgz && \
tar -xzf docker.tar.gz -C /usr/local/bin --strip-components=1 docker/docker && \
rm docker.tar.gz
# install jq
RUN curl -fSLo /usr/local/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64 && \
chmod +x /usr/local/bin/jq
# install govendor
RUN go get -u github.com/kardianos/govendor
# add custom bashrc
ADD bashrc /root/.bashrc

swarm/dev/Makefile Normal file

@ -0,0 +1,14 @@
.PHONY: build cluster test
default: build
build:
go build -o bin/swarm github.com/ethereum/go-ethereum/cmd/swarm
go build -o bin/geth github.com/ethereum/go-ethereum/cmd/geth
go build -o bin/bootnode github.com/ethereum/go-ethereum/cmd/bootnode
cluster: build
scripts/boot-cluster.sh
test:
go test -v github.com/ethereum/go-ethereum/swarm/...

swarm/dev/README.md Normal file

@ -0,0 +1,20 @@
Swarm development environment
=============================
The Swarm development environment is a Linux bash shell which can be run in a
Docker container and provides a predictable build and test environment.
### Start the Docker container
Run the `run.sh` script to build the Docker image and run it; you will then be
at a bash prompt inside the `swarm/dev` directory.
### Build binaries
Run `make` to build the `swarm`, `geth` and `bootnode` binaries into the
`swarm/dev/bin` directory.
### Boot a cluster
Run `make cluster` to start a 3 node Swarm cluster, or run
`scripts/boot-cluster.sh --size N` to boot a cluster of size N.

swarm/dev/bashrc Normal file

@ -0,0 +1,21 @@
export ROOT="${GOPATH}/src/github.com/ethereum/go-ethereum"
export PATH="${ROOT}/swarm/dev/bin:${PATH}"
cd "${ROOT}/swarm/dev"
cat <<WELCOME
=============================================
Welcome to the swarm development environment.
- Run 'make' to build the swarm, geth and bootnode binaries
- Run 'make test' to run the swarm unit tests
- Run 'make cluster' to start a swarm cluster
- Run 'exit' to exit the development environment
See the 'scripts' directory for some useful scripts.
=============================================
WELCOME

swarm/dev/run.sh Executable file

@ -0,0 +1,90 @@
#!/usr/bin/env bash
#
# A script to build and run the Swarm development environment using Docker.
set -e
ROOT="$(cd "$(dirname "$0")/../.." && pwd)"
# DEFAULT_NAME is the default name for the Docker image and container
DEFAULT_NAME="swarm-dev"
usage() {
cat >&2 <<USAGE
usage: $0 [options]
Build and run the Swarm development environment.
Depends on Docker being installed locally.
OPTIONS:
-n, --name NAME Docker image and container name [default: ${DEFAULT_NAME}]
-d, --docker-args ARGS Custom args to pass to 'docker run' (e.g. '-p 8000:8000' to expose a port)
-h, --help Show this message
USAGE
}
main() {
local name="${DEFAULT_NAME}"
local docker_args=""
parse_args "$@"
build_image
run_image
}
parse_args() {
while true; do
case "$1" in
-h | --help)
usage
exit 0
;;
-n | --name)
if [[ -z "$2" ]]; then
echo "ERROR: --name flag requires an argument" >&2
exit 1
fi
name="$2"
shift 2
;;
-d | --docker-args)
if [[ -z "$2" ]]; then
echo "ERROR: --docker-args flag requires an argument" >&2
exit 1
fi
docker_args="$2"
shift 2
;;
*)
break
;;
esac
done
if [[ $# -ne 0 ]]; then
usage
echo "ERROR: invalid arguments" >&2
exit 1
fi
}
build_image() {
docker build --tag "${name}" "${ROOT}/swarm/dev"
}
run_image() {
exec docker run \
--privileged \
--interactive \
--tty \
--rm \
--hostname "${name}" \
--name "${name}" \
--volume "${ROOT}:/go/src/github.com/ethereum/go-ethereum" \
--volume "/var/run/docker.sock:/var/run/docker.sock" \
${docker_args} \
"${name}" \
/bin/bash
}
main "$@"

swarm/dev/scripts/boot-cluster.sh Executable file

@ -0,0 +1,288 @@
#!/bin/bash
#
# A script to boot a dev swarm cluster on a Linux host (typically in a Docker
# container started with swarm/dev/run.sh).
#
# The cluster contains a bootnode, a geth node and multiple swarm nodes, with
# each node having its own data directory in a base directory passed with the
# --dir flag (default is swarm/dev/cluster).
#
# To avoid using different ports for each node and to make networking more
# realistic, each node gets its own network namespace with IPs assigned from
# the 192.168.33.0/24 subnet:
#
# bootnode: 192.168.33.2
# geth: 192.168.33.3
# swarm: 192.168.33.10{1,2,...,n}
set -e
ROOT="$(cd "$(dirname "$0")/../../.." && pwd)"
source "${ROOT}/swarm/dev/scripts/util.sh"
# DEFAULT_BASE_DIR is the default base directory to store node data
DEFAULT_BASE_DIR="${ROOT}/swarm/dev/cluster"
# DEFAULT_CLUSTER_SIZE is the default swarm cluster size
DEFAULT_CLUSTER_SIZE=3
# Linux bridge configuration for connecting the node network namespaces
BRIDGE_NAME="swarmbr0"
BRIDGE_IP="192.168.33.1"
# static bootnode configuration
BOOTNODE_IP="192.168.33.2"
BOOTNODE_PORT="30301"
BOOTNODE_KEY="32078f313bea771848db70745225c52c00981589ad6b5b49163f0f5ee852617d"
BOOTNODE_PUBKEY="760c4460e5336ac9bbd87952a3c7ec4363fc0a97bd31c86430806e287b437fd1b01abc6e1db640cf3106b520344af1d58b00b57823db3e1407cbc433e1b6d04d"
BOOTNODE_URL="enode://${BOOTNODE_PUBKEY}@${BOOTNODE_IP}:${BOOTNODE_PORT}"
# static geth configuration
GETH_IP="192.168.33.3"
GETH_RPC_PORT="8545"
GETH_RPC_URL="http://${GETH_IP}:${GETH_RPC_PORT}"
usage() {
cat >&2 <<USAGE
usage: $0 [options]
Boot a dev swarm cluster.
OPTIONS:
-d, --dir DIR Base directory to store node data [default: ${DEFAULT_BASE_DIR}]
-s, --size SIZE Size of swarm cluster [default: ${DEFAULT_CLUSTER_SIZE}]
-h, --help Show this message
USAGE
}
main() {
local base_dir="${DEFAULT_BASE_DIR}"
local cluster_size="${DEFAULT_CLUSTER_SIZE}"
parse_args "$@"
local pid_dir="${base_dir}/pids"
local log_dir="${base_dir}/logs"
mkdir -p "${base_dir}" "${pid_dir}" "${log_dir}"
stop_cluster
create_network
start_bootnode
start_geth_node
start_swarm_nodes
}
parse_args() {
while true; do
case "$1" in
-h | --help)
usage
exit 0
;;
-d | --dir)
if [[ -z "$2" ]]; then
fail "--dir flag requires an argument"
fi
base_dir="$2"
shift 2
;;
-s | --size)
if [[ -z "$2" ]]; then
fail "--size flag requires an argument"
fi
cluster_size="$2"
shift 2
;;
*)
break
;;
esac
done
if [[ $# -ne 0 ]]; then
usage
fail "ERROR: invalid arguments: $@"
fi
}
stop_cluster() {
info "stopping existing cluster"
"${ROOT}/swarm/dev/scripts/stop-cluster.sh" --dir "${base_dir}"
}
# create_network creates a Linux bridge which is used to connect the node
# network namespaces together
create_network() {
local subnet="${BRIDGE_IP}/24"
info "creating ${subnet} network on ${BRIDGE_NAME}"
ip link add name "${BRIDGE_NAME}" type bridge
ip link set dev "${BRIDGE_NAME}" up
ip address add "${subnet}" dev "${BRIDGE_NAME}"
}
# start_bootnode starts a bootnode which is used to bootstrap the geth and
# swarm nodes
start_bootnode() {
local key_file="${base_dir}/bootnode.key"
echo -n "${BOOTNODE_KEY}" > "${key_file}"
local args=(
--addr "${BOOTNODE_IP}:${BOOTNODE_PORT}"
--nodekey "${key_file}"
--verbosity "6"
)
start_node "bootnode" "${BOOTNODE_IP}" "$(which bootnode)" ${args[@]}
}
# start_geth_node starts a geth node with --datadir pointing at <base-dir>/geth
# and a single, unlocked account with password "geth"
start_geth_node() {
local dir="${base_dir}/geth"
mkdir -p "${dir}"
local password="geth"
echo "${password}" > "${dir}/password"
# create an account if necessary
if [[ ! -e "${dir}/keystore" ]]; then
info "creating geth account"
create_account "${dir}" "${password}"
fi
# get the account address
local address="$(jq --raw-output '.address' ${dir}/keystore/*)"
if [[ -z "${address}" ]]; then
fail "failed to get geth account address"
fi
local args=(
--datadir "${dir}"
--networkid "321"
--bootnodes "${BOOTNODE_URL}"
--unlock "${address}"
--password "${dir}/password"
--rpc
--rpcaddr "${GETH_IP}"
--rpcport "${GETH_RPC_PORT}"
--verbosity "6"
)
start_node "geth" "${GETH_IP}" "$(which geth)" ${args[@]}
}
start_swarm_nodes() {
for i in $(seq 1 ${cluster_size}); do
start_swarm_node "${i}"
done
}
# start_swarm_node starts a swarm node with a name like "swarmNN" (where NN is
# a zero-padded integer like "07"), --datadir pointing at <base-dir>/<name>
# (e.g. <base-dir>/swarm07) and a single account with <name> as the password
start_swarm_node() {
local num=$1
local name="swarm$(printf '%02d' ${num})"
local ip="192.168.33.1$(printf '%02d' ${num})"
local dir="${base_dir}/${name}"
mkdir -p "${dir}"
local password="${name}"
echo "${password}" > "${dir}/password"
# create an account if necessary
if [[ ! -e "${dir}/keystore" ]]; then
info "creating account for ${name}"
create_account "${dir}" "${password}"
fi
# get the account address
local address="$(jq --raw-output '.address' ${dir}/keystore/*)"
if [[ -z "${address}" ]]; then
fail "failed to get swarm account address"
fi
local args=(
--bootnodes "${BOOTNODE_URL}"
--datadir "${dir}"
--identity "${name}"
--ethapi "${GETH_RPC_URL}"
--bzznetworkid "321"
--bzzaccount "${address}"
--password "${dir}/password"
--verbosity "6"
)
start_node "${name}" "${ip}" "$(which swarm)" ${args[@]}
}
# start_node runs the node command as a daemon in a network namespace
start_node() {
local name="$1"
local ip="$2"
local path="$3"
local cmd_args=("${@:4}")
info "starting ${name} with IP ${ip}"
create_node_network "${name}" "${ip}"
# add a marker to the log file
cat >> "${log_dir}/${name}.log" <<EOF
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Starting ${name} node - $(date)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
EOF
# run the command in the network namespace using start-stop-daemon to
# daemonise the process, sending all output to the log file
local daemon_args=(
--start
--background
--no-close
--make-pidfile
--pidfile "${pid_dir}/${name}.pid"
--exec "${path}"
)
if ! ip netns exec "${name}" start-stop-daemon "${daemon_args[@]}" -- "${cmd_args[@]}" &>> "${log_dir}/${name}.log"; then
fail "could not start ${name}, check ${log_dir}/${name}.log"
fi
}
# create_node_network creates a network namespace and connects it to the Linux
# bridge using a veth pair
create_node_network() {
local name="$1"
local ip="$2"
# create the namespace
ip netns add "${name}"
# create the veth pair
local veth0="veth${name}0"
local veth1="veth${name}1"
ip link add name "${veth0}" type veth peer name "${veth1}"
# add one end to the bridge
ip link set dev "${veth0}" master "${BRIDGE_NAME}"
ip link set dev "${veth0}" up
# add the other end to the namespace, rename it eth0 and give it the ip
ip link set dev "${veth1}" netns "${name}"
ip netns exec "${name}" ip link set dev "${veth1}" name "eth0"
ip netns exec "${name}" ip link set dev "eth0" up
ip netns exec "${name}" ip address add "${ip}/24" dev "eth0"
}
create_account() {
local dir=$1
local password=$2
geth --datadir "${dir}" --password /dev/stdin account new <<< "${password}"
}
main "$@"


@ -0,0 +1,96 @@
#!/bin/bash
#
# A script to upload random data to a swarm cluster.
#
# Example:
#
# random-uploads.sh --addr 192.168.33.101:8500 --size 40k --count 1000
set -e
ROOT="$(cd "$(dirname "$0")/../../.." && pwd)"
source "${ROOT}/swarm/dev/scripts/util.sh"
DEFAULT_ADDR="localhost:8500"
DEFAULT_UPLOAD_SIZE="40k"
DEFAULT_UPLOAD_COUNT="1000"
usage() {
cat >&2 <<USAGE
usage: $0 [options]
Upload random data to a Swarm cluster.
OPTIONS:
-a, --addr ADDR Swarm API address [default: ${DEFAULT_ADDR}]
-s, --size SIZE Individual upload size [default: ${DEFAULT_UPLOAD_SIZE}]
-c, --count COUNT Number of uploads [default: ${DEFAULT_UPLOAD_COUNT}]
-h, --help Show this message
USAGE
}
main() {
local addr="${DEFAULT_ADDR}"
local upload_size="${DEFAULT_UPLOAD_SIZE}"
local upload_count="${DEFAULT_UPLOAD_COUNT}"
parse_args "$@"
info "uploading ${upload_count} ${upload_size} random files to ${addr}"
for i in $(seq 1 ${upload_count}); do
info "upload ${i} / ${upload_count}:"
do_random_upload
echo
done
}
do_random_upload() {
curl -fsSL -X POST --data-binary "$(random_data)" "http://${addr}/bzzr:/"
}
random_data() {
dd if=/dev/urandom of=/dev/stdout bs="${upload_size}" count=1 2>/dev/null
}
parse_args() {
while true; do
case "$1" in
-h | --help)
usage
exit 0
;;
-a | --addr)
if [[ -z "$2" ]]; then
fail "--addr flag requires an argument"
fi
addr="$2"
shift 2
;;
-s | --size)
if [[ -z "$2" ]]; then
fail "--size flag requires an argument"
fi
upload_size="$2"
shift 2
;;
-c | --count)
if [[ -z "$2" ]]; then
fail "--count flag requires an argument"
fi
upload_count="$2"
shift 2
;;
*)
break
;;
esac
done
if [[ $# -ne 0 ]]; then
usage
fail "ERROR: invalid arguments: $@"
fi
}
main "$@"


@ -0,0 +1,98 @@
#!/bin/bash
#
# A script to shutdown a dev swarm cluster.
set -e
ROOT="$(cd "$(dirname "$0")/../../.." && pwd)"
source "${ROOT}/swarm/dev/scripts/util.sh"
DEFAULT_BASE_DIR="${ROOT}/swarm/dev/cluster"
usage() {
cat >&2 <<USAGE
usage: $0 [options]
Shutdown a dev swarm cluster.
OPTIONS:
-d, --dir DIR Base directory [default: ${DEFAULT_BASE_DIR}]
-h, --help Show this message
USAGE
}
main() {
local base_dir="${DEFAULT_BASE_DIR}"
parse_args "$@"
local pid_dir="${base_dir}/pids"
stop_swarm_nodes
stop_node "geth"
stop_node "bootnode"
delete_network
}
parse_args() {
while true; do
case "$1" in
-h | --help)
usage
exit 0
;;
-d | --dir)
if [[ -z "$2" ]]; then
fail "--dir flag requires an argument"
fi
base_dir="$2"
shift 2
;;
*)
break
;;
esac
done
if [[ $# -ne 0 ]]; then
usage
fail "ERROR: invalid arguments: $@"
fi
}
stop_swarm_nodes() {
for name in $(ls "${pid_dir}" | grep -oP 'swarm\d+'); do
stop_node "${name}"
done
}
stop_node() {
local name=$1
local pid_file="${pid_dir}/${name}.pid"
if [[ -e "${pid_file}" ]]; then
info "stopping ${name}"
start-stop-daemon \
--stop \
--pidfile "${pid_file}" \
--remove-pidfile \
--oknodo \
--retry 15
fi
if ip netns list | grep -qF "${name}"; then
ip netns delete "${name}"
fi
if ip link show "veth${name}0" &>/dev/null; then
ip link delete dev "veth${name}0"
fi
}
delete_network() {
if ip link show "swarmbr0" &>/dev/null; then
ip link delete dev "swarmbr0"
fi
}
main "$@"

swarm/dev/scripts/util.sh Normal file

@ -0,0 +1,53 @@
# shared shell functions
info() {
local msg="$*"
local timestamp="$(date +%H:%M:%S)"
say "===> ${timestamp} ${msg}" "green"
}
warn() {
local msg="$*"
local timestamp="$(date +%H:%M:%S)"
say "===> ${timestamp} WARN: ${msg}" "yellow" >&2
}
fail() {
local msg="$*"
say "ERROR: ${msg}" "red" >&2
exit 1
}
# say prints the given message to STDOUT, using the optional color if
# STDOUT is a terminal.
#
# usage:
#
# say "foo" - prints "foo"
# say "bar" "red" - prints "bar" in red
# say "baz" "green" - prints "baz" in green
# say "qux" "red" | tee - prints "qux" with no colour
#
say() {
local msg=$1
local color=$2
if [[ -n "${color}" ]] && [[ -t 1 ]]; then
case "${color}" in
red)
echo -e "\033[1;31m${msg}\033[0m"
;;
green)
echo -e "\033[1;32m${msg}\033[0m"
;;
yellow)
echo -e "\033[1;33m${msg}\033[0m"
;;
*)
echo "${msg}"
;;
esac
else
echo "${msg}"
fi
}

Some files were not shown because too many files have changed in this diff.