Compare commits

...

68 Commits

Author SHA1 Message Date
Péter Szilágyi
96c6f6e50f Revert "eth: drop eth/65, the last non-reqid protocol version" 2021-08-20 11:46:56 +03:00
chuwt
5566e5d152 eth/downloader: fix typo in comment (#23413) 2021-08-18 13:03:41 +03:00
陈佳
57feabea66 eth, internal/ethapi: make RPC block miner field show block sealer correctly (#23312)
Makes the RPC block return the POA sealer for clique blocks on the 'miner' field (was previously zeroes)
2021-08-17 18:55:18 +02:00
Zachinquarantine
16ecdd5839 cmd/utils: add --nousb to the list of deprecated flags (#23388)
Adds --nousb as a deprecated flag when someone runs the geth show-deprecated-flags command.
2021-08-17 18:49:19 +02:00
Zachinquarantine
85b9bdd641 cmd, core: remove calaveras testnet (#23366)
Removes references to the short-lived Calaveras testnet
2021-08-17 18:43:25 +02:00
jwasinger
6902485767 cmd, metrics: add support for influxdb-v2 (cherry-picking from italoacasas' changes), leave existing support for v1 to maintain backwards-compatibility. (#23194)
This PR adds a flag to enable InfluxDB v2 (--metrics.influxdbv2), flags for the v2-specific features (--metrics.influxdb.token, --metrics.influxdb.bucket), and carries over support for specifying an organization (--metrics.influxdb.organization), while retaining backwards compatibility with InfluxDB v1.
2021-08-17 18:40:14 +02:00
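For orientation, here is a rough, self-contained Go sketch of how those flags map onto configuration fields. The struct below is a local stand-in, not geth's real config type; the actual field names appear in the `applyMetricConfig` diff further down.
```
package main

import "fmt"

// influxDBv2Settings is a local illustration of the InfluxDB v2 options added
// by this change; it is not the real geth metrics configuration type.
type influxDBv2Settings struct {
	EnableInfluxDBV2     bool   // --metrics.influxdbv2
	InfluxDBToken        string // --metrics.influxdb.token
	InfluxDBBucket       string // --metrics.influxdb.bucket
	InfluxDBOrganization string // --metrics.influxdb.organization
}

func main() {
	// Hypothetical values, only to show which flag feeds which field.
	cfg := influxDBv2Settings{
		EnableInfluxDBV2:     true,
		InfluxDBToken:        "my-token",
		InfluxDBBucket:       "geth",
		InfluxDBOrganization: "my-org",
	}
	fmt.Printf("%+v\n", cfg)
}
```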
Martin Holst Swende
fb4007bb22 tests: update, enable legacy tests, remove vm tests (#23350)
* tests: update, enable legacy tests, remove vm tests

* tests: minor fixes
2021-08-17 17:30:21 +02:00
Péter Szilágyi
0a68558e7e accounts/external: handle 0 chainid as not-set for the Clef API (#23394)
* accounts/external: handle 0 chainid as not-set for the Clef API

* accounts/external: document SignTx

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-08-13 15:39:51 +03:00
Péter Szilágyi
fd604becbb Merge pull request #23120 from karalabe/drop-eth-65
eth: drop eth/65, the last non-reqid protocol version
2021-08-13 11:52:47 +03:00
Martin Holst Swende
5f98020a21 core/rawdb: implement sequential reads in freezer_table (#23117)
* core/rawdb: implement sequential reads in freezer_table

* core/rawdb, ethdb: add sequential reader to db interface

* core/rawdb: lint nitpicks

* core/rawdb: fix some nitpicks

* core/rawdb: fix flaw with deferred reads not being performed

* core/rawdb: better documentation
2021-08-13 11:51:01 +03:00
Péter Szilágyi
a580f7d6c5 params: begin v1.10.8 release cycle 2021-08-12 10:15:49 +03:00
Péter Szilágyi
12f0ff40b1 params: release Geth v1.10.7 2021-08-12 10:14:03 +03:00
Péter Szilágyi
971df49fe2 Merge pull request #23385 from karalabe/cht-1.10.7
params: update CHTs for the 1.10.7 release
2021-08-12 10:13:05 +03:00
Péter Szilágyi
f4ad493870 params: update CHTs for the 1.10.7 release 2021-08-12 10:11:39 +03:00
Péter Szilágyi
2a451f9eb3 Merge pull request #23384 from holiman/fix_gasfoo
internal/ethapi: add back missing check for maxfee < maxPriorityFee
2021-08-12 09:57:06 +03:00
Martin Holst Swende
278ec7176a internal/ethapi: add back missing check for maxfee < maxPriorityFee 2021-08-12 08:14:21 +02:00
Péter Szilágyi
deff5056fb Merge pull request #23374 from karalabe/fix-docker-tag
build: fix docker tag to include `v` prefix in version string
2021-08-10 17:52:57 +03:00
Péter Szilágyi
c27bd3481e build: fix docker tag to include v prefix in version string 2021-08-10 17:49:52 +03:00
Péter Szilágyi
0fbc94eabc Merge pull request #23373 from karalabe/docker-flip
travis: transition from docker auto builds to manual pushes
2021-08-10 17:31:10 +03:00
Péter Szilágyi
9097d0a325 travis: transition from docker auto builds to manual pushes 2021-08-10 17:13:06 +03:00
Péter Szilágyi
5d0ab07343 Merge pull request #23371 from karalabe/gofmt
core/state/snapshot: gofmt
2021-08-10 16:59:25 +03:00
Péter Szilágyi
9d6480c3cd core/state/snapshot: gofmt 2021-08-10 16:58:38 +03:00
lightclient
a879c42bd3 internal/ethapi, accounts/abi/bind: cap highest gas limit by account balance for 1559 fee parameters (#23309)
* internal/ethapi/api: cap highest gas limit by account balance for 1559 fee parameters

* accounts/abi/bind: port gas limit cap for 1559 parameters to simulated backend

* accounts/abi/bind: add test for 1559 gas estimates for the simulated backend

* internal/ethapi/api: fix comment

* accounts/abi/bind/backends, internal/ethapi: unify naming style

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2021-08-10 16:56:34 +03:00
lightclient
39fe7eca6b internal/ethapi: return maxFeePerGas for gasPrice for EIP-1559 txs (#23345) 2021-08-10 16:39:06 +03:00
Tyler Chambers
66948316f7 core/state/snapshot: clarify comment about snapshot repair (#23305)
Co-authored-by: Tyler Chambers <me@tylerchambers.net>
Co-authored-by: Felix Lange <fjl@twurst.com>
2021-08-10 11:16:53 +02:00
Ziyuan Zhong
57d9e0ac75 core/state/snapshot: fix typo in comment (#23219) 2021-08-10 11:04:29 +02:00
Zachinquarantine
e4b687cf46 mobile: remove deprecated code (#23357) 2021-08-10 11:40:54 +03:00
gary rong
6d175460df cmd, core, eth, miner: deprecate miner.gastarget flag (#23213) 2021-08-10 11:28:33 +03:00
Péter Szilágyi
520f25688a Merge pull request #23370 from karalabe/windows-pruning-fix-b
core/state/pruner: fix state bloom sync permission in Windows
2021-08-10 11:04:39 +03:00
Péter Szilágyi
3b38a83274 core/state/pruner: fix state bloom sync permission in Windows 2021-08-10 10:40:10 +03:00
Felföldi Zsolt
97bd6cd216 internal/ethapi: accept both hex and decimal for blockCount (#23363) 2021-08-10 09:53:40 +03:00
shawn
d60cfd2604 core: fix london-check to avoid duplication (#23333)
Co-authored-by: lxex <liuxmzc1@163.com>
2021-08-09 16:34:20 +02:00
Shihao Xia
9e59474e46 core/rawdb: close database in test to avoid goroutine leak (#23287)
* add db close to avoid goroutine leak

* core/rawdb: move close to defer

Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-08-08 15:44:42 +02:00
Martin Holst Swende
8a24b56331 cmd/evm: implement input txs via rlp in t8n tool (#23138)
In many cases, it's desirable to use already-signed transactions as input to the state transition, instead of having the evm sign them internally (for example to use malformed or not-yet-valid transactions). This PR adds support + docs for that feature.
2021-08-07 23:04:34 +02:00
Martin Holst Swende
0658712f65 core: check if sender is EOA (#23303)
This adds a check to verify that a sender account does not have code, which means that the codehash is either `emptyCodeHash` _OR_ not present. The latter occurs IFF the sender did not previously exist, a situation which can only occur with zero-cost gas prices.
2021-08-07 19:38:18 +02:00
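Below is a minimal sketch of that rule, assuming a `*state.StateDB` handle; the helper and package names are illustrative, not the actual go-ethereum implementation.
```
package eoacheck

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/state"
	"github.com/ethereum/go-ethereum/crypto"
)

// checkSenderIsEOA rejects a sender that has code deployed. The sender counts
// as an EOA when its code hash is the empty-code hash, or the zero hash in
// case the account does not exist at all.
func checkSenderIsEOA(statedb *state.StateDB, sender common.Address) error {
	emptyCodeHash := crypto.Keccak256Hash(nil)
	if h := statedb.GetCodeHash(sender); h != (common.Hash{}) && h != emptyCodeHash {
		return fmt.Errorf("sender %v is not an EOA (code hash %x)", sender, h)
	}
	return nil
}
```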
Patrick O'Grady
d3e3a460ec core/rawdb: fix logs to print block number, not address (#23328) 2021-08-04 11:10:37 +03:00
Marius van der Wijden
28ba686cbf core/state: add trie prefetcher tests (#23216)
* core/state: add trie prefetcher tests

* core/state: add missing license
2021-08-03 17:35:25 +02:00
gary rong
f311488d2c internal/ethapi: fix trace log marshalling (#23292) 2021-08-03 17:32:13 +02:00
Sina Mahmoodi
c38fab912b core: get header from block cache (#23299) 2021-08-03 17:29:47 +02:00
Martin Holst Swende
4cd6a1458e cmd/devp2p: fix ping/pong race in discv4 tests (#23306)
This PR modifies the post-PING-send expectations to be both laxer and stricter: it doesn't care what order the packets arrive in, but it verifies that exactly one PING and one PONG are returned.
2021-08-03 11:21:39 +02:00
aaronbuchwald
82c5085399 cre/state: fix outdated statedb Prepare comment (#23320) 2021-08-03 09:06:58 +03:00
baptiste-b-pegasys
95bbd46eab node, cmd/clef: remove term "whitelist" (#23296)
* node: remove term "whitelist"

* include cmd/clef
2021-08-02 15:43:01 +02:00
baptiste-b-pegasys
85afdeef37 tests: remove whitelist feature (#23297) 2021-07-29 20:23:37 +02:00
baptiste-b-pegasys
860184d542 p2p: remove term "whitelist" (#23295)
Co-authored-by: Felix Lange <fjl@twurst.com>
2021-07-29 17:50:18 +02:00
baptiste-b-pegasys
3526f69047 all: remove term "whitelist" in comments and log messages (#23294) 2021-07-29 17:36:15 +02:00
Martin Holst Swende
295bc35ecf signer/core: move API JSON types to separate package (#23275)
This PR moves (some) account types into a standalone package, to avoid
depending on signer/core from accounts/external.
2021-07-29 16:06:44 +02:00
Evolution404
8f11d279d2 p2p/simulations: fix unlikely crash in probabilistic connect (#23200)
When the nodeCount is less than 10, it will panic with an out-of-bounds error.
The fix is to simply skip the round when rand1 and rand2 are equal.
2021-07-29 16:03:50 +02:00
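A small, self-contained sketch of that fix; the function and variable names are illustrative, not the actual p2p/simulations code.
```
package main

import (
	"fmt"
	"math/rand"
)

// pickDistinctPair chooses two random node indices and reports whether the
// round should simply be skipped because they collide, rather than risking an
// out-of-range access with small node counts.
func pickDistinctPair(nodeCount int) (int, int, bool) {
	rand1, rand2 := rand.Intn(nodeCount), rand.Intn(nodeCount)
	if rand1 == rand2 {
		return 0, 0, false // skip this round
	}
	return rand1, rand2, true
}

func main() {
	if a, b, ok := pickDistinctPair(5); ok {
		fmt.Println("connect", a, "->", b)
	} else {
		fmt.Println("round skipped")
	}
}
```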
Sina Mahmoodi
b157bae2c9 go.mod: bump golang.org/x/text to v0.3.6 (#23291) 2021-07-29 15:20:45 +02:00
ucwong
64a5e125c5 go.mod: upgrade to goupnp v1.0.2 (#23197) 2021-07-29 15:20:15 +02:00
Marius van der Wijden
fb8ea5993f tests: update tests/testdata to v9.0.4 (london) (#23279) 2021-07-29 14:05:22 +02:00
Martin Holst Swende
5c13012b56 accounts/external, internal/ethapi: fixes for London tx signing (#23274)
Ticket #23273 found a flaw where we were unable to sign legacy transactions
using the external signer, even when still on a non-London network. That's
fixed in this PR.

Additionally, I found that even when supplying all parameters, it was impossible
to sign a London transaction on an unsynced node. It's a pretty common use case
that someone wants to sign a transaction using an unsynced 'vanilla' node,
providing all the necessary data. Our setDefaults, however, insisted on checking the
current block against the config. This PR therefore adds a case, so that if both
MaxPriorityFeePerGas and MaxFeePerGas are provided, we accept them as given.

Note: this PR fixes a regression -- on current master, we are unable to sign a
London transaction unless the node is synced, which may break scenarios where
geth (or clef) is used as a cold wallet.

Fixes #23273
2021-07-29 14:00:06 +02:00
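A rough, self-contained sketch of the setDefaults case described above; the function shape, the local type, and the fallback headroom are assumptions for illustration, not the exact internal/ethapi code.
```
package main

import (
	"errors"
	"fmt"
	"math/big"
)

// feeArgs mirrors just the two EIP-1559 fee fields relevant to the fix.
type feeArgs struct {
	MaxFeePerGas         *big.Int
	MaxPriorityFeePerGas *big.Int
}

// fillFeeDefaults accepts caller-provided fee fields as given; only when one
// of them is missing does it fall back to querying the (possibly unsynced)
// node, which is where the original failure occurred.
func fillFeeDefaults(args *feeArgs, suggestTip func() (*big.Int, error)) error {
	if args.MaxFeePerGas != nil && args.MaxPriorityFeePerGas != nil {
		return nil // both provided: accept as given, no chain state needed
	}
	tip, err := suggestTip()
	if err != nil {
		return err
	}
	if args.MaxPriorityFeePerGas == nil {
		args.MaxPriorityFeePerGas = tip
	}
	if args.MaxFeePerGas == nil {
		// Hypothetical default: the tip plus some base-fee headroom.
		args.MaxFeePerGas = new(big.Int).Add(tip, big.NewInt(2_000_000_000))
	}
	return nil
}

func main() {
	args := &feeArgs{MaxFeePerGas: big.NewInt(100), MaxPriorityFeePerGas: big.NewInt(1)}
	err := fillFeeDefaults(args, func() (*big.Int, error) {
		return nil, errors.New("node not synced")
	})
	fmt.Println(err) // <nil>: fully-specified fees never touch chain state
}
```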
baptiste-b-pegasys
523866c2cc all: change blacklist terms 2021-07-29 11:17:40 +03:00
ligi
56e9001a1a README: fix default sync mode (#23282) 2021-07-28 19:14:46 +03:00
@edgararout
0730acc5a0 consensus/ethash: less allocation during mining (#23199) 2021-07-28 14:24:41 +02:00
Marius van der Wijden
2faf796d2a internal/ethapi: fix panic in accesslist creation (#23225)
* internal/ethapi: revert + fix properly in al tracer

* internal/ethapi: use toMessage instead of creating new message

* internal/ethapi: remove ineffassign

* core: fix invalid unmarshalling, fix test

Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-07-28 14:21:35 +02:00
Marius van der Wijden
3aea432b35 accounts/abi/bind: set Context in TransactOpts (#23188) 2021-07-27 16:24:27 +02:00
Marius van der Wijden
b20bc5c0ca accounts/abi/bind: parse ABI only once, create metadata struct (#22583) 2021-07-27 16:22:21 +02:00
Martin Holst Swende
5c89ec9b98 cmd/geth: update vulnerability testdata (#23252) 2021-07-27 16:19:48 +02:00
lightclient
bbfa6488ac Use hexutil.Uint for blockCount parameter in feeHistory method (#23239)
* internal/ethapi/api: use hexutil.uint for blockCount parameter instead of int for feeHistory

* return hex value for oldestBlock instead of number

* return uint64 from oracle.resolveBlockRange

* eth/gasprice: fixed test

Co-authored-by: Zsolt Felfoldi <zsfelfoldi@gmail.com>
2021-07-27 05:27:28 +02:00
Felix Lange
a1f16bc74c params: begin v1.10.7 release cycle 2021-07-22 16:45:22 +02:00
Felix Lange
576681f29b params: release go-ethereum v1.10.6 stable 2021-07-22 16:44:28 +02:00
gary rong
370680a7a9 core/types: revert removal of legacy receipt support (#23247)
* Revert "core/types: go generate (#23177)"

This reverts commit 00b922fc5d.

* Revert "core/types: remove LogForStorage type (#23173)"

This reverts commit 7522642393.

* Revert "core/types: remove support for legacy receipt/log storage encoding (#22852)"

This reverts commit 643fd0efc6.
2021-07-22 15:43:51 +02:00
Marius van der Wijden
97aacd9b35 core: fix pre-check for account balance under EIP-1559 (#23244)
When processing a transaction with London fork rules, EIP-1559 mandates
checking that the sender must have sufficient balance to cover gas * gasFeeCap.

In the EIP's pseudocode, this check happens after the value transferred by the
transaction has already been deducted. However, in go-ethereum, the balance
has not yet been updated when the check happens, and therefore needs to be
added explicitly.

Co-authored-by: Martin Holst Swende <martin@swende.se>
2021-07-22 15:39:40 +02:00
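A minimal, self-contained sketch of that pre-check (not the actual core code), using the numbers from the testdata/12 example further down: 21000 gas at a 4000 wei fee cap plus a 32 wei transfer needs 84000032 wei, one more than the sender's 84000000 wei balance.
```
package main

import (
	"fmt"
	"math/big"
)

// check1559Balance verifies the sender can cover gasLimit * gasFeeCap plus the
// transferred value; the value has not been deducted yet when this check runs,
// so it is added to the required amount explicitly.
func check1559Balance(balance, value, gasFeeCap *big.Int, gasLimit uint64) error {
	required := new(big.Int).SetUint64(gasLimit)
	required.Mul(required, gasFeeCap)
	required.Add(required, value)
	if balance.Cmp(required) < 0 {
		return fmt.Errorf("insufficient funds for gas * price + value: have %v want %v", balance, required)
	}
	return nil
}

func main() {
	err := check1559Balance(big.NewInt(84000000), big.NewInt(32), big.NewInt(4000), 21000)
	fmt.Println(err) // have 84000000 want 84000032
}
```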
gary rong
f05419f0fb les: fix eth_sendTransaction API (#23215) 2021-07-16 01:52:40 +02:00
aaronbuchwald
a5e3aa693c eth/tracers: fix typo in test name (#23218) 2021-07-15 21:23:16 +03:00
Evolution404
89fde59a80 node: fix stopping websocket rpc.Server (#23211) 2021-07-15 10:15:08 +02:00
Péter Szilágyi
f0b1bddac4 params: begin v1.10.6 release cycle 2021-07-14 11:04:36 +03:00
Péter Szilágyi
d3f018fde8 eth: drop eth/65, the last non-reqid protocol version 2021-06-29 12:31:30 +03:00
118 changed files with 2087 additions and 886 deletions

View File

@@ -41,7 +41,7 @@ jobs:
before_install:
- export DOCKER_CLI_EXPERIMENTAL=enabled
script:
- go run build/ci.go docker -image -manifest amd64,arm64 -upload karalabe/geth-docker-test
- go run build/ci.go docker -image -manifest amd64,arm64 -upload ethereum/client-go
- stage: build
if: type = push
@@ -58,7 +58,7 @@ jobs:
before_install:
- export DOCKER_CLI_EXPERIMENTAL=enabled
script:
- go run build/ci.go docker -image -manifest amd64,arm64 -upload karalabe/geth-docker-test
- go run build/ci.go docker -image -manifest amd64,arm64 -upload ethereum/client-go
# This builder does the Ubuntu PPA upload
- stage: build

View File

@@ -64,7 +64,7 @@ $ geth console
```
This command will:
* Start `geth` in fast sync mode (default, can be changed with the `--syncmode` flag),
* Start `geth` in snap sync mode (default, can be changed with the `--syncmode` flag),
causing it to download more data in exchange for avoiding processing the entire history
of the Ethereum network, which is very CPU intensive.
* Start up `geth`'s built-in interactive [JavaScript console](https://geth.ethereum.org/docs/interface/javascript-console),

View File

@@ -17,6 +17,7 @@
package bind
import (
"context"
"crypto/ecdsa"
"errors"
"io"
@@ -74,6 +75,7 @@ func NewKeyStoreTransactor(keystore *keystore.KeyStore, account accounts.Account
}
return tx.WithSignature(signer, signature)
},
Context: context.Background(),
}, nil
}
@@ -97,6 +99,7 @@ func NewKeyedTransactor(key *ecdsa.PrivateKey) *TransactOpts {
}
return tx.WithSignature(signer, signature)
},
Context: context.Background(),
}
}
@@ -133,6 +136,7 @@ func NewKeyStoreTransactorWithChainID(keystore *keystore.KeyStore, account accou
}
return tx.WithSignature(signer, signature)
},
Context: context.Background(),
}, nil
}
@@ -156,6 +160,7 @@ func NewKeyedTransactorWithChainID(key *ecdsa.PrivateKey, chainID *big.Int) (*Tr
}
return tx.WithSignature(signer, signature)
},
Context: context.Background(),
}, nil
}
@@ -170,5 +175,6 @@ func NewClefTransactor(clef *external.ExternalSigner, account accounts.Account)
}
return clef.SignTx(account, transaction, nil) // Clef enforces its own chain id
},
Context: context.Background(),
}
}

View File

@@ -488,8 +488,19 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
} else {
hi = b.pendingBlock.GasLimit()
}
// Normalize the max fee per gas the call is willing to spend.
var feeCap *big.Int
if call.GasPrice != nil && (call.GasFeeCap != nil || call.GasTipCap != nil) {
return 0, errors.New("both gasPrice and (maxFeePerGas or maxPriorityFeePerGas) specified")
} else if call.GasPrice != nil {
feeCap = call.GasPrice
} else if call.GasFeeCap != nil {
feeCap = call.GasFeeCap
} else {
feeCap = common.Big0
}
// Recap the highest gas allowance with account's balance.
if call.GasPrice != nil && call.GasPrice.BitLen() != 0 {
if feeCap.BitLen() != 0 {
balance := b.pendingState.GetBalance(call.From) // from can't be nil
available := new(big.Int).Set(balance)
if call.Value != nil {
@@ -498,14 +509,14 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
}
available.Sub(available, call.Value)
}
allowance := new(big.Int).Div(available, call.GasPrice)
allowance := new(big.Int).Div(available, feeCap)
if allowance.IsUint64() && hi > allowance.Uint64() {
transfer := call.Value
if transfer == nil {
transfer = new(big.Int)
}
log.Warn("Gas estimation capped by limited funds", "original", hi, "balance", balance,
"sent", transfer, "gasprice", call.GasPrice, "fundable", allowance)
"sent", transfer, "feecap", feeCap, "fundable", allowance)
hi = allowance.Uint64()
}
}

View File

@@ -580,6 +580,26 @@ func TestEstimateGasWithPrice(t *testing.T) {
Value: big.NewInt(100000000000),
Data: nil,
}, 21000, errors.New("gas required exceeds allowance (10999)")}, // 10999=(2.2ether-1000wei)/(2e14)
{"EstimateEIP1559WithHighFees", ethereum.CallMsg{
From: addr,
To: &addr,
Gas: 0,
GasFeeCap: big.NewInt(1e14), // maxgascost = 2.1ether
GasTipCap: big.NewInt(1),
Value: big.NewInt(1e17), // the remaining balance for fee is 2.1ether
Data: nil,
}, params.TxGas, nil},
{"EstimateEIP1559WithSuperHighFees", ethereum.CallMsg{
From: addr,
To: &addr,
Gas: 0,
GasFeeCap: big.NewInt(1e14), // maxgascost = 2.1ether
GasTipCap: big.NewInt(1),
Value: big.NewInt(1e17 + 1), // the remaining balance for fee is 2.1ether
Data: nil,
}, params.TxGas, errors.New("gas required exceeds allowance (20999)")}, // 20999=(2.2ether-0.1ether-1wei)/(1e14)
}
for i, c := range cases {
got, err := sim.EstimateGas(context.Background(), c.message)
@@ -592,6 +612,9 @@ func TestEstimateGasWithPrice(t *testing.T) {
}
continue
}
if c.expectError == nil && err != nil {
t.Fatalf("test %d: didn't expect error, got %v", i, err)
}
if got != c.expect {
t.Fatalf("test %d: gas estimation mismatch, want %d, got %d", i, c.expect, got)
}

View File

@@ -21,6 +21,8 @@ import (
"errors"
"fmt"
"math/big"
"strings"
"sync"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts/abi"
@@ -76,6 +78,29 @@ type WatchOpts struct {
Context context.Context // Network context to support cancellation and timeouts (nil = no timeout)
}
// MetaData collects all metadata for a bound contract.
type MetaData struct {
mu sync.Mutex
Sigs map[string]string
Bin string
ABI string
ab *abi.ABI
}
func (m *MetaData) GetAbi() (*abi.ABI, error) {
m.mu.Lock()
defer m.mu.Unlock()
if m.ab != nil {
return m.ab, nil
}
if parsed, err := abi.JSON(strings.NewReader(m.ABI)); err != nil {
return nil, err
} else {
m.ab = &parsed
}
return m.ab, nil
}
// BoundContract is the base wrapper object that reflects a contract on the
// Ethereum network. It contains a collection of methods that are used by the
// higher level contract bindings to operate.

View File

@@ -90,6 +90,7 @@ package {{.Package}}
import (
"math/big"
"strings"
"errors"
ethereum "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts/abi"
@@ -101,6 +102,7 @@ import (
// Reference imports to suppress errors if they are not otherwise used.
var (
_ = errors.New
_ = big.NewInt
_ = strings.NewReader
_ = ethereum.NotFound
@@ -120,32 +122,48 @@ var (
{{end}}
{{range $contract := .Contracts}}
// {{.Type}}ABI is the input ABI used to generate the binding from.
const {{.Type}}ABI = "{{.InputABI}}"
{{if $contract.FuncSigs}}
// {{.Type}}FuncSigs maps the 4-byte function signature to its string representation.
var {{.Type}}FuncSigs = map[string]string{
// {{.Type}}MetaData contains all meta data concerning the {{.Type}} contract.
var {{.Type}}MetaData = &bind.MetaData{
ABI: "{{.InputABI}}",
{{if $contract.FuncSigs -}}
Sigs: map[string]string{
{{range $strsig, $binsig := .FuncSigs}}"{{$binsig}}": "{{$strsig}}",
{{end}}
}
},
{{end -}}
{{if .InputBin -}}
Bin: "0x{{.InputBin}}",
{{end}}
}
// {{.Type}}ABI is the input ABI used to generate the binding from.
// Deprecated: Use {{.Type}}MetaData.ABI instead.
var {{.Type}}ABI = {{.Type}}MetaData.ABI
{{if $contract.FuncSigs}}
// Deprecated: Use {{.Type}}MetaData.Sigs instead.
// {{.Type}}FuncSigs maps the 4-byte function signature to its string representation.
var {{.Type}}FuncSigs = {{.Type}}MetaData.Sigs
{{end}}
{{if .InputBin}}
// {{.Type}}Bin is the compiled bytecode used for deploying new contracts.
var {{.Type}}Bin = "0x{{.InputBin}}"
// Deprecated: Use {{.Type}}MetaData.Bin instead.
var {{.Type}}Bin = {{.Type}}MetaData.Bin
// Deploy{{.Type}} deploys a new Ethereum contract, binding an instance of {{.Type}} to it.
func Deploy{{.Type}}(auth *bind.TransactOpts, backend bind.ContractBackend {{range .Constructor.Inputs}}, {{.Name}} {{bindtype .Type $structs}}{{end}}) (common.Address, *types.Transaction, *{{.Type}}, error) {
parsed, err := abi.JSON(strings.NewReader({{.Type}}ABI))
parsed, err := {{.Type}}MetaData.GetAbi()
if err != nil {
return common.Address{}, nil, nil, err
}
if parsed == nil {
return common.Address{}, nil, nil, errors.New("GetABI returned nil")
}
{{range $pattern, $name := .Libraries}}
{{decapitalise $name}}Addr, _, _, _ := Deploy{{capitalise $name}}(auth, backend)
{{$contract.Type}}Bin = strings.Replace({{$contract.Type}}Bin, "__${{$pattern}}$__", {{decapitalise $name}}Addr.String()[2:], -1)
{{end}}
address, tx, contract, err := bind.DeployContract(auth, parsed, common.FromHex({{.Type}}Bin), backend {{range .Constructor.Inputs}}, {{.Name}}{{end}})
address, tx, contract, err := bind.DeployContract(auth, *parsed, common.FromHex({{.Type}}Bin), backend {{range .Constructor.Inputs}}, {{.Name}}{{end}})
if err != nil {
return common.Address{}, nil, nil, err
}

View File

@@ -29,7 +29,7 @@ import (
"github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/signer/core"
"github.com/ethereum/go-ethereum/signer/core/apitypes"
)
type ExternalBackend struct {
@@ -196,6 +196,10 @@ type signTransactionResult struct {
Tx *types.Transaction `json:"tx"`
}
// SignTx sends the transaction to the external signer.
// If chainID is nil, or tx.ChainID is zero, the chain ID will be assigned
// by the external signer. For non-legacy transactions, the chain ID of the
// transaction overrides the chainID parameter.
func (api *ExternalSigner) SignTx(account accounts.Account, tx *types.Transaction, chainID *big.Int) (*types.Transaction, error) {
data := hexutil.Bytes(tx.Data())
var to *common.MixedcaseAddress
@@ -203,7 +207,7 @@ func (api *ExternalSigner) SignTx(account accounts.Account, tx *types.Transactio
t := common.NewMixedcaseAddress(*tx.To())
to = &t
}
args := &core.SendTxArgs{
args := &apitypes.SendTxArgs{
Data: &data,
Nonce: hexutil.Uint64(tx.Nonce()),
Value: hexutil.Big(*tx.Value()),
@@ -211,21 +215,24 @@ func (api *ExternalSigner) SignTx(account accounts.Account, tx *types.Transactio
To: to,
From: common.NewMixedcaseAddress(account.Address),
}
if tx.GasFeeCap() != nil {
switch tx.Type() {
case types.LegacyTxType, types.AccessListTxType:
args.GasPrice = (*hexutil.Big)(tx.GasPrice())
case types.DynamicFeeTxType:
args.MaxFeePerGas = (*hexutil.Big)(tx.GasFeeCap())
args.MaxPriorityFeePerGas = (*hexutil.Big)(tx.GasTipCap())
} else {
args.GasPrice = (*hexutil.Big)(tx.GasPrice())
default:
return nil, fmt.Errorf("unsupported tx type %d", tx.Type())
}
// We should request the default chain id that we're operating with
// (the chain we're executing on)
if chainID != nil {
if chainID != nil && chainID.Sign() != 0 {
args.ChainID = (*hexutil.Big)(chainID)
}
if tx.Type() != types.LegacyTxType {
// However, if the user asked for a particular chain id, then we should
// use that instead.
if tx.ChainId() != nil {
if tx.ChainId().Sign() != 0 {
args.ChainID = (*hexutil.Big)(tx.ChainId())
}
accessList := tx.AccessList()

View File

@@ -450,7 +450,7 @@ func maybeSkipArchive(env build.Environment) {
os.Exit(0)
}
if env.Branch != "master" && !strings.HasPrefix(env.Tag, "v1.") {
log.Printf("skipping archive creation because branch %q, tag %q is not on the whitelist", env.Branch, env.Tag)
log.Printf("skipping archive creation because branch %q, tag %q is not on the inclusion list", env.Branch, env.Tag)
os.Exit(0)
}
}
@@ -492,7 +492,7 @@ func doDocker(cmdline []string) {
case env.Branch == "master":
tags = []string{"latest"}
case strings.HasPrefix(env.Tag, "v1."):
tags = []string{"stable", fmt.Sprintf("release-1.%d", params.VersionMinor), params.Version}
tags = []string{"stable", fmt.Sprintf("release-1.%d", params.VersionMinor), "v" + params.Version}
}
// If architecture specific image builds are requested, build and push them
if *image {

View File

@@ -23,7 +23,7 @@ import (
"os"
"strings"
"github.com/ethereum/go-ethereum/signer/core"
"github.com/ethereum/go-ethereum/signer/core/apitypes"
"github.com/ethereum/go-ethereum/signer/fourbyte"
)
@@ -41,7 +41,7 @@ func parse(data []byte) {
if err != nil {
die(err)
}
messages := core.ValidationMessages{}
messages := apitypes.ValidationMessages{}
db.ValidateCallData(nil, data, &messages)
for _, m := range messages.Messages {
fmt.Printf("%v: %v\n", m.Typ, m.Message)

View File

@@ -50,10 +50,10 @@ import (
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/signer/core"
"github.com/ethereum/go-ethereum/signer/core/apitypes"
"github.com/ethereum/go-ethereum/signer/fourbyte"
"github.com/ethereum/go-ethereum/signer/rules"
"github.com/ethereum/go-ethereum/signer/storage"
"github.com/mattn/go-colorable"
"github.com/mattn/go-isatty"
"gopkg.in/urfave/cli.v1"
@@ -657,7 +657,7 @@ func signer(c *cli.Context) error {
cors := utils.SplitAndTrim(c.GlobalString(utils.HTTPCORSDomainFlag.Name))
srv := rpc.NewServer()
err := node.RegisterApisFromWhitelist(rpcAPI, []string{"account"}, srv, false)
err := node.RegisterApis(rpcAPI, []string{"account"}, srv, false)
if err != nil {
utils.Fatalf("Could not register API: %w", err)
}
@@ -923,7 +923,7 @@ func testExternalUI(api *core.SignerAPI) {
time.Sleep(delay)
data := hexutil.Bytes([]byte{})
to := common.NewMixedcaseAddress(a)
tx := core.SendTxArgs{
tx := apitypes.SendTxArgs{
Data: &data,
Nonce: 0x1,
Value: hexutil.Big(*big.NewInt(6)),
@@ -1055,11 +1055,11 @@ func GenDoc(ctx *cli.Context) {
data := hexutil.Bytes([]byte{0x01, 0x02, 0x03, 0x04})
add("SignTxRequest", desc, &core.SignTxRequest{
Meta: meta,
Callinfo: []core.ValidationInfo{
Callinfo: []apitypes.ValidationInfo{
{Typ: "Warning", Message: "Something looks odd, show this message as a warning"},
{Typ: "Info", Message: "User should see this as well"},
},
Transaction: core.SendTxArgs{
Transaction: apitypes.SendTxArgs{
Data: &data,
Nonce: 0x1,
Value: hexutil.Big(*big.NewInt(6)),
@@ -1075,7 +1075,7 @@ func GenDoc(ctx *cli.Context) {
add("SignTxResponse - approve", "Response to request to sign a transaction. This response needs to contain the `transaction`"+
", because the UI is free to make modifications to the transaction.",
&core.SignTxResponse{Approved: true,
Transaction: core.SendTxArgs{
Transaction: apitypes.SendTxArgs{
Data: &data,
Nonce: 0x4,
Value: hexutil.Big(*big.NewInt(6)),

View File

@@ -79,14 +79,44 @@ func BasicPing(t *utesting.T) {
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
if err := te.checkPingPong(pingHash); err != nil {
t.Fatal(err)
}
}
// checkPong verifies that reply is a valid PONG matching the given ping hash.
// checkPingPong verifies that the remote side sends both a PONG with the
// correct hash, and a PING.
// The two packets do not have to be in any particular order.
func (te *testenv) checkPingPong(pingHash []byte) error {
var (
pings int
pongs int
)
for i := 0; i < 2; i++ {
reply, _, err := te.read(te.l1)
if err != nil {
return err
}
switch reply.Kind() {
case v4wire.PongPacket:
if err := te.checkPong(reply, pingHash); err != nil {
return err
}
pongs++
case v4wire.PingPacket:
pings++
default:
return fmt.Errorf("expected PING or PONG, got %v %v", reply.Name(), reply)
}
}
if pongs == 1 && pings == 1 {
return nil
}
return fmt.Errorf("expected 1 PING (got %d) and 1 PONG (got %d)", pings, pongs)
}
// checkPong verifies that reply is a valid PONG matching the given ping hash,
// and a PING. The two packets do not have to be in any particular order.
func (te *testenv) checkPong(reply v4wire.Packet, pingHash []byte) error {
if reply == nil {
return fmt.Errorf("expected PONG reply, got nil")
@@ -119,9 +149,7 @@ func PingWrongTo(t *utesting.T) {
To: wrongEndpoint,
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
if err := te.checkPingPong(pingHash); err != nil {
t.Fatal(err)
}
}
@@ -139,8 +167,7 @@ func PingWrongFrom(t *utesting.T) {
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
if err := te.checkPingPong(pingHash); err != nil {
t.Fatal(err)
}
}
@@ -161,8 +188,7 @@ func PingExtraData(t *utesting.T) {
JunkData2: []byte{9, 8, 7, 6, 5, 4, 3, 2, 1},
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
if err := te.checkPingPong(pingHash); err != nil {
t.Fatal(err)
}
}
@@ -183,8 +209,7 @@ func PingExtraDataWrongFrom(t *utesting.T) {
JunkData2: []byte{9, 8, 7, 6, 5, 4, 3, 2, 1},
}
pingHash := te.send(te.l1, &req)
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
if err := te.checkPingPong(pingHash); err != nil {
t.Fatal(err)
}
}
@@ -240,9 +265,9 @@ func BondThenPingWithWrongFrom(t *utesting.T) {
To: te.remoteEndpoint(),
Expiration: futureExpiration(),
})
reply, _, _ := te.read(te.l1)
if err := te.checkPong(reply, pingHash); err != nil {
if reply, _, err := te.read(te.l1); err != nil {
t.Fatal(err)
} else if err := te.checkPong(reply, pingHash); err != nil {
t.Fatal(err)
}
}

View File

@@ -4,15 +4,15 @@ The `evm t8n` tool is a stateless state transition utility. It is a utility
which can
1. Take a prestate, including
- Accounts,
- Block context information,
- Previous blockshashes (*optional)
- Accounts,
- Block context information,
- Previous blockshashes (*optional)
2. Apply a set of transactions,
3. Apply a mining-reward (*optional),
4. And generate a post-state, including
- State root, transaction root, receipt root,
- Information about rejected transactions,
- Optionally: a full or partial post-state dump
- State root, transaction root, receipt root,
- Information about rejected transactions,
- Optionally: a full or partial post-state dump
## Specification
@@ -37,6 +37,8 @@ Command line params that has to be supported are
--output.result result Determines where to put the result (stateroot, txroot etc) of the post-state.
`stdout` - into the stdout output
`stderr` - into the stderr output
--output.body value If set, the RLP of the transactions (block body) will be written to this file.
--input.txs stdin stdin or file name of where to find the transactions to apply. If the file prefix is '.rlp', then the data is interpreted as an RLP list of signed transactions.The '.rlp' format is identical to the output.body format. (default: "txs.json")
--state.fork value Name of ruleset to use.
--state.chainid value ChainID to use (default: 1)
--state.reward value Mining reward. Set to -1 to disable (default: 0)
@@ -110,7 +112,10 @@ Two resulting files:
}
],
"rejected": [
1
{
"index": 1,
"error": "nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
}
]
}
```
@@ -156,7 +161,10 @@ Output:
}
],
"rejected": [
1
{
"index": 1,
"error": "nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
}
]
}
}
@@ -168,9 +176,9 @@ Mining rewards and ommer rewards might need to be added. This is how those are a
- `block_reward` is the block mining reward for the miner (`0xaa`), of a block at height `N`.
- For each ommer (mined by `0xbb`), with blocknumber `N-delta`
- (where `delta` is the difference between the current block and the ommer)
- The account `0xbb` (ommer miner) is awarded `(8-delta)/ 8 * block_reward`
- The account `0xaa` (block miner) is awarded `block_reward / 32`
- (where `delta` is the difference between the current block and the ommer)
- The account `0xbb` (ommer miner) is awarded `(8-delta)/ 8 * block_reward`
- The account `0xaa` (block miner) is awarded `block_reward / 32`
To make `state_t8n` apply these, the following inputs are required:
@@ -220,7 +228,7 @@ Output:
### Future EIPS
It is also possible to experiment with future eips that are not yet defined in a hard fork.
Example, putting EIP-1344 into Frontier:
Example, putting EIP-1344 into Frontier:
```
./evm t8n --state.fork=Frontier+1344 --input.pre=./testdata/1/pre.json --input.txs=./testdata/1/txs.json --input.env=/testdata/1/env.json
```
@@ -229,41 +237,102 @@ Example, putting EIP-1344 into Frontier:
The `BLOCKHASH` opcode requires blockhashes to be provided by the caller, inside the `env`.
If a required blockhash is not provided, the exit code should be `4`:
Example where blockhashes are provided:
Example where blockhashes are provided:
```
./evm t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace
./evm --verbosity=1 t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace
INFO [07-27|11:53:40.960] Trie dumping started root=b7341d..857ea1
INFO [07-27|11:53:40.960] Trie dumping complete accounts=3 elapsed="103.298µs"
INFO [07-27|11:53:40.960] Wrote file file=alloc.json
INFO [07-27|11:53:40.960] Wrote file file=result.json
```
```
cat trace-0-0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81.jsonl | grep BLOCKHASH -C2
```
```
{"pc":0,"op":96,"gas":"0x5f58ef8","gasCost":"0x3","memory":"0x","memSize":0,"stack":[],"returnStack":[],"returnData":"0x","depth":1,"refund":0,"opName":"PUSH1","error":""}
{"pc":2,"op":64,"gas":"0x5f58ef5","gasCost":"0x14","memory":"0x","memSize":0,"stack":["0x1"],"returnStack":[],"returnData":"0x","depth":1,"refund":0,"opName":"BLOCKHASH","error":""}
{"pc":3,"op":0,"gas":"0x5f58ee1","gasCost":"0x0","memory":"0x","memSize":0,"stack":["0xdac58aa524e50956d0c0bae7f3f8bb9d35381365d07804dd5b48a5a297c06af4"],"returnStack":[],"returnData":"0x","depth":1,"refund":0,"opName":"STOP","error":""}
{"output":"","gasUsed":"0x17","time":142709}
{"pc":0,"op":96,"gas":"0x5f58ef8","gasCost":"0x3","memory":"0x","memSize":0,"stack":[],"returnData":"0x","depth":1,"refund":0,"opName":"PUSH1","error":""}
{"pc":2,"op":64,"gas":"0x5f58ef5","gasCost":"0x14","memory":"0x","memSize":0,"stack":["0x1"],"returnData":"0x","depth":1,"refund":0,"opName":"BLOCKHASH","error":""}
{"pc":3,"op":0,"gas":"0x5f58ee1","gasCost":"0x0","memory":"0x","memSize":0,"stack":["0xdac58aa524e50956d0c0bae7f3f8bb9d35381365d07804dd5b48a5a297c06af4"],"returnData":"0x","depth":1,"refund":0,"opName":"STOP","error":""}
{"output":"","gasUsed":"0x17","time":156276}
```
In this example, the caller has not provided the required blockhash:
```
./evm t8n --input.alloc=./testdata/4/alloc.json --input.txs=./testdata/4/txs.json --input.env=./testdata/4/env.json --trace
```
```
ERROR(4): getHash(3) invoked, blockhash for that block not provided
```
Error code: 4
### Chaining
Another thing that can be done, is to chain invocations:
```
./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --output.alloc=stdout | ./evm t8n --input.alloc=stdin --input.env=./testdata/1/env.json --input.txs=./testdata/1/txs.json
INFO [01-21|22:41:22.963] rejected tx index=1 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
INFO [01-21|22:41:22.966] rejected tx index=0 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
INFO [01-21|22:41:22.967] rejected tx index=1 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
INFO [07-27|11:53:41.049] rejected tx index=1 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
INFO [07-27|11:53:41.050] Trie dumping started root=84208a..ae4e13
INFO [07-27|11:53:41.050] Trie dumping complete accounts=3 elapsed="59.412µs"
INFO [07-27|11:53:41.050] Wrote file file=result.json
INFO [07-27|11:53:41.051] rejected tx index=0 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
INFO [07-27|11:53:41.051] rejected tx index=1 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1"
INFO [07-27|11:53:41.052] Trie dumping started root=84208a..ae4e13
INFO [07-27|11:53:41.052] Trie dumping complete accounts=3 elapsed="45.734µs"
INFO [07-27|11:53:41.052] Wrote file file=alloc.json
INFO [07-27|11:53:41.052] Wrote file file=result.json
```
What happened here, is that we first applied two identical transactions, so the second one was rejected.
What happened here, is that we first applied two identical transactions, so the second one was rejected.
Then, taking the poststate alloc as the input for the next state, we tried again to include
the same two transactions: this time, both failed due to too low nonce.
In order to meaningfully chain invocations, one would need to provide meaningful new `env`, otherwise the
actual blocknumber (exposed to the EVM) would not increase.
### Transactions in RLP form
It is possible to provide already-signed transactions as input to, using an `input.txs` which ends with the `rlp` suffix.
The input format for RLP-form transactions is _identical_ to the _output_ format for block bodies. Therefore, it's fully possible
to use the evm to go from `json` input to `rlp` input.
The following command takes **json** the transactions in `./testdata/13/txs.json` and signs them. After execution, they are output to `signed_txs.rlp`.:
```
./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./testdata/13/txs.json --input.env=./testdata/13/env.json --output.result=alloc_jsontx.json --output.body=signed_txs.rlp
INFO [07-27|11:53:41.124] Trie dumping started root=e4b924..6aef61
INFO [07-27|11:53:41.124] Trie dumping complete accounts=3 elapsed="94.284µs"
INFO [07-27|11:53:41.125] Wrote file file=alloc.json
INFO [07-27|11:53:41.125] Wrote file file=alloc_jsontx.json
INFO [07-27|11:53:41.125] Wrote file file=signed_txs.rlp
```
The `output.body` is the rlp-list of transactions, encoded in hex and placed in a string a'la `json` encoding rules:
```
cat signed_txs.rlp
"0xf8d2b86702f864010180820fa08284d09411111111111111111111111111111111111111118080c001a0b7dfab36232379bb3d1497a4f91c1966b1f932eae3ade107bf5d723b9cb474e0a06261c359a10f2132f126d250485b90cf20f30340801244a08ef6142ab33d1904b86702f864010280820fa08284d09411111111111111111111111111111111111111118080c080a0d4ec563b6568cd42d998fc4134b36933c6568d01533b5adf08769270243c6c7fa072bf7c21eac6bbeae5143371eef26d5e279637f3bd73482b55979d76d935b1e9"
```
We can use `rlpdump` to check what the contents are:
```
rlpdump -hex $(cat signed_txs.rlp | jq -r )
[
02f864010180820fa08284d09411111111111111111111111111111111111111118080c001a0b7dfab36232379bb3d1497a4f91c1966b1f932eae3ade107bf5d723b9cb474e0a06261c359a10f2132f126d250485b90cf20f30340801244a08ef6142ab33d1904,
02f864010280820fa08284d09411111111111111111111111111111111111111118080c080a0d4ec563b6568cd42d998fc4134b36933c6568d01533b5adf08769270243c6c7fa072bf7c21eac6bbeae5143371eef26d5e279637f3bd73482b55979d76d935b1e9,
]
```
Now, we can now use those (or any other already signed transactions), as input, like so:
```
./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./signed_txs.rlp --input.env=./testdata/13/env.json --output.result=alloc_rlptx.json
INFO [07-27|11:53:41.253] Trie dumping started root=e4b924..6aef61
INFO [07-27|11:53:41.253] Trie dumping complete accounts=3 elapsed="128.445µs"
INFO [07-27|11:53:41.253] Wrote file file=alloc.json
INFO [07-27|11:53:41.255] Wrote file file=alloc_rlptx.json
```
You might have noticed that the results from these two invocations were stored in two separate files.
And we can now finally check that they match.
```
cat alloc_jsontx.json | jq .stateRoot && cat alloc_rlptx.json | jq .stateRoot
"0xe4b924a6adb5959fccf769d5b7bb2f6359e26d1e76a2443c5a91a36d826aef61"
"0xe4b924a6adb5959fccf769d5b7bb2f6359e26d1e76a2443c5a91a36d826aef61"
```

View File

@@ -79,8 +79,10 @@ var (
Value: "env.json",
}
InputTxsFlag = cli.StringFlag{
Name: "input.txs",
Usage: "`stdin` or file name of where to find the transactions to apply.",
Name: "input.txs",
Usage: "`stdin` or file name of where to find the transactions to apply. " +
"If the file prefix is '.rlp', then the data is interpreted as an RLP list of signed transactions." +
"The '.rlp' format is identical to the output.body format.",
Value: "txs.json",
}
RewardFlag = cli.Int64Flag{

View File

@@ -25,6 +25,7 @@ import (
"math/big"
"os"
"path"
"strings"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
@@ -72,6 +73,7 @@ type input struct {
Alloc core.GenesisAlloc `json:"alloc,omitempty"`
Env *stEnv `json:"env,omitempty"`
Txs []*txWithKey `json:"txs,omitempty"`
TxRlp string `json:"txsRlp,omitempty"`
}
func Main(ctx *cli.Context) error {
@@ -199,11 +201,44 @@ func Main(ctx *cli.Context) error {
}
defer inFile.Close()
decoder := json.NewDecoder(inFile)
if err := decoder.Decode(&txsWithKeys); err != nil {
return NewError(ErrorJson, fmt.Errorf("failed unmarshaling txs-file: %v", err))
if strings.HasSuffix(txStr, ".rlp") {
var body hexutil.Bytes
if err := decoder.Decode(&body); err != nil {
return err
}
var txs types.Transactions
if err := rlp.DecodeBytes(body, &txs); err != nil {
return err
}
for _, tx := range txs {
txsWithKeys = append(txsWithKeys, &txWithKey{
key: nil,
tx: tx,
})
}
} else {
if err := decoder.Decode(&txsWithKeys); err != nil {
return NewError(ErrorJson, fmt.Errorf("failed unmarshaling txs-file: %v", err))
}
}
} else {
txsWithKeys = inputData.Txs
if len(inputData.TxRlp) > 0 {
// Decode the body of already signed transactions
body := common.FromHex(inputData.TxRlp)
var txs types.Transactions
if err := rlp.DecodeBytes(body, &txs); err != nil {
return err
}
for _, tx := range txs {
txsWithKeys = append(txsWithKeys, &txWithKey{
key: nil,
tx: tx,
})
}
} else {
// JSON encoded transactions
txsWithKeys = inputData.Txs
}
}
// We may have to sign the transactions.
signer := types.MakeSigner(chainConfig, big.NewInt(int64(prestate.Env.Number)))
@@ -365,6 +400,7 @@ func dispatchOutput(ctx *cli.Context, baseDir string, result *ExecutionResult, a
return NewError(ErrorJson, fmt.Errorf("failed marshalling output: %v", err))
}
os.Stdout.Write(b)
os.Stdout.Write([]byte("\n"))
}
if len(stdErrObject) > 0 {
b, err := json.MarshalIndent(stdErrObject, "", " ")
@@ -372,6 +408,7 @@ func dispatchOutput(ctx *cli.Context, baseDir string, result *ExecutionResult, a
return NewError(ErrorJson, fmt.Errorf("failed marshalling output: %v", err))
}
os.Stderr.Write(b)
os.Stderr.Write([]byte("\n"))
}
return nil
}

11
cmd/evm/testdata/12/alloc.json vendored Normal file
View File

@@ -0,0 +1,11 @@
{
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b" : {
"balance" : "84000000",
"code" : "0x",
"nonce" : "0x00",
"storage" : {
"0x00" : "0x00"
}
}
}

10
cmd/evm/testdata/12/env.json vendored Normal file
View File

@@ -0,0 +1,10 @@
{
"currentCoinbase" : "0x2adc25665018aa1fe0e6bc666dac8fc2697ff9ba",
"currentDifficulty" : "0x020000",
"currentNumber" : "0x01",
"currentTimestamp" : "0x03e8",
"previousHash" : "0xfda4419b3660e99f37e536dae1ab081c180136bb38c837a93e93d9aab58553b2",
"currentGasLimit" : "0x0f4240",
"currentBaseFee" : "0x20"
}

40
cmd/evm/testdata/12/readme.md vendored Normal file
View File

@@ -0,0 +1,40 @@
## Test 1559 balance + gasCap
This test contains an EIP-1559 consensus issue which happened on Ropsten, where
`geth` did not properly account for the value transfer while doing the check on `max_fee_per_gas * gas_limit`.
Before the issue was fixed, this invocation allowed the transaction to pass into a block:
```
dir=./testdata/12 && ./evm t8n --state.fork=London --input.alloc=$dir/alloc.json --input.txs=$dir/txs.json --input.env=$dir/env.json --output.alloc=stdout --output.result=stdout
```
With the fix applied, the result is:
```
dir=./testdata/12 && ./evm t8n --state.fork=London --input.alloc=$dir/alloc.json --input.txs=$dir/txs.json --input.env=$dir/env.json --output.alloc=stdout --output.result=stdout
INFO [07-21|19:03:50.276] rejected tx index=0 hash=ccc996..d83435 from=0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B error="insufficient funds for gas * price + value: address 0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B have 84000000 want 84000032"
INFO [07-21|19:03:50.276] Trie dumping started root=e05f81..6597a5
INFO [07-21|19:03:50.276] Trie dumping complete accounts=1 elapsed="39.549µs"
{
"alloc": {
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b": {
"balance": "0x501bd00"
}
},
"result": {
"stateRoot": "0xe05f81f8244a76503ceec6f88abfcd03047a612a1001217f37d30984536597a5",
"txRoot": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"receiptRoot": "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
"logsHash": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"receipts": [],
"rejected": [
{
"index": 0,
"error": "insufficient funds for gas * price + value: address 0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B have 84000000 want 84000032"
}
]
}
}
```
The transaction is rejected.

20
cmd/evm/testdata/12/txs.json vendored Normal file
View File

@@ -0,0 +1,20 @@
[
{
"input" : "0x",
"gas" : "0x5208",
"nonce" : "0x0",
"to" : "0x1111111111111111111111111111111111111111",
"value" : "0x20",
"v" : "0x0",
"r" : "0x0",
"s" : "0x0",
"secretKey" : "0x45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8",
"chainId" : "0x1",
"type" : "0x2",
"maxFeePerGas" : "0xfa0",
"maxPriorityFeePerGas" : "0x20",
"accessList" : [
]
}
]

23
cmd/evm/testdata/13/alloc.json vendored Normal file
View File

@@ -0,0 +1,23 @@
{
"0x1111111111111111111111111111111111111111" : {
"balance" : "0x010000000000",
"code" : "0xfe",
"nonce" : "0x01",
"storage" : {
}
},
"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b" : {
"balance" : "0x010000000000",
"code" : "0x",
"nonce" : "0x01",
"storage" : {
}
},
"0xd02d72e067e77158444ef2020ff2d325f929b363" : {
"balance" : "0x01000000000000",
"code" : "0x",
"nonce" : "0x01",
"storage" : {
}
}
}

12
cmd/evm/testdata/13/env.json vendored Normal file
View File

@@ -0,0 +1,12 @@
{
"currentCoinbase" : "0x2adc25665018aa1fe0e6bc666dac8fc2697ff9ba",
"currentDifficulty" : "0x020000",
"currentNumber" : "0x01",
"currentTimestamp" : "0x079e",
"previousHash" : "0xcb23ee65a163121f640673b41788ee94633941405f95009999b502eedfbbfd4f",
"currentGasLimit" : "0x40000000",
"currentBaseFee" : "0x036b",
"blockHashes" : {
"0" : "0xcb23ee65a163121f640673b41788ee94633941405f95009999b502eedfbbfd4f"
}
}

4
cmd/evm/testdata/13/readme.md vendored Normal file
View File

@@ -0,0 +1,4 @@
## Input transactions in RLP form
This testdata folder is used to examplify how transaction input can be provided in rlp form.
Please see the README in `evm` folder for how this is performed.

34
cmd/evm/testdata/13/txs.json vendored Normal file
View File

@@ -0,0 +1,34 @@
[
{
"input" : "0x",
"gas" : "0x84d0",
"nonce" : "0x1",
"to" : "0x1111111111111111111111111111111111111111",
"value" : "0x0",
"v" : "0x0",
"r" : "0x0",
"s" : "0x0",
"secretKey" : "0x41f6e321b31e72173f8ff2e292359e1862f24fba42fe6f97efaf641980eff298",
"chainId" : "0x1",
"type" : "0x2",
"maxFeePerGas" : "0xfa0",
"maxPriorityFeePerGas" : "0x0",
"accessList" : []
},
{
"input" : "0x",
"gas" : "0x84d0",
"nonce" : "0x2",
"to" : "0x1111111111111111111111111111111111111111",
"value" : "0x0",
"v" : "0x0",
"r" : "0x0",
"s" : "0x0",
"secretKey" : "0x41f6e321b31e72173f8ff2e292359e1862f24fba42fe6f97efaf641980eff298",
"chainId" : "0x1",
"type" : "0x2",
"maxFeePerGas" : "0xfa0",
"maxPriorityFeePerGas" : "0x0",
"accessList" : []
}
]

View File

@@ -11,6 +11,8 @@ function showjson(){
function demo(){
echo "$ticks"
echo "$1"
$1
echo ""
echo "$ticks"
echo ""
}
@@ -152,9 +154,7 @@ echo ""
echo "The \`BLOCKHASH\` opcode requires blockhashes to be provided by the caller, inside the \`env\`."
echo "If a required blockhash is not provided, the exit code should be \`4\`:"
echo "Example where blockhashes are provided: "
cmd="./evm t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace"
tick && echo $cmd && tick
$cmd 2>&1 >/dev/null
demo "./evm --verbosity=1 t8n --input.alloc=./testdata/3/alloc.json --input.txs=./testdata/3/txs.json --input.env=./testdata/3/env.json --trace"
cmd="cat trace-0-0x72fadbef39cd251a437eea619cfeda752271a5faaaa2147df012e112159ffb81.jsonl | grep BLOCKHASH -C2"
tick && echo $cmd && tick
echo "$ticks"
@@ -164,13 +164,11 @@ echo ""
echo "In this example, the caller has not provided the required blockhash:"
cmd="./evm t8n --input.alloc=./testdata/4/alloc.json --input.txs=./testdata/4/txs.json --input.env=./testdata/4/env.json --trace"
tick && echo $cmd && tick
tick
$cmd
tick && echo $cmd && $cmd
errc=$?
tick
echo "Error code: $errc"
echo ""
echo "### Chaining"
echo ""
@@ -189,3 +187,28 @@ echo ""
echo "In order to meaningfully chain invocations, one would need to provide meaningful new \`env\`, otherwise the"
echo "actual blocknumber (exposed to the EVM) would not increase."
echo ""
echo "### Transactions in RLP form"
echo ""
echo "It is possible to provide already-signed transactions as input to, using an \`input.txs\` which ends with the \`rlp\` suffix."
echo "The input format for RLP-form transactions is _identical_ to the _output_ format for block bodies. Therefore, it's fully possible"
echo "to use the evm to go from \`json\` input to \`rlp\` input."
echo ""
echo "The following command takes **json** the transactions in \`./testdata/13/txs.json\` and signs them. After execution, they are output to \`signed_txs.rlp\`.:"
demo "./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./testdata/13/txs.json --input.env=./testdata/13/env.json --output.result=alloc_jsontx.json --output.body=signed_txs.rlp"
echo "The \`output.body\` is the rlp-list of transactions, encoded in hex and placed in a string a'la \`json\` encoding rules:"
demo "cat signed_txs.rlp"
echo "We can use \`rlpdump\` to check what the contents are: "
echo "$ticks"
echo "rlpdump -hex \$(cat signed_txs.rlp | jq -r )"
rlpdump -hex $(cat signed_txs.rlp | jq -r )
echo "$ticks"
echo "Now, we can now use those (or any other already signed transactions), as input, like so: "
demo "./evm t8n --state.fork=London --input.alloc=./testdata/13/alloc.json --input.txs=./signed_txs.rlp --input.env=./testdata/13/env.json --output.result=alloc_rlptx.json"
echo "You might have noticed that the results from these two invocations were stored in two separate files. "
echo "And we can now finally check that they match."
echo "$ticks"
echo "cat alloc_jsontx.json | jq .stateRoot && cat alloc_rlptx.json | jq .stateRoot"
cat alloc_jsontx.json | jq .stateRoot && cat alloc_rlptx.json | jq .stateRoot
echo "$ticks"

View File

@@ -68,7 +68,6 @@ It expects the genesis file as argument.`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.CalaverasFlag,
},
Category: "BLOCKCHAIN COMMANDS",
Description: `
@@ -92,11 +91,15 @@ The dumpgenesis command dumps the genesis block configuration in JSON format to
utils.MetricsHTTPFlag,
utils.MetricsPortFlag,
utils.MetricsEnableInfluxDBFlag,
utils.MetricsEnableInfluxDBV2Flag,
utils.MetricsInfluxDBEndpointFlag,
utils.MetricsInfluxDBDatabaseFlag,
utils.MetricsInfluxDBUsernameFlag,
utils.MetricsInfluxDBPasswordFlag,
utils.MetricsInfluxDBTagsFlag,
utils.MetricsInfluxDBTokenFlag,
utils.MetricsInfluxDBBucketFlag,
utils.MetricsInfluxDBOrganizationFlag,
utils.TxLookupLimitFlag,
},
Category: "BLOCKCHAIN COMMANDS",

View File

@@ -233,6 +233,18 @@ func applyMetricConfig(ctx *cli.Context, cfg *gethConfig) {
if ctx.GlobalIsSet(utils.MetricsInfluxDBTagsFlag.Name) {
cfg.Metrics.InfluxDBTags = ctx.GlobalString(utils.MetricsInfluxDBTagsFlag.Name)
}
if ctx.GlobalIsSet(utils.MetricsEnableInfluxDBV2Flag.Name) {
cfg.Metrics.EnableInfluxDBV2 = ctx.GlobalBool(utils.MetricsEnableInfluxDBV2Flag.Name)
}
if ctx.GlobalIsSet(utils.MetricsInfluxDBTokenFlag.Name) {
cfg.Metrics.InfluxDBToken = ctx.GlobalString(utils.MetricsInfluxDBTokenFlag.Name)
}
if ctx.GlobalIsSet(utils.MetricsInfluxDBBucketFlag.Name) {
cfg.Metrics.InfluxDBBucket = ctx.GlobalString(utils.MetricsInfluxDBBucketFlag.Name)
}
if ctx.GlobalIsSet(utils.MetricsInfluxDBOrganizationFlag.Name) {
cfg.Metrics.InfluxDBOrganization = ctx.GlobalString(utils.MetricsInfluxDBOrganizationFlag.Name)
}
}
func deprecated(field string) bool {

View File

@@ -134,8 +134,6 @@ func remoteConsole(ctx *cli.Context) error {
path = filepath.Join(path, "rinkeby")
} else if ctx.GlobalBool(utils.GoerliFlag.Name) {
path = filepath.Join(path, "goerli")
} else if ctx.GlobalBool(utils.CalaverasFlag.Name) {
path = filepath.Join(path, "calaveras")
}
}
endpoint = fmt.Sprintf("%s/geth.ipc", path)

View File

@@ -75,7 +75,6 @@ Remove blockchain and state databases`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.CalaverasFlag,
},
Usage: "Inspect the storage size for each type of data in the database",
Description: `This commands iterates the entire database. If the optional 'prefix' and 'start' arguments are provided, then the iteration is limited to the given subset of data.`,
@@ -91,7 +90,6 @@ Remove blockchain and state databases`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.CalaverasFlag,
},
}
dbCompactCmd = cli.Command{
@@ -105,7 +103,6 @@ Remove blockchain and state databases`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.CalaverasFlag,
utils.CacheFlag,
utils.CacheDatabaseFlag,
},
@@ -125,7 +122,6 @@ corruption if it is aborted during execution'!`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.CalaverasFlag,
},
Description: "This command looks up the specified database key from the database.",
}
@@ -141,7 +137,6 @@ corruption if it is aborted during execution'!`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.CalaverasFlag,
},
Description: `This command deletes the specified database key from the database.
WARNING: This is a low-level operation which may cause database corruption!`,
@@ -158,7 +153,6 @@ WARNING: This is a low-level operation which may cause database corruption!`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.CalaverasFlag,
},
Description: `This command sets a given database key to the given value.
WARNING: This is a low-level operation which may cause database corruption!`,
@@ -175,7 +169,6 @@ WARNING: This is a low-level operation which may cause database corruption!`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.CalaverasFlag,
},
Description: "This command looks up the specified database key from the database.",
}
@@ -191,7 +184,6 @@ WARNING: This is a low-level operation which may cause database corruption!`,
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.CalaverasFlag,
},
Description: "This command displays information about the freezer index.",
}

View File

@@ -118,7 +118,7 @@ var (
utils.MiningEnabledFlag,
utils.MinerThreadsFlag,
utils.MinerNotifyFlag,
utils.MinerGasTargetFlag,
utils.LegacyMinerGasTargetFlag,
utils.MinerGasLimitFlag,
utils.MinerGasPriceFlag,
utils.MinerEtherbaseFlag,
@@ -138,7 +138,6 @@ var (
utils.RopstenFlag,
utils.RinkebyFlag,
utils.GoerliFlag,
utils.CalaverasFlag,
utils.VMEnableDebugFlag,
utils.NetworkIdFlag,
utils.EthStatsURLFlag,
@@ -195,6 +194,10 @@ var (
utils.MetricsInfluxDBUsernameFlag,
utils.MetricsInfluxDBPasswordFlag,
utils.MetricsInfluxDBTagsFlag,
utils.MetricsEnableInfluxDBV2Flag,
utils.MetricsInfluxDBTokenFlag,
utils.MetricsInfluxDBBucketFlag,
utils.MetricsInfluxDBOrganizationFlag,
}
)
@@ -274,9 +277,6 @@ func prepare(ctx *cli.Context) {
case ctx.GlobalIsSet(utils.GoerliFlag.Name):
log.Info("Starting Geth on Görli testnet...")
case ctx.GlobalIsSet(utils.CalaverasFlag.Name):
log.Info("Starting Geth on Calaveras testnet...")
case ctx.GlobalIsSet(utils.DeveloperFlag.Name):
log.Info("Starting Geth in ephemeral dev mode...")

View File

@@ -52,13 +52,16 @@
"check": "Geth\\/v1\\.9\\.(7|8|9|10|11|12|13|14|15|16).*$"
},
{
"name": "GethCrash",
"name": "Geth DoS via MULMOD",
"uid": "GETH-2020-04",
"summary": "A denial-of-service issue can be used to crash Geth nodes during block processing",
"description": "Full details to be disclosed at a later date",
"description": "Affected versions suffer from a vulnerability which can be exploited through the `MULMOD` operation, by specifying a modulo of `0`: `mulmod(a,b,0)`, causing a `panic` in the underlying library. \nThe crash was in the `uint256` library, where a buffer [underflowed](https://github.com/holiman/uint256/blob/4ce82e695c10ddad57215bdbeafb68b8c5df2c30/uint256.go#L442).\n\n\tif `d == 0`, `dLen` remains `0`\n\nand https://github.com/holiman/uint256/blob/4ce82e695c10ddad57215bdbeafb68b8c5df2c30/uint256.go#L451 will try to access index `[-1]`.\n\nThe `uint256` library was first merged in this [commit](https://github.com/ethereum/go-ethereum/commit/cf6674539c589f80031f3371a71c6a80addbe454), on 2020-06-08. \nExploiting this vulnerabilty would cause all vulnerable nodes to drop off the network. \n\nThe issue was brought to our attention through a [bug report](https://github.com/ethereum/go-ethereum/issues/21367), showing a `panic` occurring on sync from genesis on the Ropsten network.\n \nIt was estimated that the least obvious way to fix this would be to merge the fix into `uint256`, make a new release of that library and then update the geth-dependency.\n",
"links": [
"https://blog.ethereum.org/2020/11/12/geth_security_release/",
"https://github.com/ethereum/go-ethereum/security/advisories/GHSA-jm5c-rv3w-w83m"
"https://github.com/ethereum/go-ethereum/security/advisories/GHSA-jm5c-rv3w-w83m",
"https://github.com/holiman/uint256/releases/tag/v1.1.1",
"https://github.com/holiman/uint256/pull/80",
"https://github.com/ethereum/go-ethereum/pull/21368"
],
"introduced": "v1.9.16",
"fixed": "v1.9.18",
@@ -66,5 +69,51 @@
"severity": "Critical",
"CVE": "CVE-2020-26242",
"check": "Geth\\/v1\\.9.(16|17).*$"
},
{
"name": "LES Server DoS via GetProofsV2",
"uid": "GETH-2020-05",
"summary": "A DoS vulnerability can make a LES server crash.",
"description": "A DoS vulnerability can make a LES server crash via malicious GetProofsV2 request from a connected LES client.\n\nThe vulnerability was patched in #21896.\n\nThis vulnerability only concern users explicitly running geth as a light server",
"links": [
"https://github.com/ethereum/go-ethereum/security/advisories/GHSA-r33q-22hv-j29q",
"https://github.com/ethereum/go-ethereum/pull/21896"
],
"introduced": "v1.8.0",
"fixed": "v1.9.25",
"published": "2020-12-10",
"severity": "Medium",
"CVE": "CVE-2020-26264",
"check": "(Geth\\/v1\\.8\\.*)|(Geth\\/v1\\.9\\.\\d-.*)|(Geth\\/v1\\.9\\.1\\d-.*)|(Geth\\/v1\\.9\\.(20|21|22|23|24)-.*)$"
},
{
"name": "SELFDESTRUCT-recreate consensus flaw",
"uid": "GETH-2020-06",
"introduced": "v1.9.4",
"fixed": "v1.9.20",
"summary": "A consensus-vulnerability in Geth could cause a chain split, where vulnerable versions refuse to accept the canonical chain.",
"description": "A flaw was repoted at 2020-08-11 by John Youngseok Yang (Software Platform Lab), where a particular sequence of transactions could cause a consensus failure.\n\n- Tx 1:\n - `sender` invokes `caller`.\n - `caller` invokes `0xaa`. `0xaa` has 3 wei, does a self-destruct-to-self\n - `caller` does a `1 wei` -call to `0xaa`, who thereby has 1 wei (the code in `0xaa` still executed, since the tx is still ongoing, but doesn't redo the selfdestruct, it takes a different path if callvalue is non-zero)\n\n-Tx 2:\n - `sender` does a 5-wei call to 0xaa. No exec (since no code). \n\nIn geth, the result would be that `0xaa` had `6 wei`, whereas OE reported (correctly) `5` wei. Furthermore, in geth, if the second tx was not executed, the `0xaa` would be destructed, resulting in `0 wei`. Thus obviously wrong. \n\nIt was determined that the root cause was this [commit](https://github.com/ethereum/go-ethereum/commit/223b950944f494a5b4e0957fd9f92c48b09037ad) from [this PR](https://github.com/ethereum/go-ethereum/pull/19953). The semantics of `createObject` was subtly changd, into returning a non-nil object (with `deleted=true`) where it previously did not if the account had been destructed. This return value caused the new object to inherit the old `balance`.\n",
"links": [
"https://github.com/ethereum/go-ethereum/security/advisories/GHSA-xw37-57qp-9mm4"
],
"published": "2020-12-10",
"severity": "High",
"CVE": "CVE-2020-26265",
"check": "(Geth\\/v1\\.9\\.(4|5|6|7|8|9)-.*)|(Geth\\/v1\\.9\\.1\\d-.*)$"
},
{
"name": "Not ready for London upgrade",
"uid": "GETH-2021-01",
"summary": "The client is not ready for the 'London' technical upgrade, and will deviate from the canonical chain when the London upgrade occurs (at block '12965000' around August 4, 2021.",
"description": "At (or around) August 4, Ethereum will undergo a technical upgrade called 'London'. Clients not upgraded will fail to progress on the canonical chain.",
"links": [
"https://github.com/ethereum/eth1.0-specs/blob/master/network-upgrades/mainnet-upgrades/london.md",
"https://notes.ethereum.org/@timbeiko/ropsten-postmortem"
],
"introduced": "v1.10.1",
"fixed": "v1.10.6",
"published": "2020-12-10",
"severity": "High",
"check": "(Geth\\/v1\\.10\\.(1|2|3|4|5)-.*)$"
}
]

View File
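Each advisory's "check" field is a regular expression matched against the client's reported version string. Below is a minimal sketch of how such a check can be evaluated with Go's regexp package; the version string and the pattern are illustrative examples written in the same style as the entries above, not values taken from the file:
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical client version string, in the user-agent format the checks target.
	version := "Geth/v1.10.4-stable-aa637fd3/linux-amd64/go1.16.4"
	// A 'check' expression in the same style as the advisories above (illustrative).
	vulnerable := regexp.MustCompile(`(Geth\/v1\.10\.(1|2|3|4|5)-.*)$`)
	if vulnerable.MatchString(version) {
		fmt.Println("node is within the affected version range")
	}
}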

@@ -44,7 +44,6 @@ var AppHelpFlagGroups = []flags.FlagGroup{
utils.MainnetFlag,
utils.GoerliFlag,
utils.RinkebyFlag,
utils.CalaverasFlag,
utils.RopstenFlag,
utils.SyncModeFlag,
utils.ExitWhenSyncedFlag,
@@ -182,7 +181,6 @@ var AppHelpFlagGroups = []flags.FlagGroup{
utils.MinerNotifyFlag,
utils.MinerNotifyFullFlag,
utils.MinerGasPriceFlag,
utils.MinerGasTargetFlag,
utils.MinerGasLimitFlag,
utils.MinerEtherbaseFlag,
utils.MinerExtraDataFlag,
@@ -226,6 +224,7 @@ var AppHelpFlagGroups = []flags.FlagGroup{
utils.LegacyRPCCORSDomainFlag,
utils.LegacyRPCVirtualHostsFlag,
utils.LegacyRPCApiFlag,
utils.LegacyMinerGasTargetFlag,
},
},
{

View File

@@ -158,7 +158,7 @@ func checkEthstats(client *sshClient, network string) (*ethstatsInfos, error) {
if port != 80 && port != 443 {
config += fmt.Sprintf(":%d", port)
}
// Retrieve the IP blacklist
// Retrieve the list of banned IPs
banned := strings.Split(infos.envvars["BANNED"], ",")
// Run a sanity check to see if the port is reachable

View File

@@ -63,20 +63,20 @@ func (w *wizard) deployEthstats() {
fmt.Printf("What should be the secret password for the API? (default = %s)\n", infos.secret)
infos.secret = w.readDefaultString(infos.secret)
}
// Gather any blacklists to ban from reporting
// Gather any IP addresses to ban from reporting
if existed {
fmt.Println()
fmt.Printf("Keep existing IP %v blacklist (y/n)? (default = yes)\n", infos.banned)
fmt.Printf("Keep existing IP %v in the banned list (y/n)? (default = yes)\n", infos.banned)
if !w.readDefaultYesNo(true) {
// The user might want to clear the entire list, although generally probably not
fmt.Println()
fmt.Printf("Clear out blacklist and start over (y/n)? (default = no)\n")
fmt.Printf("Clear out the banned list and start over (y/n)? (default = no)\n")
if w.readDefaultYesNo(false) {
infos.banned = nil
}
// Offer the user to explicitly add/remove certain IP addresses
fmt.Println()
fmt.Println("Which additional IP addresses should be blacklisted?")
fmt.Println("Which additional IP addresses should be in the banned list?")
for {
if ip := w.readIPAddress(); ip != "" {
infos.banned = append(infos.banned, ip)
@@ -85,7 +85,7 @@ func (w *wizard) deployEthstats() {
break
}
fmt.Println()
fmt.Println("Which IP addresses should not be blacklisted?")
fmt.Println("Which IP addresses should not be in the banned list?")
for {
if ip := w.readIPAddress(); ip != "" {
for i, addr := range infos.banned {

View File

@@ -125,10 +125,6 @@ var (
Name: "keystore",
Usage: "Directory for the keystore (default = inside the datadir)",
}
NoUSBFlag = cli.BoolFlag{
Name: "nousb",
Usage: "Disables monitoring for and managing USB hardware wallets (deprecated)",
}
USBFlag = cli.BoolFlag{
Name: "usb",
Usage: "Enable monitoring and management of USB hardware wallets",
@@ -151,10 +147,6 @@ var (
Name: "goerli",
Usage: "Görli network: pre-configured proof-of-authority test network",
}
CalaverasFlag = cli.BoolFlag{
Name: "calaveras",
Usage: "Calaveras network: pre-configured proof-of-authority shortlived test network.",
}
RinkebyFlag = cli.BoolFlag{
Name: "rinkeby",
Usage: "Rinkeby network: pre-configured proof-of-authority test network",
@@ -444,11 +436,6 @@ var (
Name: "miner.notify.full",
Usage: "Notify with pending block headers instead of work packages",
}
MinerGasTargetFlag = cli.Uint64Flag{
Name: "miner.gastarget",
Usage: "Target gas floor for mined blocks",
Value: ethconfig.Defaults.Miner.GasFloor,
}
MinerGasLimitFlag = cli.Uint64Flag{
Name: "miner.gaslimit",
Usage: "Target gas ceiling for mined blocks",
@@ -761,6 +748,29 @@ var (
Value: metrics.DefaultConfig.InfluxDBTags,
}
MetricsEnableInfluxDBV2Flag = cli.BoolFlag{
Name: "metrics.influxdbv2",
Usage: "Enable metrics export/push to an external InfluxDB v2 database",
}
MetricsInfluxDBTokenFlag = cli.StringFlag{
Name: "metrics.influxdb.token",
Usage: "Token to authorize access to the database (v2 only)",
Value: metrics.DefaultConfig.InfluxDBToken,
}
MetricsInfluxDBBucketFlag = cli.StringFlag{
Name: "metrics.influxdb.bucket",
Usage: "InfluxDB bucket name to push reported metrics to (v2 only)",
Value: metrics.DefaultConfig.InfluxDBBucket,
}
MetricsInfluxDBOrganizationFlag = cli.StringFlag{
Name: "metrics.influxdb.organization",
Usage: "InfluxDB organization name (v2 only)",
Value: metrics.DefaultConfig.InfluxDBOrganization,
}
CatalystFlag = cli.BoolFlag{
Name: "catalyst",
Usage: "Catalyst mode (eth2 integration testing)",
@@ -783,9 +793,6 @@ func MakeDataDir(ctx *cli.Context) string {
if ctx.GlobalBool(GoerliFlag.Name) {
return filepath.Join(path, "goerli")
}
if ctx.GlobalBool(CalaverasFlag.Name) {
return filepath.Join(path, "calaveras")
}
return path
}
Fatalf("Cannot determine default data directory, please set manually (--datadir)")
@@ -838,8 +845,6 @@ func setBootstrapNodes(ctx *cli.Context, cfg *p2p.Config) {
urls = params.RinkebyBootnodes
case ctx.GlobalBool(GoerliFlag.Name):
urls = params.GoerliBootnodes
case ctx.GlobalBool(CalaverasFlag.Name):
urls = params.CalaverasBootnodes
case cfg.BootstrapNodes != nil:
return // already set, don't apply defaults.
}
@@ -1283,8 +1288,6 @@ func setDataDir(ctx *cli.Context, cfg *node.Config) {
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "rinkeby")
case ctx.GlobalBool(GoerliFlag.Name) && cfg.DataDir == node.DefaultDataDir():
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "goerli")
case ctx.GlobalBool(CalaverasFlag.Name) && cfg.DataDir == node.DefaultDataDir():
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "calaveras")
}
}
@@ -1386,9 +1389,6 @@ func setMiner(ctx *cli.Context, cfg *miner.Config) {
if ctx.GlobalIsSet(MinerExtraDataFlag.Name) {
cfg.ExtraData = []byte(ctx.GlobalString(MinerExtraDataFlag.Name))
}
if ctx.GlobalIsSet(MinerGasTargetFlag.Name) {
cfg.GasFloor = ctx.GlobalUint64(MinerGasTargetFlag.Name)
}
if ctx.GlobalIsSet(MinerGasLimitFlag.Name) {
cfg.GasCeil = ctx.GlobalUint64(MinerGasLimitFlag.Name)
}
@@ -1401,6 +1401,9 @@ func setMiner(ctx *cli.Context, cfg *miner.Config) {
if ctx.GlobalIsSet(MinerNoVerfiyFlag.Name) {
cfg.Noverify = ctx.GlobalBool(MinerNoVerfiyFlag.Name)
}
if ctx.GlobalIsSet(LegacyMinerGasTargetFlag.Name) {
log.Warn("The generic --miner.gastarget flag is deprecated and will be removed in the future!")
}
}
func setWhitelist(ctx *cli.Context, cfg *ethconfig.Config) {
@@ -1470,7 +1473,7 @@ func CheckExclusive(ctx *cli.Context, args ...interface{}) {
// SetEthConfig applies eth-related command line flags to the config.
func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *ethconfig.Config) {
// Avoid conflicting network flags
CheckExclusive(ctx, MainnetFlag, DeveloperFlag, RopstenFlag, RinkebyFlag, GoerliFlag, CalaverasFlag)
CheckExclusive(ctx, MainnetFlag, DeveloperFlag, RopstenFlag, RinkebyFlag, GoerliFlag)
CheckExclusive(ctx, LightServeFlag, SyncModeFlag, "light")
CheckExclusive(ctx, DeveloperFlag, ExternalSignerFlag) // Can't use both ephemeral unlocked and external signer
if ctx.GlobalString(GCModeFlag.Name) == "archive" && ctx.GlobalUint64(TxLookupLimitFlag.Name) != 0 {
@@ -1623,11 +1626,6 @@ func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *ethconfig.Config) {
}
cfg.Genesis = core.DefaultGoerliGenesisBlock()
SetDNSDiscoveryDefaults(cfg, params.GoerliGenesisHash)
case ctx.GlobalBool(CalaverasFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 123 // https://gist.github.com/holiman/c5697b041b3dc18c50a5cdd382cbdd16
}
cfg.Genesis = core.DefaultCalaverasGenesisBlock()
case ctx.GlobalBool(DeveloperFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 1337
@@ -1744,11 +1742,36 @@ func SetupMetrics(ctx *cli.Context) {
log.Info("Enabling metrics collection")
var (
enableExport = ctx.GlobalBool(MetricsEnableInfluxDBFlag.Name)
endpoint = ctx.GlobalString(MetricsInfluxDBEndpointFlag.Name)
database = ctx.GlobalString(MetricsInfluxDBDatabaseFlag.Name)
username = ctx.GlobalString(MetricsInfluxDBUsernameFlag.Name)
password = ctx.GlobalString(MetricsInfluxDBPasswordFlag.Name)
enableExport = ctx.GlobalBool(MetricsEnableInfluxDBFlag.Name)
enableExportV2 = ctx.GlobalBool(MetricsEnableInfluxDBV2Flag.Name)
)
if enableExport || enableExportV2 {
CheckExclusive(ctx, MetricsEnableInfluxDBFlag, MetricsEnableInfluxDBV2Flag)
v1FlagIsSet := ctx.GlobalIsSet(MetricsInfluxDBUsernameFlag.Name) ||
ctx.GlobalIsSet(MetricsInfluxDBPasswordFlag.Name)
v2FlagIsSet := ctx.GlobalIsSet(MetricsInfluxDBTokenFlag.Name) ||
ctx.GlobalIsSet(MetricsInfluxDBOrganizationFlag.Name) ||
ctx.GlobalIsSet(MetricsInfluxDBBucketFlag.Name)
if enableExport && v2FlagIsSet {
Fatalf("Flags --influxdb.metrics.organization, --influxdb.metrics.token, --influxdb.metrics.bucket are only available for influxdb-v2")
} else if enableExportV2 && v1FlagIsSet {
Fatalf("Flags --influxdb.metrics.username, --influxdb.metrics.password are only available for influxdb-v1")
}
}
var (
endpoint = ctx.GlobalString(MetricsInfluxDBEndpointFlag.Name)
database = ctx.GlobalString(MetricsInfluxDBDatabaseFlag.Name)
username = ctx.GlobalString(MetricsInfluxDBUsernameFlag.Name)
password = ctx.GlobalString(MetricsInfluxDBPasswordFlag.Name)
token = ctx.GlobalString(MetricsInfluxDBTokenFlag.Name)
bucket = ctx.GlobalString(MetricsInfluxDBBucketFlag.Name)
organization = ctx.GlobalString(MetricsInfluxDBOrganizationFlag.Name)
)
if enableExport {
@@ -1757,6 +1780,12 @@ func SetupMetrics(ctx *cli.Context) {
log.Info("Enabling metrics export to InfluxDB")
go influxdb.InfluxDBWithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, database, username, password, "geth.", tagsMap)
} else if enableExportV2 {
tagsMap := SplitTagsFlag(ctx.GlobalString(MetricsInfluxDBTagsFlag.Name))
log.Info("Enabling metrics export to InfluxDB (v2)")
go influxdb.InfluxDBV2WithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, token, bucket, organization, "geth.", tagsMap)
}
if ctx.GlobalIsSet(MetricsHTTPFlag.Name) {
@@ -1817,8 +1846,6 @@ func MakeGenesis(ctx *cli.Context) *core.Genesis {
genesis = core.DefaultRinkebyGenesisBlock()
case ctx.GlobalBool(GoerliFlag.Name):
genesis = core.DefaultGoerliGenesisBlock()
case ctx.GlobalBool(CalaverasFlag.Name):
genesis = core.DefaultCalaverasGenesisBlock()
case ctx.GlobalBool(DeveloperFlag.Name):
Fatalf("Developer chains are ephemeral")
}

View File
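The SetupMetrics hunk above ends by launching the v2 reporter via influxdb.InfluxDBV2WithTags. A minimal standalone sketch of that call follows, assuming the same signature as shown in the diff; the endpoint, token, bucket, organization and tag values are placeholders, and in geth itself the registry is only populated when metrics collection is enabled through the --metrics flags:
package main

import (
	"time"

	"github.com/ethereum/go-ethereum/metrics"
	"github.com/ethereum/go-ethereum/metrics/influxdb"
)

func main() {
	// Report the default registry to an InfluxDB v2 instance every 10 seconds,
	// mirroring the call in SetupMetrics. Endpoint, token, bucket, organization
	// and tags are placeholder values for illustration only.
	go influxdb.InfluxDBV2WithTags(metrics.DefaultRegistry, 10*time.Second,
		"http://localhost:8086", "example-token", "geth", "example-org",
		"geth.", map[string]string{"host": "node-1"})
	select {} // block so the background reporter keeps running
}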

@@ -20,6 +20,7 @@ import (
"fmt"
"strings"
"github.com/ethereum/go-ethereum/eth/ethconfig"
"github.com/ethereum/go-ethereum/node"
"gopkg.in/urfave/cli.v1"
)
@@ -33,10 +34,17 @@ var ShowDeprecated = cli.Command{
Description: "Show flags that have been deprecated and will soon be removed",
}
var DeprecatedFlags = []cli.Flag{}
var DeprecatedFlags = []cli.Flag{
LegacyMinerGasTargetFlag,
NoUSBFlag,
}
var (
// (Deprecated May 2020, shown in aliased flags section)
NoUSBFlag = cli.BoolFlag{
Name: "nousb",
Usage: "Disables monitoring for and managing USB hardware wallets (deprecated)",
}
LegacyRPCEnabledFlag = cli.BoolFlag{
Name: "rpc",
Usage: "Enable the HTTP-RPC server (deprecated and will be removed June 2021, use --http)",
@@ -66,6 +74,12 @@ var (
Usage: "API's offered over the HTTP-RPC interface (deprecated and will be removed June 2021, use --http.api)",
Value: "",
}
// (Deprecated July 2021, shown in aliased flags section)
LegacyMinerGasTargetFlag = cli.Uint64Flag{
Name: "miner.gastarget",
Usage: "Target gas floor for mined blocks (deprecated)",
Value: ethconfig.Defaults.Miner.GasFloor,
}
)
// showDeprecated displays deprecated flags that will soon be removed from the codebase.
@@ -74,7 +88,8 @@ func showDeprecated(*cli.Context) {
fmt.Println("The following flags are deprecated and will be removed in the future!")
fmt.Println("--------------------------------------------------------------------")
fmt.Println()
// TODO remove when there are newly deprecated flags
fmt.Println("no deprecated flags to show at this time")
for _, flag := range DeprecatedFlags {
fmt.Println(flag.String())
}
fmt.Println()
}

View File

@@ -215,7 +215,7 @@ func (ethash *Ethash) VerifyUncles(chain consensus.ChainReader, block *types.Blo
ancestors[parent] = ancestorHeader
// If the ancestor doesn't have any uncles, we don't have to iterate them
if ancestorHeader.UncleHash != types.EmptyUncleHash {
// Need to add those uncles to the blacklist too
// Need to add those uncles to the banned list too
ancestor := chain.GetBlock(parent, number)
if ancestor == nil {
break

View File

@@ -140,8 +140,9 @@ func (ethash *Ethash) mine(block *types.Block, id int, seed uint64, abort chan s
)
// Start generating random nonces until we abort or find a good one
var (
attempts = int64(0)
nonce = seed
attempts = int64(0)
nonce = seed
powBuffer = new(big.Int)
)
logger := ethash.config.Log.New("miner", id)
logger.Trace("Started ethash search for new nonces", "seed", seed)
@@ -163,7 +164,7 @@ search:
}
// Compute the PoW value of this nonce
digest, result := hashimotoFull(dataset.dataset, hash, nonce)
if new(big.Int).SetBytes(result).Cmp(target) <= 0 {
if powBuffer.SetBytes(result).Cmp(target) <= 0 {
// Correct nonce found, create a new header with it
header = types.CopyHeader(header)
header.Nonce = types.EncodeNonce(nonce)

View File
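The sealer change above is a small allocation optimisation: rather than constructing a fresh big.Int for every candidate nonce, a single powBuffer is reused across the search loop. A standalone sketch of the same pattern, with sha256 standing in for the real ethash digest and an arbitrary target:
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/big"
)

func main() {
	target := new(big.Int).Lsh(big.NewInt(1), 250) // arbitrary difficulty target
	powBuffer := new(big.Int)                      // reused across iterations
	for nonce := uint64(0); nonce < 1_000_000; nonce++ {
		var seed [8]byte
		binary.BigEndian.PutUint64(seed[:], nonce)
		hash := sha256.Sum256(seed[:])
		// SetBytes overwrites the existing buffer instead of allocating a new
		// big.Int on every iteration, which is what the diff above does with
		// the real ethash result.
		if powBuffer.SetBytes(hash[:]).Cmp(target) <= 0 {
			fmt.Println("found nonce", nonce)
			return
		}
	}
	fmt.Println("no nonce found in range")
}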

@@ -103,44 +103,9 @@ func (v *BlockValidator) ValidateState(block *types.Block, statedb *state.StateD
}
// CalcGasLimit computes the gas limit of the next block after parent. It aims
// to keep the baseline gas above the provided floor, and increase it towards the
// ceil if the blocks are full. If the ceil is exceeded, it will always decrease
// the gas allowance.
func CalcGasLimit(parentGasUsed, parentGasLimit, gasFloor, gasCeil uint64) uint64 {
// contrib = (parentGasUsed * 3 / 2) / 1024
contrib := (parentGasUsed + parentGasUsed/2) / params.GasLimitBoundDivisor
// decay = parentGasLimit / 1024 -1
decay := parentGasLimit/params.GasLimitBoundDivisor - 1
/*
strategy: gasLimit of block-to-mine is set based on parent's
gasUsed value. if parentGasUsed > parentGasLimit * (2/3) then we
increase it, otherwise lower it (or leave it unchanged if it's right
at that usage) the amount increased/decreased depends on how far away
from parentGasLimit * (2/3) parentGasUsed is.
*/
limit := parentGasLimit - decay + contrib
if limit < params.MinGasLimit {
limit = params.MinGasLimit
}
// If we're outside our allowed gas range, we try to hone towards them
if limit < gasFloor {
limit = parentGasLimit + decay
if limit > gasFloor {
limit = gasFloor
}
} else if limit > gasCeil {
limit = parentGasLimit - decay
if limit < gasCeil {
limit = gasCeil
}
}
return limit
}
// CalcGasLimit1559 calculates the next block gas limit under 1559 rules.
func CalcGasLimit1559(parentGasLimit, desiredLimit uint64) uint64 {
// to keep the baseline gas close to the provided target, and increase it towards
// the target if the baseline gas is lower.
func CalcGasLimit(parentGasLimit, desiredLimit uint64) uint64 {
delta := parentGasLimit/params.GasLimitBoundDivisor - 1
limit := parentGasLimit
if desiredLimit < params.MinGasLimit {

View File
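With the legacy floor/ceil variant gone, CalcGasLimit only moves the parent limit towards the desired limit, by at most parentGasLimit/1024 - 1 per block and never below params.MinGasLimit; for a parent limit of 40,000,000 the per-block delta is 40,000,000/1024 - 1 = 39,061, which yields the 40,039,061 / 39,960,939 bounds used in the tests below. A sketch of that rule with the geth constants inlined (not the geth function itself):
package main

import "fmt"

const (
	gasLimitBoundDivisor = 1024 // params.GasLimitBoundDivisor
	minGasLimit          = 5000 // params.MinGasLimit
)

// nextGasLimit sketches the post-London rule: move the parent gas limit
// towards the desired limit, changing by at most parent/1024 - 1 per block
// and never dropping below the protocol minimum.
func nextGasLimit(parent, desired uint64) uint64 {
	delta := parent/gasLimitBoundDivisor - 1
	limit := parent
	if desired < minGasLimit {
		desired = minGasLimit
	}
	if limit < desired {
		limit = parent + delta
		if limit > desired {
			limit = desired
		}
	} else if limit > desired {
		limit = parent - delta
		if limit < desired {
			limit = desired
		}
	}
	return limit
}

func main() {
	fmt.Println(nextGasLimit(40_000_000, 80_000_000)) // 40039061: capped increase
	fmt.Println(nextGasLimit(40_000_000, 0))          // 39960939: capped decrease
	fmt.Println(nextGasLimit(40_000_000, 40_000_001)) // 40000001: small increase
}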

@@ -198,8 +198,7 @@ func testHeaderConcurrentAbortion(t *testing.T, threads int) {
}
}
func TestCalcGasLimit1559(t *testing.T) {
func TestCalcGasLimit(t *testing.T) {
for i, tc := range []struct {
pGasLimit uint64
max uint64
@@ -209,23 +208,23 @@ func TestCalcGasLimit1559(t *testing.T) {
{40000000, 40039061, 39960939},
} {
// Increase
if have, want := CalcGasLimit1559(tc.pGasLimit, 2*tc.pGasLimit), tc.max; have != want {
if have, want := CalcGasLimit(tc.pGasLimit, 2*tc.pGasLimit), tc.max; have != want {
t.Errorf("test %d: have %d want <%d", i, have, want)
}
// Decrease
if have, want := CalcGasLimit1559(tc.pGasLimit, 0), tc.min; have != want {
if have, want := CalcGasLimit(tc.pGasLimit, 0), tc.min; have != want {
t.Errorf("test %d: have %d want >%d", i, have, want)
}
// Small decrease
if have, want := CalcGasLimit1559(tc.pGasLimit, tc.pGasLimit-1), tc.pGasLimit-1; have != want {
if have, want := CalcGasLimit(tc.pGasLimit, tc.pGasLimit-1), tc.pGasLimit-1; have != want {
t.Errorf("test %d: have %d want %d", i, have, want)
}
// Small increase
if have, want := CalcGasLimit1559(tc.pGasLimit, tc.pGasLimit+1), tc.pGasLimit+1; have != want {
if have, want := CalcGasLimit(tc.pGasLimit, tc.pGasLimit+1), tc.pGasLimit+1; have != want {
t.Errorf("test %d: have %d want %d", i, have, want)
}
// No change
if have, want := CalcGasLimit1559(tc.pGasLimit, tc.pGasLimit), tc.pGasLimit; have != want {
if have, want := CalcGasLimit(tc.pGasLimit, tc.pGasLimit), tc.pGasLimit; have != want {
t.Errorf("test %d: have %d want %d", i, have, want)
}
}

View File

@@ -1785,8 +1785,8 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals bool) (int, er
}
// If the header is a banned one, straight out abort
if BadHashes[block.Hash()] {
bc.reportBlock(block, nil, ErrBlacklistedHash)
return it.index, ErrBlacklistedHash
bc.reportBlock(block, nil, ErrBannedHash)
return it.index, ErrBannedHash
}
// If the block is known (in the middle of the chain), it's a special case for
// Clique blocks where they can share state among each other, so importing an
@@ -2417,12 +2417,22 @@ func (bc *BlockChain) GetTdByHash(hash common.Hash) *big.Int {
// GetHeader retrieves a block header from the database by hash and number,
// caching it if found.
func (bc *BlockChain) GetHeader(hash common.Hash, number uint64) *types.Header {
// The blockchain might have cached the whole block; only fall back to the headerchain if not
if block, ok := bc.blockCache.Get(hash); ok {
return block.(*types.Block).Header()
}
return bc.hc.GetHeader(hash, number)
}
// GetHeaderByHash retrieves a block header from the database by hash, caching it if
// found.
func (bc *BlockChain) GetHeaderByHash(hash common.Hash) *types.Header {
// The blockchain might have cached the whole block; only fall back to the headerchain if not
if block, ok := bc.blockCache.Get(hash); ok {
return block.(*types.Block).Header()
}
return bc.hc.GetHeaderByHash(hash)
}

View File

@@ -473,8 +473,8 @@ func testBadHashes(t *testing.T, full bool) {
_, err = blockchain.InsertHeaderChain(headers, 1)
}
if !errors.Is(err, ErrBlacklistedHash) {
t.Errorf("error mismatch: have: %v, want: %v", err, ErrBlacklistedHash)
if !errors.Is(err, ErrBannedHash) {
t.Errorf("error mismatch: have: %v, want: %v", err, ErrBannedHash)
}
}

View File

@@ -273,11 +273,10 @@ func makeHeader(chain consensus.ChainReader, parent *types.Block, state *state.S
}
if chain.Config().IsLondon(header.Number) {
header.BaseFee = misc.CalcBaseFee(chain.Config(), parent.Header())
parentGasLimit := parent.GasLimit()
if !chain.Config().IsLondon(parent.Number()) {
parentGasLimit = parent.GasLimit() * params.ElasticityMultiplier
parentGasLimit := parent.GasLimit() * params.ElasticityMultiplier
header.GasLimit = CalcGasLimit(parentGasLimit, parentGasLimit)
}
header.GasLimit = CalcGasLimit1559(parentGasLimit, parentGasLimit)
}
return header
}

View File

@@ -26,8 +26,8 @@ var (
// ErrKnownBlock is returned when a block to import is already known locally.
ErrKnownBlock = errors.New("block already known")
// ErrBlacklistedHash is returned if a block to import is on the blacklist.
ErrBlacklistedHash = errors.New("blacklisted hash")
// ErrBannedHash is returned if a block to import is on the banned list.
ErrBannedHash = errors.New("banned hash")
// ErrNoGenesis is returned when there is no Genesis Block.
ErrNoGenesis = errors.New("genesis not found in chain")
@@ -87,4 +87,7 @@ var (
// ErrFeeCapTooLow is returned if the transaction fee cap is less than the
// the base fee of the block.
ErrFeeCapTooLow = errors.New("max fee per gas less than block base fee")
// ErrSenderNoEOA is returned if the sender of a transaction is a contract.
ErrSenderNoEOA = errors.New("sender not an eoa")
)

View File

@@ -248,8 +248,6 @@ func (g *Genesis) configOrDefault(ghash common.Hash) *params.ChainConfig {
return params.RinkebyChainConfig
case ghash == params.GoerliGenesisHash:
return params.GoerliChainConfig
case ghash == params.CalaverasGenesisHash:
return params.CalaverasChainConfig
default:
return params.AllEthashProtocolChanges
}
@@ -399,18 +397,6 @@ func DefaultGoerliGenesisBlock() *Genesis {
}
}
func DefaultCalaverasGenesisBlock() *Genesis {
// Full genesis: https://gist.github.com/holiman/c6ed9269dce28304ad176314caa75e97
return &Genesis{
Config: params.CalaverasChainConfig,
Timestamp: 0x60b3877f,
ExtraData: hexutil.MustDecode("0x00000000000000000000000000000000000000000000000000000000000000005211cea3870c7ba7c6c44b185e62eecdb864cd8c560228ce57d31efbf64c200b2c200aacec78cf17a7148e784fe95a7a750335f8b9572ee28d72e7650000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
GasLimit: 0x47b760,
Difficulty: big.NewInt(1),
Alloc: decodePrealloc(calaverasAllocData),
}
}
// DeveloperGenesisBlock returns the 'geth --dev' genesis block.
func DeveloperGenesisBlock(period uint64, faucet common.Address) *Genesis {
// Override the default period to the user requested one

View File

@@ -185,10 +185,6 @@ func TestGenesisHashes(t *testing.T) {
genesis: DefaultRinkebyGenesisBlock(),
hash: params.RinkebyGenesisHash,
},
{
genesis: DefaultCalaverasGenesisBlock(),
hash: params.CalaverasGenesisHash,
},
}
for i, c := range cases {
b := c.genesis.MustCommit(rawdb.NewMemoryDatabase())

View File

@@ -315,11 +315,11 @@ func (hc *HeaderChain) ValidateHeaderChain(chain []*types.Header, checkFreq int)
}
// If the header is a banned one, straight out abort
if BadHashes[chain[i].ParentHash] {
return i - 1, ErrBlacklistedHash
return i - 1, ErrBannedHash
}
// If it's the last header in the chunk, we need to check it too
if i == len(chain)-1 && BadHashes[chain[i].Hash()] {
return i, ErrBlacklistedHash
return i, ErrBannedHash
}
}

View File

@@ -444,6 +444,7 @@ func TestAncientStorage(t *testing.T) {
if err != nil {
t.Fatalf("failed to create database with ancient backend")
}
defer db.Close()
// Create a test block
block := types.NewBlockWithHeader(&types.Header{
Number: big.NewInt(0),

View File

@@ -106,7 +106,7 @@ func ReadTransaction(db ethdb.Reader, hash common.Hash) (*types.Transaction, com
}
body := ReadBody(db, blockHash, *blockNumber)
if body == nil {
log.Error("Transaction referenced missing", "number", blockNumber, "hash", blockHash)
log.Error("Transaction referenced missing", "number", *blockNumber, "hash", blockHash)
return nil, common.Hash{}, 0, 0
}
for txIndex, tx := range body.Transactions {
@@ -114,7 +114,7 @@ func ReadTransaction(db ethdb.Reader, hash common.Hash) (*types.Transaction, com
return tx, blockHash, *blockNumber, uint64(txIndex)
}
}
log.Error("Transaction not found", "number", blockNumber, "hash", blockHash, "txhash", hash)
log.Error("Transaction not found", "number", *blockNumber, "hash", blockHash, "txhash", hash)
return nil, common.Hash{}, 0, 0
}
@@ -137,7 +137,7 @@ func ReadReceipt(db ethdb.Reader, hash common.Hash, config *params.ChainConfig)
return receipt, blockHash, *blockNumber, uint64(receiptIndex)
}
}
log.Error("Receipt not found", "number", blockNumber, "hash", blockHash, "txhash", hash)
log.Error("Receipt not found", "number", *blockNumber, "hash", blockHash, "txhash", hash)
return nil, common.Hash{}, 0, 0
}

View File

@@ -89,6 +89,11 @@ func (db *nofreezedb) Ancient(kind string, number uint64) ([]byte, error) {
return nil, errNotSupported
}
// ReadAncients returns an error as we don't have a backing chain freezer.
func (db *nofreezedb) ReadAncients(kind string, start, max, maxByteSize uint64) ([][]byte, error) {
return nil, errNotSupported
}
// Ancients returns an error as we don't have a backing chain freezer.
func (db *nofreezedb) Ancients() (uint64, error) {
return 0, errNotSupported

View File

@@ -180,6 +180,18 @@ func (f *freezer) Ancient(kind string, number uint64) ([]byte, error) {
return nil, errUnknownTable
}
// ReadAncients retrieves multiple items in sequence, starting from the index 'start'.
// It will return
// - at most 'count' items,
// - at least 1 item (even if exceeding the maxBytes limit), but will otherwise
// return as many items as fit into maxBytes.
func (f *freezer) ReadAncients(kind string, start, count, maxBytes uint64) ([][]byte, error) {
if table := f.tables[kind]; table != nil {
return table.RetrieveItems(start, count, maxBytes)
}
return nil, errUnknownTable
}
// Ancients returns the length of the frozen items.
func (f *freezer) Ancients() (uint64, error) {
return atomic.LoadUint64(&f.frozen), nil
@@ -402,7 +414,7 @@ func (f *freezer) freeze(db ethdb.KeyValueStore) {
}
batch.Reset()
// Wipe out side chains also and track dangling side chians
// Wipe out side chains also and track dangling side chains
var dangling []common.Hash
for number := first; number < f.frozen; number++ {
// Always keep the genesis block in active database

View File
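The following is a sketch of how a caller can drain one ancient table through the new batched read path. It assumes ReadAncients is exposed on the ethdb.AncientReader interface next to Ancients, as the interface additions in this change suggest; the "headers" table name, the 1024-item batch and the 1 MiB byte budget are illustrative. Because the freezer returns at least one item even when it exceeds the byte budget, the loop always makes progress:
package ancientdump

import (
	"fmt"

	"github.com/ethereum/go-ethereum/ethdb"
)

// drainTable walks one ancient table from item 0 to the freezer head using
// the batched ReadAncients call, handing each raw blob to 'process'.
func drainTable(db ethdb.AncientReader, process func([]byte)) error {
	frozen, err := db.Ancients()
	if err != nil {
		return err
	}
	for start := uint64(0); start < frozen; {
		batch, err := db.ReadAncients("headers", start, 1024, 1024*1024)
		if err != nil {
			return err
		}
		if len(batch) == 0 {
			return fmt.Errorf("no progress at item %d", start)
		}
		for _, blob := range batch {
			process(blob)
		}
		start += uint64(len(batch))
	}
	return nil
}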

@@ -70,6 +70,19 @@ func (i *indexEntry) marshallBinary() []byte {
return b
}
// bounds returns the start- and end-offsets, and the file number from which to
// read the data item marked by the two index entries. The two entries are
// assumed to be sequential.
func (start *indexEntry) bounds(end *indexEntry) (startOffset, endOffset, fileId uint32) {
if start.filenum != end.filenum {
// If a piece of data 'crosses' a data-file,
// it's actually in one piece on the second data-file.
// We return a zero-indexEntry for the second file as start
return 0, end.offset, end.filenum
}
return start.offset, end.offset, end.filenum
}
// freezerTable represents a single chained data table within the freezer (e.g. blocks).
// It consists of a data file (snappy encoded arbitrary data blobs) and an indexEntry
// file (uncompressed 64 bit indices into the data file).
@@ -546,84 +559,183 @@ func (t *freezerTable) append(item uint64, encodedBlob []byte, wlock bool) (bool
return false, nil
}
// getBounds returns the indexes for the item
// returns start, end, filenumber and error
func (t *freezerTable) getBounds(item uint64) (uint32, uint32, uint32, error) {
buffer := make([]byte, indexEntrySize)
var startIdx, endIdx indexEntry
// Read second index
if _, err := t.index.ReadAt(buffer, int64((item+1)*indexEntrySize)); err != nil {
return 0, 0, 0, err
// getIndices returns the index entries for the given from-item, covering 'count' items.
// N.B: The actual number of returned indices for N items will always be N+1 (unless an
// error is returned).
// OBS: This method assumes that the caller has already verified (and/or trimmed) the range
// so that the items are within bounds. If this method is used to read out of bounds,
// it will return an error.
func (t *freezerTable) getIndices(from, count uint64) ([]*indexEntry, error) {
// Apply the table-offset
from = from - uint64(t.itemOffset)
// For reading N items, we need N+1 indices.
buffer := make([]byte, (count+1)*indexEntrySize)
if _, err := t.index.ReadAt(buffer, int64(from*indexEntrySize)); err != nil {
return nil, err
}
endIdx.unmarshalBinary(buffer)
// Read first index (unless it's the very first item)
if item != 0 {
if _, err := t.index.ReadAt(buffer, int64(item*indexEntrySize)); err != nil {
return 0, 0, 0, err
}
startIdx.unmarshalBinary(buffer)
} else {
var (
indices []*indexEntry
offset int
)
for i := from; i <= from+count; i++ {
index := new(indexEntry)
index.unmarshalBinary(buffer[offset:])
offset += indexEntrySize
indices = append(indices, index)
}
if from == 0 {
// Special case if we're reading the first item in the freezer. We assume that
// the first item always starts from zero (regarding deletion, we
// only support deletion by files, so that assumption holds).
// This means we can use the first item metadata to carry information about
// the 'global' offset, for the deletion case.
return 0, endIdx.offset, endIdx.filenum, nil
indices[0].offset = 0
indices[0].filenum = indices[1].filenum
}
if startIdx.filenum != endIdx.filenum {
// If a piece of data 'crosses' a data-file,
// it's actually in one piece on the second data-file.
// We return a zero-indexEntry for the second file as start
return 0, endIdx.offset, endIdx.filenum, nil
}
return startIdx.offset, endIdx.offset, endIdx.filenum, nil
return indices, nil
}
// Retrieve looks up the data offset of an item with the given number and retrieves
// the raw binary blob from the data file.
func (t *freezerTable) Retrieve(item uint64) ([]byte, error) {
blob, err := t.retrieve(item)
items, err := t.RetrieveItems(item, 1, 0)
if err != nil {
return nil, err
}
if t.noCompression {
return blob, nil
}
return snappy.Decode(nil, blob)
return items[0], nil
}
// retrieve looks up the data offset of an item with the given number and retrieves
// the raw binary blob from the data file. OBS! This method does not decode
// compressed data.
func (t *freezerTable) retrieve(item uint64) ([]byte, error) {
// RetrieveItems returns multiple items in sequence, starting from the index 'start'.
// It will return at most 'count' items, but will abort earlier to respect the
// 'maxBytes' argument. However, if 'maxBytes' is smaller than the size of one
// item, it _will_ return one element and possibly exceed the maxBytes limit.
func (t *freezerTable) RetrieveItems(start, count, maxBytes uint64) ([][]byte, error) {
// First we read the 'raw' data, which might be compressed.
diskData, sizes, err := t.retrieveItems(start, count, maxBytes)
if err != nil {
return nil, err
}
var (
output = make([][]byte, 0, count)
offset int // offset for reading
outputSize int // size of uncompressed data
)
// Now slice up the data and decompress.
for i, diskSize := range sizes {
item := diskData[offset : offset+diskSize]
offset += diskSize
decompressedSize := diskSize
if !t.noCompression {
decompressedSize, _ = snappy.DecodedLen(item)
}
if i > 0 && uint64(outputSize+decompressedSize) > maxBytes {
break
}
if !t.noCompression {
data, err := snappy.Decode(nil, item)
if err != nil {
return nil, err
}
output = append(output, data)
} else {
output = append(output, item)
}
outputSize += decompressedSize
}
return output, nil
}
// retrieveItems reads up to 'count' items from the table. It reads at least
// one item, but otherwise avoids reading more than maxBytes bytes.
// It returns the (potentially compressed) data, and the sizes.
func (t *freezerTable) retrieveItems(start, count, maxBytes uint64) ([]byte, []int, error) {
t.lock.RLock()
defer t.lock.RUnlock()
// Ensure the table and the item is accessible
if t.index == nil || t.head == nil {
return nil, errClosed
return nil, nil, errClosed
}
if atomic.LoadUint64(&t.items) <= item {
return nil, errOutOfBounds
itemCount := atomic.LoadUint64(&t.items) // max number
// Ensure the start is written, not deleted from the tail, and that the
// caller actually wants something
if itemCount <= start || uint64(t.itemOffset) > start || count == 0 {
return nil, nil, errOutOfBounds
}
// Ensure the item was not deleted from the tail either
if uint64(t.itemOffset) > item {
return nil, errOutOfBounds
if start+count > itemCount {
count = itemCount - start
}
startOffset, endOffset, filenum, err := t.getBounds(item - uint64(t.itemOffset))
var (
output = make([]byte, maxBytes) // Buffer to read data into
outputSize int // Used size of that buffer
)
// readData is a helper method to read a single data item from disk.
readData := func(fileId, start uint32, length int) error {
// In case a small limit is used, and the elements are large, we may need to
// realloc the read-buffer when reading the first (and only) item.
if len(output) < length {
output = make([]byte, length)
}
dataFile, exist := t.files[fileId]
if !exist {
return fmt.Errorf("missing data file %d", fileId)
}
if _, err := dataFile.ReadAt(output[outputSize:outputSize+length], int64(start)); err != nil {
return err
}
outputSize += length
return nil
}
// Read all the indexes in one go
indices, err := t.getIndices(start, count)
if err != nil {
return nil, err
return nil, nil, err
}
dataFile, exist := t.files[filenum]
if !exist {
return nil, fmt.Errorf("missing data file %d", filenum)
var (
sizes []int // The sizes for each element
totalSize = 0 // The total size of all data read so far
readStart = indices[0].offset // Where, in the file, to start reading
unreadSize = 0 // The size of the as-yet-unread data
)
for i, firstIndex := range indices[:len(indices)-1] {
secondIndex := indices[i+1]
// Determine the size of the item.
offset1, offset2, _ := firstIndex.bounds(secondIndex)
size := int(offset2 - offset1)
// Crossing a file boundary?
if secondIndex.filenum != firstIndex.filenum {
// If we have unread data in the first file, we need to do that read now.
if unreadSize > 0 {
if err := readData(firstIndex.filenum, readStart, unreadSize); err != nil {
return nil, nil, err
}
unreadSize = 0
}
readStart = 0
}
if i > 0 && uint64(totalSize+size) > maxBytes {
// About to break out due to byte limit being exceeded. We don't
// read this last item, but we need to do the deferred reads now.
if unreadSize > 0 {
if err := readData(secondIndex.filenum, readStart, unreadSize); err != nil {
return nil, nil, err
}
}
break
}
// Defer the read for later
unreadSize += size
totalSize += size
sizes = append(sizes, size)
if i == len(indices)-2 || uint64(totalSize) > maxBytes {
// Last item, need to do the read now
if err := readData(secondIndex.filenum, readStart, unreadSize); err != nil {
return nil, nil, err
}
break
}
}
// Retrieve the data itself, decompress and return
blob := make([]byte, endOffset-startOffset)
if _, err := dataFile.ReadAt(blob, int64(startOffset)); err != nil {
return nil, err
}
t.readMeter.Mark(int64(len(blob) + 2*indexEntrySize))
return blob, nil
return output[:outputSize], sizes, nil
}
// has returns an indicator whether the specified number data

View File
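The index file holds one fixed-size entry per item, so a batch of N items is described by N+1 consecutive entries: item i spans [entry[i].offset, entry[i+1].offset) in the file named by entry[i+1].filenum, and an item that crosses a file boundary is written wholly at the start of the second file. A small sketch of that bookkeeping with a simplified entry type (illustrative, not the geth struct):
package main

import "fmt"

// entry mirrors the information held by a freezer index entry: which data
// file an item ends in, and at which byte offset.
type entry struct {
	filenum uint32
	offset  uint32
}

// bounds returns where the item delimited by two consecutive index entries
// lives, following the rule in the diff above: if the entries point at
// different files, the item sits at the start of the second file.
func bounds(start, end entry) (startOffset, endOffset, fileID uint32) {
	if start.filenum != end.filenum {
		return 0, end.offset, end.filenum
	}
	return start.offset, end.offset, end.filenum
}

func main() {
	index := []entry{
		{filenum: 0, offset: 0},  // implicit start of item 0
		{filenum: 0, offset: 15}, // item 0 ends at byte 15 of file 0
		{filenum: 1, offset: 15}, // item 1 crossed into file 1, ends at byte 15
	}
	for i := 0; i+1 < len(index); i++ {
		s, e, f := bounds(index[i], index[i+1])
		fmt.Printf("item %d: file %d bytes [%d, %d)\n", i, f, s, e)
	}
}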

@@ -74,7 +74,7 @@ func TestFreezerBasics(t *testing.T) {
exp := getChunk(15, y)
got, err := f.Retrieve(uint64(y))
if err != nil {
t.Fatal(err)
t.Fatalf("reading item %d: %v", y, err)
}
if !bytes.Equal(got, exp) {
t.Fatalf("test %d, got \n%x != \n%x", y, got, exp)
@@ -692,3 +692,118 @@ func TestAppendTruncateParallel(t *testing.T) {
}
}
}
// TestSequentialRead does some basic tests on RetrieveItems.
func TestSequentialRead(t *testing.T) {
rm, wm, sg := metrics.NewMeter(), metrics.NewMeter(), metrics.NewGauge()
fname := fmt.Sprintf("batchread-%d", rand.Uint64())
{ // Fill table
f, err := newCustomTable(os.TempDir(), fname, rm, wm, sg, 50, true)
if err != nil {
t.Fatal(err)
}
// Write 15 bytes 30 times
for x := 0; x < 30; x++ {
data := getChunk(15, x)
f.Append(uint64(x), data)
}
f.DumpIndex(0, 30)
f.Close()
}
{ // Open it, iterate, verify iteration
f, err := newCustomTable(os.TempDir(), fname, rm, wm, sg, 50, true)
if err != nil {
t.Fatal(err)
}
items, err := f.RetrieveItems(0, 10000, 100000)
if err != nil {
t.Fatal(err)
}
if have, want := len(items), 30; have != want {
t.Fatalf("want %d items, have %d ", want, have)
}
for i, have := range items {
want := getChunk(15, i)
if !bytes.Equal(want, have) {
t.Fatalf("data corruption: have\n%x\n, want \n%x\n", have, want)
}
}
f.Close()
}
{ // Open it, iterate, verify byte limit. The byte limit is less than item
// size, so each lookup should only return one item
f, err := newCustomTable(os.TempDir(), fname, rm, wm, sg, 40, true)
if err != nil {
t.Fatal(err)
}
items, err := f.RetrieveItems(0, 10000, 10)
if err != nil {
t.Fatal(err)
}
if have, want := len(items), 1; have != want {
t.Fatalf("want %d items, have %d ", want, have)
}
for i, have := range items {
want := getChunk(15, i)
if !bytes.Equal(want, have) {
t.Fatalf("data corruption: have\n%x\n, want \n%x\n", have, want)
}
}
f.Close()
}
}
// TestSequentialReadByteLimit does some more advanced tests on batch reads.
// These tests check that when the byte limit hits, we correctly abort in time,
// but also properly do all the deferred reads for the previous data, regardless
// of whether the data crosses a file boundary or not.
func TestSequentialReadByteLimit(t *testing.T) {
rm, wm, sg := metrics.NewMeter(), metrics.NewMeter(), metrics.NewGauge()
fname := fmt.Sprintf("batchread-2-%d", rand.Uint64())
{ // Fill table
f, err := newCustomTable(os.TempDir(), fname, rm, wm, sg, 100, true)
if err != nil {
t.Fatal(err)
}
// Write 10 bytes 30 times,
// Splitting it at every 100 bytes (10 items)
for x := 0; x < 30; x++ {
data := getChunk(10, x)
f.Append(uint64(x), data)
}
f.Close()
}
for i, tc := range []struct {
items uint64
limit uint64
want int
}{
{9, 89, 8},
{10, 99, 9},
{11, 109, 10},
{100, 89, 8},
{100, 99, 9},
{100, 109, 10},
} {
{
f, err := newCustomTable(os.TempDir(), fname, rm, wm, sg, 100, true)
if err != nil {
t.Fatal(err)
}
items, err := f.RetrieveItems(0, tc.items, tc.limit)
if err != nil {
t.Fatal(err)
}
if have, want := len(items), tc.want; have != want {
t.Fatalf("test %d: want %d items, have %d ", i, want, have)
}
for ii, have := range items {
want := getChunk(10, ii)
if !bytes.Equal(want, have) {
t.Fatalf("test %d: data corruption item %d: have\n%x\n, want \n%x\n", i, ii, have, want)
}
}
f.Close()
}
}
}

View File
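The byte-limit table in TestSequentialReadByteLimit follows directly from the accounting in retrieveItems: with 10-byte items, a budget of 89 bytes admits 8 items (a 9th would push the running total to 90), 99 admits 9, 109 admits 10, and the requested count only acts as a cap. A sketch of that accounting using the test's values:
package main

import "fmt"

// itemsWithinBudget mimics the retrieveItems loop: always take the first
// item, then keep adding items while the running total stays within maxBytes.
func itemsWithinBudget(count, itemSize, maxBytes int) int {
	taken, total := 0, 0
	for i := 0; i < count; i++ {
		if i > 0 && total+itemSize > maxBytes {
			break
		}
		total += itemSize
		taken++
	}
	return taken
}

func main() {
	fmt.Println(itemsWithinBudget(100, 10, 89))  // 8
	fmt.Println(itemsWithinBudget(100, 10, 99))  // 9
	fmt.Println(itemsWithinBudget(100, 10, 109)) // 10
	fmt.Println(itemsWithinBudget(9, 10, 89))    // 8, the count caps at 9 anyway
}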

@@ -62,6 +62,12 @@ func (t *table) Ancient(kind string, number uint64) ([]byte, error) {
return t.db.Ancient(kind, number)
}
// ReadAncients is a noop passthrough that just forwards the request to the underlying
// database.
func (t *table) ReadAncients(kind string, start, count, maxBytes uint64) ([][]byte, error) {
return t.db.ReadAncients(kind, start, count, maxBytes)
}
// Ancients is a noop passthrough that just forwards the request to the underlying
// database.
func (t *table) Ancients() (uint64, error) {

View File

@@ -90,7 +90,7 @@ func (bloom *stateBloom) Commit(filename, tempname string) error {
return err
}
// Ensure the file is synced to disk
f, err := os.Open(tempname)
f, err := os.OpenFile(tempname, os.O_RDWR, 0666)
if err != nil {
return err
}

View File

@@ -169,11 +169,17 @@ type Tree struct {
// store (with a number of memory layers from a journal), ensuring that the head
// of the snapshot matches the expected one.
//
// If the snapshot is missing or the disk layer is broken, the entire is deleted
// and will be reconstructed from scratch based on the tries in the key-value
// store, on a background thread. If the memory layers from the journal is not
// continuous with disk layer or the journal is missing, all diffs will be discarded
// iff it's in "recovery" mode, otherwise rebuild is mandatory.
// If the snapshot is missing or the disk layer is broken, the snapshot will be
// reconstructed using both the existing data and the state trie.
// The repair happens on a background thread.
//
// If the memory layers in the journal do not match the disk layer (e.g. there is
// a gap) or the journal is missing, there are two repair cases:
//
// - if the 'recovery' parameter is true, all memory diff-layers will be discarded.
// This case happens when the snapshot is 'ahead' of the state trie.
// - otherwise, the entire snapshot is considered invalid and will be recreated on
// a background thread.
func New(diskdb ethdb.KeyValueStore, triedb *trie.Database, cache int, root common.Hash, async bool, rebuild bool, recovery bool) (*Tree, error) {
// Create a new, empty snapshot tree
snap := &Tree{
@@ -469,7 +475,7 @@ func (t *Tree) cap(diff *diffLayer, layers int) *diskLayer {
if flattened.memory < aggregatorMemoryLimit {
// Accumulator layer is smaller than the limit, so we can abort, unless
// there's a snapshot being generated currently. In that case, the trie
// will move fron underneath the generator so we **must** merge all the
// will move from underneath the generator so we **must** merge all the
// partial data down into the snapshot and restart the generation.
if flattened.parent.(*diskLayer).genAbort == nil {
return nil

View File
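For orientation, a hedged sketch of how the constructor documented above might be invoked; the cache size, the flag values and their one-line comments are illustrative readings of the parameters, not wiring taken from this change:
package main

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/core/state/snapshot"
	"github.com/ethereum/go-ethereum/trie"
)

func main() {
	diskdb := rawdb.NewMemoryDatabase()
	triedb := trie.NewDatabase(diskdb)
	root := common.Hash{} // the expected snapshot head (illustrative)

	// async=true: generate missing layers on a background thread.
	// rebuild=true: allow a full rebuild if the disk layer is broken.
	// recovery=false: a journal/disk-layer mismatch invalidates the snapshot
	// instead of merely discarding the in-memory diff layers.
	snaps, err := snapshot.New(diskdb, triedb, 256, root, true, true, false)
	_ = snaps
	_ = err
}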

@@ -878,7 +878,7 @@ func (s *StateDB) IntermediateRoot(deleteEmptyObjects bool) common.Hash {
return s.trie.Hash()
}
// Prepare sets the current transaction hash and index and block hash which is
// Prepare sets the current transaction hash and index which are
// used when the EVM emits new state logs.
func (s *StateDB) Prepare(thash common.Hash, ti int) {
s.thash = thash

View File

@@ -0,0 +1,110 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package state
import (
"math/big"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb"
)
func filledStateDB() *StateDB {
state, _ := New(common.Hash{}, NewDatabase(rawdb.NewMemoryDatabase()), nil)
// Create an account and check if the retrieved balance is correct
addr := common.HexToAddress("0xaffeaffeaffeaffeaffeaffeaffeaffeaffeaffe")
skey := common.HexToHash("aaa")
sval := common.HexToHash("bbb")
state.SetBalance(addr, big.NewInt(42)) // Change the account trie
state.SetCode(addr, []byte("hello")) // Change an external metadata
state.SetState(addr, skey, sval) // Change the storage trie
for i := 0; i < 100; i++ {
sk := common.BigToHash(big.NewInt(int64(i)))
state.SetState(addr, sk, sk) // Change the storage trie
}
return state
}
func TestCopyAndClose(t *testing.T) {
db := filledStateDB()
prefetcher := newTriePrefetcher(db.db, db.originalRoot, "")
skey := common.HexToHash("aaa")
prefetcher.prefetch(db.originalRoot, [][]byte{skey.Bytes()})
prefetcher.prefetch(db.originalRoot, [][]byte{skey.Bytes()})
time.Sleep(1 * time.Second)
a := prefetcher.trie(db.originalRoot)
prefetcher.prefetch(db.originalRoot, [][]byte{skey.Bytes()})
b := prefetcher.trie(db.originalRoot)
cpy := prefetcher.copy()
cpy.prefetch(db.originalRoot, [][]byte{skey.Bytes()})
cpy.prefetch(db.originalRoot, [][]byte{skey.Bytes()})
c := cpy.trie(db.originalRoot)
prefetcher.close()
cpy2 := cpy.copy()
cpy2.prefetch(db.originalRoot, [][]byte{skey.Bytes()})
d := cpy2.trie(db.originalRoot)
cpy.close()
cpy2.close()
if a.Hash() != b.Hash() || a.Hash() != c.Hash() || a.Hash() != d.Hash() {
t.Fatalf("Invalid trie, hashes should be equal: %v %v %v %v", a.Hash(), b.Hash(), c.Hash(), d.Hash())
}
}
func TestUseAfterClose(t *testing.T) {
db := filledStateDB()
prefetcher := newTriePrefetcher(db.db, db.originalRoot, "")
skey := common.HexToHash("aaa")
prefetcher.prefetch(db.originalRoot, [][]byte{skey.Bytes()})
a := prefetcher.trie(db.originalRoot)
prefetcher.close()
b := prefetcher.trie(db.originalRoot)
if a == nil {
t.Fatal("Prefetching before close should not return nil")
}
if b != nil {
t.Fatal("Trie after close should return nil")
}
}
func TestCopyClose(t *testing.T) {
db := filledStateDB()
prefetcher := newTriePrefetcher(db.db, db.originalRoot, "")
skey := common.HexToHash("aaa")
prefetcher.prefetch(db.originalRoot, [][]byte{skey.Bytes()})
cpy := prefetcher.copy()
a := prefetcher.trie(db.originalRoot)
b := cpy.trie(db.originalRoot)
prefetcher.close()
c := prefetcher.trie(db.originalRoot)
d := cpy.trie(db.originalRoot)
if a == nil {
t.Fatal("Prefetching before close should not return nil")
}
if b == nil {
t.Fatal("Copy trie should return nil")
}
if c != nil {
t.Fatal("Trie after close should return nil")
}
if d == nil {
t.Fatal("Copy trie should not return nil")
}
}

View File

@@ -118,7 +118,7 @@ func TestStateProcessorErrors(t *testing.T) {
txs: []*types.Transaction{
makeTx(0, common.Address{}, big.NewInt(1000000000000000000), params.TxGas, big.NewInt(875000000), nil),
},
want: "could not apply tx 0 [0x98c796b470f7fcab40aaef5c965a602b0238e1034cce6fb73823042dd0638d74]: insufficient funds for transfer: address 0x71562b71999873DB5b286dF957af199Ec94617F7",
want: "could not apply tx 0 [0x98c796b470f7fcab40aaef5c965a602b0238e1034cce6fb73823042dd0638d74]: insufficient funds for gas * price + value: address 0x71562b71999873DB5b286dF957af199Ec94617F7 have 1000000000000000000 want 1000018375000000000",
},
{ // ErrInsufficientFunds
txs: []*types.Transaction{
@@ -195,7 +195,7 @@ func TestStateProcessorErrors(t *testing.T) {
}
}
// One final error is ErrTxTypeNotSupported. For this, we need an older chain
// ErrTxTypeNotSupported. For this, we need an older chain
{
var (
db = rawdb.NewMemoryDatabase()
@@ -244,6 +244,46 @@ func TestStateProcessorErrors(t *testing.T) {
}
}
}
// ErrSenderNoEOA, for this we need the sender to have contract code
{
var (
db = rawdb.NewMemoryDatabase()
gspec = &Genesis{
Config: config,
Alloc: GenesisAlloc{
common.HexToAddress("0x71562b71999873DB5b286dF957af199Ec94617F7"): GenesisAccount{
Balance: big.NewInt(1000000000000000000), // 1 ether
Nonce: 0,
Code: common.FromHex("0xB0B0FACE"),
},
},
}
genesis = gspec.MustCommit(db)
blockchain, _ = NewBlockChain(db, nil, gspec.Config, ethash.NewFaker(), vm.Config{}, nil, nil)
)
defer blockchain.Stop()
for i, tt := range []struct {
txs []*types.Transaction
want string
}{
{ // ErrSenderNoEOA
txs: []*types.Transaction{
mkDynamicTx(0, common.Address{}, params.TxGas-1000, big.NewInt(0), big.NewInt(0)),
},
want: "could not apply tx 0 [0x88626ac0d53cb65308f2416103c62bb1f18b805573d4f96a3640bbbfff13c14f]: sender not an eoa: address 0x71562b71999873DB5b286dF957af199Ec94617F7, codehash: 0x9280914443471259d4570a8661015ae4a5b80186dbc619658fb494bebc3da3d1",
},
} {
block := GenerateBadBlock(genesis, ethash.NewFaker(), tt.txs, gspec.Config)
_, err := blockchain.InsertChain(types.Blocks{block})
if err == nil {
t.Fatal("block imported without errors")
}
if have, want := err.Error(), tt.want; have != want {
t.Errorf("test %d:\nhave \"%v\"\nwant \"%v\"\n", i, have, want)
}
}
}
}
// GenerateBadBlock constructs a "block" which contains the transactions. The transactions are not expected to be

View File
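The updated 'want' string in the first test case encodes the new up-front balance requirement (see the buyGas hunk further below): the sender must cover the gas limit times the fee cap plus the transferred value, i.e. 21000 * 875000000 + 10^18 = 1000018375000000000 wei against a balance of exactly 10^18. A quick check of that arithmetic:
package main

import (
	"fmt"
	"math/big"
)

func main() {
	gasLimit := big.NewInt(21000)      // params.TxGas
	gasFeeCap := big.NewInt(875000000) // fee cap used by the test transaction
	value, _ := new(big.Int).SetString("1000000000000000000", 10) // 1 ether transferred
	required := new(big.Int).Mul(gasLimit, gasFeeCap)
	required.Add(required, value)
	fmt.Println(required) // 1000018375000000000, the "want" amount in the error above
}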

@@ -25,9 +25,12 @@ import (
cmath "github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/params"
)
var emptyCodeHash = crypto.Keccak256Hash(nil)
/*
The State Transitioning Model
@@ -193,6 +196,7 @@ func (st *StateTransition) buyGas() error {
if st.gasFeeCap != nil {
balanceCheck = new(big.Int).SetUint64(st.msg.Gas())
balanceCheck = balanceCheck.Mul(balanceCheck, st.gasFeeCap)
balanceCheck.Add(balanceCheck, st.value)
}
if have, want := st.state.GetBalance(st.msg.From()), balanceCheck; have.Cmp(want) < 0 {
return fmt.Errorf("%w: address %v have %v want %v", ErrInsufficientFunds, st.msg.From().Hex(), have, want)
@@ -219,6 +223,11 @@ func (st *StateTransition) preCheck() error {
st.msg.From().Hex(), msgNonce, stNonce)
}
}
// Make sure the sender is an EOA
if codeHash := st.state.GetCodeHash(st.msg.From()); codeHash != emptyCodeHash && codeHash != (common.Hash{}) {
return fmt.Errorf("%w: address %v, codehash: %s", ErrSenderNoEOA,
st.msg.From().Hex(), codeHash)
}
// Make sure that transaction gasFeeCap is greater than the baseFee (post london)
if st.evm.ChainConfig().IsLondon(st.evm.Context.BlockNumber) {
// Skip the checks if gas fields are zero and baseFee was explicitly disabled (eth_call)
@@ -278,6 +287,7 @@ func (st *StateTransition) TransitionDb() (*ExecutionResult, error) {
sender := vm.AccountRef(msg.From())
homestead := st.evm.ChainConfig().IsHomestead(st.evm.Context.BlockNumber)
istanbul := st.evm.ChainConfig().IsIstanbul(st.evm.Context.BlockNumber)
london := st.evm.ChainConfig().IsLondon(st.evm.Context.BlockNumber)
contractCreation := msg.To() == nil
// Check clauses 4-5, subtract intrinsic gas if everything is correct
@@ -310,7 +320,8 @@ func (st *StateTransition) TransitionDb() (*ExecutionResult, error) {
st.state.SetNonce(msg.From(), st.state.GetNonce(sender.Address())+1)
ret, st.gas, vmerr = st.evm.Call(sender, st.to(), st.data, st.gas, st.value)
}
if !st.evm.ChainConfig().IsLondon(st.evm.Context.BlockNumber) {
if !london {
// Before EIP-3529: refunds were capped to gasUsed / 2
st.refundGas(params.RefundQuotient)
} else {
@@ -318,7 +329,7 @@ func (st *StateTransition) TransitionDb() (*ExecutionResult, error) {
st.refundGas(params.RefundQuotientEIP3529)
}
effectiveTip := st.gasPrice
if st.evm.ChainConfig().IsLondon(st.evm.Context.BlockNumber) {
if london {
effectiveTip = cmath.BigMin(st.gasTipCap, new(big.Int).Sub(st.gasFeeCap, st.evm.Context.BaseFee))
}
st.state.AddBalance(st.evm.Context.Coinbase, new(big.Int).Mul(new(big.Int).SetUint64(st.gasUsed()), effectiveTip))

View File

@@ -635,8 +635,8 @@ func (pool *TxPool) validateTx(tx *types.Transaction, local bool) error {
// pending or queued one, it overwrites the previous transaction if its price is higher.
//
// If a newly added transaction is marked as local, its sending account will be
// whitelisted, preventing any associated transaction from being dropped out of the pool
// due to pricing constraints.
// added to the allowlist, preventing any associated transaction from being dropped
// out of the pool due to pricing constraints.
func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err error) {
// If the transaction is already known, discard it
hash := tx.Hash()

View File

@@ -18,12 +18,12 @@ func (l Log) MarshalJSON() ([]byte, error) {
Address common.Address `json:"address" gencodec:"required"`
Topics []common.Hash `json:"topics" gencodec:"required"`
Data hexutil.Bytes `json:"data" gencodec:"required"`
BlockNumber hexutil.Uint64 `json:"blockNumber" rlp:"-"`
TxHash common.Hash `json:"transactionHash" gencodec:"required" rlp:"-"`
TxIndex hexutil.Uint `json:"transactionIndex" rlp:"-"`
BlockHash common.Hash `json:"blockHash" rlp:"-"`
Index hexutil.Uint `json:"logIndex" rlp:"-"`
Removed bool `json:"removed" rlp:"-"`
BlockNumber hexutil.Uint64 `json:"blockNumber"`
TxHash common.Hash `json:"transactionHash" gencodec:"required"`
TxIndex hexutil.Uint `json:"transactionIndex"`
BlockHash common.Hash `json:"blockHash"`
Index hexutil.Uint `json:"logIndex"`
Removed bool `json:"removed"`
}
var enc Log
enc.Address = l.Address
@@ -44,12 +44,12 @@ func (l *Log) UnmarshalJSON(input []byte) error {
Address *common.Address `json:"address" gencodec:"required"`
Topics []common.Hash `json:"topics" gencodec:"required"`
Data *hexutil.Bytes `json:"data" gencodec:"required"`
BlockNumber *hexutil.Uint64 `json:"blockNumber" rlp:"-"`
TxHash *common.Hash `json:"transactionHash" gencodec:"required" rlp:"-"`
TxIndex *hexutil.Uint `json:"transactionIndex" rlp:"-"`
BlockHash *common.Hash `json:"blockHash" rlp:"-"`
Index *hexutil.Uint `json:"logIndex" rlp:"-"`
Removed *bool `json:"removed" rlp:"-"`
BlockNumber *hexutil.Uint64 `json:"blockNumber"`
TxHash *common.Hash `json:"transactionHash" gencodec:"required"`
TxIndex *hexutil.Uint `json:"transactionIndex"`
BlockHash *common.Hash `json:"blockHash"`
Index *hexutil.Uint `json:"logIndex"`
Removed *bool `json:"removed"`
}
var dec Log
if err := json.Unmarshal(input, &dec); err != nil {

View File

@@ -17,8 +17,11 @@
package types
import (
"io"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/rlp"
)
//go:generate gencodec -type Log -field-override logMarshaling -out gen_log_json.go
@@ -37,19 +40,19 @@ type Log struct {
// Derived fields. These fields are filled in by the node
// but not secured by consensus.
// block in which the transaction was included
BlockNumber uint64 `json:"blockNumber" rlp:"-"`
BlockNumber uint64 `json:"blockNumber"`
// hash of the transaction
TxHash common.Hash `json:"transactionHash" gencodec:"required" rlp:"-"`
TxHash common.Hash `json:"transactionHash" gencodec:"required"`
// index of the transaction in the block
TxIndex uint `json:"transactionIndex" rlp:"-"`
TxIndex uint `json:"transactionIndex"`
// hash of the block in which the transaction was included
BlockHash common.Hash `json:"blockHash" rlp:"-"`
BlockHash common.Hash `json:"blockHash"`
// index of the log in the block
Index uint `json:"logIndex" rlp:"-"`
Index uint `json:"logIndex"`
// The Removed field is true if this log was reverted due to a chain reorganisation.
// You must pay attention to this field if you receive logs through a filter query.
Removed bool `json:"removed" rlp:"-"`
Removed bool `json:"removed"`
}
type logMarshaling struct {
@@ -58,3 +61,83 @@ type logMarshaling struct {
TxIndex hexutil.Uint
Index hexutil.Uint
}
type rlpLog struct {
Address common.Address
Topics []common.Hash
Data []byte
}
// rlpStorageLog is the storage encoding of a log.
type rlpStorageLog rlpLog
// legacyRlpStorageLog is the previous storage encoding of a log including some redundant fields.
type legacyRlpStorageLog struct {
Address common.Address
Topics []common.Hash
Data []byte
BlockNumber uint64
TxHash common.Hash
TxIndex uint
BlockHash common.Hash
Index uint
}
// EncodeRLP implements rlp.Encoder.
func (l *Log) EncodeRLP(w io.Writer) error {
return rlp.Encode(w, rlpLog{Address: l.Address, Topics: l.Topics, Data: l.Data})
}
// DecodeRLP implements rlp.Decoder.
func (l *Log) DecodeRLP(s *rlp.Stream) error {
var dec rlpLog
err := s.Decode(&dec)
if err == nil {
l.Address, l.Topics, l.Data = dec.Address, dec.Topics, dec.Data
}
return err
}
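With the derived fields no longer RLP-tagged and the explicit rlpLog encoder above, a consensus-encoded Log round-trips only its address, topics and data; everything else must be re-derived from block context. A rough round-trip illustration against the encoders shown above (not part of the diff):

package main

import (
    "bytes"
    "fmt"

    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/rlp"
)

func main() {
    in := &types.Log{
        Address:     common.HexToAddress("0x11"),
        Topics:      []common.Hash{common.HexToHash("0xdead")},
        Data:        []byte{0x01, 0x02},
        BlockNumber: 42, // derived field: not part of the consensus encoding
    }
    var buf bytes.Buffer
    if err := rlp.Encode(&buf, in); err != nil {
        panic(err)
    }
    var out types.Log
    if err := rlp.DecodeBytes(buf.Bytes(), &out); err != nil {
        panic(err)
    }
    // Only Address, Topics and Data survive; BlockNumber comes back as zero
    // and has to be filled in again from the containing block.
    fmt.Println(out.Address == in.Address, out.BlockNumber)
}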
// LogForStorage is a wrapper around a Log that flattens and parses the entire content of
// a log including non-consensus fields.
type LogForStorage Log
// EncodeRLP implements rlp.Encoder.
func (l *LogForStorage) EncodeRLP(w io.Writer) error {
return rlp.Encode(w, rlpStorageLog{
Address: l.Address,
Topics: l.Topics,
Data: l.Data,
})
}
// DecodeRLP implements rlp.Decoder.
//
// Note: some redundant fields (e.g. block number, tx hash, etc.) will be assembled later.
func (l *LogForStorage) DecodeRLP(s *rlp.Stream) error {
blob, err := s.Raw()
if err != nil {
return err
}
var dec rlpStorageLog
err = rlp.DecodeBytes(blob, &dec)
if err == nil {
*l = LogForStorage{
Address: dec.Address,
Topics: dec.Topics,
Data: dec.Data,
}
} else {
// Try to decode log with previous definition.
var dec legacyRlpStorageLog
err = rlp.DecodeBytes(blob, &dec)
if err == nil {
*l = LogForStorage{
Address: dec.Address,
Topics: dec.Topics,
Data: dec.Data,
}
}
}
return err
}

View File

@@ -94,7 +94,28 @@ type receiptRLP struct {
type storedReceiptRLP struct {
PostStateOrStatus []byte
CumulativeGasUsed uint64
Logs []*Log
Logs []*LogForStorage
}
// v4StoredReceiptRLP is the storage encoding of a receipt used in database version 4.
type v4StoredReceiptRLP struct {
PostStateOrStatus []byte
CumulativeGasUsed uint64
TxHash common.Hash
ContractAddress common.Address
Logs []*LogForStorage
GasUsed uint64
}
// v3StoredReceiptRLP is the original storage encoding of a receipt including some unnecessary fields.
type v3StoredReceiptRLP struct {
PostStateOrStatus []byte
CumulativeGasUsed uint64
Bloom Bloom
TxHash common.Hash
ContractAddress common.Address
Logs []*LogForStorage
GasUsed uint64
}
// NewReceipt creates a barebone transaction receipt, copying the init fields.
@@ -212,28 +233,96 @@ func (r *Receipt) Size() common.StorageSize {
// entire content of a receipt, as opposed to only the consensus fields originally.
type ReceiptForStorage Receipt
// EncodeRLP implements rlp.Encoder.
// EncodeRLP implements rlp.Encoder, and flattens all content fields of a receipt
// into an RLP stream.
func (r *ReceiptForStorage) EncodeRLP(w io.Writer) error {
enc := &storedReceiptRLP{
PostStateOrStatus: (*Receipt)(r).statusEncoding(),
CumulativeGasUsed: r.CumulativeGasUsed,
Logs: r.Logs,
Logs: make([]*LogForStorage, len(r.Logs)),
}
for i, log := range r.Logs {
enc.Logs[i] = (*LogForStorage)(log)
}
return rlp.Encode(w, enc)
}
// DecodeRLP implements rlp.Decoder.
// DecodeRLP implements rlp.Decoder, and loads both consensus and implementation
// fields of a receipt from an RLP stream.
func (r *ReceiptForStorage) DecodeRLP(s *rlp.Stream) error {
// Retrieve the entire receipt blob as we need to try multiple decoders
blob, err := s.Raw()
if err != nil {
return err
}
// Try decoding from the newest format first for future-proofing, then the older one
// for old nodes that just upgraded. V4 was an intermediate unreleased format so
// we do need to decode it, but it's not common (try it last).
if err := decodeStoredReceiptRLP(r, blob); err == nil {
return nil
}
if err := decodeV3StoredReceiptRLP(r, blob); err == nil {
return nil
}
return decodeV4StoredReceiptRLP(r, blob)
}
func decodeStoredReceiptRLP(r *ReceiptForStorage, blob []byte) error {
var stored storedReceiptRLP
if err := s.Decode(&stored); err != nil {
if err := rlp.DecodeBytes(blob, &stored); err != nil {
return err
}
if err := (*Receipt)(r).setStatus(stored.PostStateOrStatus); err != nil {
return err
}
r.CumulativeGasUsed = stored.CumulativeGasUsed
r.Logs = stored.Logs
r.Logs = make([]*Log, len(stored.Logs))
for i, log := range stored.Logs {
r.Logs[i] = (*Log)(log)
}
r.Bloom = CreateBloom(Receipts{(*Receipt)(r)})
return nil
}
func decodeV4StoredReceiptRLP(r *ReceiptForStorage, blob []byte) error {
var stored v4StoredReceiptRLP
if err := rlp.DecodeBytes(blob, &stored); err != nil {
return err
}
if err := (*Receipt)(r).setStatus(stored.PostStateOrStatus); err != nil {
return err
}
r.CumulativeGasUsed = stored.CumulativeGasUsed
r.TxHash = stored.TxHash
r.ContractAddress = stored.ContractAddress
r.GasUsed = stored.GasUsed
r.Logs = make([]*Log, len(stored.Logs))
for i, log := range stored.Logs {
r.Logs[i] = (*Log)(log)
}
r.Bloom = CreateBloom(Receipts{(*Receipt)(r)})
return nil
}
func decodeV3StoredReceiptRLP(r *ReceiptForStorage, blob []byte) error {
var stored v3StoredReceiptRLP
if err := rlp.DecodeBytes(blob, &stored); err != nil {
return err
}
if err := (*Receipt)(r).setStatus(stored.PostStateOrStatus); err != nil {
return err
}
r.CumulativeGasUsed = stored.CumulativeGasUsed
r.Bloom = stored.Bloom
r.TxHash = stored.TxHash
r.ContractAddress = stored.ContractAddress
r.GasUsed = stored.GasUsed
r.Logs = make([]*Log, len(stored.Logs))
for i, log := range stored.Logs {
r.Logs[i] = (*Log)(log)
}
return nil
}
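Decoding a stored receipt therefore tries the current storedReceiptRLP layout first and only then the legacy v3/v4 ones, so entries written by older nodes keep loading. A small round-trip sketch using the types shown above (illustrative, assumes the encoders in this file):

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/rlp"
)

func main() {
    // Write a receipt in the current storage format...
    r := &types.Receipt{Status: types.ReceiptStatusSuccessful, CumulativeGasUsed: 21000}
    blob, err := rlp.EncodeToBytes((*types.ReceiptForStorage)(r))
    if err != nil {
        panic(err)
    }
    // ...and read it back; DecodeRLP would equally accept the v3/v4 legacy
    // layouts shown above, falling back automatically.
    var stored types.ReceiptForStorage
    if err := rlp.DecodeBytes(blob, &stored); err != nil {
        panic(err)
    }
    fmt.Println(stored.CumulativeGasUsed) // 21000
}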

View File

@@ -20,6 +20,7 @@ import (
"bytes"
"math"
"math/big"
"reflect"
"testing"
"github.com/ethereum/go-ethereum/common"
@@ -37,6 +38,128 @@ func TestDecodeEmptyTypedReceipt(t *testing.T) {
}
}
func TestLegacyReceiptDecoding(t *testing.T) {
tests := []struct {
name string
encode func(*Receipt) ([]byte, error)
}{
{
"StoredReceiptRLP",
encodeAsStoredReceiptRLP,
},
{
"V4StoredReceiptRLP",
encodeAsV4StoredReceiptRLP,
},
{
"V3StoredReceiptRLP",
encodeAsV3StoredReceiptRLP,
},
}
tx := NewTransaction(1, common.HexToAddress("0x1"), big.NewInt(1), 1, big.NewInt(1), nil)
receipt := &Receipt{
Status: ReceiptStatusFailed,
CumulativeGasUsed: 1,
Logs: []*Log{
{
Address: common.BytesToAddress([]byte{0x11}),
Topics: []common.Hash{common.HexToHash("dead"), common.HexToHash("beef")},
Data: []byte{0x01, 0x00, 0xff},
},
{
Address: common.BytesToAddress([]byte{0x01, 0x11}),
Topics: []common.Hash{common.HexToHash("dead"), common.HexToHash("beef")},
Data: []byte{0x01, 0x00, 0xff},
},
},
TxHash: tx.Hash(),
ContractAddress: common.BytesToAddress([]byte{0x01, 0x11, 0x11}),
GasUsed: 111111,
}
receipt.Bloom = CreateBloom(Receipts{receipt})
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
enc, err := tc.encode(receipt)
if err != nil {
t.Fatalf("Error encoding receipt: %v", err)
}
var dec ReceiptForStorage
if err := rlp.DecodeBytes(enc, &dec); err != nil {
t.Fatalf("Error decoding RLP receipt: %v", err)
}
// Check whether all consensus fields are correct.
if dec.Status != receipt.Status {
t.Fatalf("Receipt status mismatch, want %v, have %v", receipt.Status, dec.Status)
}
if dec.CumulativeGasUsed != receipt.CumulativeGasUsed {
t.Fatalf("Receipt CumulativeGasUsed mismatch, want %v, have %v", receipt.CumulativeGasUsed, dec.CumulativeGasUsed)
}
if dec.Bloom != receipt.Bloom {
t.Fatalf("Bloom data mismatch, want %v, have %v", receipt.Bloom, dec.Bloom)
}
if len(dec.Logs) != len(receipt.Logs) {
t.Fatalf("Receipt log number mismatch, want %v, have %v", len(receipt.Logs), len(dec.Logs))
}
for i := 0; i < len(dec.Logs); i++ {
if dec.Logs[i].Address != receipt.Logs[i].Address {
t.Fatalf("Receipt log %d address mismatch, want %v, have %v", i, receipt.Logs[i].Address, dec.Logs[i].Address)
}
if !reflect.DeepEqual(dec.Logs[i].Topics, receipt.Logs[i].Topics) {
t.Fatalf("Receipt log %d topics mismatch, want %v, have %v", i, receipt.Logs[i].Topics, dec.Logs[i].Topics)
}
if !bytes.Equal(dec.Logs[i].Data, receipt.Logs[i].Data) {
t.Fatalf("Receipt log %d data mismatch, want %v, have %v", i, receipt.Logs[i].Data, dec.Logs[i].Data)
}
}
})
}
}
func encodeAsStoredReceiptRLP(want *Receipt) ([]byte, error) {
stored := &storedReceiptRLP{
PostStateOrStatus: want.statusEncoding(),
CumulativeGasUsed: want.CumulativeGasUsed,
Logs: make([]*LogForStorage, len(want.Logs)),
}
for i, log := range want.Logs {
stored.Logs[i] = (*LogForStorage)(log)
}
return rlp.EncodeToBytes(stored)
}
func encodeAsV4StoredReceiptRLP(want *Receipt) ([]byte, error) {
stored := &v4StoredReceiptRLP{
PostStateOrStatus: want.statusEncoding(),
CumulativeGasUsed: want.CumulativeGasUsed,
TxHash: want.TxHash,
ContractAddress: want.ContractAddress,
Logs: make([]*LogForStorage, len(want.Logs)),
GasUsed: want.GasUsed,
}
for i, log := range want.Logs {
stored.Logs[i] = (*LogForStorage)(log)
}
return rlp.EncodeToBytes(stored)
}
func encodeAsV3StoredReceiptRLP(want *Receipt) ([]byte, error) {
stored := &v3StoredReceiptRLP{
PostStateOrStatus: want.statusEncoding(),
CumulativeGasUsed: want.CumulativeGasUsed,
Bloom: want.Bloom,
TxHash: want.TxHash,
ContractAddress: want.ContractAddress,
Logs: make([]*LogForStorage, len(want.Logs)),
GasUsed: want.GasUsed,
}
for i, log := range want.Logs {
stored.Logs[i] = (*LogForStorage)(log)
}
return rlp.EncodeToBytes(stored)
}
// Tests that receipt data can be correctly derived from the contextual infos
func TestDeriveFields(t *testing.T) {
// Create a few transactions to have receipts for

View File

@@ -96,7 +96,7 @@ func (al accessList) equal(other accessList) bool {
func (al accessList) accessList() types.AccessList {
acl := make(types.AccessList, 0, len(al))
for addr, slots := range al {
tuple := types.AccessTuple{Address: addr}
tuple := types.AccessTuple{Address: addr, StorageKeys: []common.Hash{}}
for slot := range slots {
tuple.StorageKeys = append(tuple.StorageKeys, slot)
}
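Pre-initialising StorageKeys to an empty slice changes the JSON shape of eth_createAccessList results: a nil slice marshals as null, while an empty one marshals as []. A tiny standard-library demonstration of that difference (field names here are illustrative):

package main

import (
    "encoding/json"
    "fmt"
)

type tuple struct {
    Address     string   `json:"address"`
    StorageKeys []string `json:"storageKeys"`
}

func main() {
    withNil, _ := json.Marshal(tuple{Address: "0x01"})
    withEmpty, _ := json.Marshal(tuple{Address: "0x01", StorageKeys: []string{}})
    fmt.Println(string(withNil))   // {"address":"0x01","storageKeys":null}
    fmt.Println(string(withEmpty)) // {"address":"0x01","storageKeys":[]}
}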

View File

@@ -342,7 +342,7 @@ func (api *PrivateDebugAPI) GetBadBlocks(ctx context.Context) ([]*BadBlockArgs,
} else {
blockRlp = fmt.Sprintf("0x%x", rlpBytes)
}
if blockJSON, err = ethapi.RPCMarshalBlock(block, true, true); err != nil {
if blockJSON, err = ethapi.RPCMarshalBlock(block, true, true, api.eth.engine); err != nil {
blockJSON = map[string]interface{}{"error": err.Error()}
}
results = append(results, &BadBlockArgs{

View File

@@ -287,7 +287,7 @@ func (b *EthAPIBackend) SuggestGasTipCap(ctx context.Context) (*big.Int, error)
return b.gpo.SuggestTipCap(ctx)
}
func (b *EthAPIBackend) FeeHistory(ctx context.Context, blockCount int, lastBlock rpc.BlockNumber, rewardPercentiles []float64) (firstBlock rpc.BlockNumber, reward [][]*big.Int, baseFee []*big.Int, gasUsedRatio []float64, err error) {
func (b *EthAPIBackend) FeeHistory(ctx context.Context, blockCount int, lastBlock rpc.BlockNumber, rewardPercentiles []float64) (firstBlock *big.Int, reward [][]*big.Int, baseFee []*big.Int, gasUsedRatio []float64, err error) {
return b.gpo.FeeHistory(ctx, blockCount, lastBlock, rewardPercentiles)
}

View File

@@ -465,7 +465,7 @@ func (d *Downloader) syncWithPeer(p *peerConnection, hash common.Hash, td *big.I
}
if mode == FastSync && pivot == nil {
// If no pivot block was returned, the head is below the min full block
// threshold (i.e. new chian). In that case we won't really fast sync
// threshold (i.e. new chain). In that case we won't really fast sync
// anyway, but still need a valid pivot block to avoid some code hitting
// nil panics on an access.
pivot = d.blockchain.CurrentBlock().Header()
@@ -681,7 +681,7 @@ func (d *Downloader) fetchHead(p *peerConnection) (head *types.Header, pivot *ty
return head, nil, nil
}
// At this point we have 2 headers in total and the first is the
// validated head of the chian. Check the pivot number and return,
// validated head of the chain. Check the pivot number and return,
pivot := headers[1]
if pivot.Number.Uint64() != head.Number.Uint64()-uint64(fsMinFullBlocks) {
return nil, nil, fmt.Errorf("%w: remote pivot %d != requested %d", errInvalidChain, pivot.Number, head.Number.Uint64()-uint64(fsMinFullBlocks))

View File

@@ -83,7 +83,6 @@ var Defaults = Config{
TrieTimeout: 60 * time.Minute,
SnapshotCache: 102,
Miner: miner.Config{
GasFloor: 8000000,
GasCeil: 8000000,
GasPrice: big.NewInt(params.GWei),
Recommit: 3 * time.Second,

View File

@@ -24,6 +24,7 @@ import (
"sort"
"sync/atomic"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/consensus/misc"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
@@ -48,7 +49,7 @@ const (
// blockFees represents a single block for processing
type blockFees struct {
// set by the caller
blockNumber rpc.BlockNumber
blockNumber uint64
header *types.Header
block *types.Block // only set if reward percentiles are requested
receipts types.Receipts
@@ -133,7 +134,7 @@ func (oracle *Oracle) processBlock(bf *blockFees, percentiles []float64) {
// also returned if requested and available.
// Note: an error is only returned if retrieving the head header has failed. If there are no
// retrievable blocks in the specified range then zero block count is returned with no error.
func (oracle *Oracle) resolveBlockRange(ctx context.Context, lastBlock rpc.BlockNumber, blocks, maxHistory int) (*types.Block, []*types.Receipt, rpc.BlockNumber, int, error) {
func (oracle *Oracle) resolveBlockRange(ctx context.Context, lastBlock rpc.BlockNumber, blocks, maxHistory int) (*types.Block, []*types.Receipt, uint64, int, error) {
var (
headBlock rpc.BlockNumber
pendingBlock *types.Block
@@ -181,7 +182,7 @@ func (oracle *Oracle) resolveBlockRange(ctx context.Context, lastBlock rpc.Block
if rpc.BlockNumber(blocks) > lastBlock+1 {
blocks = int(lastBlock + 1)
}
return pendingBlock, pendingReceipts, lastBlock, blocks, nil
return pendingBlock, pendingReceipts, uint64(lastBlock), blocks, nil
}
// FeeHistory returns data relevant for fee estimation based on the specified range of blocks.
@@ -197,9 +198,9 @@ func (oracle *Oracle) resolveBlockRange(ctx context.Context, lastBlock rpc.Block
// - gasUsedRatio: gasUsed/gasLimit in the given block
// Note: baseFee includes the next block after the newest of the returned range, because this
// value can be derived from the newest block.
func (oracle *Oracle) FeeHistory(ctx context.Context, blocks int, lastBlock rpc.BlockNumber, rewardPercentiles []float64) (rpc.BlockNumber, [][]*big.Int, []*big.Int, []float64, error) {
func (oracle *Oracle) FeeHistory(ctx context.Context, blocks int, unresolvedLastBlock rpc.BlockNumber, rewardPercentiles []float64) (*big.Int, [][]*big.Int, []*big.Int, []float64, error) {
if blocks < 1 {
return 0, nil, nil, nil, nil // returning with no data and no error means there are no retrievable blocks
return common.Big0, nil, nil, nil, nil // returning with no data and no error means there are no retrievable blocks
}
if blocks > maxFeeHistory {
log.Warn("Sanitizing fee history length", "requested", blocks, "truncated", maxFeeHistory)
@@ -207,10 +208,10 @@ func (oracle *Oracle) FeeHistory(ctx context.Context, blocks int, lastBlock rpc.
}
for i, p := range rewardPercentiles {
if p < 0 || p > 100 {
return 0, nil, nil, nil, fmt.Errorf("%w: %f", errInvalidPercentile, p)
return common.Big0, nil, nil, nil, fmt.Errorf("%w: %f", errInvalidPercentile, p)
}
if i > 0 && p < rewardPercentiles[i-1] {
return 0, nil, nil, nil, fmt.Errorf("%w: #%d:%f > #%d:%f", errInvalidPercentile, i-1, rewardPercentiles[i-1], i, p)
return common.Big0, nil, nil, nil, fmt.Errorf("%w: #%d:%f > #%d:%f", errInvalidPercentile, i-1, rewardPercentiles[i-1], i, p)
}
}
// Only process blocks if reward percentiles were requested
@@ -223,36 +224,36 @@ func (oracle *Oracle) FeeHistory(ctx context.Context, blocks int, lastBlock rpc.
pendingReceipts []*types.Receipt
err error
)
pendingBlock, pendingReceipts, lastBlock, blocks, err = oracle.resolveBlockRange(ctx, lastBlock, blocks, maxHistory)
pendingBlock, pendingReceipts, lastBlock, blocks, err := oracle.resolveBlockRange(ctx, unresolvedLastBlock, blocks, maxHistory)
if err != nil || blocks == 0 {
return 0, nil, nil, nil, err
return common.Big0, nil, nil, nil, err
}
oldestBlock := lastBlock + 1 - rpc.BlockNumber(blocks)
oldestBlock := lastBlock + 1 - uint64(blocks)
var (
next = int64(oldestBlock)
next = oldestBlock
results = make(chan *blockFees, blocks)
)
for i := 0; i < maxBlockFetchers && i < blocks; i++ {
go func() {
for {
// Retrieve the next block number to fetch with this goroutine
blockNumber := rpc.BlockNumber(atomic.AddInt64(&next, 1) - 1)
blockNumber := atomic.AddUint64(&next, 1) - 1
if blockNumber > lastBlock {
return
}
fees := &blockFees{blockNumber: blockNumber}
if pendingBlock != nil && blockNumber >= rpc.BlockNumber(pendingBlock.NumberU64()) {
if pendingBlock != nil && blockNumber >= pendingBlock.NumberU64() {
fees.block, fees.receipts = pendingBlock, pendingReceipts
} else {
if len(rewardPercentiles) != 0 {
fees.block, fees.err = oracle.backend.BlockByNumber(ctx, blockNumber)
fees.block, fees.err = oracle.backend.BlockByNumber(ctx, rpc.BlockNumber(blockNumber))
if fees.block != nil && fees.err == nil {
fees.receipts, fees.err = oracle.backend.GetReceipts(ctx, fees.block.Hash())
}
} else {
fees.header, fees.err = oracle.backend.HeaderByNumber(ctx, blockNumber)
fees.header, fees.err = oracle.backend.HeaderByNumber(ctx, rpc.BlockNumber(blockNumber))
}
}
if fees.block != nil {
@@ -275,7 +276,7 @@ func (oracle *Oracle) FeeHistory(ctx context.Context, blocks int, lastBlock rpc.
for ; blocks > 0; blocks-- {
fees := <-results
if fees.err != nil {
return 0, nil, nil, nil, fees.err
return common.Big0, nil, nil, nil, fees.err
}
i := int(fees.blockNumber - oldestBlock)
if fees.header != nil {
@@ -288,7 +289,7 @@ func (oracle *Oracle) FeeHistory(ctx context.Context, blocks int, lastBlock rpc.
}
}
if firstMissing == 0 {
return 0, nil, nil, nil, nil
return common.Big0, nil, nil, nil, nil
}
if len(rewardPercentiles) != 0 {
reward = reward[:firstMissing]
@@ -296,5 +297,5 @@ func (oracle *Oracle) FeeHistory(ctx context.Context, blocks int, lastBlock rpc.
reward = nil
}
baseFee, gasUsedRatio = baseFee[:firstMissing+1], gasUsedRatio[:firstMissing]
return oldestBlock, reward, baseFee, gasUsedRatio, nil
return new(big.Int).SetUint64(oldestBlock), reward, baseFee, gasUsedRatio, nil
}
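With blockNumber kept as a plain uint64, the parallel fetchers can hand out work via atomic.AddUint64 instead of converting through rpc.BlockNumber. A simplified sketch of that dispatch pattern (not the oracle's actual code; block numbers stand in for real header/receipt fetches):

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    const (
        oldestBlock uint64 = 100
        lastBlock   uint64 = 109
        fetchers           = 4
    )
    next := oldestBlock
    var wg sync.WaitGroup
    results := make(chan uint64, lastBlock-oldestBlock+1)
    for i := 0; i < fetchers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for {
                // Claim the next block number; stop once past the range.
                n := atomic.AddUint64(&next, 1) - 1
                if n > lastBlock {
                    return
                }
                results <- n // stand-in for fetching the data of block n
            }
        }()
    }
    wg.Wait()
    close(results)
    count := 0
    for range results {
        count++
    }
    fmt.Println(count) // 10 blocks processed exactly once
}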

View File

@@ -32,7 +32,7 @@ func TestFeeHistory(t *testing.T) {
count int
last rpc.BlockNumber
percent []float64
expFirst rpc.BlockNumber
expFirst uint64
expCount int
expErr error
}{
@@ -70,7 +70,7 @@ func TestFeeHistory(t *testing.T) {
expBaseFee++
}
if first != c.expFirst {
if first.Uint64() != c.expFirst {
t.Fatalf("Test case %d: first block mismatch, want %d, got %d", i, c.expFirst, first)
}
if len(reward) != expReward {

View File

@@ -306,7 +306,7 @@ func TestTraceCall(t *testing.T) {
}
}
func TestOverridenTraceCall(t *testing.T) {
func TestOverriddenTraceCall(t *testing.T) {
t.Parallel()
// Initialize test accounts
@@ -369,7 +369,7 @@ func TestOverridenTraceCall(t *testing.T) {
config: &TraceCallConfig{
Tracer: &tracer,
},
expectErr: core.ErrInsufficientFundsForTransfer,
expectErr: core.ErrInsufficientFunds,
expect: nil,
},
// Successful simple contract call
@@ -417,24 +417,24 @@ func TestOverridenTraceCall(t *testing.T) {
},
},
}
for _, testspec := range testSuite {
for i, testspec := range testSuite {
result, err := api.TraceCall(context.Background(), testspec.call, rpc.BlockNumberOrHash{BlockNumber: &testspec.blockNumber}, testspec.config)
if testspec.expectErr != nil {
if err == nil {
t.Errorf("Expect error %v, get nothing", testspec.expectErr)
t.Errorf("test %d: want error %v, have nothing", i, testspec.expectErr)
continue
}
if !errors.Is(err, testspec.expectErr) {
t.Errorf("Error mismatch, want %v, get %v", testspec.expectErr, err)
t.Errorf("test %d: error mismatch, want %v, have %v", i, testspec.expectErr, err)
}
} else {
if err != nil {
t.Errorf("Expect no error, get %v", err)
t.Errorf("test %d: want no error, have %v", i, err)
continue
}
ret := new(callTrace)
if err := json.Unmarshal(result.(json.RawMessage), ret); err != nil {
t.Fatalf("failed to unmarshal trace result: %v", err)
t.Fatalf("test %d: failed to unmarshal trace result: %v", i, err)
}
if !jsonEqual(ret, testspec.expect) {
// uncomment this for easier debugging

View File

@@ -138,7 +138,7 @@ func testAccessList(t *testing.T, client *rpc.Client) {
From: testAddr,
To: &common.Address{},
Gas: 21000,
GasPrice: big.NewInt(1),
GasPrice: big.NewInt(765625000),
Value: big.NewInt(1),
}
al, gas, vmErr, err := ec.CreateAccessList(context.Background(), msg)

View File

@@ -76,6 +76,13 @@ type AncientReader interface {
// Ancient retrieves an ancient binary blob from the append-only immutable files.
Ancient(kind string, number uint64) ([]byte, error)
// ReadAncients retrieves multiple items in sequence, starting from the index 'start'.
// It will return
// - at most 'count' items,
// - at least 1 item (even if it exceeds maxBytes), and otherwise
// as many items as fit into maxBytes.
ReadAncients(kind string, start, count, maxBytes uint64) ([][]byte, error)
// Ancients returns the ancient item numbers in the ancient store.
Ancients() (uint64, error)
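A hedged usage sketch of the new batch read against the extended AncientReader interface; the "bodies" table name and the byte cap below are illustrative assumptions, not values taken from the diff:

package ancientdemo

import "github.com/ethereum/go-ethereum/ethdb"

// readBodies illustrates the contract documented above: request up to `count`
// block bodies starting at `start`; the reader stops early once maxBytes is
// exceeded, but always returns at least one item if any exists in range.
func readBodies(db ethdb.AncientReader, start, count uint64) ([][]byte, error) {
    const maxBytes = 1024 * 1024 // soft cap on the total returned payload
    return db.ReadAncients("bodies", start, count, maxBytes)
}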

17
go.mod
View File

@@ -18,6 +18,7 @@ require (
github.com/consensys/gnark-crypto v0.4.1-0.20210426202927-39ac3d4b3f1f
github.com/davecgh/go-spew v1.1.1
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea
github.com/deepmap/oapi-codegen v1.8.2 // indirect
github.com/dlclark/regexp2 v1.2.0 // indirect
github.com/docker/docker v1.4.2-0.20180625184442-8e610b2b55bf
github.com/dop251/goja v0.0.0-20200721192441-a695b0cdd498
@@ -37,15 +38,17 @@ require (
github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d
github.com/holiman/bloomfilter/v2 v2.0.3
github.com/holiman/uint256 v1.2.0
github.com/huin/goupnp v1.0.1-0.20210626160114-33cdcbb30dda
github.com/huin/goupnp v1.0.2
github.com/influxdata/influxdb v1.8.3
github.com/influxdata/influxdb-client-go/v2 v2.4.0
github.com/influxdata/line-protocol v0.0.0-20210311194329-9aa0e372d097 // indirect
github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458
github.com/jedisct1/go-minisign v0.0.0-20190909160543-45766022959e
github.com/julienschmidt/httprouter v1.2.0
github.com/karalabe/usb v0.0.0-20190919080040-51dc0efba356
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/mattn/go-colorable v0.1.0
github.com/mattn/go-isatty v0.0.5-0.20180830101745-3fb116b82035
github.com/mattn/go-colorable v0.1.8
github.com/mattn/go-isatty v0.0.12
github.com/naoina/go-stringutil v0.1.0 // indirect
github.com/naoina/toml v0.1.2-0.20170918210437-9fafd6967416
github.com/olekukonko/tablewriter v0.0.5
@@ -60,12 +63,14 @@ require (
github.com/tklauser/go-sysconf v0.3.5 // indirect
github.com/tyler-smith/go-bip39 v1.0.1-0.20181017060643-dbb3b84ba2ef
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d // indirect
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/sys v0.0.0-20210420205809-ac73e9fd8988
golang.org/x/text v0.3.4
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324
golang.org/x/sys v0.0.0-20210423082822-04245dca01da
golang.org/x/text v0.3.6
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba
gopkg.in/natefinch/npipe.v2 v2.0.0-20160621034901-c1b8fa8bdcce
gopkg.in/olebedev/go-duktape.v3 v3.0.0-20200619000410-60c24ae608a6
gopkg.in/urfave/cli.v1 v1.20.0
gopkg.in/yaml.v2 v2.4.0 // indirect
gotest.tools v2.2.0+incompatible // indirect
)

70
go.sum
View File

@@ -104,6 +104,7 @@ github.com/consensys/bavard v0.1.8-0.20210406032232-f3452dc9b572/go.mod h1:Bpd0/
github.com/consensys/gnark-crypto v0.4.1-0.20210426202927-39ac3d4b3f1f h1:C43yEtQ6NIf4ftFXD/V55gnGFgPbMQobd//YlnLjUJ8=
github.com/consensys/gnark-crypto v0.4.1-0.20210426202927-39ac3d4b3f1f/go.mod h1:815PAHg3wvysy0SyIqanF8gZ0Y1wjk/hrDHD/iT88+Q=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cyberdelia/templates v0.0.0-20141128023046-ca7fffd4298c/go.mod h1:GyV+0YP4qX0UQ7r2MoYZ+AvYDp12OF5yg4q8rGnyNh4=
github.com/dave/jennifer v1.2.0/go.mod h1:fIb+770HOpJ2fmN9EPPKOqm1vMGhB+TwXKMZhrIygKg=
github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -111,6 +112,9 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea h1:j4317fAZh7X6GqbFowYdYdI0L9bwxL07jyPZIdepyZ0=
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea/go.mod h1:93vsz/8Wt4joVM7c2AVqh+YRMiUSc14yDtF28KmMOgQ=
github.com/deepmap/oapi-codegen v1.6.0/go.mod h1:ryDa9AgbELGeB+YEXE1dR53yAjHwFvE9iAUlWl9Al3M=
github.com/deepmap/oapi-codegen v1.8.2 h1:SegyeYGcdi0jLLrpbCMoJxnUUn8GBXHsvr4rbzjuhfU=
github.com/deepmap/oapi-codegen v1.8.2/go.mod h1:YLgSKSDv/bZQB7N4ws6luhozi3cEdRktEqrX88CvjIw=
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-bitstream v0.0.0-20180413035011-3522498ce2c8/go.mod h1:VMaSuZ+SZcx/wljOQKvp5srsbCiKDEb6K2wC4+PiBmQ=
@@ -136,8 +140,12 @@ github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWo
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff h1:tY80oXqGNY4FhTFhk+o9oFHGINQ/+vhlm8HFzi6znCI=
github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff/go.mod h1:x7DCsMOv1taUwEWCzT4cmDeAkigA5/QCwUodaVOe8Ww=
github.com/getkin/kin-openapi v0.53.0/go.mod h1:7Yn5whZr5kJi6t+kShccXS8ae1APpYTW6yheSwk8Yi4=
github.com/getkin/kin-openapi v0.61.0/go.mod h1:7Yn5whZr5kJi6t+kShccXS8ae1APpYTW6yheSwk8Yi4=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/glycerine/go-unsnap-stream v0.0.0-20180323001048-9f0cb55181dd/go.mod h1:/20jfyN9Y5QPEAprSgKAUr+glWDY39ZiUEAYOEv5dsE=
github.com/glycerine/goconvey v0.0.0-20190410193231-58a59202ab31/go.mod h1:Ogl1Tioa0aV7gstGFO7KhffUsb9M4ydbEbbxpcEDc24=
github.com/go-chi/chi/v5 v5.0.0/go.mod h1:BBug9lr0cqtdAhsu6R4AAdvufI0/XBzAQSsUqJpoZOs=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-kit/kit v0.8.0 h1:Wz+5lgoB0kkuqLEc6NVmwRknTKP6dTGbSqvhZtBI/j0=
@@ -147,6 +155,8 @@ github.com/go-logfmt/logfmt v0.4.0 h1:MP4Eh7ZCb31lleYCFuwm0oe4/YGak+5l1vA2NOE80n
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-ole/go-ole v1.2.1 h1:2lOsA72HgjxAuMlKpFiCbHTvu44PIVkZ5hqm3RSdI/E=
github.com/go-ole/go-ole v1.2.1/go.mod h1:7FAglXiTm7HKlQRDeOQ6ZNUHidzCWXuZWq/1dTyBNF8=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-sourcemap/sourcemap v2.1.2+incompatible h1:0b/xya7BKGhXuqFESKM4oIiRo9WOt2ebz7KxfreD6ug=
github.com/go-sourcemap/sourcemap v2.1.2+incompatible/go.mod h1:F8jJfvm2KbVjc5NqelyYJmf/v5J0dwNLS2mL4sNA1Jg=
github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
@@ -178,6 +188,7 @@ github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8l
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.3 h1:fHPg5GQYlCeLIPB9BZqMVR5nR9A+IM5zcgeTdjMYmLA=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golangci/lint-1 v0.0.0-20181222135242-d2cdd8c08219/go.mod h1:/X8TswGSh1pIozq4ZwCfxS0WA5JGXguxk94ar/4c87Y=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/flatbuffers v1.11.0/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
@@ -200,6 +211,7 @@ github.com/google/uuid v1.1.5/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/graph-gophers/graphql-go v0.0.0-20201113091052-beb923fada29 h1:sezaKhEfPFg8W0Enm61B9Gs911H8iesGY5R8NDPtd1M=
@@ -213,16 +225,21 @@ github.com/holiman/bloomfilter/v2 v2.0.3/go.mod h1:zpoh+gs7qcpqrHr3dB55AMiJwo0iU
github.com/holiman/uint256 v1.2.0 h1:gpSYcPLWGv4sG43I2mVLiDZCNDh/EpGjSk8tmtxitHM=
github.com/holiman/uint256 v1.2.0/go.mod h1:y4ga/t+u+Xwd7CpDgZESaRcWy0I7XMlTMA25ApIH5Jw=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/huin/goupnp v1.0.1-0.20210626160114-33cdcbb30dda h1:Vofqyy/Ysqit++X33unU0Gr08b6P35hKm3juytDrBVI=
github.com/huin/goupnp v1.0.1-0.20210626160114-33cdcbb30dda/go.mod h1:0dxJBVBHqTMjIUMkESDTNgOOx/Mw5wYIfyFmdzSamkM=
github.com/huin/goupnp v1.0.2 h1:RfGLP+h3mvisuWEyybxNq5Eft3NWhHLPeUN72kpKZoI=
github.com/huin/goupnp v1.0.2/go.mod h1:0dxJBVBHqTMjIUMkESDTNgOOx/Mw5wYIfyFmdzSamkM=
github.com/huin/goutil v0.0.0-20170803182201-1ca381bf3150/go.mod h1:PpLOETDnJ0o3iZrZfqZzyLl6l7F3c6L1oWn7OICBi6o=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/flux v0.65.1/go.mod h1:J754/zds0vvpfwuq7Gc2wRdVwEodfpCFM7mYlOw2LqY=
github.com/influxdata/influxdb v1.8.3 h1:WEypI1BQFTT4teLM+1qkEcvUi0dAvopAI/ir0vAiBg8=
github.com/influxdata/influxdb v1.8.3/go.mod h1:JugdFhsvvI8gadxOI6noqNeeBHvWNTbfYGtiAn+2jhI=
github.com/influxdata/influxdb-client-go/v2 v2.4.0 h1:HGBfZYStlx3Kqvsv1h2pJixbCl/jhnFtxpKFAv9Tu5k=
github.com/influxdata/influxdb-client-go/v2 v2.4.0/go.mod h1:vLNHdxTJkIf2mSLvGrpj8TCcISApPoXkaxP8g9uRlW8=
github.com/influxdata/influxql v1.1.1-0.20200828144457-65d3ef77d385/go.mod h1:gHp9y86a/pxhjJ+zMjNXiQAA197Xk9wLxaz+fGG+kWk=
github.com/influxdata/line-protocol v0.0.0-20180522152040-32c6aa80de5e/go.mod h1:4kt73NQhadE3daL3WhR5EJ/J2ocX0PZzwxQ0gXJ7oFE=
github.com/influxdata/line-protocol v0.0.0-20200327222509-2487e7298839/go.mod h1:xaLFMmpvUxqXtVkUJfg9QmT88cDaCJ3ZKgdZ78oO8Qo=
github.com/influxdata/line-protocol v0.0.0-20210311194329-9aa0e372d097 h1:vilfsDSy7TDxedi9gyBkMvAirat/oRcL0lFdJBf6tdM=
github.com/influxdata/line-protocol v0.0.0-20210311194329-9aa0e372d097/go.mod h1:xaLFMmpvUxqXtVkUJfg9QmT88cDaCJ3ZKgdZ78oO8Qo=
github.com/influxdata/promql/v2 v2.12.0/go.mod h1:fxOPu+DY0bqCTCECchSRtWfc+0X19ybifQhZoQNF5D8=
github.com/influxdata/roaring v0.4.13-0.20180809181101-fc520f41fab6/go.mod h1:bSgUQ7q5ZLSO+bKBGqJiCBGAl+9DxyW63zLTujjUlOE=
github.com/influxdata/tdigest v0.0.0-20181121200506-bf2b5ad3c0a9/go.mod h1:Js0mqiSBE6Ffsg94weZZ2c+v/ciT8QRHFOap7EKDrR0=
@@ -263,18 +280,27 @@ github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/labstack/echo/v4 v4.2.1/go.mod h1:AA49e0DZ8kk5jTOOCKNuPR6oTnBS0dYiM4FW1e6jwpg=
github.com/labstack/gommon v0.3.0/go.mod h1:MULnywXg0yavhxWKc+lOruYdAhDwPK9wf0OL7NoOu+k=
github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c=
github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/matryer/moq v0.0.0-20190312154309-6cfb0558e1bd/go.mod h1:9ELz6aaclSIGnZBoaSLZ3NAl1VTufbOrXBPvtcy6WiQ=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.0 h1:v2XXALHHh6zHfYTJ+cSkwtyffnaOyR1MXaA91mTrb8o=
github.com/mattn/go-colorable v0.1.0/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.7/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.8 h1:c1ghPdyEDarC70ftn0y+A/Ee++9zz8ljHG1b13eJ0s8=
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-ieproxy v0.0.0-20190610004146-91bb50d98149/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc=
github.com/mattn/go-ieproxy v0.0.0-20190702010315-6dee0af9227d h1:oNAwILwmgWKFpuU+dXvI6dl9jG2mAWAZLX3r9s0PPiw=
github.com/mattn/go-ieproxy v0.0.0-20190702010315-6dee0af9227d/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.5-0.20180830101745-3fb116b82035 h1:USWjF42jDCSEeikX/G1g40ZWnsPXN5WkZ4jMHZWyBK4=
github.com/mattn/go-isatty v0.0.5-0.20180830101745-3fb116b82035/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ=
github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-runewidth v0.0.3/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.9 h1:Lm995f3rfxdpd6TSmuVCHVb/QhupuXlYr8sCI/QdE+0=
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
@@ -360,6 +386,7 @@ github.com/stretchr/testify v1.2.0/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXf
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/syndtr/goleveldb v1.0.1-0.20210305035536-64b5b1c73954 h1:xQdMZ1WLrgkkvOZ/LDQxjVxMLdby7osSh4ZEVa5sIjs=
@@ -372,6 +399,9 @@ github.com/tklauser/numcpus v0.2.2/go.mod h1:x3qojaO3uyYt0i56EW/VUYs7uBvdl2fkfZF
github.com/tyler-smith/go-bip39 v1.0.1-0.20181017060643-dbb3b84ba2ef h1:wHSqTBrZW24CsNJDfeh9Ex6Pm0Rcpc7qrgKBiL44vF4=
github.com/tyler-smith/go-bip39 v1.0.1-0.20181017060643-dbb3b84ba2ef/go.mod h1:sJ5fKU0s6JVwZjjcUEX2zFOnvq0ASQ2K9Zr6cf67kNs=
github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8=
github.com/valyala/fasttemplate v1.2.1/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/willf/bitset v1.1.3/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
github.com/xlab/treeprint v0.0.0-20180616005107-d6fb6747feb6/go.mod h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
@@ -389,6 +419,8 @@ golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20190909091759-094676da4a83/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2 h1:It14KIkyBFYkHkwZ7k45minvA9aorojkyjGk9KJ5B/w=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@@ -436,10 +468,13 @@ golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200813134508-3edf25e44fcc/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210220033124-5f55cee0dc0d/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110 h1:qWPm9rbaAMKs8Bq/9LRpbMqxWRVUAQwMI9fVrssnTfw=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d h1:20cMwl2fHAzkJMEA+8J4JgqBQcQGzbisXo31MIeenXI=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -461,6 +496,7 @@ golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -468,34 +504,45 @@ golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200107162124-548cf772de50/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200814200057-3d37ad5750ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200826173525-f9321e4c35a6/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210316164454-77fc1eacc6aa/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210420205809-ac73e9fd8988 h1:EjgCl+fVlIaPJSori0ikSz3uV0DOHKWOJFpv1sAAhBM=
golang.org/x/sys v0.0.0-20210420205809-ac73e9fd8988/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da h1:b3NXsE2LusjYGGjL5bxEVZZORm/YEFFrWFjR8eFrw/c=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4 h1:0YWbFKbhXG/wIiuHDSKpS0Iy7FSA+u45VtBMfQcFTTc=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324 h1:Hir2P/De0WpUhtrKGGjvSb2YxUgyZ7EFOSLIcSSpiwE=
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba h1:O8mE0/t419eoIwhTFpKVkHiTs/Igowgfkj25AcZrtiE=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -591,8 +638,9 @@ gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=

View File

@@ -32,6 +32,7 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/clique"
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/consensus/misc"
@@ -81,19 +82,19 @@ func (s *PublicEthereumAPI) MaxPriorityFeePerGas(ctx context.Context) (*hexutil.
}
type feeHistoryResult struct {
OldestBlock rpc.BlockNumber `json:"oldestBlock"`
OldestBlock *hexutil.Big `json:"oldestBlock"`
Reward [][]*hexutil.Big `json:"reward,omitempty"`
BaseFee []*hexutil.Big `json:"baseFeePerGas,omitempty"`
GasUsedRatio []float64 `json:"gasUsedRatio"`
}
func (s *PublicEthereumAPI) FeeHistory(ctx context.Context, blockCount int, lastBlock rpc.BlockNumber, rewardPercentiles []float64) (*feeHistoryResult, error) {
oldest, reward, baseFee, gasUsed, err := s.b.FeeHistory(ctx, blockCount, lastBlock, rewardPercentiles)
func (s *PublicEthereumAPI) FeeHistory(ctx context.Context, blockCount rpc.DecimalOrHex, lastBlock rpc.BlockNumber, rewardPercentiles []float64) (*feeHistoryResult, error) {
oldest, reward, baseFee, gasUsed, err := s.b.FeeHistory(ctx, int(blockCount), lastBlock, rewardPercentiles)
if err != nil {
return nil, err
}
results := &feeHistoryResult{
OldestBlock: oldest,
OldestBlock: (*hexutil.Big)(oldest),
GasUsedRatio: gasUsed,
}
if reward != nil {
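With OldestBlock now a *hexutil.Big, the oldest block in an eth_feeHistory response is rendered as a hex quantity like the other numeric fields. A small sketch of the resulting JSON shape, re-declaring the result struct from the hunk above with placeholder values:

package main

import (
    "encoding/json"
    "fmt"
    "math/big"

    "github.com/ethereum/go-ethereum/common/hexutil"
)

type feeHistoryResult struct {
    OldestBlock  *hexutil.Big     `json:"oldestBlock"`
    Reward       [][]*hexutil.Big `json:"reward,omitempty"`
    BaseFee      []*hexutil.Big   `json:"baseFeePerGas,omitempty"`
    GasUsedRatio []float64        `json:"gasUsedRatio"`
}

func main() {
    res := feeHistoryResult{
        OldestBlock:  (*hexutil.Big)(big.NewInt(0xa)),
        GasUsedRatio: []float64{0.5},
    }
    out, _ := json.Marshal(res)
    fmt.Println(string(out)) // {"oldestBlock":"0xa","gasUsedRatio":[0.5]}
}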
@@ -477,7 +478,7 @@ func (s *PrivateAccountAPI) SignTransaction(ctx context.Context, args Transactio
if args.Nonce == nil {
return nil, fmt.Errorf("nonce not specified")
}
// Before actually sign the transaction, ensure the transaction fee is reasonable.
// Before actually signing the transaction, ensure the transaction fee is reasonable.
tx := args.toTransaction()
if err := checkTxFee(tx.GasPrice(), tx.Gas(), s.b.RPCTxFeeCap()); err != nil {
return nil, err
@@ -1008,8 +1009,19 @@ func DoEstimateGas(ctx context.Context, b Backend, args TransactionArgs, blockNr
}
hi = block.GasLimit()
}
// Normalize the max fee per gas the call is willing to spend.
var feeCap *big.Int
if args.GasPrice != nil && (args.MaxFeePerGas != nil || args.MaxPriorityFeePerGas != nil) {
return 0, errors.New("both gasPrice and (maxFeePerGas or maxPriorityFeePerGas) specified")
} else if args.GasPrice != nil {
feeCap = args.GasPrice.ToInt()
} else if args.MaxFeePerGas != nil {
feeCap = args.MaxFeePerGas.ToInt()
} else {
feeCap = common.Big0
}
// Recap the highest gas limit with account's available balance.
if args.GasPrice != nil && args.GasPrice.ToInt().BitLen() != 0 {
if feeCap.BitLen() != 0 {
state, _, err := b.StateAndHeaderByNumberOrHash(ctx, blockNrOrHash)
if err != nil {
return 0, err
@@ -1022,7 +1034,7 @@ func DoEstimateGas(ctx context.Context, b Backend, args TransactionArgs, blockNr
}
available.Sub(available, args.Value.ToInt())
}
allowance := new(big.Int).Div(available, args.GasPrice.ToInt())
allowance := new(big.Int).Div(available, feeCap)
// If the allowance is larger than maximum uint64, skip checking
if allowance.IsUint64() && hi > allowance.Uint64() {
@@ -1031,7 +1043,7 @@ func DoEstimateGas(ctx context.Context, b Backend, args TransactionArgs, blockNr
transfer = new(hexutil.Big)
}
log.Warn("Gas estimation capped by limited funds", "original", hi, "balance", balance,
"sent", transfer.ToInt(), "gasprice", args.GasPrice.ToInt(), "fundable", allowance)
"sent", transfer.ToInt(), "maxFeePerGas", feeCap, "fundable", allowance)
hi = allowance.Uint64()
}
}
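Gas estimation now folds gasPrice and the EIP-1559 fee fields into a single feeCap before capping the limit by the sender's balance. The selection rule pulled out as a standalone helper (a sketch; geth does this inline as shown above):

package main

import (
    "errors"
    "fmt"
    "math/big"
)

// normalizeFeeCap mirrors the rule above: gasPrice and the 1559 fee fields are
// mutually exclusive; whichever is present becomes the cap, defaulting to zero,
// in which case the balance-based cap is skipped.
func normalizeFeeCap(gasPrice, maxFeePerGas, maxPriorityFeePerGas *big.Int) (*big.Int, error) {
    if gasPrice != nil && (maxFeePerGas != nil || maxPriorityFeePerGas != nil) {
        return nil, errors.New("both gasPrice and (maxFeePerGas or maxPriorityFeePerGas) specified")
    }
    switch {
    case gasPrice != nil:
        return gasPrice, nil
    case maxFeePerGas != nil:
        return maxFeePerGas, nil
    default:
        return new(big.Int), nil
    }
}

func main() {
    feeCap, _ := normalizeFeeCap(nil, big.NewInt(30_000_000_000), nil)
    fmt.Println(feeCap) // 30000000000
}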
@@ -1120,7 +1132,7 @@ type StructLogRes struct {
Gas uint64 `json:"gas"`
GasCost uint64 `json:"gasCost"`
Depth int `json:"depth"`
Error error `json:"error,omitempty"`
Error string `json:"error,omitempty"`
Stack *[]string `json:"stack,omitempty"`
Memory *[]string `json:"memory,omitempty"`
Storage *map[string]string `json:"storage,omitempty"`
@@ -1136,7 +1148,7 @@ func FormatLogs(logs []vm.StructLog) []StructLogRes {
Gas: trace.Gas,
GasCost: trace.GasCost,
Depth: trace.Depth,
Error: trace.Err,
Error: trace.ErrorString(),
}
if trace.Stack != nil {
stack := make([]string, len(trace.Stack))
@@ -1164,7 +1176,8 @@ func FormatLogs(logs []vm.StructLog) []StructLogRes {
}
// RPCMarshalHeader converts the given header to the RPC output .
func RPCMarshalHeader(head *types.Header) map[string]interface{} {
func RPCMarshalHeader(head *types.Header, engine consensus.Engine) map[string]interface{} {
miner, _ := engine.Author(head)
result := map[string]interface{}{
"number": (*hexutil.Big)(head.Number),
"hash": head.Hash(),
@@ -1174,7 +1187,7 @@ func RPCMarshalHeader(head *types.Header) map[string]interface{} {
"sha3Uncles": head.UncleHash,
"logsBloom": head.Bloom,
"stateRoot": head.Root,
"miner": head.Coinbase,
"miner": miner,
"difficulty": (*hexutil.Big)(head.Difficulty),
"extraData": hexutil.Bytes(head.Extra),
"size": hexutil.Uint64(head.Size()),
@@ -1195,8 +1208,8 @@ func RPCMarshalHeader(head *types.Header) map[string]interface{} {
// RPCMarshalBlock converts the given block to the RPC output which depends on fullTx. If inclTx is true transactions are
// returned. When fullTx is true the returned block contains full transaction details, otherwise it will only contain
// transaction hashes.
func RPCMarshalBlock(block *types.Block, inclTx bool, fullTx bool) (map[string]interface{}, error) {
fields := RPCMarshalHeader(block.Header())
func RPCMarshalBlock(block *types.Block, inclTx bool, fullTx bool, engine consensus.Engine) (map[string]interface{}, error) {
fields := RPCMarshalHeader(block.Header(), engine)
fields["size"] = hexutil.Uint64(block.Size())
if inclTx {
@@ -1231,7 +1244,7 @@ func RPCMarshalBlock(block *types.Block, inclTx bool, fullTx bool) (map[string]i
// rpcMarshalHeader uses the generalized output filler, then adds the total difficulty field, which requires
// a `PublicBlockchainAPI`.
func (s *PublicBlockChainAPI) rpcMarshalHeader(ctx context.Context, header *types.Header) map[string]interface{} {
fields := RPCMarshalHeader(header)
fields := RPCMarshalHeader(header, s.b.Engine())
fields["totalDifficulty"] = (*hexutil.Big)(s.b.GetTd(ctx, header.Hash()))
return fields
}
@@ -1239,7 +1252,7 @@ func (s *PublicBlockChainAPI) rpcMarshalHeader(ctx context.Context, header *type
// rpcMarshalBlock uses the generalized output filler, then adds the total difficulty field, which requires
// a `PublicBlockchainAPI`.
func (s *PublicBlockChainAPI) rpcMarshalBlock(ctx context.Context, b *types.Block, inclTx bool, fullTx bool) (map[string]interface{}, error) {
fields, err := RPCMarshalBlock(b, inclTx, fullTx)
fields, err := RPCMarshalBlock(b, inclTx, fullTx, s.b.Engine())
if err != nil {
return nil, err
}
@@ -1323,7 +1336,7 @@ func newRPCTransaction(tx *types.Transaction, blockHash common.Hash, blockNumber
price := math.BigMin(new(big.Int).Add(tx.GasTipCap(), baseFee), tx.GasFeeCap())
result.GasPrice = (*hexutil.Big)(price)
} else {
result.GasPrice = nil
result.GasPrice = (*hexutil.Big)(tx.GasFeeCap())
}
}
return result
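For dynamic-fee transactions the reported gasPrice becomes min(tipCap + baseFee, feeCap) once mined, and now falls back to the fee cap instead of null when no base fee is available (e.g. a still-pending transaction). A compact restatement of that rule as an illustrative helper, not geth's API:

package main

import (
    "fmt"
    "math/big"
)

// reportedGasPrice mirrors the rule above for dynamic-fee transactions:
// once mined it is min(tipCap+baseFee, feeCap); with no base fee known yet,
// the fee cap itself is reported rather than null.
func reportedGasPrice(tipCap, feeCap, baseFee *big.Int) *big.Int {
    if baseFee == nil {
        return feeCap
    }
    price := new(big.Int).Add(tipCap, baseFee)
    if price.Cmp(feeCap) > 0 {
        return feeCap
    }
    return price
}

func main() {
    fmt.Println(reportedGasPrice(big.NewInt(2), big.NewInt(100), big.NewInt(90))) // 92
    fmt.Println(reportedGasPrice(big.NewInt(2), big.NewInt(100), nil))            // 100 (pending)
}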
@@ -1441,7 +1454,12 @@ func AccessList(ctx context.Context, b Backend, blockNrOrHash rpc.BlockNumberOrH
}
// Copy the original db so we don't modify it
statedb := db.Copy()
msg := types.NewMessage(args.from(), args.To, uint64(*args.Nonce), args.Value.ToInt(), uint64(*args.Gas), args.GasPrice.ToInt(), big.NewInt(0), big.NewInt(0), args.data(), accessList, false)
// Set the access list to the result of the last iteration
args.AccessList = &accessList
msg, err := args.ToMessage(b.RPCGasCap(), header.BaseFee)
if err != nil {
return nil, 0, nil, err
}
// Apply the transaction with the access list tracer
tracer := vm.NewAccessListTracer(accessList, args.from(), to, precompiles)

View File

@@ -42,7 +42,7 @@ type Backend interface {
// General Ethereum API
Downloader() *downloader.Downloader
SuggestGasTipCap(ctx context.Context) (*big.Int, error)
FeeHistory(ctx context.Context, blockCount int, lastBlock rpc.BlockNumber, rewardPercentiles []float64) (rpc.BlockNumber, [][]*big.Int, []*big.Int, []float64, error)
FeeHistory(ctx context.Context, blockCount int, lastBlock rpc.BlockNumber, rewardPercentiles []float64) (*big.Int, [][]*big.Int, []*big.Int, []float64, error)
ChainDb() ethdb.Database
AccountManager() *accounts.Manager
ExtRPCEnabled() bool

View File

@@ -80,40 +80,50 @@ func (args *TransactionArgs) setDefaults(ctx context.Context, b Backend) error {
}
// After london, default to 1559 unless gasPrice is set
head := b.CurrentHeader()
if b.ChainConfig().IsLondon(head.Number) && args.GasPrice == nil {
if args.MaxPriorityFeePerGas == nil {
tip, err := b.SuggestGasTipCap(ctx)
if err != nil {
return err
// If the user specifies both maxPriorityFeePerGas and maxFeePerGas, we do not
// need to consult the chain for defaults: it's definitely a London tx.
if args.MaxPriorityFeePerGas == nil || args.MaxFeePerGas == nil {
// In this clause, user left some fields unspecified.
if b.ChainConfig().IsLondon(head.Number) && args.GasPrice == nil {
if args.MaxPriorityFeePerGas == nil {
tip, err := b.SuggestGasTipCap(ctx)
if err != nil {
return err
}
args.MaxPriorityFeePerGas = (*hexutil.Big)(tip)
}
if args.MaxFeePerGas == nil {
gasFeeCap := new(big.Int).Add(
(*big.Int)(args.MaxPriorityFeePerGas),
new(big.Int).Mul(head.BaseFee, big.NewInt(2)),
)
args.MaxFeePerGas = (*hexutil.Big)(gasFeeCap)
}
if args.MaxFeePerGas.ToInt().Cmp(args.MaxPriorityFeePerGas.ToInt()) < 0 {
return fmt.Errorf("maxFeePerGas (%v) < maxPriorityFeePerGas (%v)", args.MaxFeePerGas, args.MaxPriorityFeePerGas)
}
} else {
if args.MaxFeePerGas != nil || args.MaxPriorityFeePerGas != nil {
return errors.New("maxFeePerGas or maxPriorityFeePerGas specified but london is not active yet")
}
if args.GasPrice == nil {
price, err := b.SuggestGasTipCap(ctx)
if err != nil {
return err
}
if b.ChainConfig().IsLondon(head.Number) {
// The legacy tx gas price suggestion should not add 2x base fee
// because all fees are consumed, so it would result in a spiral
// upwards.
price.Add(price, head.BaseFee)
}
args.GasPrice = (*hexutil.Big)(price)
}
args.MaxPriorityFeePerGas = (*hexutil.Big)(tip)
}
if args.MaxFeePerGas == nil {
gasFeeCap := new(big.Int).Add(
(*big.Int)(args.MaxPriorityFeePerGas),
new(big.Int).Mul(head.BaseFee, big.NewInt(2)),
)
args.MaxFeePerGas = (*hexutil.Big)(gasFeeCap)
}
if args.MaxFeePerGas.ToInt().Cmp(args.MaxPriorityFeePerGas.ToInt()) < 0 {
return fmt.Errorf("maxFeePerGas (%v) < maxPriorityFeePerGas (%v)", args.MaxFeePerGas, args.MaxPriorityFeePerGas)
}
} else {
if args.MaxFeePerGas != nil || args.MaxPriorityFeePerGas != nil {
return errors.New("maxFeePerGas or maxPriorityFeePerGas specified but london is not active yet")
}
if args.GasPrice == nil {
price, err := b.SuggestGasTipCap(ctx)
if err != nil {
return err
}
if b.ChainConfig().IsLondon(head.Number) {
// The legacy tx gas price suggestion should not add 2x base fee
// because all fees are consumed, so it would result in a spiral
// upwards.
price.Add(price, head.BaseFee)
}
args.GasPrice = (*hexutil.Big)(price)
// Both maxPriorityFee and maxFee were set by the caller. Sanity-check their internal relation.
if args.MaxFeePerGas.ToInt().Cmp(args.MaxPriorityFeePerGas.ToInt()) < 0 {
return fmt.Errorf("maxFeePerGas (%v) < maxPriorityFeePerGas (%v)", args.MaxFeePerGas, args.MaxPriorityFeePerGas)
}
}
if args.Value == nil {

View File
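
The defaulting rule above reduces to maxFeePerGas = 2*baseFee + maxPriorityFeePerGas when the caller leaves the fee cap unset. Below is a minimal standalone sketch with made-up values; defaultFeeCap is a hypothetical helper, not part of the change, and the real code operates on *hexutil.Big argument fields.

package main

import (
	"fmt"
	"math/big"
)

// defaultFeeCap reproduces the default above: maxFeePerGas = 2*baseFee + tip.
func defaultFeeCap(baseFee, tip *big.Int) *big.Int {
	return new(big.Int).Add(tip, new(big.Int).Mul(baseFee, big.NewInt(2)))
}

func main() {
	baseFee := big.NewInt(30_000_000_000) // 30 gwei current base fee
	tip := big.NewInt(2_000_000_000)      // 2 gwei suggested tip
	feeCap := defaultFeeCap(baseFee, tip)
	fmt.Println(feeCap) // 62000000000, i.e. 62 gwei

	// The same sanity check that runs when both fields are caller-supplied
	// (never triggers with these values):
	if feeCap.Cmp(tip) < 0 {
		fmt.Printf("maxFeePerGas (%v) < maxPriorityFeePerGas (%v)\n", feeCap, tip)
	}
}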

@@ -60,8 +60,11 @@ func (b *LesApiBackend) SetHead(number uint64) {
}
func (b *LesApiBackend) HeaderByNumber(ctx context.Context, number rpc.BlockNumber) (*types.Header, error) {
// Return the current head header as the pending one, since there
// is no notion of a pending block in the light client. TODO(rjl493456442)
// unify the behavior of `HeaderByNumber` and `PendingBlockAndReceipts`.
if number == rpc.PendingBlockNumber {
return nil, nil
return b.eth.blockchain.CurrentHeader(), nil
}
if number == rpc.LatestBlockNumber {
return b.eth.blockchain.CurrentHeader(), nil
@@ -266,7 +269,7 @@ func (b *LesApiBackend) SuggestGasTipCap(ctx context.Context) (*big.Int, error)
return b.gpo.SuggestTipCap(ctx)
}
func (b *LesApiBackend) FeeHistory(ctx context.Context, blockCount int, lastBlock rpc.BlockNumber, rewardPercentiles []float64) (firstBlock rpc.BlockNumber, reward [][]*big.Int, baseFee []*big.Int, gasUsedRatio []float64, err error) {
func (b *LesApiBackend) FeeHistory(ctx context.Context, blockCount int, lastBlock rpc.BlockNumber, rewardPercentiles []float64) (firstBlock *big.Int, reward [][]*big.Int, baseFee []*big.Int, gasUsedRatio []float64, err error) {
return b.gpo.FeeHistory(ctx, blockCount, lastBlock, rewardPercentiles)
}

View File

@@ -322,8 +322,8 @@ func TestBadHeaderHashes(t *testing.T) {
var err error
headers := makeHeaderChainWithDiff(bc.genesisBlock, []int{1, 2, 4}, 10)
core.BadHashes[headers[2].Hash()] = true
if _, err = bc.InsertHeaderChain(headers, 1); !errors.Is(err, core.ErrBlacklistedHash) {
t.Errorf("error mismatch: have: %v, want %v", err, core.ErrBlacklistedHash)
if _, err = bc.InsertHeaderChain(headers, 1); !errors.Is(err, core.ErrBannedHash) {
t.Errorf("error mismatch: have: %v, want %v", err, core.ErrBannedHash)
}
}

View File

@@ -28,6 +28,11 @@ type Config struct {
InfluxDBUsername string `toml:",omitempty"`
InfluxDBPassword string `toml:",omitempty"`
InfluxDBTags string `toml:",omitempty"`
EnableInfluxDBV2 bool `toml:",omitempty"`
InfluxDBToken string `toml:",omitempty"`
InfluxDBBucket string `toml:",omitempty"`
InfluxDBOrganization string `toml:",omitempty"`
}
// DefaultConfig is the default config for metrics used in go-ethereum.
@@ -42,4 +47,10 @@ var DefaultConfig = Config{
InfluxDBUsername: "test",
InfluxDBPassword: "test",
InfluxDBTags: "host=localhost",
// InfluxDB v2 specific options
EnableInfluxDBV2: false,
InfluxDBToken: "test",
InfluxDBBucket: "geth",
InfluxDBOrganization: "geth",
}

View File
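
A possible way to drive the new fields from Go, assuming the surrounding Config struct also carries the Enabled and InfluxDBEndpoint fields that are not shown in this hunk; the endpoint and token values below are placeholders.

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/metrics"
)

func main() {
	// Start from the package defaults and switch on the v2 path.
	cfg := metrics.DefaultConfig
	cfg.Enabled = true
	cfg.EnableInfluxDBV2 = true
	cfg.InfluxDBEndpoint = "http://localhost:8086" // placeholder endpoint
	cfg.InfluxDBToken = "my-v2-api-token"          // placeholder token
	cfg.InfluxDBBucket = "geth"
	cfg.InfluxDBOrganization = "geth"

	fmt.Printf("%+v\n", cfg)
}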

@@ -0,0 +1,223 @@
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package influxdb
import (
"context"
"fmt"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
influxdb2 "github.com/influxdata/influxdb-client-go/v2"
"github.com/influxdata/influxdb-client-go/v2/api"
)
type v2Reporter struct {
reg metrics.Registry
interval time.Duration
endpoint string
token string
bucket string
organization string
namespace string
tags map[string]string
client influxdb2.Client
write api.WriteAPI
cache map[string]int64
}
// InfluxDBV2WithTags starts an InfluxDB v2 reporter which will post metrics from the given metrics.Registry at each d interval with the specified tags
func InfluxDBV2WithTags(r metrics.Registry, d time.Duration, endpoint string, token string, bucket string, organization string, namespace string, tags map[string]string) {
rep := &v2Reporter{
reg: r,
interval: d,
endpoint: endpoint,
token: token,
bucket: bucket,
organization: organization,
namespace: namespace,
tags: tags,
cache: make(map[string]int64),
}
rep.client = influxdb2.NewClient(rep.endpoint, rep.token)
defer rep.client.Close()
// async write client
rep.write = rep.client.WriteAPI(rep.organization, rep.bucket)
errorsCh := rep.write.Errors()
// Write errors have to be drained in a separate goroutine because the errors channel is unbuffered and would otherwise block writes once an error occurs
go func() {
for err := range errorsCh {
log.Warn("write error", "err", err.Error())
}
}()
rep.run()
}
func (r *v2Reporter) run() {
intervalTicker := time.Tick(r.interval)
pingTicker := time.Tick(time.Second * 5)
for {
select {
case <-intervalTicker:
r.send()
case <-pingTicker:
_, err := r.client.Health(context.Background())
if err != nil {
log.Warn("Got error from influxdb client health check", "err", err.Error())
}
}
}
}
func (r *v2Reporter) send() {
r.reg.Each(func(name string, i interface{}) {
now := time.Now()
namespace := r.namespace
switch metric := i.(type) {
case metrics.Counter:
v := metric.Count()
l := r.cache[name]
measurement := fmt.Sprintf("%s%s.count", namespace, name)
fields := map[string]interface{}{
"value": v - l,
}
pt := influxdb2.NewPoint(measurement, r.tags, fields, now)
r.write.WritePoint(pt)
r.cache[name] = v
case metrics.Gauge:
ms := metric.Snapshot()
measurement := fmt.Sprintf("%s%s.gauge", namespace, name)
fields := map[string]interface{}{
"value": ms.Value(),
}
pt := influxdb2.NewPoint(measurement, r.tags, fields, now)
r.write.WritePoint(pt)
case metrics.GaugeFloat64:
ms := metric.Snapshot()
measurement := fmt.Sprintf("%s%s.gauge", namespace, name)
fields := map[string]interface{}{
"value": ms.Value(),
}
pt := influxdb2.NewPoint(measurement, r.tags, fields, now)
r.write.WritePoint(pt)
case metrics.Histogram:
ms := metric.Snapshot()
if ms.Count() > 0 {
ps := ms.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999, 0.9999})
measurement := fmt.Sprintf("%s%s.histogram", namespace, name)
fields := map[string]interface{}{
"count": ms.Count(),
"max": ms.Max(),
"mean": ms.Mean(),
"min": ms.Min(),
"stddev": ms.StdDev(),
"variance": ms.Variance(),
"p50": ps[0],
"p75": ps[1],
"p95": ps[2],
"p99": ps[3],
"p999": ps[4],
"p9999": ps[5],
}
pt := influxdb2.NewPoint(measurement, r.tags, fields, now)
r.write.WritePoint(pt)
}
case metrics.Meter:
ms := metric.Snapshot()
measurement := fmt.Sprintf("%s%s.meter", namespace, name)
fields := map[string]interface{}{
"count": ms.Count(),
"m1": ms.Rate1(),
"m5": ms.Rate5(),
"m15": ms.Rate15(),
"mean": ms.RateMean(),
}
pt := influxdb2.NewPoint(measurement, r.tags, fields, now)
r.write.WritePoint(pt)
case metrics.Timer:
ms := metric.Snapshot()
ps := ms.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999, 0.9999})
measurement := fmt.Sprintf("%s%s.timer", namespace, name)
fields := map[string]interface{}{
"count": ms.Count(),
"max": ms.Max(),
"mean": ms.Mean(),
"min": ms.Min(),
"stddev": ms.StdDev(),
"variance": ms.Variance(),
"p50": ps[0],
"p75": ps[1],
"p95": ps[2],
"p99": ps[3],
"p999": ps[4],
"p9999": ps[5],
"m1": ms.Rate1(),
"m5": ms.Rate5(),
"m15": ms.Rate15(),
"meanrate": ms.RateMean(),
}
pt := influxdb2.NewPoint(measurement, r.tags, fields, now)
r.write.WritePoint(pt)
case metrics.ResettingTimer:
t := metric.Snapshot()
if len(t.Values()) > 0 {
ps := t.Percentiles([]float64{50, 95, 99})
val := t.Values()
measurement := fmt.Sprintf("%s%s.span", namespace, name)
fields := map[string]interface{}{
"count": len(val),
"max": val[len(val)-1],
"mean": t.Mean(),
"min": val[0],
"p50": ps[0],
"p95": ps[1],
"p99": ps[2],
}
pt := influxdb2.NewPoint(measurement, r.tags, fields, now)
r.write.WritePoint(pt)
}
}
})
// Force all unwritten data to be sent
r.write.Flush()
}

View File
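
A rough usage sketch of the new reporter entry point. Since InfluxDBV2WithTags blocks in its run loop, it is started in a goroutine here; the metric names, endpoint, token, bucket and organization are placeholder values, and the package-level metrics.Enabled switch is assumed to be settable as shown.

package main

import (
	"time"

	"github.com/ethereum/go-ethereum/metrics"
	"github.com/ethereum/go-ethereum/metrics/influxdb"
)

func main() {
	// Constructors return no-op metrics unless the package switch is on.
	metrics.Enabled = true

	// Register a couple of metrics on the default registry.
	counter := metrics.NewRegisteredCounter("example/requests", metrics.DefaultRegistry)
	gauge := metrics.NewRegisteredGauge("example/depth", metrics.DefaultRegistry)
	counter.Inc(1)
	gauge.Update(42)

	// The reporter never returns (its run loop blocks), so start it in a
	// goroutine. Endpoint, token, bucket and organization are placeholders.
	go influxdb.InfluxDBV2WithTags(
		metrics.DefaultRegistry,
		10*time.Second,
		"http://localhost:8086",
		"my-v2-api-token",
		"geth",  // bucket
		"geth",  // organization
		"geth.", // namespace prefix
		map[string]string{"host": "localhost"},
	)

	time.Sleep(30 * time.Second) // let a few reporting intervals pass
}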

@@ -242,7 +242,6 @@ func makeMiner(genesis *core.Genesis) (*node.Node, *eth.Ethereum, error) {
GPO: ethconfig.Defaults.GPO,
Ethash: ethconfig.Defaults.Ethash,
Miner: miner.Config{
GasFloor: genesis.GasLimit * 9 / 10,
GasCeil: genesis.GasLimit * 11 / 10,
GasPrice: big.NewInt(1),
Recommit: time.Second,

View File

@@ -193,7 +193,6 @@ func makeSealer(genesis *core.Genesis) (*node.Node, *eth.Ethereum, error) {
TxPool: core.DefaultTxPoolConfig,
GPO: ethconfig.Defaults.GPO,
Miner: miner.Config{
GasFloor: genesis.GasLimit * 9 / 10,
GasCeil: genesis.GasLimit * 11 / 10,
GasPrice: big.NewInt(1),
Recommit: time.Second,

View File

@@ -171,7 +171,6 @@ func makeMiner(genesis *core.Genesis) (*node.Node, *eth.Ethereum, error) {
GPO: ethconfig.Defaults.GPO,
Ethash: ethconfig.Defaults.Ethash,
Miner: miner.Config{
GasFloor: genesis.GasLimit * 9 / 10,
GasCeil: genesis.GasLimit * 11 / 10,
GasPrice: big.NewInt(1),
Recommit: time.Second,

View File

@@ -897,19 +897,17 @@ func (w *worker) commitNewWork(interrupt *int32, noempty bool, timestamp int64)
header := &types.Header{
ParentHash: parent.Hash(),
Number: num.Add(num, common.Big1),
GasLimit: core.CalcGasLimit(parent.GasUsed(), parent.GasLimit(), w.config.GasFloor, w.config.GasCeil),
GasLimit: core.CalcGasLimit(parent.GasLimit(), w.config.GasCeil),
Extra: w.extra,
Time: uint64(timestamp),
}
// Set baseFee and GasLimit if we are on an EIP-1559 chain
if w.chainConfig.IsLondon(header.Number) {
header.BaseFee = misc.CalcBaseFee(w.chainConfig, parent.Header())
parentGasLimit := parent.GasLimit()
if !w.chainConfig.IsLondon(parent.Number()) {
// Bump by 2x
parentGasLimit = parent.GasLimit() * params.ElasticityMultiplier
parentGasLimit := parent.GasLimit() * params.ElasticityMultiplier
header.GasLimit = core.CalcGasLimit(parentGasLimit, w.config.GasCeil)
}
header.GasLimit = core.CalcGasLimit1559(parentGasLimit, w.config.GasCeil)
}
// Only set the coinbase if our consensus engine is running (avoid spurious block rewards)
if w.isRunning() {

View File
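
To make the transition-block handling above concrete, here is a small sketch with made-up numbers: when the parent block is still pre-London, its gas limit is doubled by params.ElasticityMultiplier before being handed to core.CalcGasLimit together with the configured ceiling (w.config.GasCeil in the worker).

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/params"
)

func main() {
	parentGasLimit := uint64(15_000_000) // hypothetical last pre-London block
	gasCeil := uint64(30_000_000)        // hypothetical configured ceiling

	// At the transition the parent limit is bumped by the elasticity
	// multiplier (2x) before the usual per-block adjustment is applied.
	bumped := parentGasLimit * params.ElasticityMultiplier
	fmt.Println(core.CalcGasLimit(bumped, gasCeil))
}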

@@ -67,7 +67,6 @@ var (
testConfig = &Config{
Recommit: time.Second,
GasFloor: params.GenesisGasLimit,
GasCeil: params.GenesisGasLimit,
}
)

View File

@@ -292,19 +292,6 @@ func (tx *Transaction) GetNonce() int64 { return int64(tx.tx.Nonce()) }
func (tx *Transaction) GetHash() *Hash { return &Hash{tx.tx.Hash()} }
func (tx *Transaction) GetCost() *BigInt { return &BigInt{tx.tx.Cost()} }
// Deprecated: GetSigHash cannot know which signer to use.
func (tx *Transaction) GetSigHash() *Hash { return &Hash{types.HomesteadSigner{}.Hash(tx.tx)} }
// Deprecated: use EthereumClient.TransactionSender
func (tx *Transaction) GetFrom(chainID *BigInt) (address *Address, _ error) {
var signer types.Signer = types.HomesteadSigner{}
if chainID != nil {
signer = types.NewEIP155Signer(chainID.bigint)
}
from, err := types.Sender(signer, tx.tx)
return &Address{from}, err
}
func (tx *Transaction) GetTo() *Address {
if to := tx.tx.To(); to != nil {
return &Address{*to}

View File

@@ -251,7 +251,7 @@ func (h *httpServer) doStop() {
// Shut down the server.
httpHandler := h.httpHandler.Load().(*rpcHandler)
wsHandler := h.httpHandler.Load().(*rpcHandler)
wsHandler := h.wsHandler.Load().(*rpcHandler)
if httpHandler != nil {
h.httpHandler.Store((*rpcHandler)(nil))
httpHandler.server.Stop()
@@ -280,7 +280,7 @@ func (h *httpServer) enableRPC(apis []rpc.API, config httpConfig) error {
// Create RPC server and handler.
srv := rpc.NewServer()
if err := RegisterApisFromWhitelist(apis, config.Modules, srv, false); err != nil {
if err := RegisterApis(apis, config.Modules, srv, false); err != nil {
return err
}
h.httpConfig = config
@@ -312,7 +312,7 @@ func (h *httpServer) enableWS(apis []rpc.API, config wsConfig) error {
// Create RPC server and handler.
srv := rpc.NewServer()
if err := RegisterApisFromWhitelist(apis, config.Modules, srv, false); err != nil {
if err := RegisterApis(apis, config.Modules, srv, false); err != nil {
return err
}
h.wsConfig = config
@@ -515,20 +515,20 @@ func (is *ipcServer) stop() error {
return err
}
// RegisterApisFromWhitelist checks the given modules' availability, generates a whitelist based on the allowed modules,
// RegisterApis checks the given modules' availability, generates an allowlist based on the allowed modules,
// and then registers all of the APIs exposed by the services.
func RegisterApisFromWhitelist(apis []rpc.API, modules []string, srv *rpc.Server, exposeAll bool) error {
func RegisterApis(apis []rpc.API, modules []string, srv *rpc.Server, exposeAll bool) error {
if bad, available := checkModuleAvailability(modules, apis); len(bad) > 0 {
log.Error("Unavailable modules in HTTP API list", "unavailable", bad, "available", available)
}
// Generate the whitelist based on the allowed modules
whitelist := make(map[string]bool)
// Generate the allow list based on the allowed modules
allowList := make(map[string]bool)
for _, module := range modules {
whitelist[module] = true
allowList[module] = true
}
// Register all the APIs exposed by the services
for _, api := range apis {
if exposeAll || whitelist[api.Namespace] || (len(whitelist) == 0 && api.Public) {
if exposeAll || allowList[api.Namespace] || (len(allowList) == 0 && api.Public) {
if err := srv.RegisterName(api.Namespace, api.Service); err != nil {
return err
}

View File
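
A hypothetical caller of the renamed helper, assumed to live in package node next to RegisterApis; PingService and registerExample are illustration-only names. With exposeAll set to false, only the namespaces on the allow list ("eth" here) end up registered, while "admin" is filtered out.

package node

import "github.com/ethereum/go-ethereum/rpc"

// PingService is a placeholder API implementation for this sketch.
type PingService struct{}

func (s *PingService) Ping() string { return "pong" }

// registerExample registers only the allow-listed namespaces on a fresh server.
func registerExample() error {
	srv := rpc.NewServer()
	apis := []rpc.API{
		{Namespace: "eth", Version: "1.0", Service: new(PingService), Public: true},
		{Namespace: "admin", Version: "1.0", Service: new(PingService)},
	}
	return RegisterApis(apis, []string{"eth"}, srv, false)
}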

@@ -77,7 +77,7 @@ var (
errAlreadyDialing = errors.New("already dialing")
errAlreadyConnected = errors.New("already connected")
errRecentlyDialed = errors.New("recently dialed")
errNotWhitelisted = errors.New("not contained in netrestrict whitelist")
errNetRestrict = errors.New("not contained in netrestrict list")
errNoPort = errors.New("node does not provide TCP port")
)
@@ -133,7 +133,7 @@ type dialConfig struct {
self enode.ID // our own ID
maxDialPeers int // maximum number of dialed peers
maxActiveDials int // maximum number of active dials
netRestrict *netutil.Netlist // IP whitelist, disabled if nil
netRestrict *netutil.Netlist // IP netrestrict list, disabled if nil
resolver nodeResolver
dialer NodeDialer
log log.Logger
@@ -402,7 +402,7 @@ func (d *dialScheduler) checkDial(n *enode.Node) error {
return errAlreadyConnected
}
if d.netRestrict != nil && !d.netRestrict.Contains(n.IP()) {
return errNotWhitelisted
return errNetRestrict
}
if d.history.contains(string(n.ID().Bytes())) {
return errRecentlyDialed

View File
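
The renamed error above is returned whenever a dial candidate falls outside the configured netrestrict list. Below is a small standalone sketch of how such a list is built and consulted via p2p/netutil; the CIDR ranges and addresses are arbitrary examples.

package main

import (
	"fmt"
	"net"

	"github.com/ethereum/go-ethereum/p2p/netutil"
)

func main() {
	// Build the same kind of list that dialConfig.netRestrict holds.
	restrict, err := netutil.ParseNetlist("10.0.0.0/8,192.168.0.0/16")
	if err != nil {
		panic(err)
	}
	for _, ip := range []string{"10.1.2.3", "8.8.8.8"} {
		if restrict.Contains(net.ParseIP(ip)) {
			fmt.Println(ip, "dialable")
		} else {
			fmt.Println(ip, "rejected: not contained in netrestrict list")
		}
	}
}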

@@ -41,7 +41,7 @@ type Config struct {
PrivateKey *ecdsa.PrivateKey
// These settings are optional:
NetRestrict *netutil.Netlist // network whitelist
NetRestrict *netutil.Netlist // list of allowed IP networks
Bootnodes []*enode.Node // list of bootstrap nodes
Unhandled chan<- ReadPacket // unhandled packets are sent on this channel
Log log.Logger // if set, log messages go here

View File

@@ -583,7 +583,7 @@ func (t *UDPv4) nodeFromRPC(sender *net.UDPAddr, rn v4wire.Node) (*node, error)
return nil, err
}
if t.netrestrict != nil && !t.netrestrict.Contains(rn.IP) {
return nil, errors.New("not contained in netrestrict whitelist")
return nil, errors.New("not contained in netrestrict list")
}
key, err := v4wire.DecodePubkey(crypto.S256(), rn.ID)
if err != nil {

View File

@@ -353,7 +353,7 @@ func (srv *Server) RemovePeer(node *enode.Node) {
}
}
// AddTrustedPeer adds the given node to a reserved whitelist which allows the
// AddTrustedPeer adds the given node to a reserved trusted list which allows the
// node to always connect, even if the peer slots are full.
func (srv *Server) AddTrustedPeer(node *enode.Node) {
select {
@@ -903,7 +903,7 @@ func (srv *Server) checkInboundConn(remoteIP net.IP) error {
}
// Reject connections that do not match NetRestrict.
if srv.NetRestrict != nil && !srv.NetRestrict.Contains(remoteIP) {
return fmt.Errorf("not whitelisted in NetRestrict")
return fmt.Errorf("not in netrestrict list")
}
// Reject Internet peers that try too often.
now := srv.clock.Now()

View File
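
A minimal sketch of the trusted-peer API touched above: a throwaway devp2p server is started and a peer is marked trusted so it can always connect even when the peer slots are full. The enode URL is the Goerli bootnode that appears elsewhere in this diff and is used purely as an example value; the server configuration is the bare minimum to start.

package main

import (
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

func main() {
	key, err := crypto.GenerateKey()
	if err != nil {
		panic(err)
	}
	// Minimal devp2p server; listen address and peer count are arbitrary.
	srv := &p2p.Server{Config: p2p.Config{
		PrivateKey:  key,
		MaxPeers:    10,
		NoDiscovery: true,
		ListenAddr:  ":0",
	}}
	if err := srv.Start(); err != nil {
		panic(err)
	}
	defer srv.Stop()

	// Mark an example peer as trusted so the slot limit no longer applies to it.
	node := enode.MustParseV4("enode://a59e33ccd2b3e52d578f1fbd70c6f9babda2650f0760d6ff3b37742fdcdfdb3defba5d56d315b40c46b70198c7621e63ffa3f987389c7118634b0fefbbdfa7fd@51.15.119.157:40303")
	srv.AddTrustedPeer(node)
}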

@@ -123,20 +123,12 @@ func probabilistic(net *Network, quit chan struct{}, nodeCount int) {
randWait := time.Duration(rand.Intn(5000)+1000) * time.Millisecond
rand1 := rand.Intn(nodeCount - 1)
rand2 := rand.Intn(nodeCount - 1)
if rand1 < rand2 {
if rand1 <= rand2 {
lowid = rand1
highid = rand2
} else if rand1 > rand2 {
highid = rand1
lowid = rand2
} else {
if rand1 == 0 {
rand2 = 9
} else if rand1 == 9 {
rand1 = 0
}
lowid = rand1
highid = rand2
}
var steps = highid - lowid
wg.Add(steps)

View File

@@ -67,12 +67,6 @@ var GoerliBootnodes = []string{
"enode://a59e33ccd2b3e52d578f1fbd70c6f9babda2650f0760d6ff3b37742fdcdfdb3defba5d56d315b40c46b70198c7621e63ffa3f987389c7118634b0fefbbdfa7fd@51.15.119.157:40303",
}
// CalaverasBootnodes are the enode URLs of the P2P bootstrap nodes running on the
// Calaveras ephemeral test network.
var CalaverasBootnodes = []string{
"enode://9e1096aa59862a6f164994cb5cb16f5124d6c992cdbf4535ff7dea43ea1512afe5448dca9df1b7ab0726129603f1a3336b631e4d7a1a44c94daddd03241587f9@3.9.20.133:30303",
}
var V5Bootnodes = []string{
// Teku team's bootnode
"enr:-KG4QOtcP9X1FbIMOe17QNMKqDxCpm14jcX5tiOE4_TyMrFqbmhPZHK_ZPG2Gxb1GE2xdtodOfx9-cgvNtxnRyHEmC0ghGV0aDKQ9aX9QgAAAAD__________4JpZIJ2NIJpcIQDE8KdiXNlY3AyNTZrMaEDhpehBDbZjM_L9ek699Y7vhUJ-eAdMyQW_Fil522Y0fODdGNwgiMog3VkcIIjKA",

View File

@@ -27,11 +27,10 @@ import (
// Genesis hashes to enforce below configs on.
var (
MainnetGenesisHash = common.HexToHash("0xd4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3")
RopstenGenesisHash = common.HexToHash("0x41941023680923e0fe4d74a34bdac8141f2540e3ae90623718e47d66d1ca4a2d")
RinkebyGenesisHash = common.HexToHash("0x6341fd3daf94b748c72ced5a5b26028f2474f5f00d824504e4fa37a75767e177")
GoerliGenesisHash = common.HexToHash("0xbf7e331f7f7c1dd2e05159666b3bf8bc7a8a3a9eb1d518969eab529dd9b88c1a")
CalaverasGenesisHash = common.HexToHash("0xeb9233d066c275efcdfed8037f4fc082770176aefdbcb7691c71da412a5670f2")
MainnetGenesisHash = common.HexToHash("0xd4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3")
RopstenGenesisHash = common.HexToHash("0x41941023680923e0fe4d74a34bdac8141f2540e3ae90623718e47d66d1ca4a2d")
RinkebyGenesisHash = common.HexToHash("0x6341fd3daf94b748c72ced5a5b26028f2474f5f00d824504e4fa37a75767e177")
GoerliGenesisHash = common.HexToHash("0xbf7e331f7f7c1dd2e05159666b3bf8bc7a8a3a9eb1d518969eab529dd9b88c1a")
)
// TrustedCheckpoints associates each known checkpoint with the genesis hash of
@@ -75,10 +74,10 @@ var (
// MainnetTrustedCheckpoint contains the light client trusted checkpoint for the main network.
MainnetTrustedCheckpoint = &TrustedCheckpoint{
SectionIndex: 389,
SectionHead: common.HexToHash("0x8f96e510cf64abf34095c5aa3937acdf5316de5540945b9688f4a2e083cddc73"),
CHTRoot: common.HexToHash("0xa2362493848d6dbc50dcbbf74c017ea808b8938bfb129217d507bd276950d7ac"),
BloomRoot: common.HexToHash("0x72fc78a841bde7e08e1fb7c187b622c49dc8271db12db748ff5d0f27bdb41413"),
SectionIndex: 395,
SectionHead: common.HexToHash("0xbfca95b8c1de014e252288e9c32029825fadbff58285f5b54556525e480dbb5b"),
CHTRoot: common.HexToHash("0x2ccf3dbb58eb6375e037fdd981ca5778359e4b8fa0270c2878b14361e64161e7"),
BloomRoot: common.HexToHash("0x2d46ec65a6941a2dc1e682f8f81f3d24192021f492fdf6ef0fdd51acb0f4ba0f"),
}
// MainnetCheckpointOracle contains a set of configs for the main network oracle.
@@ -116,10 +115,10 @@ var (
// RopstenTrustedCheckpoint contains the light client trusted checkpoint for the Ropsten test network.
RopstenTrustedCheckpoint = &TrustedCheckpoint{
SectionIndex: 322,
SectionHead: common.HexToHash("0xe3f2fb70acd752bbcac06b67688db8430815c788a31213011ed51b966108a5f4"),
CHTRoot: common.HexToHash("0xb2993a6bc28b23b84159cb477c38c0ec5607434faae6b3657ad44cbcf116f288"),
BloomRoot: common.HexToHash("0x871841e5c2ada9dab2011a550d38e9fe0a30047cfc81f1ffc7ebc09f4f230732"),
SectionIndex: 329,
SectionHead: common.HexToHash("0xe66f7038333a01fb95dc9ea03e5a2bdaf4b833cdcb9e393b9127e013bd64d39b"),
CHTRoot: common.HexToHash("0x1b0c883338ac0d032122800c155a2e73105fbfebfaa50436893282bc2d9feec5"),
BloomRoot: common.HexToHash("0x3cc98c88d283bf002378246f22c653007655cbcea6ed89f98d739f73bd341a01"),
}
// RopstenCheckpointOracle contains a set of configs for the Ropsten test network oracle.
@@ -160,10 +159,10 @@ var (
// RinkebyTrustedCheckpoint contains the light client trusted checkpoint for the Rinkeby test network.
RinkebyTrustedCheckpoint = &TrustedCheckpoint{
SectionIndex: 270,
SectionHead: common.HexToHash("0x03ef8982c93bbf18c859bc1b20ae05b439f04cf1ff592656e941d2c3fcff5d68"),
CHTRoot: common.HexToHash("0x9eb80685e8ece479e105b170439779bc0f89997ab7f4dee425f85c4234e8a6b5"),
BloomRoot: common.HexToHash("0xc3673721c5697efe5fe4cb825d178f4a335dbfeda6a197fb75c9256a767379dc"),
SectionIndex: 276,
SectionHead: common.HexToHash("0xea89a4b04e3da9bd688e316f8de669396b6d4a38a19d2cd96a00b70d58b836aa"),
CHTRoot: common.HexToHash("0xd6889d0bf6673c0d2c1cf6e9098a6fe5b30888a115b6112796aa8ee8efc4a723"),
BloomRoot: common.HexToHash("0x6009a9256b34b8bde3a3f094afb647ba5d73237546017b9025d64ac1ff54c47c"),
}
// RinkebyCheckpointOracle contains a set of configs for the Rinkeby test network oracle.
@@ -202,10 +201,10 @@ var (
// GoerliTrustedCheckpoint contains the light client trusted checkpoint for the Görli test network.
GoerliTrustedCheckpoint = &TrustedCheckpoint{
SectionIndex: 154,
SectionHead: common.HexToHash("0xf4cb74cc0e3683589f4992902184241fb892d7c3859d0044c16ec864605ff80d"),
CHTRoot: common.HexToHash("0xead95f9f2504b2c7c6d82c51d30e50b40631c3ea2f590cddcc9721cfc0ae79de"),
BloomRoot: common.HexToHash("0xc6dd6cfe88ac9c4a6d19c9a8651944fa9d941a2340a8f5ddaf673d4d39779d81"),
SectionIndex: 160,
SectionHead: common.HexToHash("0xb5a666c790dc35a5613d04ebba8ba47a850b45a15d9b95ad7745c35ae034b5a5"),
CHTRoot: common.HexToHash("0x6b4e00df52bdc38fa6c26c8ef595c2ad6184963ea36ab08ee744af460aa735e1"),
BloomRoot: common.HexToHash("0x8fa88f5e50190cb25243aeee262a1a9e4434a06f8d455885dcc1b5fc48c33836"),
}
// GoerliCheckpointOracle contains a set of configs for the Goerli test network oracle.
@@ -221,27 +220,6 @@ var (
Threshold: 2,
}
CalaverasChainConfig = &ChainConfig{
ChainID: big.NewInt(123),
HomesteadBlock: big.NewInt(0),
DAOForkBlock: nil,
DAOForkSupport: true,
EIP150Block: big.NewInt(0),
EIP155Block: big.NewInt(0),
EIP158Block: big.NewInt(0),
ByzantiumBlock: big.NewInt(0),
ConstantinopleBlock: big.NewInt(0),
PetersburgBlock: big.NewInt(0),
IstanbulBlock: big.NewInt(0),
MuirGlacierBlock: nil,
BerlinBlock: big.NewInt(0),
LondonBlock: big.NewInt(500),
Clique: &CliqueConfig{
Period: 30,
Epoch: 30000,
},
}
// AllEthashProtocolChanges contains every protocol change (EIPs) introduced
// and accepted by the Ethereum core developers into the Ethash consensus.
//

View File

@@ -21,10 +21,10 @@ import (
)
const (
VersionMajor = 1 // Major version component of the current release
VersionMinor = 10 // Minor version component of the current release
VersionPatch = 5 // Patch version component of the current release
VersionMeta = "stable" // Version metadata to append to the version string
VersionMajor = 1 // Major version component of the current release
VersionMinor = 10 // Minor version component of the current release
VersionPatch = 8 // Patch version component of the current release
VersionMeta = "unstable" // Version metadata to append to the version string
)
// Version holds the textual version string.

View File

@@ -21,6 +21,7 @@ import (
"encoding/json"
"fmt"
"math"
"strconv"
"strings"
"github.com/ethereum/go-ethereum/common"
@@ -191,3 +192,24 @@ func BlockNumberOrHashWithHash(hash common.Hash, canonical bool) BlockNumberOrHa
RequireCanonical: canonical,
}
}
// DecimalOrHex unmarshals a non-negative decimal or hex parameter into a uint64.
type DecimalOrHex uint64
// UnmarshalJSON implements json.Unmarshaler.
func (dh *DecimalOrHex) UnmarshalJSON(data []byte) error {
input := strings.TrimSpace(string(data))
if len(input) >= 2 && input[0] == '"' && input[len(input)-1] == '"' {
input = input[1 : len(input)-1]
}
value, err := strconv.ParseUint(input, 10, 64)
if err != nil {
value, err = hexutil.DecodeUint64(input)
}
if err != nil {
return err
}
*dh = DecimalOrHex(value)
return nil
}

View File
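
A short standalone sketch of the new DecimalOrHex type exported from package rpc by this hunk: quoted decimal, quoted hex and bare decimal inputs all decode to the same uint64.

package main

import (
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	for _, raw := range []string{`"100"`, `"0x64"`, `100`} {
		var dh rpc.DecimalOrHex
		if err := json.Unmarshal([]byte(raw), &dh); err != nil {
			fmt.Println(raw, "->", err)
			continue
		}
		fmt.Println(raw, "->", uint64(dh)) // all three decode to 100
	}
}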

@@ -96,7 +96,7 @@ func wsHandshakeValidator(allowedOrigins []string) func(*http.Request) bool {
if _, ok := req.Header["Origin"]; !ok {
return true
}
// Verify origin against whitelist.
// Verify origin against allow list.
origin := strings.ToLower(req.Header.Get("Origin"))
if allowAllOrigins || originIsAllowed(origins, origin) {
return true

Some files were not shown because too many files have changed in this diff.