[DOCS] massive documentation update (#20229)

This PR:

- reorganizes all documentation pages so they live in the right category
- removes lots of legacy docs
- contains many improvements to active documentation pages

Geth user documentation is now spread across five major categories:

- Install and Build: installation and compile instructions
- Using Geth: this is for pages about general geth usage.
- For dApp Developers: this is for programming guides and functionality specific
   to dapp development. All the dev guides for mobile framework and Go APIs live here.
- JSON-RPC APIs: this has its own section because there is now a sub-page for
   every namespace. I have also added an overview text that explains how to set
   up the API servers.
- For Geth Developers: this is for geth contributors
This commit is contained in:
Felix Lange
2019-11-05 13:46:00 +01:00
committed by GitHub
parent 86b20d897e
commit 7416b05b81
76 changed files with 3537 additions and 4680 deletions

View File

@ -1,34 +0,0 @@
---
title: Active go-ethereum projects
---
## Direction of development until the end of 2018
- Clef: move account management out of geth to clef
- Constantinople - Tools for testing
- Automate cross-client testing
- Progpow (ASIC-resistant PoW algorithm)
- Ethereum Node Report
- Topic discovery
- Build an end-to-end test system
- Simple API for LES-protocol
- Loadbalance tests using Swarm team's network simulator
- Test FlowControl subsystem rewrite
- Clients get more bandwidth with micro-payment
- Database IO reductions
- Historical state pruning
- Next gen sync algo (cross client)
- Blockscout for Puppeth
- Contract based signers for Clique (v1.5)
- Rinkeby - improve maintenance
- Concurrent tx execution experiment
- Dashboard
- Hive - Devp2p basic tests running as Hive simulation
- Hive - Devp2p network tests (different clients peering)
- Hive - Add all known client implementations
- Hive - Public metrics/test failures page
- DevP2P - Document protocols
- Hive - Further tests for networked consensus
- Discovery - Work with Felix to get ENR/next discovery out asap
- Countable trie experiment - For better sync statistics and further storage rent
- Finalize simple checkpoint syncing

View File

@ -1,6 +1,8 @@
---
title: Code Review Guidelines
sort_key: B
---
The only way to get code into go-ethereum is to send a pull request. Those pull requests
need to be reviewed by someone. This document is a guide that explains our expectations
around PRs for both authors and reviewers.

View File

@ -1,167 +0,0 @@
---
title: Cross-compiling Ethereum
---
**Note: All of these and much more have been merged into the project Makefile.
You can cross build via `make geth-<os>-<platform>` without needing to know any
of these details from below.**
Developers usually have a preferred platform that they feel most comfortable
working in, with all the necessary tools, libraries and environments set up for
an optimal workflow. However, there's often a need to build for either a different
CPU architecture or an entirely different operating system; but maintaining a
development environment for each and switching between them quickly becomes
unwieldy.
Here we present a very simple way to cross compile Ethereum to various operating
systems and architectures using a minimal set of prerequisites and a completely
containerized approach, guaranteeing that your development environment remains
clean even after the complex requirements and mechanisms of a cross compilation.
The currently supported target platforms are:
- ARMv7 Android and iOS
- 32 bit, 64 bit and ARMv5 Linux
- 32 bit and 64 bit Mac OSX
- 32 bit and 64 bit Windows
Please note that cross compilation does not replace a release build. Although the
resulting binaries can usually run perfectly on the desired platform, compiling
on a native system with the specialized tools provided by the official vendor
can often result in more finely optimized code.
## Cross compilation environment
Although the `go-ethereum` project is written in Go, it does include a bit of C
code shared between all implementations to ensure that all perform equally well,
including a dependency on the GNU Multiple Precision Arithmetic Library. Because
of this, Go cannot by itself compile to a different platform than the host. To
overcome this limitation, we will use [`xgo`](https://github.com/karalabe/xgo),
a Go cross compiler package based on Docker containers that has been architected
specifically to allow both embedded C snippets as well as simpler external C
dependencies during compilation.
The `xgo` project has two simple dependencies: Docker (to ensure that the build
environment is completely contained) and Go. On most platforms these should be
available from the official package repositories. For manually installing them,
please consult their install guides at [Docker](https://docs.docker.com/installation/)
and [Go](https://golang.org/doc/install) respectively. This guide assumes that these
two dependencies are met.
To install and/or update xgo, simply type:
$ go get -u github.com/karalabe/xgo
You can test whether `xgo` is functioning correctly by requesting it to cross
compile itself and verifying that all cross compilations succeed.
$ xgo github.com/karalabe/xgo
...
$ ls -al
-rwxr-xr-x 1 root root 2792436 Sep 14 16:45 xgo-android-21-arm
-rwxr-xr-x 1 root root 2353212 Sep 14 16:45 xgo-darwin-386
-rwxr-xr-x 1 root root 2906128 Sep 14 16:45 xgo-darwin-amd64
-rwxr-xr-x 1 root root 2388288 Sep 14 16:45 xgo-linux-386
-rwxr-xr-x 1 root root 2960560 Sep 14 16:45 xgo-linux-amd64
-rwxr-xr-x 1 root root 2437864 Sep 14 16:45 xgo-linux-arm
-rwxr-xr-x 1 root root 2551808 Sep 14 16:45 xgo-windows-386.exe
-rwxr-xr-x 1 root root 3130368 Sep 14 16:45 xgo-windows-amd64.exe
## Building Ethereum
Cross compiling Ethereum is analogous to the above example, but an additional
flag is required to satisfy the dependencies:
- `--deps` is used to inject arbitrary C dependency packages and pre-build them
Injecting the GNU Arithmetic Library dependency and selecting `geth` would be:
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
github.com/ethereum/go-ethereum/cmd/geth
...
$ ls -al
-rwxr-xr-x 1 root root 23213372 Sep 14 17:59 geth-android-21-arm
-rwxr-xr-x 1 root root 14373980 Sep 14 17:59 geth-darwin-386
-rwxr-xr-x 1 root root 17373676 Sep 14 17:59 geth-darwin-amd64
-rwxr-xr-x 1 root root 21098910 Sep 14 17:59 geth-linux-386
-rwxr-xr-x 1 root root 25049693 Sep 14 17:59 geth-linux-amd64
-rwxr-xr-x 1 root root 20578535 Sep 14 17:59 geth-linux-arm
-rwxr-xr-x 1 root root 16351260 Sep 14 17:59 geth-windows-386.exe
-rwxr-xr-x 1 root root 19418071 Sep 14 17:59 geth-windows-amd64.exe
As the cross compiler needs to build all the dependencies as well as the main
project itself for each platform, it may take a while for the build to complete
(approximately 3-4 minutes on a Core i7 3770K machine).
### Fine tuning the build
By default Go, and inherently `xgo`, checks out and tries to build the master
branch of a source repository. However, more often than not, you'll probably
want to build a different branch from possibly an entirely different remote
repository. These can be controlled via the `--remote` and `--branch` flags.
To build the `develop` branch of the official `go-ethereum` repository instead
of the default `master` branch, you just need to specify it as an additional
command line flag (`--branch`):
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
--branch=develop \
github.com/ethereum/go-ethereum/cmd/geth
Additionally, during development you will most probably want to not only build
a custom branch, but also one originating from your own fork of the repository
instead of the upstream one. This can be done via the `--remote` flag:
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
--remote=https://github.com/karalabe/go-ethereum \
--branch=rpi-staging \
github.com/ethereum/go-ethereum/cmd/geth
By default `xgo` builds binaries for all supported platforms and architectures,
with Android binaries defaulting to the highest released Android NDK platform.
To limit the build targets or compile to a different Android platform, use the
`--targets` CLI parameter.
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
--targets=android-16/arm,windows/* \
github.com/ethereum/go-ethereum/cmd/geth
### Building locally
If you would like to cross compile your local development version, simply specify
a local path (starting with `.` or `/`), and `xgo` will use all local code from
`GOPATH`, only downloading missing dependencies. In such a case, the
`--branch`, `--remote` and `--pkg` arguments are no-ops:
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
./cmd/geth
## Using the Makefile
Having understood the gist of `xgo` based cross compilation, you do not need to
actually memorize and maintain these commands, as they have been incorporated into
the official [Makefile](https://github.com/ethereum/go-ethereum/blob/master/Makefile)
and can be invoked with a trivial `make` request:
* `make geth-cross`: Cross compiles to every supported OS and architecture
* `make geth-<os>`: Cross compiles supported architectures of a particular OS (e.g. `linux`)
* `make geth-<os>-<arch>`: Cross compiles to a specific OS/architecture (e.g. `linux`, `arm`)
We advise using the `make` based commands as opposed to manually invoking `xgo`, as we
actively maintain the Makefile, whereas we cannot guarantee that this document will
always be updated with the latest changes.
### Tuning the cross builds
A few of the `xgo` build options have also been surfaced directly into the Makefile to
allow fine tuning builds to work around either upstream Go issues, or to enable some
fancier mechanics.
- `make ... GO=<go>`: Use a specific Go runtime (e.g. `1.5.1`, `1.5-develop`, `develop`)
- `make ... MODE=<mode>`: Build a specific target type (e.g. `exe`, `c-archive`).
Please note that these are not yet fully finalized, so they may or may not change in
the future as our code and the Go runtime features change.

View File

@ -1,15 +0,0 @@
---
title: Developer guide
---
### Native DApps
[Introduction and packages](Native:-Introduction)
[Account management](Native:-Account-management)
### Mobile platforms
[Introduction and packages](Mobile:-Introduction)
[Account management](Mobile:-Account-management)

View File

@ -1,441 +0,0 @@
---
title: Native DApps / Go bindings to Ethereum contracts
---
**[Please note, events are not yet implemented as they need some RPC subscription
features that are still under review.]**
The original roadmap and/or dream of the Ethereum platform was to provide a solid, high
performing client implementation of the consensus protocol in various languages, which
would provide an RPC interface for JavaScript DApps to communicate with, pushing towards
the direction of the Mist browser, through which users can interact with the blockchain.
Although this was a solid plan for mainstream adoption and does cover quite a lot of use
cases that people come up with (mostly where people manually interact with the blockchain),
it leaves out the server side (backend, fully automated, devops) use cases, where JavaScript
is usually not the language of choice given its dynamic nature.
This page introduces the concept of server side native DApps: Go language bindings to any
Ethereum contract that are compile-time type-safe, highly performant and, best of all, can
be generated fully automatically from a contract ABI and optionally the EVM bytecode.
*This page is written in a more beginner friendly tutorial style to make it easier for
people to start out with writing Go native DApps. The concepts used are introduced
gradually, as a developer would need or encounter them. However, we do assume the reader
is familiar with Ethereum in general, has a fair understanding of Solidity and can code
Go.*
## Token contract
To avoid falling into the fallacy of useless academic examples, we're going to take the
official Token contract as the base for introducing the Go
native bindings. If you're unfamiliar with the contract, skimming the linked page should
probably be enough; the details aren't relevant for now. *In short, the contract implements
a custom token that can be deployed on top of Ethereum.* To make sure this tutorial doesn't
go stale if the linked website changes, the Solidity source code of the Token contract is
also available at [`token.sol`](https://gist.github.com/karalabe/08f4b780e01c8452d989).
### Go binding generator
Interacting with a contract on the Ethereum blockchain from Go (or any other language, for
that matter) is already possible via the RPC interfaces exposed by Ethereum clients.
However, writing the boilerplate code that translates decent Go language constructs into
RPC calls and back is extremely time consuming and also extremely brittle: implementation
bugs can only be detected during runtime and it's almost impossible to evolve a contract
as even a tiny change in Solidity can be painful to port over to Go.
To avoid all this mess, the go-ethereum implementation introduces a source code generator
that can convert Ethereum ABI definitions into easy to use, type-safe Go packages. Assuming
you have a valid Go development environment set up, `godep` installed and the go-ethereum
repository checked out correctly, you can build the generator with:
```
$ cd $GOPATH/src/github.com/ethereum/go-ethereum
$ godep go install ./cmd/abigen
```
### Generating the bindings
The single essential thing needed to generate a Go binding to an Ethereum contract is the
contract's ABI definition `JSON` file. For our `Token` contract tutorial you can obtain this
either by compiling the Solidity code yourself (e.g. via @chriseth's [online Solidity compiler](https://chriseth.github.io/browser-solidity/)), or you can download our pre-compiled [`token.abi`](https://gist.github.com/karalabe/b8dfdb6d301660f56c1b).
To generate a binding, simply call:
```
$ abigen --abi token.abi --pkg main --type Token --out token.go
```
Where the flags are:
* `--abi`: Mandatory path to the contract ABI to bind to
* `--pkg`: Mandatory Go package name to place the Go code into
* `--type`: Optional Go type name to assign to the binding struct
* `--out`: Optional output path for the generated Go source file (not set = stdout)
This will generate a type-safe Go binding for the Token contract. The generated code will
look something like [`token.go`](https://gist.github.com/karalabe/5839509295afa4f7e2215bc4116c7a8f),
but please generate your own as this will change as more work is put into the generator.
### Accessing an Ethereum contract
To interact with a contract deployed on the blockchain, you'll need to know the `address`
of the contract itself, and need to specify a `backend` through which to access Ethereum.
The binding generator provides an RPC backend out of the box, through which you can attach
to an existing Ethereum node via IPC, HTTP or WebSockets.
We'll use the foundation's Unicorn token contract deployed
on the testnet to demonstrate calling contract methods. It is deployed at the address
`0x21e6fc92f93c8a1bb41e2be64b4e1f88a54d3576`.
To run the snippet below, please ensure a Geth instance is running and attached to the
Morden test network where the above mentioned contract was deployed. Also please update
the path to the IPC socket below to the one reported by your own local Geth node.
```go
package main
import (
"fmt"
"log"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/ethclient"
)
func main() {
// Create an IPC based RPC connection to a remote node
conn, err := ethclient.Dial("/home/karalabe/.ethereum/testnet/geth.ipc")
if err != nil {
log.Fatalf("Failed to connect to the Ethereum client: %v", err)
}
// Instantiate the contract and display its name
token, err := NewToken(common.HexToAddress("0x21e6fc92f93c8a1bb41e2be64b4e1f88a54d3576"), conn)
if err != nil {
log.Fatalf("Failed to instantiate a Token contract: %v", err)
}
name, err := token.Name(nil)
if err != nil {
log.Fatalf("Failed to retrieve token name: %v", err)
}
fmt.Println("Token name:", name)
}
```
And the output (yay):
```
Token name: Testnet Unicorn
```
If you look at the method invoked to read the token name, `token.Name(nil)`, you'll see it
requires a parameter to be passed, even though the original Solidity contract requires none.
This is a `*bind.CallOpts` type, which can be used to fine tune the call.
* `Pending`: Whether to access pending contract state or the current stable one
* `GasLimit`: Place a limit on the computing resources the call might consume
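For illustration, here is a minimal sketch of passing explicit call options instead of `nil`, continuing the snippet above (the `token`, `fmt` and `log` identifiers and the `bind` package import are assumed from that example):

```go
// Read the token name from the pending state rather than the latest block.
opts := &bind.CallOpts{Pending: true}

name, err := token.Name(opts)
if err != nil {
	log.Fatalf("Failed to retrieve pending token name: %v", err)
}
fmt.Println("Pending token name:", name)
```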
### Transacting with an Ethereum contract
Invoking a method that changes contract state (i.e. transacting) is a bit more involved,
as a live transaction needs to be authorized and broadcast into the network. **As opposed
to the conventional way of storing accounts and keys in the node we attach to, Go bindings
require transactions to be signed locally and do not delegate this to a remote node.** This
is done to support the general direction of the Ethereum community, where accounts are
kept private to DApps and not shared between them by default.
Thus to allow transacting with a contract, your code needs to implement a method that
given an input transaction, signs it and returns an authorized output transaction. Since
most users have their keys in the [Web3 Secret Storage](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) format, the `bind` package contains a small utility method
(`bind.NewTransactor(keyjson, passphrase)`) that can create an authorized transactor from
a key file and the associated password, without the user needing to implement key signing themselves.
Changing the previous code snippet to send one unicorn to the zero address:
```go
package main
import (
"fmt"
"log"
"math/big"
"strings"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/ethclient"
)
const key = `paste the contents of your *testnet* key json here`
func main() {
// Create an IPC based RPC connection to a remote node and instantiate a contract binding
conn, err := ethclient.Dial("/home/karalabe/.ethereum/testnet/geth.ipc")
if err != nil {
log.Fatalf("Failed to connect to the Ethereum client: %v", err)
}
token, err := NewToken(common.HexToAddress("0x21e6fc92f93c8a1bb41e2be64b4e1f88a54d3576"), conn)
if err != nil {
log.Fatalf("Failed to instantiate a Token contract: %v", err)
}
// Create an authorized transactor and spend 1 unicorn
auth, err := bind.NewTransactor(strings.NewReader(key), "my awesome super secret password")
if err != nil {
log.Fatalf("Failed to create authorized transactor: %v", err)
}
tx, err := token.Transfer(auth, common.HexToAddress("0x0000000000000000000000000000000000000000"), big.NewInt(1))
if err != nil {
log.Fatalf("Failed to request token transfer: %v", err)
}
fmt.Printf("Transfer pending: 0x%x\n", tx.Hash())
}
```
And the output (yay):
```
Transfer pending: 0x4f4aaeb29ed48e88dd653a81f0b05d4df64a86c99d4e83b5bfeb0f0006b0e55b
```
*Note, with high probability you won't have any testnet unicorns available to spend, so the
above program will fail with an error. Send at least 2.014 testnet(!) Ethers to the foundation
testnet tipjar `0xDf7D0030bfed998Db43288C190b63470c2d18F50` to receive a unicorn token and
you'll be able to see the above code run without an error!*
Similar to the method invocations in the previous section which only read contract state,
transacting methods also require a mandatory first parameter, a `*bind.TransactOpts` type,
which authorizes the transaction and potentially fine tunes it:
* `From`: Address of the account to invoke the method with (mandatory)
* `Signer`: Method to sign a transaction locally before broadcasting it (mandatory)
* `Nonce`: Account nonce to use for the transaction ordering (optional)
* `GasLimit`: Place a limit on the computing resources the call might consume (optional)
* `GasPrice`: Explicitly set the gas price to run the transaction with (optional)
* `Value`: Any funds to transfer along with the method call (optional)
The two mandatory fields are automatically set by the `bind` package if the auth options are
constructed using `bind.NewTransactor`. The nonce and gas related fields are automatically
derived by the binding if they are not set. An unset value is assumed to be zero.
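As a minimal sketch of tuning the optional fields, continuing the transfer example above and assuming the `*big.Int` based options shown in this guide (the concrete values are illustrative only):

```go
// Override the automatically derived fields before sending the transfer.
auth.Nonce = big.NewInt(42)             // explicit account nonce (illustrative value)
auth.GasLimit = big.NewInt(100000)      // cap the gas the transaction may consume
auth.GasPrice = big.NewInt(20000000000) // 20 gwei instead of the node suggested price
auth.Value = big.NewInt(0)              // token transfers carry no Ether

tx, err := token.Transfer(auth, common.HexToAddress("0x0000000000000000000000000000000000000000"), big.NewInt(1))
if err != nil {
	log.Fatalf("Failed to request token transfer: %v", err)
}
fmt.Println("Transfer pending:", tx.Hash().Hex())
```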
### Pre-configured contract sessions
As mentioned in the previous two sections, both reading as well as state modifying contract
calls require a mandatory first parameter which can both authorize as well as fine tune some
of the internal parameters. However, most of the time we want to use the same parameters and
issue transactions with the same account, so always constructing the call/transact options or
passing them along with the binding can become unwieldy.
To avoid these scenarios, the generator also creates specialized wrappers that can be
pre-configured with tuning and authorization parameters, allowing all the Solidity defined
methods to be invoked without needing an extra parameter.
These are named analogously to the original contract type name, just suffixed with `Session`:
```go
// Wrap the Token contract instance into a session
session := &TokenSession{
Contract: token,
CallOpts: bind.CallOpts{
Pending: true,
},
TransactOpts: bind.TransactOpts{
From: auth.From,
Signer: auth.Signer,
GasLimit: big.NewInt(3141592),
},
}
// Call the previous methods without the option parameters
session.Name()
session.Transfer("0x0000000000000000000000000000000000000000"), big.NewInt(1))
```
### Deploying contracts to Ethereum
Interacting with existing contracts is nice, but let's take it up a notch and deploy
a brand new contract onto the Ethereum blockchain! To do so however, the contract ABI
we used to generate the binding is not enough. We need the compiled bytecode too to
allow deploying it.
To get the bytecode, either go back to the online compiler with which you may generate it,
or alternatively download our [`token.bin`](https://gist.github.com/karalabe/026548f6a5f5f97b54de).
You'll need to rerun the Go generator with the bytecode included for it to create deploy
code too:
```
$ abigen --abi token.abi --pkg main --type Token --out token.go --bin token.bin
```
This will generate something similar to [`token.go`](https://gist.github.com/karalabe/2153b087c1f80f651fd87dd4c439fac4).
If you quickly skim this file, you'll find an extra `DeployToken` function that was just
injected compared to the previous code. Besides all the parameters specified by Solidity,
it also needs the usual authorization options to deploy the contract with and the Ethereum
backend to deploy the contract through.
Putting it all together would result in:
```go
package main
import (
"fmt"
"log"
"math/big"
"strings"
"time"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/ethclient"
)
const key = `paste the contents of your *testnet* key json here`
func main() {
// Create an IPC based RPC connection to a remote node and an authorized transactor
conn, err := ethclient.Dial("/home/karalabe/.ethereum/testnet/geth.ipc")
if err != nil {
log.Fatalf("Failed to connect to the Ethereum client: %v", err)
}
auth, err := bind.NewTransactor(strings.NewReader(key), "my awesome super secret password")
if err != nil {
log.Fatalf("Failed to create authorized transactor: %v", err)
}
// Deploy a new awesome contract for the binding demo
address, tx, token, err := DeployToken(auth, conn, new(big.Int), "Contracts in Go!!!", 0, "Go!")
if err != nil {
log.Fatalf("Failed to deploy new token contract: %v", err)
}
fmt.Printf("Contract pending deploy: 0x%x\n", address)
fmt.Printf("Transaction waiting to be mined: 0x%x\n\n", tx.Hash())
// Don't even wait, check its presence in the local pending state
time.Sleep(250 * time.Millisecond) // Allow it to be processed by the local node :P
name, err := token.Name(&bind.CallOpts{Pending: true})
if err != nil {
log.Fatalf("Failed to retrieve pending name: %v", err)
}
fmt.Println("Pending name:", name)
}
```
And the code performs as expected: it requests the creation of a brand new Token contract
on the Ethereum blockchain, which we can either wait for to be mined or as in the above code
start calling methods on it in the pending state :)
```
Contract pending deploy: 0x46506d900559ad005feb4645dcbb2dbbf65e19cc
Transaction waiting to be mined: 0x6a81231874edd2461879b7280ddde1a857162a744e3658ca7ec276984802183b
Pending name: Contracts in Go!!!
```
## Bind Solidity directly
If you've followed the tutorial along until this point you've probably realized that
every contract modification needs to be recompiled, the produced ABIs and bytecodes
(especially if you need multiple contracts) individually saved to files and then the
binding executed for them. This can become quite bothersome after the Nth iteration,
so the `abigen` command supports binding from Solidity source files directly (`--sol`),
which first compiles the source code (via `--solc`, defaulting to `solc`) into its
constituent components and binds using that.
Binding the official Token contract [`token.sol`](https://gist.github.com/karalabe/08f4b780e01c8452d989)
would then entail running:
```
$ abigen --sol token.sol --pkg main --out token.go
```
*Note: Building from Solidity (`--sol`) is mutually exclusive with individually setting
the bind components (`--abi`, `--bin` and `--type`), as all of them are extracted from
the Solidity code and produced build results directly.*
Building a contract directly from Solidity has the nice side effect that all contracts
contained within a Solidity source file are built and bound, so if your file contains many
contract sources, each and every one of them will be available from Go code. The sample
Token solidity file results in [`token.go`](https://gist.github.com/karalabe/c22aab73194ba7da834ab5b379621031).
### Project integration (i.e. `go generate`)
The `abigen` command was made in such a way as to play beautifully together with existing
Go toolchains: instead of having to remember the exact command needed to bind an Ethereum
contract into a Go project, we can leverage `go generate` to remember all the nitty-gritty
details.
Place the binding generation command into a Go source file before the package definition:
```
//go:generate abigen --sol token.sol --pkg main --out token.go
```
After which whenever the Solidity contract is modified, instead of needing to remember and
run the above command, we can simply call `go generate` on the package (or even the entire
source tree via `go generate ./...`), and it will correctly generate the new bindings for us.
## Blockchain simulator
Being able to deploy and access already deployed Ethereum contracts from within native Go
code is an extremely powerful feature, but there is one facet of developing native code
that not even the testnet lends itself well to: *automatic unit testing*. Using go-ethereum
internal constructs it's possible to create test chains and verify them, but it is unfeasible
to do high level contract testing with such low level mechanisms.
To sort out this last issue that would make it hard to run (and test) native DApps, we've also
implemented a *simulated blockchain*, that can be set as a backend to native contracts the same
way as a live RPC backend could be: `backends.NewSimulatedBackend(genesisAccounts)`.
```go
package main
import (
"fmt"
"log"
"math/big"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/crypto"
)
func main() {
// Generate a new random account and a funded simulator
key, _ := crypto.GenerateKey()
auth := bind.NewKeyedTransactor(key)
sim := backends.NewSimulatedBackend(core.GenesisAccount{Address: auth.From, Balance: big.NewInt(10000000000)})
// Deploy a token contract on the simulated blockchain
_, _, token, err := DeployMyToken(auth, sim, new(big.Int), "Simulated blockchain tokens", 0, "SBT")
if err != nil {
log.Fatalf("Failed to deploy new token contract: %v", err)
}
// Print the current (non existent) and pending name of the contract
name, _ := token.Name(nil)
fmt.Println("Pre-mining name:", name)
name, _ = token.Name(&bind.CallOpts{Pending: true})
fmt.Println("Pre-mining pending name:", name)
// Commit all pending transactions in the simulator and print the names again
sim.Commit()
name, _ = token.Name(nil)
fmt.Println("Post-mining name:", name)
name, _ = token.Name(&bind.CallOpts{Pending: true})
fmt.Println("Post-mining pending name:", name)
}
```
And the output (yay):
```
Pre-mining name:
Pre-mining pending name: Simulated blockchain tokens
Post-mining name: Simulated blockchain tokens
Post-mining pending name: Simulated blockchain tokens
```
Note that we don't have to wait for a local private chain miner or testnet miner to
integrate the currently pending transactions. When we decide to mine the next block,
we simply `Commit()` the simulator.

View File

@ -1,91 +0,0 @@
---
title: Mobile Clients
---
**This page has been obsoleted. A new guide is in progress at [[Mobile: Introduction]]**
---
*This page is meant to be a guide on using go-ethereum from mobile platforms. Since neither the mobile libraries nor the light client protocol is finalized, the content here will be sparse, with emphasis being put on how to get your hands dirty. As the APIs stabilize this section will be expanded accordingly.*
### Changelog
* 30th September, 2016: Create initial page, upload Android light bundle.
### Background
Before reading further, please skim through the slides of a Devcon2 talk: [Import Geth: Ethereum from Go and beyond](https://ethereum.karalabe.com/talks/2016-devcon.html), which introduces the basic concepts behind using go-ethereum as a library, and also showcases a few code snippets on how you can do various client side tasks, both on classical computing nodes as well as Android devices. A recording of the talk will be linked when available.
*Please note, the Android and iOS library bundles linked in the presentation will not be updated (for obvious posterity reasons), so always grab latest bundles from this page (until everything is merged into the proper build infrastructure).*
### Mobile bundles
You can download the latest bundles at:
* [Android (30th September, 2016)](https://bintray.com/karalabe/ethereum/download_file?file_path=geth.aar) - `SHA1: 753e334bf61fa519bec83bcb487179e36d58fc3a`
* iOS: *light client has not yet been bundled*
### Android quickstart
We assume you are using Android Studio for your development. Please download the latest Android `.aar` bundle from above and import it into your Android Studio project via `File -> New -> New Module`. This will result in a `geth` sub-project inside your workspace. To use the library in your project, please modify your app's `build.gradle` file, adding a dependency to the Geth library:
```gradle
dependencies {
// All your previous dependencies
compile project(':geth')
}
```
To get your hands dirty, here's a code snippet that will
* Start up an in-process light node inside your Android application
* Display some initial info about your node
* Subscribe to new blocks and display them live as they arrive
<img src="http://i.imgur.com/LyTCCqg.png" width="512px" alt="Android in-process node"/>
```java
import org.ethereum.geth.*;
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
setTitle("Android In-Process Node");
final TextView textbox = (TextView) findViewById(R.id.textbox);
Context ctx = new Context();
try {
Node node = Geth.newNode(getFilesDir() + "/.ethereum", new NodeConfig());
node.start();
NodeInfo info = node.getNodeInfo();
textbox.append("My name: " + info.getName() + "\n");
textbox.append("My address: " + info.getListenerAddress() + "\n");
textbox.append("My protocols: " + info.getProtocols() + "\n\n");
EthereumClient ec = node.getEthereumClient();
textbox.append("Latest block: " + ec.getBlockByNumber(ctx, -1).getNumber() + ", syncing...\n");
NewHeadHandler handler = new NewHeadHandler() {
@Override public void onError(String error) { }
@Override public void onNewHead(final Header header) {
MainActivity.this.runOnUiThread(new Runnable() {
public void run() { textbox.append("#" + header.getNumber() + ": " + header.getHash().getHex().substring(0, 10) + ".\n"); }
});
}
};
ec.subscribeNewHead(ctx, handler, 16);
} catch (Exception e) {
e.printStackTrace();
}
}
}
```
#### Known quirks
* Many constructors (those that would throw exceptions) are of the form `Geth.newXXX()`, instead of simply the Java style `new XXX()`. This is an upstream limitation of the [gomobile](https://github.com/golang/mobile) project, one which is currently being worked on to resolve.
* There is no documentation attached to the Java library methods. This too is a limitation of the [gomobile](https://github.com/golang/mobile) project. We will try to propose a fix upstream to make our docs from the Go codebase available in Java.

View File

@ -1,161 +0,0 @@
---
title: Peer-to-peer
---
The peer to peer package ([go-ethereum/p2p](https://github.com/ethereum/go-ethereum/tree/master/p2p)) allows you to rapidly and easily add peer to peer networking to any type of application. The p2p package is set up in a modular structure, and extending it with your own additional sub-protocols is straightforward.
Starting the p2p service only requires you to set up a `p2p.Server{}` with a few settings:
```go
import "github.com/ethereum/go-ethereum/crypto"
import "github.com/ethereum/go-ethereum/p2p"
nodekey, _ := crypto.GenerateKey()
srv := p2p.Server{
MaxPeers: 10,
PrivateKey: nodekey,
Name: "my node name",
ListenAddr: ":30300",
Protocols: []p2p.Protocol{},
}
srv.Start()
```
If we wanted to extend the capabilities of our p2p server, we'd need to pass it an additional sub-protocol in the `Protocols: []p2p.Protocol{}` array.
An additional sub-protocol that has the ability to respond to the message "foo" with "bar" requires you to set up a `p2p.Protocol{}`:
```go
func MyProtocol() p2p.Protocol {
return p2p.Protocol{ // 1.
Name: "MyProtocol", // 2.
Version: 1, // 3.
Length: 1, // 4.
Run: func(peer *p2p.Peer, ws p2p.MsgReadWriter) error { return nil }, // 5.
}
}
```
1. A sub-protocol object in the p2p package is called `Protocol{}`. Each peer that connects with the capability of handling this protocol will use it;
2. The name of your protocol, used to identify it on the network;
3. The version of the protocol.
4. The number of messages this protocol relies on. Because p2p is extensible and can carry an arbitrary number of messages (each with a type, which we'll see later), the p2p handler needs to know how much message-ID space to reserve for your protocol; this ensures the peers can reach consensus when negotiating message IDs. Our protocol supports only one message (as you'll see later).
5. The main handler of your protocol. We've left this intentionally blank for now. The `peer` variable is the peer connected to you and provides some basic information about it. The `ws` variable, which is both a reader and a writer, allows you to communicate with the peer. If a message is sent to us by that peer, the `MsgReadWriter` will handle it, and vice versa.
Let's fill in the blanks and create a somewhat useful peer by allowing it to communicate with another peer:
```go
const messageId = 0 // 1.
type Message string // 2.
func msgHandler(peer *p2p.Peer, ws p2p.MsgReadWriter) error {
for {
msg, err := ws.ReadMsg() // 3.
if err != nil { // 4.
return err // if reading fails return err which will disconnect the peer.
}
var myMessage [1]Message
err = msg.Decode(&myMessage) // 5.
if err != nil {
// handle decode error
continue
}
switch myMessage[0] {
case "foo":
err := p2p.SendItems(ws, messageId, "bar") // 6.
if err != nil {
return err // return (and disconnect) error if writing fails.
}
default:
fmt.Println("recv:", myMessage)
}
}
return nil
}
```
1. The one and only message we know about;
2. A typed string we decode in to;
3. `ReadMsg` waits on the line until it receives a message, an error or EOF.
4. In case of an error during reading it's best to return that error and let the p2p server handle it. This usually results in a disconnect from the peer.
5. `msg` contains two fields and a decoding method:
* `Code` contains the message id, `Code == messageId` (i.e., 0)
* `Payload` the contents of the message.
* `Decode(<ptr>)` is a helper method that takes `msg.Payload` and decodes the rest of the message into the given interface. If it fails, it returns an error.
6. If the decoded message is `foo`, we respond with the message `bar` using `p2p.SendItems` and the `messageId` identifier. The `bar` message would then be handled by the `default` case of the same switch on the receiving peer.
Tying this all together, we have a working p2p server with a message-passing sub-protocol:
```go
package main
import (
"fmt"
"os"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p"
)
const messageId = 0
type Message string
func MyProtocol() p2p.Protocol {
return p2p.Protocol{
Name: "MyProtocol",
Version: 1,
Length: 1,
Run: msgHandler,
}
}
func main() {
nodekey, _ := crypto.GenerateKey()
srv := p2p.Server{
MaxPeers: 10,
PrivateKey: nodekey,
Name: "my node name",
ListenAddr: ":30300",
Protocols: []p2p.Protocol{MyProtocol()},
}
if err := srv.Start(); err != nil {
fmt.Println(err)
os.Exit(1)
}
select {}
}
func msgHandler(peer *p2p.Peer, ws p2p.MsgReadWriter) error {
for {
msg, err := ws.ReadMsg()
if err != nil {
return err
}
var myMessage Message
err = msg.Decode(&myMessage)
if err != nil {
// handle decode error
continue
}
switch myMessage {
case "foo":
err := p2p.SendItems(ws, messageId, "bar")
if err != nil {
return err
}
default:
fmt.Println("recv:", myMessage)
}
}
return nil
}
```

View File

@ -1,145 +0,0 @@
---
title: RPC pub-sub
---
# Introduction
From version 1.4 geth has **_experimental_** support for pub/sub using subscriptions as defined in the JSON-RPC 2.0 specification. This allows clients to wait for events instead of polling for them.
It works by subscribing to particular events. The node will return a subscription id. For each event that matches the subscription, a notification with the relevant data is sent together with the subscription id.
Example:
// create subscription
>> {"id": 1, "method": "eth_subscribe", "params": ["newHeads", {}]}
<< {"jsonrpc":"2.0","id":1,"result":"0xcd0c3e8af590364c09d0fa6a1210faf5"}
// incoming notifications
<< {"jsonrpc":"2.0","method":"eth_subscription","params":{"subscription":"0xcd0c3e8af590364c09d0fa6a1210faf5","result":{"difficulty":"0xd9263f42a87",<...>, "uncles":[]}}}
<< {"jsonrpc":"2.0","method":"eth_subscription","params":{"subscription":"0xcd0c3e8af590364c09d0fa6a1210faf5","result":{"difficulty":"0xd90b1a7ad02", <...>, "uncles":["0x80aacd1ea4c9da32efd8c2cc9ab38f8f70578fcd46a1a4ed73f82f3e0957f936"]}}}
// cancel subscription
>> {"id": 1, "method": "eth_unsubscribe", "params": ["0xcd0c3e8af590364c09d0fa6a1210faf5"]}
<< {"jsonrpc":"2.0","id":1,"result":true}
# Considerations
1. notifications are sent for current events and not for past events. If your use case requires you not to miss any notifications, then subscriptions are probably not the best option.
2. subscriptions require a full duplex connection. Geth offers such connections in the form of websockets (enable with --ws) and ipc (enabled by default).
3. subscriptions are coupled to a connection. If the connection is closed all subscriptions that are created over this connection are removed.
4. notifications are stored in an internal buffer and sent from this buffer to the client. If the client is unable to keep up and the number of buffered notifications reaches a limit (currently 10k) the connection is closed. Keep in mind that subscribing to some events can cause a flood of notifications, e.g. listening for all logs/blocks when the node starts to synchronize.
## Create subscription
Subscriptions are created with a regular RPC call, with `eth_subscribe` as the method and the subscription name as the first parameter. If successful, it returns the subscription id.
### Parameters
1. subscription name
2. optional arguments
### Example
>> {"id": 1, "method": "eth_subscribe", "params": ["newHeads", {"includeTransactions": true}]}
<< {"id": 1, "jsonrpc": "2.0", "result": "0x9cef478923ff08bf67fde6c64013158d"}
## Cancel subscription
Subscriptions are cancelled with a regular RPC call, with `eth_unsubscribe` as the method and the subscription id as the first parameter. It returns a bool indicating whether the subscription was cancelled successfully.
### Parameters
1. subscription id
### Example
>> {"id": 1, "method": "eth_unsubscribe", "params": ["0x9cef478923ff08bf67fde6c64013158d"]}
<< {"jsonrpc":"2.0","id":1,"result":true}
# Supported subscriptions
## newHeads
Fires a notification each time a new header is appended to the chain, including during chain reorganizations. Users can use the bloom filter to determine if the block contains logs that are of interest to them.
In case of a chain reorganization the subscription will emit all new headers for the new chain. Therefore the subscription can emit multiple headers at the same height.
### Example
```
>> {"id": 1, "method": "eth_subscribe", "params": ["newHeads"]}
<< {"jsonrpc":"2.0","id":2,"result":"0x9ce59a13059e417087c02d3236a0b1cc"}
<< {
"jsonrpc": "2.0",
"method": "eth_subscription",
"params": {
"result": {
"difficulty": "0x15d9223a23aa",
"extraData": "0xd983010305844765746887676f312e342e328777696e646f7773",
"gasLimit": "0x47e7c4",
"gasUsed": "0x38658",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"miner": "0xf8b483dba2c3b7176a3da549ad41a48bb3121069",
"nonce": "0x084149998194cc5f",
"number": "0x1348c9",
"parentHash": "0x7736fab79e05dc611604d22470dadad26f56fe494421b5b333de816ce1f25701",
"receiptRoot": "0x2fab35823ad00c7bb388595cb46652fe7886e00660a01e867824d3dceb1c8d36",
"sha3Uncles": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
"stateRoot": "0xb3346685172db67de536d8765c43c31009d0eb3bd9c501c9be3229203f15f378",
"timestamp": "0x56ffeff8",
"transactionsRoot": "0x0167ffa60e3ebc0b080cdb95f7c0087dd6c0e61413140e39d94d3468d7c9689f"
},
"subscription": "0x9ce59a13059e417087c02d3236a0b1cc"
}
}
```
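The same subscription can also be consumed from Go via the `ethclient` package. A minimal sketch, assuming geth is running with `--ws` on the default port 8546:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Subscriptions need a full duplex connection, so connect over websockets.
	client, err := ethclient.Dial("ws://127.0.0.1:8546")
	if err != nil {
		log.Fatalf("Failed to connect to the Ethereum client: %v", err)
	}
	// Each imported header is delivered on the channel as the chain grows.
	heads := make(chan *types.Header, 16)
	sub, err := client.SubscribeNewHead(context.Background(), heads)
	if err != nil {
		log.Fatalf("Failed to subscribe to new heads: %v", err)
	}
	for {
		select {
		case err := <-sub.Err():
			log.Fatalf("Subscription error: %v", err)
		case header := <-heads:
			fmt.Println("New block:", header.Number, header.Hash().Hex())
		}
	}
}
```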
## logs
Returns logs that are included in newly imported blocks and match the given filter criteria.
In case of a chain reorganization, previously sent logs that are on the old chain will be resent with the `removed` property set to true. Logs from transactions that ended up in the new chain are emitted. Therefore a subscription can emit logs for the same transaction multiple times.
### Parameters
1. `object` with the following (optional) fields
- **address**, either an address or an array of addresses. Only logs that are created from these addresses are returned (optional)
- **topics**, only logs which match the specified topics (optional)
### Example
>> {"id": 1, "method": "eth_subscribe", "params": ["logs", {"address": "0x8320fe7702b96808f7bbc0d4a888ed1468216cfd", "topics": ["0xd78a0cb8bb633d06981248b816e7bd33c2a35a6089241d099fa519e361cab902"]}]}
<< {"jsonrpc":"2.0","id":2,"result":"0x4a8a4c0517381924f9838102c5a4dcb7"}
<< {"jsonrpc":"2.0","method":"eth_subscription","params": {"subscription":"0x4a8a4c0517381924f9838102c5a4dcb7","result":{"address":"0x8320fe7702b96808f7bbc0d4a888ed1468216cfd","blockHash":"0x61cdb2a09ab99abf791d474f20c2ea89bf8de2923a2d42bb49944c8c993cbf04","blockNumber":"0x29e87","data":"0x00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000003","logIndex":"0x0","topics":["0xd78a0cb8bb633d06981248b816e7bd33c2a35a6089241d099fa519e361cab902"],"transactionHash":"0xe044554a0a55067caafd07f8020ab9f2af60bdfe337e395ecd84b4877a3d1ab4","transactionIndex":"0x0"}}}
## newPendingTransactions
Returns the hash of all transactions that are added to the pending state and are signed with a key that is available in the node.
When a transaction that was previously part of the canonical chain isn't part of the new canonical chain after a reorganization, it is emitted again.
### Parameters
none
### Example
>> {"id": 1, "method": "eth_subscribe", "params": ["newPendingTransactions"]}
<< {"jsonrpc":"2.0","id":2,"result":"0xc3b33aa549fb9a60e95d21862596617c"}
<< {
"jsonrpc":"2.0",
"method":"eth_subscription",
"params":{
"subscription":"0xc3b33aa549fb9a60e95d21862596617c",
"result":"0xd6fdc5cc41a9959e922f30cb772a9aef46f4daea279307bc5f7024edc4ccd7fa"
}
}
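A Go sketch for this subscription using the lower level `rpc` package, again assuming geth exposes websockets on the default port and the `rpc`, `common`, `context`, `fmt` and `log` imports:

```go
// Receive pending transaction hashes over a raw RPC subscription.
rpcClient, err := rpc.Dial("ws://127.0.0.1:8546")
if err != nil {
	log.Fatalf("Failed to connect to the Ethereum client: %v", err)
}
txHashes := make(chan common.Hash, 128)
txSub, err := rpcClient.EthSubscribe(context.Background(), txHashes, "newPendingTransactions")
if err != nil {
	log.Fatalf("Failed to subscribe to pending transactions: %v", err)
}
for {
	select {
	case err := <-txSub.Err():
		log.Fatalf("Subscription error: %v", err)
	case hash := <-txHashes:
		fmt.Println("Pending transaction:", hash.Hex())
	}
}
```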
## syncing
Indicates when the node starts or stops synchronizing. The result can either be a boolean indicating that the synchronization has started (true), finished (false) or an object with various progress indicators.
### Parameters
none
### Example
>> {"id": 1, "method": "eth_subscribe", "params": ["syncing"]}
<< {"jsonrpc":"2.0","id":2,"result":"0xe2ffeb2703bcf602d42922385829ce96"}
<< {"subscription":"0xe2ffeb2703bcf602d42922385829ce96","result":{"syncing":true,"status":{"startingBlock":674427,"currentBlock":67400,"highestBlock":674432,"pulledStates":0,"knownStates":0}}}}
## Possible future subscription:
- balance changes
- account changes
- nonce changes
- storage changes in contracts

View File

@ -1,141 +0,0 @@
---
title: Tracing / Introduction
---
There are two different types of transactions in Ethereum: plain value transfers and contract executions. A plain value transfer just moves Ether from one account to another and as such is uninteresting from this guide's perspective. If however the recipient of a transaction is a contract account with associated EVM (Ethereum Virtual Machine) bytecode - beside transferring any Ether - the code will also be executed as part of the transaction.
Having code associated with Ethereum accounts permits transactions to do arbitrarily complex data storage and enables them to act on the previously stored data by further transacting internally with outside accounts and contracts. This creates an intertwined ecosystem of contracts, where a single transaction can interact with tens or hundreds of accounts.
The downside of contract execution is that it is very hard to say what a transaction actually did. A transaction receipt does contain a status code to check whether execution succeeded or not, but there's no way to see what data was modified, nor what external contracts were invoked. In order to introspect a transaction, we need to trace its execution.
## Tracing prerequisites
In its simplest form, tracing a transaction entails requesting the Ethereum node to reexecute the desired transaction with varying degrees of data collection and have it return the aggregated summary for post processing. Reexecuting a transaction however has a few prerequisites to be met.
In order for an Ethereum node to reexecute a transaction, it needs to have available all historical state accessed by the transaction:
* Balance, nonce, bytecode and storage of both the recipient as well as all internally invoked contracts.
* Block metadata referenced during execution of both the outer as well as all internally created transactions.
* Intermediate state generated by all preceding transactions contained in the same block as the one being traced.
Depending on your node's mode of synchronization and pruning, different configurations result in different capabilities:
* An **archive** node retaining **all historical data** can trace arbitrary transactions at any point in time. Tracing a single transaction also entails reexecuting all preceding transactions in the same block.
* A **fast synced** node retaining **all historical data** after initial sync can only trace transactions from blocks following the initial sync point. Tracing a single transaction also entails reexecuting all preceding transactions in the same block.
* A **fast synced** node retaining only **periodic state data** after initial sync can only trace transactions from blocks following the initial sync point. Tracing a single transaction entails reexecuting all preceding transactions **both** in the same block, as well as all preceding blocks until the previous stored snapshot.
* A **light synced** node retrieving data **on demand** can in theory trace transactions for which all required historical state is readily available in the network. In practice, data availability is **not** a feasible assumption.
*There are exceptions to the above rules when running batch traces of entire blocks or chain segments. Those will be detailed later.*
## Basic traces
The simplest type of transaction trace that `go-ethereum` can generate are raw EVM opcode traces. For every VM instruction the transaction executes, a structured log entry is emitted, containing all contextual metadata deemed useful. This includes the *program counter*, *opcode name*, *opcode cost*, *remaining gas*, *execution depth* and any *occurred error*. The structured logs can optionally also contain the content of the *execution stack*, *execution memory* and *contract storage*.
An example log entry for a single opcode looks like:
```json
{
"pc": 48,
"op": "DIV",
"gasCost": 5,
"gas": 64532,
"depth": 1,
"error": null,
"stack": [
"00000000000000000000000000000000000000000000000000000000ffffffff",
"0000000100000000000000000000000000000000000000000000000000000000",
"2df07fbaabbe40e3244445af30759352e348ec8bebd4dd75467a9f29ec55d98d"
],
"memory": [
"0000000000000000000000000000000000000000000000000000000000000000",
"0000000000000000000000000000000000000000000000000000000000000000",
"0000000000000000000000000000000000000000000000000000000000000060"
],
"storage": {
}
}
```
The entire output of a raw EVM opcode trace is a JSON object having a few metadata fields: *consumed gas*, *failure status*, *return value*; and a list of *opcode entries* that take the above form:
```json
{
"gas": 25523,
"failed": false,
"returnValue": "",
"structLogs": []
}
```
### Generating basic traces
To generate a raw EVM opcode trace, `go-ethereum` provides a few [RPC API endpoints](https://github.com/ethereum/go-ethereum/wiki/Management-APIs), out of which the most commonly used is [`debug_traceTransaction`](https://github.com/ethereum/go-ethereum/wiki/Management-APIs#debug_tracetransaction).
In its simplest form, `traceTransaction` accepts a transaction hash as its sole argument, traces the transaction, aggregates all the generated data and returns it as a **large** JSON object. A sample invocation from the Geth console would be:
```js
debug.traceTransaction("0xfc9359e49278b7ba99f59edac0e3de49956e46e530a53c15aa71226b7aa92c6f")
```
The same call can of course be invoked from outside the node too via HTTP RPC. In this case, please make sure the HTTP endpoint is enabled via `--rpc` and the `debug` API namespace exposed via `--rpcapi=debug`.
```
$ curl -H "Content-Type: application/json" -d '{"id": 1, "method": "debug_traceTransaction", "params": ["0xfc9359e49278b7ba99f59edac0e3de49956e46e530a53c15aa71226b7aa92c6f"]}' localhost:8545
```
Running the above operation on the Rinkeby network (with a node retaining enough history) will result in this [trace dump](https://gist.github.com/karalabe/c91f95ac57f5e57f8b950ec65ecc697f).
### Tuning basic traces
By default the raw opcode tracer emits all relevant events that occur within the EVM while processing a transaction, such as *EVM stack*, *EVM memory* and *updated storage slots*. Certain use cases however may not need some of these data fields reported. To cater for those use cases, these massive fields may be omitted using a second *options* parameter for the tracer:
```json
{
"disableStack": true,
"disableMemory": true,
"disableStorage": true
}
```
Running the previous tracer invocation from the Geth console with the data fields disabled:
```js
debug.traceTransaction("0xfc9359e49278b7ba99f59edac0e3de49956e46e530a53c15aa71226b7aa92c6f", {disableStack: true, disableMemory: true, disableStorage: true})
```
Analogously running the filtered tracer from outside the node too via HTTP RPC:
```
$ curl -H "Content-Type: application/json" -d '{"id": 1, "method": "debug_traceTransaction", "params": ["0xfc9359e49278b7ba99f59edac0e3de49956e46e530a53c15aa71226b7aa92c6f", {"disableStack": true, "disableMemory": true, "disableStorage": true}]}' localhost:8545
```
Running the above operation on the Rinkeby network will result in this significantly shorter [trace dump](https://gist.github.com/karalabe/d74a7cb33a70f2af75e7824fc772c5b4).
### Limits of basic traces
Although the raw opcode traces we've generated above have their use, this basic way of tracing is problematic in the real world. Having an individual log entry for every single opcode is too low level for most use cases, and will require developers to create additional tools to post-process the traces. Additionally, a full opcode trace can easily go into the hundreds of megabytes, making them very resource intensive to get out of the node and process externally.
To avoid all of the previously mentioned issues, `go-ethereum` supports running custom JavaScript tracers *within* the Ethereum node, which have full access to the EVM stack, memory and contract storage. This permits developers to only gather the data they need, and do any processing **at** the data. Please see the next section for our *custom in-node tracers*.
### Pruning
Geth by default does in-memory pruning of state, discarding state entries that it deems no longer necessary to maintain. This is configured via the `--gcmode` option. Often, people run into the error that state is not available.
Say you want to do a trace on block `B`. Now there are a couple of cases:
1. You have done a fast-sync, pivot block `P` where `P <= B`.
2. You have done a fast-sync, pivot block `P` where `P > B`.
3. You have done a full-sync, with pruning
4. You have done a full-sync, without pruning (`--gcmode=archive`)
Here's what happens in each respective case:
1. Geth will regenerate the desired state by replaying blocks from the closest point in time before `B` where it has full state. This defaults to a maximum of [`128`](https://github.com/ethereum/go-ethereum/blob/master/eth/api_tracer.go#L52) blocks, but you can allow more by passing e.g. `"reexec": 1000` in the tracer call options (see the sketch after this list).
2. Sorry, can't be done without replaying from genesis.
3. Same as 1)
4. Does not need to replay anything, can immediately load up the state and serve the request.
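As a sketch of passing the `reexec` option from Go over raw JSON-RPC (assuming a locally running geth with `--rpc --rpcapi=debug` as described above, and reusing the transaction hash from the earlier examples):

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Connect to the local HTTP RPC endpoint.
	client, err := rpc.Dial("http://127.0.0.1:8545")
	if err != nil {
		log.Fatalf("Failed to connect to the Ethereum client: %v", err)
	}
	// Allow the node to replay up to 1000 blocks to regenerate the required state.
	var trace json.RawMessage
	err = client.CallContext(context.Background(), &trace, "debug_traceTransaction",
		"0xfc9359e49278b7ba99f59edac0e3de49956e46e530a53c15aa71226b7aa92c6f",
		map[string]interface{}{"reexec": 1000})
	if err != nil {
		log.Fatalf("Failed to trace transaction: %v", err)
	}
	fmt.Printf("Received %d bytes of trace data\n", len(trace))
}
```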
There is one other option available to you, which may or may not suit your needs. That is to use [Evmlab](https://github.com/holiman/evmlab).
```
docker pull holiman/evmlab && docker run -it holiman/evmlab
```
There you can use the reproducer. The reproducer will incrementally fetch data from Infura until it has all the information required to create the trace locally on an evm which is bundled with the image. It will create a custom genesis containing the state that the transaction touches (balances, code, nonce etc). It should be mentioned that the evmlab reproducer is not strictly guaranteed to be exact with regards to gas costs incurred by the outer transaction, as evmlab does not fully calculate the gas costs for nonzero data etc, but it is usually sufficient to analyze contracts and events.

View File

@ -0,0 +1,143 @@
---
title: Developer guide
sort_key: A
---
**NOTE: These instructions are for people who want to contribute Go source code changes.
If you just want to run ethereum, use the normal [Installation Instructions](../install-and-build/installing-geth)**
This document is the entry point for developers of the Go implementation of Ethereum. Developers here refers to hands-on contributors: those interested in building, developing, debugging, submitting a bug report or pull request, or contributing code to go-ethereum.
## Building and Testing
### Go Environment
We assume that you have [`go` installed](https://golang.org/doc/install), and `GOPATH` is set.
**Note**: You must have your working copy under `$GOPATH/src/github.com/ethereum/go-ethereum`.
Since `go` does not use relative paths for imports, working from any other directory will not work as expected: import paths are resolved against `$GOPATH/src`, and if a library is not found there, the version at master HEAD will be downloaded instead.
Most likely you will be working from your fork of `go-ethereum`, let's say from `github.com/nirname/go-ethereum`. Clone or move your fork into the right place:
```
git clone git@github.com:nirname/go-ethereum.git $GOPATH/src/github.com/ethereum/go-ethereum
```
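A common optional follow-up is to add the canonical repository as a second remote, so that upstream changes can be pulled into your fork later:
```
cd $GOPATH/src/github.com/ethereum/go-ethereum
git remote add upstream https://github.com/ethereum/go-ethereum.git
git fetch upstream
```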
### Managing Vendored Dependencies
All other dependencies are tracked in the vendor/ directory. We use [govendor](https://github.com/kardianos/govendor) to manage them.
If you want to add a new dependency, run `govendor fetch <import-path>`, then commit the result.
If you want to update all dependencies to their latest upstream version, run `govendor fetch +v`.
You can also use govendor to run certain commands on all go-ethereum packages, excluding vendored
code. Example: to recreate all generated code, run `govendor generate +l`.
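For reference, the commands mentioned above look like this (the import path is a hypothetical placeholder):
```
govendor fetch github.com/foo/bar   # add a new dependency
govendor fetch +v                   # update all dependencies to their latest upstream version
govendor generate +l                # run generators on all non-vendored packages
```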
### Building Executables
Switch to the go-ethereum repository root directory.
You can build all code using the go tool, placing the resulting binary in `$GOPATH/bin`.
```text
go install -v ./...
```
go-ethereum executables can be built individually. To build just geth, use:
```text
go install -v ./cmd/geth
```
Read about cross-compiling go-ethereum [here](../developers/cross-compiling-ethereum).
### Git flow
To make life easier, try [git flow](http://nvie.com/posts/a-successful-git-branching-model/); it sets this all up and streamlines your workflow.
### Testing
Testing one library:
```
go test -v -cpu 4 ./eth
```
Using the options `-cpu` (number of cores allowed) and `-v` (verbose output even when there is no error) is recommended.
Testing only some methods:
```
go test -v -cpu 4 ./eth -run TestMethod
```
**Note**: here all tests whose names start with _TestMethod_ will be run, so if you have TestMethod and TestMethod1, both will be executed.
To run benchmarks, e.g.:
```
go test -v -cpu 4 -bench BenchmarkJoin
```
For more, see the [go test flags](http://golang.org/cmd/go/#hdr-Description_of_testing_flags) documentation.
### Metrics and monitoring
`geth` can monitor node behaviour, aggregate the measurements and show performance metric charts.
Read about [metrics and monitoring](../doc/metrics-and-monitoring).
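As a minimal sketch, metrics collection can be switched on at startup with the `--metrics` flag (reporting backends are configured separately):
```
geth --metrics
```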
### Getting Stack Traces
If `geth` is started with the `--pprof` option, a debugging HTTP server is made available on port 6060. Browse to http://localhost:6060/debug/pprof to see the heap, running goroutines and so on. Clicking "full goroutine stack dump" (or opening http://localhost:6060/debug/pprof/goroutine?debug=2 directly) generates a trace that is useful for debugging.
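For example (assuming the default pprof port 6060), profiles can also be fetched from the command line:
```
# save a full goroutine dump
curl -o goroutines.txt 'http://localhost:6060/debug/pprof/goroutine?debug=2'
# interactively inspect the heap profile
go tool pprof http://localhost:6060/debug/pprof/heap
```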
Note that if you run multiple instances of `geth`, this port will only work for the first instance that was launched. If you want to generate stacktraces for the other instances, you need to start them with an alternative pprof port. Make sure you are redirecting stderr to a logfile.
```
geth --port=30300 --verbosity=5 --pprof --pprofport=6060 2>> /tmp/00.glog
geth --port=30301 --verbosity=5 --pprof --pprofport=6061 2>> /tmp/01.glog
geth --port=30302 --verbosity=5 --pprof --pprofport=6062 2>> /tmp/02.glog
```
Alternatively, if you want to kill the clients (in case they hang or stall during syncing, etc.) but still get the stack trace, you can send the `-QUIT` signal with `kill`:
```
killall -QUIT geth
```
This will dump stack traces for each instance to their respective log file.
## Contributing
Thank you for considering helping out with the source code! We welcome contributions from
anyone on the internet, and are grateful for even the smallest of fixes!
GitHub is used to track issues and contribute code, suggestions, feature requests or
documentation.
If you'd like to contribute to go-ethereum, please fork, fix, commit and send a pull
request (PR) for the maintainers to review and merge into the main code base. If you wish
to submit more complex changes though, please check with the core devs first on [our
gitter channel](https://gitter.im/ethereum/go-ethereum) to ensure those changes are in
line with the general philosophy of the project and/or get some early feedback. This can
make your efforts much lighter and our review and merge procedures quick and simple.
PRs need to be based on and opened against the `master` branch (unless by explicit
agreement, you contribute to a complex feature branch).
Your PR will be reviewed according to the [Code Review
guidelines](../developers/code-review-guidelines).
We encourage an early-PR approach: open the PR as early as possible, even before the
fix/feature is complete. This lets core devs and other volunteers know you have picked up an
issue. These early PRs should be marked as 'in progress'.
## Dev Tutorials (mostly outdated)
* [private networks, local clusters and monitoring](../doc/setting-up-private-network-or-local-cluster)
* [p2p 101](../developers/peer-to-peer): a tutorial about setting up and creating a p2p server and p2p sub protocol.
* [how to whisper](../whisper/whisper-overview): an introduction to whisper.

View File

@ -0,0 +1,57 @@
---
title: Issue Handling Workflow
sort_key: B
---
### (Draft proposal)
- Keep the number of open issues under 820.
- Keep the ratio of open issues to all issues under 13%.
- Have 50 issues labelled [help wanted](https://github.com/ethereum/go-ethereum/labels/help%20wanted) and 50 labelled [good first issue](https://github.com/ethereum/go-ethereum/labels/good%20first%20issue).
Use structured labels of the form `<category>:<label>` or if need be `<category>:<main>/<sub>`, for example `area: plugins/foobuzzer`.
Use the following labels. Areas and statuses depend on the application and workflow.
- area
- `area: android`
- `area: clef`
- `area: network`
- `area: swarm`
- `area: whisper`
- type
- `type: bug`
- `type: feature`
- `type: documentation`
- `type: discussion`
- status
- `status: PR review`
- `status: community working on it`
- need
- `need: more info`
- `need: steps to reproduce`
- `need: investigation`
- `need: decision`
Use these milestones
- [Future](https://github.com/ethereum/go-ethereum/milestone/80) - Maybe implement one day
- [Coming soon](https://github.com/ethereum/go-ethereum/milestone/81) - Not assigned to a specific release, but to be delivered in one of the upcoming releases
- \<next version\> - Next release with a version number
- \<next-next version\> - The version after the next release with a version number
- \<next major release\> - Optional.
It's OK not to set a due date for a milestone, but once it is released, close it. If a few issues are still dangling, consider moving them to the next milestone and closing this one.
Optionally, use a project board to collect issues of a larger effort that has an end state and overarches multiple releases.
## Workflow
We have a weekly or bi-weekly triage meeting. Issues are preselected by [labelling them "status:triage" and sorting them oldest first](https://github.com/ethereum/go-ethereum/issues?q=is%3Aopen+is%3Aissue+label%3Astatus%3Atriage+sort%3Acreated-asc). This is when we go through the new issues and do one of the following:
1. Close it.
1. Assign it to "Coming soon" milestone which doesn't have an end date.
1. Move it to the "Future" milestone.
1. Change its status to "Need:\<what-is-needed\>".
Optional further activities:
* Label the issue with the appropriate area/component.
* Add a section to the FAQ or add a wiki page. Link to it from the issue.