* swarm/network/stream: remove the old syncing test and replace it with two more granular tests: the first tests a full sync between two nodes (making sure that when all subscriptions are established on all bins, all content is replicated from one node to the other); the second adds a bigger number of nodes to a star network topology, effectively making the pivot node have a depth > 0 and sync only certain content to the connected nodes (due to the star topology this test is always 1 hop and does not check that, for example, specific content is routed across multiple nodes, but assumes content should be only on the most proximate node)
* api, cmd/swarm-smoke, fuse, network, storage: fix data dir leak when using ephemeral filestore, pull test params to be easily adjustable
* accounts/scwallet: Add a switch to enable smartcard support
* accounts: change the meaning of the switch
* disable card support in windows until tested
* only activate account if pcscd socket file is present
* the switch is now the path to the socket file
* accounts/scwallet: holiman's review feedback
* accounts/scwallet: send the path to go-pcsclite
* accounts/scwallet: add default, per platform path
* accounts/scwallet: fix error log warning
* accounts/scwallet: update pcsc lib to latest
* accounts/scwallet: use default path from pcsclite
* scwallet: forgot to change switch name
* cmd: minor style cleanups (error handling first, then happy path)
* Removed comment section referring to Cloudflare's bn curve parameters
* Added comment to clarify the nature of the parameters
* Changed value of xi to i+9
* core: reinit chain from freezer in batches
* core/rawdb: concurrent database reinit from freezer dump
* core/rawdb: reinit from freezer in sequential order
* core, eth: some fixes for freezer
* vendor, core/rawdb, cmd/geth: add db inspector
* core, cmd/utils: check ancient store path forcibly
* cmd/geth, common, core/rawdb: a few fixes
* cmd/geth: support windows file rename and fix rename error
* core: support ancient plugin
* core, cmd: streaming file copy
* cmd, consensus, core, tests: keep genesis in leveldb
* core: write txlookup during ancient init
* core: bump database version
* all: freezer style syncing
core, eth, les, light: clean up freezer relative APIs
core, eth, les, trie, ethdb, light: clean a bit
core, eth, les, light: add unit tests
core, light: rewrite setHead function
core, eth: fix downloader unit tests
core: add receipt chain insertion test
core: use constant instead of hardcoding table name
core: fix rollback
core: fix setHead
core/rawdb: remove canonical block first and then iterate side chain
core/rawdb, ethdb: add hasAncient interface
eth/downloader: calculate ancient limit via cht first
core, eth, ethdb: lots of fixes
* eth/downloader: print ancient disable log only for fast sync
This change implements EIP-868. The UDPv4 transport announces support
for the extension in ping/pong and handles enrRequest messages.
There are two uses of the extension: If a remote node announces support
for EIP-868 in their pong, node revalidation pulls the node's record.
The Resolve method requests the record unconditionally.
These changes fix two corner cases related to the internal handling of types
in package rlp:
- The "tail" struct tag can only be applied to the last field. The check for
this was wrong and didn't allow for private fields after the field with the
tag (see the sketch below).
- Unsupported types (e.g. structs containing int) which implement either the
Encoder or Decoder interface, but not both, couldn't be encoded/decoded.
Also fixes #19367
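A minimal sketch of the "tail" corner case, with hypothetical field names (not the actual test code):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

// example mirrors the fixed check: `rlp:"tail"` must sit on the last
// exported field, and an unexported field after it is now tolerated.
type example struct {
	A    uint
	Rest []uint `rlp:"tail"` // absorbs all remaining list elements
	priv int    // unexported, ignored by the codec; previously rejected
}

func main() {
	enc, err := rlp.EncodeToBytes(&example{A: 1, Rest: []uint{2, 3}})
	fmt.Printf("%x %v\n", enc, err)
}
```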
* core, eth, trie: bloom filter for trie node dedup during fast sync
* eth/downloader, trie: address review comments
* core, ethdb, trie: restart fast-sync bloom construction now and again
* eth/downloader: initialize fast sync bloom on startup
* eth: reenable eth/62 until we properly remove it
* core: fix import errors on clique crashes + empty blocks
* consensus/clique, core: add test for the mirrored state issue
* core: address todo question wrt log count
* core: raise a louder warning for non-clique known blocks
swarm/api: integrate tags to count chunks being split and stored
swarm/api/http: integrate tags in middleware for HTTP `POST` calls and assert chunks being calculated and counted correctly
swarm: remove deprecated and unused code, add swarm hash to DoneSplit signature, remove calls to the api client from the http package
* swarm/shed: remove metrics fields from DB struct
* swarm/chunk: add String methods to modes
* swarm/storage/localstore: add metrics and traces
* swarm/chunk: unknown modes without spaces in String methods
* swarm/storage/localstore: remove bin number from pull subscription metrics
* swarm/storage/localstore: add resetting time metrics and code improvements
cmd/swarm/swarm-smoke: improve smoke tests (#1337)
swarm/network: remove dead code (#1339)
swarm/network: remove FetchStore and SyncChunkStore in favor of NetStore (#1342)
* add-date-to unstable
* fields-insteadof-split
* internal/build: support building with missing git
* docker: add git history back to support commit date in version
* internal/build: use PR commits hashes for PR builds
* core: import known blocks if they can be inserted as canonical blocks
* core: insert known blocks
* core: remove useless code
* core: don't process head block in reorg function
* accounts: add note about backing up the keystore
* cmd, accounts: move the printout to accountCreate
* internal, signer: add info when new account is created via rpc
* cmd, internal, signer: split logs
* cmd/geth: make account new output a bit more verbose
This change makes getBalance, getCode, getStorageAt, getProof,
call, getTransactionCount return an error if the block number in
the request doesn't exist. getHeaderByNumber still returns null
for missing headers.
This change restructures the internals of p2p/discover to make room for
the discv5 code which will soon be added to this package.
- packet type names now have a "V4" suffix.
- ListenUDP returns *UDPv4 instead of *Table. This technically breaks
the API but the only caller in go-ethereum is package p2p, which uses
a compatible interface and doesn't need changes.
- The internal transport interface is changed to make Table reusable for v5.
- The 'lookup' code moves from table to transport. This required
updating the lookup unit test to use udpTest instead of a custom transport.
* swarm/network: fix data races in TestInitialPeersMsg test
* swarm/network: add Kademlia.Saturation method with lock
* swarm/network: add Hive.Peer method to safely retrieve a bzz peer
* swarm/network: remove duplicate comment
* p2p/testing: prevent goroutine leak in ProtocolTester
* swarm/network: fix data race in newBzzBaseTesterWithAddrs
* swarm/network: fix goroutine leaks in testInitialPeersMsg
* swarm/network: raise number of peer check attempts in testInitialPeersMsg
* swarm/network: use Hive.Peer in Hive.PeerInfo function
* swarm/network: reduce the scope of mutex lock in newBzzBaseTesterWithAddrs
* swarm/storage: disable TestCleanIndex with race detector
* core: lookup txs by block number instead of block hash
Transaction lookup entries now store a reference to the corresponding
block number as opposed to the block hash. In benchmarks this was
shown to reduce storage by over 12 GB.
The main limitation of this approach is that transactions on
non-canonical blocks could never be looked up, however that is
currently not supported.
The database version has been upgraded to version 5 and the
transaction lookup process is backwards-compatible with the
two prior transaction lookup formats preexisting in the
database instance. Tests have been added to ensure this.
* core/rawdb: tiny review nit fixes
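A hedged sketch of the scheme (the key prefix and helper are assumptions, not the actual rawdb code): the lookup value shrinks from a 32-byte block hash to an 8-byte block number, which is where the storage saving comes from.

```go
package sketch

import "encoding/binary"

// kv is a stand-in for the database interface.
type kv interface {
	Get(key []byte) ([]byte, bool)
}

// txBlockNumber reads a version-5 style lookup entry: tx hash -> block
// number. The canonical hash, and then the body, are resolved through
// the existing number-to-hash index.
func txBlockNumber(db kv, txHash []byte) (uint64, bool) {
	v, ok := db.Get(append([]byte("l"), txHash...)) // "l" prefix assumed
	if !ok || len(v) != 8 {
		return 0, false
	}
	return binary.BigEndian.Uint64(v), true
}
```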
* cmd, eth, miner: disable advance sealing if the user requires
* cmd, console, miner, les, eth: wrap the miner config
* eth: remove todo
* cmd, miner: revert noadvance flag
The reason for this is that if a transaction's execution takes even longer
than the block time, then this kind of transaction is a DoS attack.
* swarm/storage/feed/lookup: Add context handling/forwarding
* swarm/storage/feed/lookup: Add test to catch bad hint
* swarm/storage/feed/lookup: Added context cancellation test
* swarm/api: fix file descriptor leak in NewTestSwarmServer
Swarm storage (localstore) was not closed. That resulted in a
"too many open files" error if `TestClientUploadDownloadRawEncrypted`
was run with `-count 1000`.
* cmd/swarm: speed up StartNewNodes() by parallelization
Reduce cluster startup time from 13s to 7s.
* swarm/api: disable flaky TestClientUploadDownloadRawEncrypted with -race
* swarm/storage: disable flaky TestLDBStoreCollectGarbage (-race)
With race detection turned on the disabled cases often fail with:
"ldbstore_test.go:535: expected surplus chunk 150 to be missing, but got no error"
* cmd/swarm: fix process leak in TestACT and TestSwarmUp
Each test run we start 3 nodes, but we did not terminate them. So
those 3 nodes continued eating up 1.2GB (3.4GB with -race) after test
completion.
6b6c4d1c27 changed how we start clusters
to speed up tests. The changeset merged together test cases
and introduced a global cluster. But "forgot" about termination.
Let's get rid of "global cluster" so we have a clear owner of
termination (some time sacrifice), while leaving subtests to use the
same cluster.
* metrics/prometheus: added prometheus http server and metrics collector
* metrics/prometheus: minor cleanups
* metrics/prometheus: named keys instead of name in tag
* metrics/prometheus: minor typo cleanups, sorted report
* accounts, core, internal, node: Add support for smartcard wallets
* accounts, internal: Changes in response to review
* vendor: pull in missing go-ecdh library
* accounts/scwallet, console: user friendly card opening
* accounts/scwallet: ordered wallets, tighter events, derivation logs
* accounts, console: friendly card errors, support pin unblock
* accounts/scwallet: fix crypto API change
* accounts/scwallet: rebase and update
* Fix some linter issues
* Remove the direct dependency on libpcsclite
Instead, use a go library that communicates with pcscd over a socket.
Also update the changes introduced by @gravityblast since this PR's
inception
* Temporary fix to the APDU status call
* fix wallet status update
This is a temporary fix, better checks need to
be performed once the whole process has been
validated.
* Fix key derivation
* Add some documentation
* Update a comment to reflect the workings of the updated system
* Vendor keycard-go/derivationpath
* Formatting fixes
* Add instructions on how to install the card
* Achieve full transaction signature+sending
* PK derivation has to be supported by the card
* Fix linter issues
* Upgrade to keycard app v2.1.1
* Set gballet as codeowner of the smartcard wallet dir
* fix unnecessary condition linter warning
* refuse to overwrite the master key of a previously initialized card
* refresh the account list when initializing the card
* Update the card preparation instructions based on review feedback
* 'sanitize' JSON input
Co-Authored-By: gballet <gballet@gmail.com>
* Apply suggestions from code review
Co-Authored-By: gballet <gballet@gmail.com>
* fix a serialization error
* more review feedback
* More review feedback
* Can now specify the number of empty accounts to derive
* Fix rebase error: include norm package
* Update bip-39 ref and remove ebfe/scard from vendor
* Add missing dependency
This PR fixes this, moving domain.ChainId from the map's initializer down to a separate if statement which checks the existence of ChainId's value, similar to the rest of the fields, before adding it. I've also included a new test to demonstrate the issue.
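A hedged sketch of the fix (struct and map shapes simplified, not the exact signer code):

```go
package sketch

import "math/big"

type typedDataDomain struct {
	Name    string
	ChainId *big.Int
}

// domainMap adds chainId only when it has a value, mirroring the other
// optional fields, instead of setting it unconditionally in the initializer.
func domainMap(d typedDataDomain) map[string]interface{} {
	m := map[string]interface{}{}
	if d.ChainId != nil {
		m["chainId"] = d.ChainId
	}
	if d.Name != "" {
		m["name"] = d.Name
	}
	return m
}
```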
This resolves a minor issue where neighbors responses containing less
than 16 nodes would bump the failure counter, removing the node. One
situation where this can happen is a private deployment where the total
number of extant nodes is less than 16.
Issue found by @jsying.
This PR makes it easy to generate and execute testcases for VM arithmetic operations. By enabling and running the testcase TestWriteExpectedValues, a set of json files are created which contain input and output for each arith operation.
The test TestJsonTestcases executes all of those tests.
While meaningless as is, this PR makes it less risky to make changes (optimizations) to the vm operations, since there will be a larger body of testcases.
* swarm/network: fix hive bug not sending shallow peers
- hive bug: needed shallow peers were not sent to nodes beyond connection's proximity order
- add extensive protocol exchange tests for initial subPeersMsg-peersMsg exchange
- modify bzzProtocolTester to allow pregenerated overlay addresses
* swarm/network: attempt to fix hive persistence test
* swarm/network: fix TestHiveStatePersistance (#1320)
* swarm/network: remove trace lines from the hive persistence test
* address PR review comments
* swarm/network: address PR comments on TestInitialPeersMsg
* eliminate *testing.T argument from bzz/hive protocoltesters
* add sorting (only runs in test code) on peersMsg payload
* add random (0 to MaxPeersPerPO) peers for each po
* add extra peers closer to pivot than control
This PR is a more advanced form of the dirty-to-clean cacher (#18995),
where we reuse previous database write batches as datasets to uncache,
saving a dirty-trie-iteration and a dirty-trie-rlp-reencoding per block.
This PR fixes two issues in the PoW calculation of a Whisper envelope,
compared to the spec (see PoW Requirements):
- The PoW is supposed to count the number of leading zeroes (i.e. most
significant zeroes), but it counted the number of trailing zeroes
(i.e. least significant zeroes) instead. It has been fixed to match what
the spec and Parity do (see the sketch after this list).
- The spec expects to use the size of the RLP encoded envelope, and it took
something else, as described in #18070.
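The distinction in the first point, as a minimal sketch (not whisper's actual code), on an example 64-bit hash word:

```go
package main

import (
	"fmt"
	"math/bits"
)

func main() {
	h := uint64(0xf0) // binary ...11110000
	fmt.Println(bits.LeadingZeros64(h))  // 56 - what the spec asks for
	fmt.Println(bits.TrailingZeros64(h)) // 4  - what the old code counted
}
```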
* travis: remove verbose from Swarm race tests
By removing -v our output will be cleaner, but the Travis job still
won't be terminated - due to 'no output for 10 minutes' - as
keepalive.sh produces a log line every 5 minutes.
* travis: extend Swarm race detection to p2p subpackages
The p2p/protocols, p2p/simulations and p2p/testing packages mostly
belong to the Swarm team.
* core/vm: remove function call for stack validation from evm runloop
* core/vm: separate gas calc into static + dynamic
* core/vm: optimize push1
* core/vm: reuse pooled bigints for ADDRESS, ORIGIN and CALLER
* core/vm: use generic error message for jump/jumpi, to avoid string interpolation
* testdata: fix tests for new error message
* core/vm: use 64-bit memory calculations
* core/vm: fix error in memory calculation
* core/vm: address review concerns
* core/vm: avoid unnecessary use of big.Int:BitLen()
dummyHook's fields were concurrently written by nodes and read by
the test. The simplest solution is to protect all fields with a mutex.
Enable TestMultiplePeersDropSelf and TestMultiplePeersDropOther, as they
seemingly accidentally stayed disabled during a refactor/rewrite
since 1836366ac1.
resolves ethersphere/go-ethereum#1286
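A minimal sketch of the mutex fix, with hypothetical field names:

```go
package sketch

import "sync"

// hook stands in for dummyHook: every field is read and written under one
// mutex, so the test and the node goroutines can no longer race on them.
type hook struct {
	mu    sync.Mutex
	calls int
}

func (h *hook) record() {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.calls++
}

func (h *hook) count() int {
	h.mu.Lock()
	defer h.mu.Unlock()
	return h.calls
}
```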
* swarm/network: propagate span with ctx
* swarm/network: try to stop stream.send.request spans on time
* swarm/storage: add chunk ref as a log to netstore.fetcher span
Although the two current implementations (ledgerDriver, trezorDriver) of the driver interface's Close function do not actually return any error and only return nil, the declaration of Close returns an error. It is therefore better to check the returned error, in case some future implementation of Close returns an error and we forget to modify the invoking function at that time.
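A small illustration of the point, with hypothetical names:

```go
package sketch

import "log"

type driver interface {
	Close() error
}

// shutdown checks Close's error even though today's implementations always
// return nil, so a future error-returning driver is not silently ignored.
func shutdown(d driver) {
	if err := d.Close(); err != nil {
		log.Printf("failed to close wallet driver: %v", err)
	}
}
```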
This PR will break existing UIs, since it changes all calls like ApproveSignTransaction to be on the form ui_approveSignTransaction.
This is to make it possible for the UI to reuse the json-rpc library from go-ethereum, which uses this convention.
Also, this PR removes some unused structs, after import/export were removed from the external api (so it no longer needs internal methods for approval).
One more breaking change is introduced, removing passwords from the ApproveSignTxResponse and the like. This makes the manual interface more like the rule-based interface, and integrates nicely with the credential storage. The way it worked before, it would be tempting for the UI to implement 'remember password' functionality; the way it is now, it will be easy instead to tell clef to store passwords and use them.
If a pw is not found in the credential store, the user is prompted to provide the password.
These tests never ran, as the build tag excluded them from the CI
execution. As a result the (dead) code got out of sync with other
parts of Swarm and now they would not even compile. => Removed.
resolves ethersphere/go-ethereum#1238
* p2p/discover: remove unused function
* p2p/enode: use localItemKey for local sequence number
I added localItemKey for this purpose in #18963, but then
forgot to actually use it. This changes the database layout
yet again and requires bumping the version number.
* node: require LocalAppData variable
This avoids path inconsistencies on Windows XP.
Hat tip to @MicahZoltu for catching this so quickly.
* node: fix typo
* travis, build: switch to NDK 19b, fix gomobile builds
* travis, build: move NDK into its final bundle location
* travis: disable Android build on PRs once again
This change
- implements concurrent LES request serving even for a single peer.
- replaces the request cost estimation method with a cost table based on
benchmarks which gives much more consistent results. Until now the
allowed number of light peers was just a guess which probably contributed
a lot to the fluctuating quality of available service. Everything related
to request cost is implemented in a single object, the 'cost tracker'. It
uses a fixed cost table with a global 'correction factor'. Benchmark code
is included and can be run at any time to adapt costs to low-level
implementation changes.
- reimplements flowcontrol.ClientManager in a cleaner and more efficient
way, with added capabilities: There is now control over bandwidth, which
allows using the flow control parameters for client prioritization.
Target utilization over 100 percent is now supported to model concurrent
request processing. Total serving bandwidth is reduced during block
processing to prevent database contention.
- implements an RPC API for the LES servers allowing server operators to
assign priority bandwidth to certain clients and change prioritized
status even while the client is connected. The new API is meant for
cases where server operators charge for LES using an off-protocol mechanism.
- adds a unit test for the new client manager.
- adds an end-to-end test using the network simulator that tests bandwidth
control functions through the new API.
* swarm/pss: fix data race on HandshakeController.symKeyIndex
The HandshakeController.symKeyIndex map was accessed concurrently.
Due to insufficient test coverage the race is not detected every time.
However, running TestClientHandshake 100 times seems to be enough to
reproduce the race.
Note: I've chosen HandshakeController.lock to protect
HandshakeController.symKeyIndex as that was already protected in a few
functions by that lock.
Additionally:
- removed unused testStore
- enabled tests in handshake_test.go as they pass
- removed code duplication by adding getSymKey()
* swarm/pss: fix a data race on HandshakeController.keyC
* swarm/pss: fix data races on Pss.symKeyPool
* swarm/storage/mock: implement listings methods for mem and rpc stores
* swarm/storage/mock/rpc: add comments and newTestStore helper function
* swarm/storage/mock/mem: add missing comments
* swarm/storage/mock: add comments to new types and constants
* swarm/storage/mock/db: implement listings for mock/db global store
* swarm/storage/mock/test: add comments for MockStoreListings
* swarm/storage/mock/explorer: initial implementation
* cmd/swarm/global-store: add chunk explorer
* cmd/swarm/global-store: add chunk explorer tests
* swarm/storage/mock/explorer: add tests
* swarm/storage/mock/explorer: add swagger api definition
* swarm/storage/mock/explorer: not-zero test values for invalid addr and key
* swarm/storage/mock/explorer: test wildcard cors origin
* swarm/storage/mock/db: renames based on Fabio's suggestions
* swarm/storage/mock/explorer: add more comments to testHandler function
* cmd/swarm/global-store: terminate subprocess with Kill in tests
* swarm/storage/localstore: close localstore in two tests
* swarm/storage/localstore: fix a possible deadlock in tests
* swarm/storage/localstore: re-enable pull subs tests for travis race
* swarm/storage/localstore: stop sending to errChan on context done in tests
* swarm/storage/localstore: better want check in readPullSubscriptionBin
* swarm/storage/localstore: protect chunk put with addr lock in tests
* swarm/storage/localstore: wait for gc and writeGCSize workers on Close
* swarm/storage/localstore: more correct testDB_collectGarbageWorker
* swarm/storage/localstore: set DB Close timeout to 5s
The current loop continuation condition is always true, as a uint8
is always less than or equal to 255 (its maximum value). Since the
loop starts with the value 1, loop termination can instead be
guaranteed by exiting once the value overflows to 0.
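A minimal sketch of the bug and the fix, assuming the original condition was of the `i <= 255` form:

```go
package main

import "fmt"

func main() {
	// Broken version (never terminates, shown only as a comment):
	//   for i := uint8(1); i <= 255; i++ { ... } // always true for a uint8

	// Fixed: starting at 1, the counter wraps to 0 after 255, so the
	// condition i != 0 guarantees termination after 255 iterations.
	n := 0
	for i := uint8(1); i != 0; i++ {
		n++
	}
	fmt.Println(n) // 255
}
```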
* swarm/storage: increase mget timeout in common_test.go
TestDbStoreCorrect_1k sometimes timed out with -race on Travis.
--- FAIL: TestDbStoreCorrect_1k (24.63s)
common_test.go:194: testStore failed: timed out after 10s
* swarm: remove unused vars from TestSnapshotSyncWithServer
nodeCount and chunkCount are returned from setupSim, and we use
those values.
* swarm: move race/norace helpers from stream to testutil
As we will need to use the flag in other packages, too.
* swarm: refactor TestSwarmNetwork case
Extract long running test cases for better visibility.
* swarm/network: skip TestSyncingViaGlobalSync with -race
As it panics on Travis.
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x7e351b]
* swarm: run TestSwarmNetwork with fewer nodes with -race
As otherwise we always get test failure with `network_test.go:374:
context deadline exceeded` even with raised `Timeout`.
* swarm/network: run TestDeliveryFromNodes with fewer nodes with -race
Test on Travis times out with 8 or more nodes if -race flag is present.
* swarm/network: smaller node count for discovery tests with -race
TestDiscoveryPersistenceSimulationSimAdapters failed on Travis with
`-race` flag present. The failure was due to extensive memory usage,
coming from the CGO runtime. Using a smaller node count resolves the
issue.
=== RUN TestDiscoveryPersistenceSimulationSimAdapter
==7227==ERROR: ThreadSanitizer failed to allocate 0x80000 (524288) bytes of clock allocator (error code: 12)
FATAL: ThreadSanitizer CHECK failed: ./gotsan.cc:6976 "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
FAIL github.com/ethereum/go-ethereum/swarm/network/simulations/discovery 804.826s
* swarm/network: run TestFileRetrieval with fewer nodes with -race
Otherwise we get a failure due to extensive memory usage, as the CGO
runtime cannot allocate more bytes.
=== RUN TestFileRetrieval
==7366==ERROR: ThreadSanitizer failed to allocate 0x80000 (524288) bytes of clock allocator (error code: 12)
FATAL: ThreadSanitizer CHECK failed: ./gotsan.cc:6976 "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
FAIL github.com/ethereum/go-ethereum/swarm/network/stream 155.165s
* swarm/network: run TestRetrieval with fewer nodes with -race
Otherwise we get a failure due to extensive memory usage, as the CGO
runtime cannot allocate more bytes ("ThreadSanitizer failed to
allocate").
* swarm/network: skip flaky TestGetSubscriptionsRPC on Travis w/ -race
Test fails a lot with something like:
streamer_test.go:1332: Real subscriptions and expected amount don't match; real: 0, expected: 20
* swarm/storage: skip TestDB_SubscribePull* tests on Travis w/ -race
Travis just hangs...
ok github.com/ethereum/go-ethereum/swarm/storage/feed/lookup 1.307s
keepalive
keepalive
keepalive
or panics after a while.
Without these tests the race detector job is now stable. Let's
investigate these tests in a separate issue:
https://github.com/ethersphere/go-ethereum/issues/1245
* swarm/network: WIP Span request span until delivery and put
* swarm/storage: Introduce new trace across single fetcher lifespan
* swarm/network: Put span ids for sendpriority in context value
* swarm: Add global span store in tracing
* swarm/tracing: Add context key constants
* swarm/tracing: Add comments
* swarm/storage: Remove redundant fix for filestore
* swarm/tracing: Elaborate constants comments
* swarm/network, swarm/storage, swarm/tracing: Minor cleanup
* swarm/network/stream: fix a goroutine leak in Registry
* swarm/network, swarm/network/stream: Kademlia close addr count and depth change chans
* swarm/network/stream: rename close channel to quit
* swarm/network/stream: fix sync between NewRegistry goroutine and Close method
Simplifies the transaction presence check by using a function to
determine if the transaction is present in the block provided
to trace; the original code had a redundant parenthesis and used
an `exist` flag to dictate control flow.
Package crypto works with or without cgo, which is great. However, to make it
work without cgo required setting the build tag `nocgo`. It's common to disable
cgo by instead just setting the environment variable `CGO_ENABLED=0`. Setting
this environment variable does _not_ implicitly set the build tag `nocgo`. So
projects that try to build the crypto package with `CGO_ENABLED=0` will fail. I
have done this myself several times. Until today, I had just assumed that this
meant that this package requires cgo.
But a small build tag change will make this case work. Instead of using `nocgo`
and `!nocgo`, we can use `!cgo` and `cgo`, respectively. The `cgo` build tag is
automatically set if cgo is enabled, and unset if it is disabled.
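A sketch of the tag swap on the pure-Go variant (file name from the crypto package; the cgo-backed twin carries the mirror-image constraint):

```go
// signature_nocgo.go (sketch): with the new tag, this pure-Go variant is
// selected automatically whenever cgo is disabled - including via
// CGO_ENABLED=0 - with no extra -tags flag needed.

// +build !cgo

package crypto
```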
This changes the default location of the data directory to use the LOCALAPPDATA
environment variable, resolving issues with remote home directories and improving
compatibility with Cygwin.
Fixes #2239, fixes #2237, fixes #16437
* swarm/network: DRY out repeated giga comment
I do not necessarily agree with the way we wait for event propagation.
But I truly disagree with having duplicated giga comments.
* p2p/simulations: encapsulate Node.Up field so we avoid data races
The Node.Up field was accessed concurrently without "proper" locking.
There was a lock on Network and that was used sometimes to access
the field. Other times the locking was missed and we had
a data race.
For example: https://github.com/ethereum/go-ethereum/pull/18464
The case above was solved, but there were still intermittent/hard to
reproduce races. So let's solve the issue permanently.
resolves: ethersphere/go-ethereum#1146
* p2p/simulations: fix unmarshal of simulations.Node
Making the Node.Up field private in 13292ee897
broke TestHTTPNetwork and TestHTTPSnapshot, because the default
UnmarshalJSON does not handle unexported fields.
Important: The fix is partial and not proper to my taste. But I cut
scope as I think the fix may require a change to the current
serialization format. New ticket:
https://github.com/ethersphere/go-ethereum/issues/1177
* p2p/simulations: Add a sanity test case for Node.Config UnmarshalJSON
* p2p/simulations: revert back to defer Unlock() pattern for Network
It's a good pattern to call `defer Unlock()` right after `Lock()` so
(new) error cases won't forget to unlock. Let's get back to that pattern.
The pattern was abandoned in 85a79b3ad3,
while fixing a data race. That data race does not exist anymore,
since the Node.Up field got hidden behind its own lock.
* p2p/simulations: consistent naming for test providers Node.UnmarshalJSON
* p2p/simulations: remove JSON annotation from private fields of Node
As unexported fields are not serialized.
* p2p/simulations: fix deadlock in Network.GetRandomDownNode()
Problem: GetRandomDownNode() locks -> getDownNodeIDs() ->
GetNodes() tries to lock -> deadlock
On Network type, unexported functions must assume that `net.lock`
is already acquired and should not call exported functions which
might try to lock again.
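A hedged sketch of that locking convention, with simplified names:

```go
package sketch

import "sync"

type network struct {
	lock  sync.RWMutex
	nodes []string
	up    map[string]bool
}

// GetRandomDownNode is exported, so it takes the lock itself and only
// calls unexported helpers afterwards.
func (n *network) GetRandomDownNode() string {
	n.lock.RLock()
	defer n.lock.RUnlock()
	if ids := n.getDownNodeIDs(); len(ids) > 0 {
		return ids[0] // selection simplified; the real code picks randomly
	}
	return ""
}

// getDownNodeIDs assumes n.lock is already held and must never call an
// exported method, or it would try to lock again and deadlock.
func (n *network) getDownNodeIDs() []string {
	var down []string
	for _, id := range n.nodes {
		if !n.up[id] {
			down = append(down, id)
		}
	}
	return down
}
```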
* p2p/simulations: ensure method conformity for Network
Connect* methods were moved to p2p/simulations.Network from
swarm/network/simulation. However these new methods did not follow
the pattern of Network methods, i.e., every exported method locks
the whole Network either for read or write.
* p2p/simulations: fix deadlock during network shutdown
`TestDiscoveryPersistenceSimulationSimAdapter` often got into deadlock.
The execution was stuck on two locks, i.e, `Kademlia.lock` and
`p2p/simulations.Network.lock`. Usually the test got stuck once in every
20 executions, with high confidence.
`Kademlia` was stuck in `Kademlia.EachAddr()` and `Network` in
`Network.Stop()`.
Solution: in `Network.Stop()` `net.lock` must be released before
calling `node.Stop()` as stopping a node (somehow - I did not find
the exact code path) causes `Network.InitConn()` to be called from
`Kademlia.SuggestPeer()` and that blocks on `net.lock`.
Related ticket: https://github.com/ethersphere/go-ethereum/issues/1223
* swarm/state: simplify if statement in DBStore.Put()
* p2p/simulations: remove faulty godoc from private function
The comment started with the wrong method name.
The method is simple and self explanatory. Also, it's private.
=> Let's just remove the comment.
* swarm/network: new saturation function implementation
* swarm/network: re-added saturation func in Kademlia as it is used elsewhere
* swarm/network: saturation with higher MinBinSize
* swarm/network: PeersPerBin with depth check
* swarm/network: edited tests to pass new saturated check
* swarm/network: minor fix saturated check
* swarm/network/simulations/discovery: fixed renamed RPC call
* swarm/network: renamed to isSaturated and returns bool
* swarm/network: early depth check
* signer/clef: make use of json-rpc notification
* signer: tidy up output of OnApprovedTx
* accounts/external, signer: implement remote signing of text, make accounts_sign take hexdata
* clef: added basic testscript
* signer, external, api: add clique signing test to debug rpc, fix clique signing in clef
* signer: fix clique interoperability between geth and clef
* clef: rename networkid switch to chainid
* clef: enable chainid flag
* clef, signer: minor changes from review
* clef: more tests for signer
* common/fdlimit: cap on MacOS file limits, fixes #18994
* common/fdlimit: fix Maximum-check to respect OPEN_MAX
* common/fdlimit: return error if OPEN_MAX is exceeded in Raise()
* common/fdlimit: goimports
* common/fdlimit: check value after setting fdlimit
* common/fdlimit: make comment a bit more descriptive
* cmd/utils: make fdlimit happy path a bit cleaner
* node: close AccountsManager in new Close method
* p2p/simulations, p2p/simulations/adapters: handle node close on shutdown
* node: move node ephemeralKeystore cleanup to stop method
* node: call Stop in Node.Close method
* cmd/geth: close node.Node created with makeFullNode in cli commands
* node: close Node instances in tests
* cmd/geth, node: minor code style fixes
* cmd, console, miner, mobile: proper node Close() termination
* Named functions and defined a basic EIP191 content type list
* Written basic content type functions
* Added ecRecover method in the clef api
* Updated the extapi changelog and added indications in the README
* Changed the version of the external API
* Added tests for 0x45
* Implementing UnmarshalJSON() for TypedData
* Working on TypedData
* Solved the auditlog issue
* Changed method to signTypedData
* Changed mimes and implemented the 'encodeType' function for EIP-712
* Polished docstrings, ran goimports and swapped fmt.Errorf with errors.New where possible
* Drafted recursive encodeData
* Ran goimports and gofmt
* Drafted first version of EIP-712, including tests
* Temporarily switched to using common.Address in tests
* Drafted text/validator and rewrote []byte as hexutil.Bytes
* Solved stringified address encoding issue
* Changed the property type required by signData from bytes to interface{}
* Fixed bugs in 'data/typed' signs
* Brought legal warning back after temporarily disabling it for development
* Added example RPC calls for account_signData and account_signTypedData
* Polished and fixed PR
* Solved malformed data panics and also wrote tests
* Added alphabetical sorting to type dependencies
* Added pretty print to data/typed UI
* signer: more tests for typed data
* Fixed TestMalformedData4 errors and renamed IsValid to Validate
* Fixed more new failing tests and deanonymised some functions
* Added types to EIP712 output in cliui
* Fixed regexp issues
* Added pseudo-failing test
* Fixed false positive test
* Added PrettyPrint method
* signer: refactor formatting and UI
* signer: make ui use new message format for signing
* Fixed breaking changes
* Fixed rules_test failing test
* Added extra regexp for reference types
* signer: more hard types
* Fixed failing test, formatted files
* signer: use golang/x keccak
* Fixed goimports error
* clef, signer: address some review concerns
* Implemented latest recommendations
* Fixed comments and uintint256 issue
* accounts, signer: fix mimetypes, add interface to sign data with passphrase
* signer, accounts: remove duplicated code, pass hash preimages to signing
* signer: prevent panic in type assertions, make cliui print rawdata as quotable-safe
* signer: linter fixes, remove deprecated crypto dependency
* accounts: fix goimport
New APIs added:
client.RegisterName(namespace, service) // makes service available to server
client.Notify(ctx, method, args...) // sends a notification
ClientFromContext(ctx) // to get a client in handler method
This is essentially a rewrite of the server-side code. JSON-RPC
processing code is now the same on both server and client side. Many
minor issues were fixed in the process and there is a new test suite for
JSON-RPC spec compliance (and non-compliance in some cases).
List of behavior changes:
- Method handlers are now called with a per-request context instead of a
per-connection context. The context is canceled right after the method
returns.
- Subscription error channels are always closed when the connection
ends. There is no need to also wait on the Notifier's Closed channel
to detect whether the subscription has ended.
- Client now omits "params" instead of sending "params": null when there
are no arguments to a call. The previous behavior was not compliant
with the spec. The server still accepts "params": null.
- Floating point numbers are allowed as "id". The spec doesn't allow
them, but we handle request "id" as json.RawMessage and guarantee that
the same number will be sent back.
- Logging is improved significantly. There is now a message at DEBUG
level for each RPC call served.
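A hedged usage sketch of the new client-side APIs listed above (service name, method and endpoint invented for illustration):

```go
package main

import (
	"context"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

type pingService struct{}

func (pingService) Ping() string { return "pong" }

func main() {
	client, err := rpc.Dial("ws://127.0.0.1:8546") // endpoint assumed
	if err != nil {
		log.Fatal(err)
	}
	// Expose a service on the client so the server can call back into it.
	if err := client.RegisterName("test", new(pingService)); err != nil {
		log.Fatal(err)
	}
	// Send a notification: a call without a response.
	if err := client.Notify(context.Background(), "test_ping"); err != nil {
		log.Fatal(err)
	}
}
```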
This change clears up confusion around the two ways in which nodes
can be added to the table.
When a neighbors packet is received as a reply to findnode, the nodes
contained in the reply are added as 'seen' entries if sufficient space
is available.
When a ping is received and the endpoint verification has taken place,
the remote node is added as a 'verified' entry or moved to the front of
the bucket if present. This also updates the node's IP address and port
if they have changed.
* cmd, eth: Added in the flag to stop geth once in sync based on input
* cmd, eth: 16400 Add an option to stop geth once in sync.
* cmd: 16400 Add an option to stop geth once in sync. WIP
* cmd/geth/main, les/fetcher: added in light mode support
* cmd/geth/main, les/fetcher: cleaned comments and code for light mode
* cmd: 16400 Fixed formatting issue and cleaned code
* cmd, eth, les: 16400 Fixed formatting issues
* cmd, eth, les: Performed gofmt to update formatting
* cmd, eth, les: Fixed bugs resulting formatting
* cmd/geth, eth/, les: switched to downloader event
* eth: Fixed styling and gen_config
* eth/: Fix nil error in config file
* cmd/geth: Updated countdown log
* les/fetcher.go: Removed deprecated channel
* eth/downloader.go: Removed deprecated select
* cmd/geth, cmd/utils: Fixed minor issues
* eth: Reverted config files to proper format
* eth: Fixed typo in config file
* cmd/geth, eth/downloader: Updated code to use header time stamp
* eth/downloader: Changed the time threshold to 10 minutes
* cmd/geth, eth/downloader: Updated downloading event to pass latest header
* cmd/geth: Updated main to use right timer object
* cmd/geth: Removed unused failed event
* cmd/geth: added in correct time field with type assertion
* cmd/geth, cmd/utils: Updated flag to use boolean
* cmd/geth, cmd/utils, eth/downloader: Cleaned up code based on recommendations
* cmd/geth: Removed unneeded import
* cmd/geth, eth/downloader: fixed event field and suggested changes
* cmd/geth, cmd/utils: Updated flag and linting issue
This change resolves multiple issues around handling of endpoint proofs.
The proof is now done separately for each IP and completing the proof
requires a matching ping hash.
Also remove waitping because it's equivalent to sleep. waitping was
slightly more efficient, but that may cause issues with findnode if
packets are reordered and the remote end sees findnode before pong.
Logging of received packets was hitherto done after handling the packet,
which meant that sent replies were logged before the packet that
generated them. This change splits up packet handling into 'preverify'
and 'handle'. The error from 'preverify' is logged, but 'handle' happens
after the message is logged. This fixes the order. Packet logs now
contain the node ID.
dput --passive should make repo pushes from Travis work again.
dput --no-upload-log works around an issue I had while uploading locally.
debuild -d says that debuild shouldn't check for build dependencies when
creating the source package. This option is needed to make builds work
in environments where the installed Go version doesn't match the
declared dependency in the source package.
This PR replaces the metrics.influxdb.host.tag cmd-line flag with metrics.influxdb.tags - comma-separated key/value tags that are passed to the InfluxDB reporter, so that we can index measurements with multiple tags, and not just one host tag.
This will be useful for Swarm, where we want to index measurements not just with the host tag, but also with bzzkey and git commit version (for long-running deployments).
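For example, an invocation might look like this (tag values purely illustrative):

```
geth --metrics --metrics.influxdb \
  --metrics.influxdb.tags "host=swarm-01,bzzkey=1a2b,version=v1.8.23"
```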
When some of the same messages are redefined anywhere in a Go project,
the protobuf package panics (see
https://github.com/golang/protobuf/issues/178).
Since this package is internal, there is no way to work around it, as
one cannot use it directly, but also cannot define the same messages.
There is no downside in making the package accessible.
This change unbreaks the build and removes racy access to
disableCheckFreq. Even though the field is set while holding
the lock, it was read outside of the protected section.
When opening the wallet, ask for passphrase as well as for the PIN
and return the relevant error (PIN/passphrase required). Open must then
be called again with either PIN or passphrase to advance the process.
This also updates the console bridge to support passphrase authentication.
This PR adds a new fork which disables EIP-1283. Internally it's called Petersburg,
but the genesis/config field is ConstantinopleFix.
The block numbers are:
7280000 for Constantinople on Mainnet
7280000 for ConstantinopleFix on Mainnet
4939394 for ConstantinopleFix on Ropsten
9999999 for ConstantinopleFix on Rinkeby (real number decided later)
This PR also defaults to using the same ConstantinopleFix number as whatever
Constantinople is set to. That is, it will default to mainnet behaviour if ConstantinopleFix
is not set. This means that for private networks which have already transitioned
to Constantinople, this PR will break the network unless ConstantinopleFix is
explicitly set!
* Initial work on a graphql API
* Added receipts, and more transaction fields.
* Finish receipts, add logs
* Add transactionCount to block
* Add types and .
* Update Block type to be compatible with ethql
* Rename nonce to transactionCount in Account, to be compatible with ethql
* Update transaction, receipt and log to match ethql
* Add query operator, for a range of blocks
* Added ommerCount to Block
* Add transactionAt and ommerAt to Block
* Added sendRawTransaction mutation
* Add Call and EstimateGas to graphQL API
* Refactored to use hexutil.Bytes instead of HexBytes
* Replace BigNum with hexutil.Big
* Refactor call and estimateGas to use ethapi struct type
* Replace ethgraphql.Address with common.Address
* Replace ethgraphql.Hash with common.Hash
* Converted most quantities to Long instead of Int
* Add support for logs
* Fix bug in runFilter
* Restructured Transaction to work primarily with headers, so uncle data is reported properly
* Add gasPrice API
* Add protocolVersion API
* Add syncing API
* Moved schema into its own source file
* Move some single use args types into anonymous structs
* Add doc-comments
* Fixed backend fetching to use context
* Added (very) basic tests
* Add documentation to the graphql schema
* Fix reversion for formatting of big numbers
* Correct spelling error
* s/BigInt/Long/
* Update common/types.go
* Fixes in response to review
* Fix lint error
* Updated calls on private functions
* Fix typo in graphql.go
* Rollback ethapi breaking changes for graphql support
Co-Authored-By: Arachnid <arachnid@notdot.net>
Receipts may be null for a very short time in some conditions. In this case, we should not add the null value to the cache, because you will not get the right result if you keep requesting that receipt.
* swarm/network: fix data race in stream.(*Peer).handleOfferedHashesMsg()
handleOfferedHashesMsg() contained a data race:
- read => in a goroutine, call to c.batchDone()
- write => in the main thread, write to c.sessionAt
c.batchDone() contained a call to c.AddInterval(). Client was a value
receiver for AddInterval, so on a c.AddInterval() call the whole client
struct got copied (read) while one of its fields was modified in
handleOfferedHashesMsg() (write).
fixes ethersphere/go-ethereum#1086
* swarm/network: simplify some trivial statements
* accounts/abi: fix name styling when unpacking abi fields w/ underscores
ABI fields with underscores that are being unpacked
into structs expect structs of the following form:
int_one -> Int_one
whereas in abigen the generated structs are camelcased
int_one -> IntOne
so updated the unpack method to expect camelcased structs as well.
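As a small sketch, both of these target shapes can now receive an ABI output named int_one:

```go
package sketch

import "math/big"

type underscored struct {
	Int_one *big.Int // the previously required form
}

type camelCased struct {
	IntOne *big.Int // accepted as well after this change
}
```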
* swarm/network: Revised depth calculation with tests
* swarm/network: WIP remove redundant "full" function
* swarm/network: WIP peerpot refactor
* swarm/network: Make test methods submethod of peerpot and embed kad
* swarm/network: Remove commented out code
* swarm/network: Rename health test functions
* swarm/network: Too many n's
* swarm/network: Change hive Healthy func to accept addresses
* swarm/network: Add Healthy proxy method for api in hive
* swarm/network: Skip failing test out of scope for PR
* swarm/network: Skip all tests dependent on SuggestPeers
* swarm/network: Remove commented code and useless kad Pof member
* swarm/network: Remove more unused code, add counter on depth test errors
* swarm/network: WIP Create Healthy assertion tests
* swarm/network: Roll back health related methods receiver change
* swarm/network: Hardwire network minproxbinsize in swarm sim
* swarm/network: Rework Health test to strict
Pending add test for saturation
And add test for as many as possible up to saturation
* swarm/network: Skip discovery tests (dependent on SuggestPeer)
* swarm/network: Remove useless minProxBinSize in stream
* swarm/network: Remove unnecessary testing.T param to assert health
* swarm/network: Implement t.Helper() in checkHealth
* swarm/network: Rename check back to assert now that we have helper magic
* swarm/network: Revert WaitTillHealthy change (deferred to nxt PR)
* swarm/network: Kademlia tests GotNN => ConnectNN
* swarm/network: Renames and comments
* swarm/network: Add comments
* p2p/simulation: WIP minimal snapshot test
* p2p/simulation: Add snapshot create, load and verify to snapshot test
* build: add test tag for tests
* p2p/simulations, build: Revert travis change, build test sym always
* p2p/simulations: Add comments, timeout check on additional events
* p2p/simulation: Add benchmark template for minimal peer protocol init
* p2p/simulations: Remove unused code
* p2p/simulation: Correct timer reset
* p2p/simulations: Put snapshot check events in buffer and call blocking
* p2p/simulations: TestSnapshot fail if Load function returns early
* p2p/simulations: TestSnapshot wait for all connections before returning
* p2p/simulation: Revert to before wait for snap load (5e75594)
* p2p/simulations: add "conns after load" subtest to TestSnapshot
and nudge
* swarm/shed: add metrics to each shed db
* swarm/shed: push metrics prefix up
* swarm/shed: rename prefix to metricsPrefix
* swarm/shed: unexport Meter, remove Mutex for quit channel
* geth/core/eth: implement constantinople override flag
* les: implement constantinople override flag for les clients
* cmd/geth, eth, les: fix typo, move flag to experimentals
* Rejects peers that respond with a different hash for any of the passed in block numbers.
* Meant for emergency situations when the network forks unexpectedly.
* swarm/network: Hive - do not notify peer if discovery is disabled
* p2p/simulations: validate all connections on loading a snapshot
* p2p/simulations: track all connections on snapshot loading
* p2p/simulations: add snapshotLoadTimeout variable
* p2p/simulations: ignore control events in snapshot load
* p2p/simulations: simplify event loop synchronization
* p2p/simulations: return already connected error from Load function
* p2p/simulations: log warning on snapshot loading disconnection
Until this commit, when sending an RPC request that called `NewEVM`, a blank `vm.Config`
would be taken so as to set some options, based on the default configuration. If some extra
configuration switches were passed to the blockchain, those would be ignored.
This PR adds a function to get the config from the blockchain, and this is what is now used
for RPC calls.
Some subsequent changes need to be made, see https://github.com/ethereum/go-ethereum/pull/17955#pullrequestreview-182237244
for the details of the discussion.
* swarm/network/stream/: added stream protocol version match tests
* Increase BZZ version due to streamer version change; version tests
* swarm/network: increased hive and test protocol version
TryUpdate does not call t.trie.TryUpdate(key, value) but calls t.trie.TryDelete
instead. The update operation simply deletes the corresponding entry, though
it could be retrieved later via ODR. However, this adds further network overhead.
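A hedged sketch of the described behaviour and its correction (receiver type simplified, not the actual light package code):

```go
package sketch

type trieLike interface {
	TryUpdate(key, value []byte) error
	TryDelete(key []byte) error
}

type odrTrie struct{ trie trieLike }

// Before (sketch): updates were turned into deletes, so the entry had to
// be re-fetched via ODR later, adding network overhead.
func (t *odrTrie) tryUpdateOld(key, value []byte) error {
	return t.trie.TryDelete(key)
}

// After (sketch): delegate to the underlying trie's real update.
func (t *odrTrie) TryUpdate(key, value []byte) error {
	return t.trie.TryUpdate(key, value)
}
```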
- Have `${DataDir}/bzzd.ipc` as IPC path default.
- Respect the `--datadir` flag.
- Keep only the global `--ipcpath` flag and drop the local `--ipcpath` flag
as flags might overwrite each other. (Note: before global `--ipcpath`
was ignored even if it was set)
fixes ethersphere#795
- Replace "crypto/rand" to "math/rand" for files content generation
- Remove swarm/network_test.go.Shuffle and swarm/btm/btm_test.go.Shuffle - because go1.9 support dropped (see https://github.com/ethereum/go-ethereum/pull/17807 and comments to swarm/network_test.go.Shuffle)
On file access, LDBStore's tryAccessIdx() function created a faulty
GC Index Data entry, because it did not index the ikey correctly.
That caused the chunk addresses/hashes to start with '00' while the last
two digits were dropped. => Incorrect chunk address.
Besides the fix, the commit also contains a schema change which will
run the CleanGCIndex() function to clean the GC index from erroneous
entries.
Note: CleanGCIndex() rebuilds the index from scratch which can take
a really-really long time with a huge DB (possibly an hour).
Access count was not incremented when a chunk was retrieved
from the cache, so the garbage collector might have deleted the most
frequently accessed chunk from disk.
Co-authored-by: Ferenc Szabo <ferenc.szabo@ethereum.org>
ethereum/go-ethereum#16734 introduced BlockHash to the FilterQuery
struct. However, ethclient was not updated to include BlockHash in the actual
RPC request.
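With the fix, filtering by block hash works end to end; a minimal sketch (endpoint and hash are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("ws://127.0.0.1:8546") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	hash := common.HexToHash("0x00")            // placeholder block hash
	q := ethereum.FilterQuery{BlockHash: &hash} // now forwarded in eth_getLogs
	logs, err := client.FilterLogs(context.Background(), q)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("logs in block:", len(logs))
}
```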
* RequestFromPeers does not use peers marked as lightnode
* fix warning about variable name
* write tests for RequestFromPeers
* lightnodes should be omitted from the addressbook
* resolve pr comments regarding logging, formatting and comments
* resolve pr comments regarding comments and added a missing newline
* add assertions to check peers in live connections
* core: speed up GenerateChain
Use a mock implementation of ChainReader instead of creating
and destroying a BlockChain object for each generated block.
* eth/downloader: speed up tests by generating chain only once
This change reworks the downloader tests so they share a common test
blockchain instead of generating a chain in every test. The tests are
roughly twice as fast now.
* swarm: clean up unused private types and functions
These were identified by a code inspection tool.
* swarm/storage: move/add Proximity GoDoc from deleted private function
The mentioned proximity() private function was deleted in:
1ca8fc1e6f
* cmd/clef: replace password arg with prompt (#17829)
Entering passwords on the command line is not secure as it is easy to recover from bash_history or the process table.
1. The clef command addpw was renamed to setpw to better describe the functionality
2. The <password> argument was removed and replaced with an interactive prompt
* cmd/clef: remove undeclared variable
This adds the global accumulated refund counter to the standard
json output as a numeric json value. Previously this was not very
interesting since it was not used much, but with the new sstore
gas changes the value is a lot more interesting from a consensus
investigation perspective.
* swarm/network/stream: disambiguate chunk delivery messages (retrieval vs syncing)
* swarm/network/stream: addressed PR comments
* swarm/network/stream: stream protocol version change due to new message types in this PR
* first impl of eth_getProof
* fixed docu
* added comments and refactored based on comments from holiman
* created structs
* handle errors correctly
* change Value to *hexutil.Big in order to have the same output as parity
* use ProofList as return type
This change extends the peer metrics collection:
- traces the life-cycle of the peers
- meters the peer traffic separately for every peer
- creates event feed for the peer events
- emits the peer events
* swarm/network/stream: generalize SetNextBatch and add Server SessionIndex
* swarm/network/stream: fix a typo in comment
* swarm/network/stream: remove live argument from NewSwarmSyncerServer
This PR adds enode.LocalNode and integrates it into the p2p
subsystem. This new object is the keeper of the local node
record. For now, a new version of the record is produced every
time the client restarts. We'll make it smarter to avoid that in
the future.
There are a couple of other changes in this commit: discovery now
waits for all of its goroutines at shutdown and the p2p server
now closes the node database after discovery has shut down. This
fixes a leveldb crash in tests. p2p server startup is faster
because it doesn't need to wait for the external IP query
anymore.
This fixes a rare deadlock with the inproc adapter:
- A node is stopped, which acquires Network.lock.
- The protocol code being simulated (swarm/network in my case)
waits for its goroutines to shut down.
- One of those goroutines calls into the simulation to add a peer,
which waits for Network.lock.
The fix for the deadlock is really simple, just release the lock
before stopping the simulation node.
Other changes in this PR clean up the exec adapter so it reports
node startup errors better and remove the docker adapter because
it just adds overhead.
In the exec adapter, node information is now posted to a one-shot
server. This avoids log parsing and allows reporting startup
errors to the simulation host.
A small change in package node was needed because simulation
nodes use port zero. Node.{HTTP,WS}Endpoint now return the live
endpoints after startup by checking the TCP listener.
Notifier tracks whether subscriptions are 'active'. A subscription
becomes active when the subscription ID has been sent to the client. If
the client sends notifications in the request handler before the
subscription becomes active they are dropped. The tests tried to work
around this problem by always waiting 5s before sending the first
notification.
Fix it by buffering notifications until the subscription becomes active.
This speeds up all subscription tests.
Also fix TestSubscriptionMultipleNamespaces to wait for three messages
per subscription instead of six. The test now finishes just after all
notifications have been received and doesn't hit the 30s timeout anymore.
* travis: exclude non-test jobs for PRs
We don't usually look at these builders and not starting them
removes ~15min of build time.
* build: don't run vet before tests
Recent versions of Go run vet during 'go test' and we have
a dedicated lint job.
* build: use -timeout 5m for tests
Tests sometimes hang on Travis. CI runs are aborted after 10min with no
output. Adding the timeout means we get to see the stack trace for
timeouts.
- Mime types generator (the standard "mime" package relies on system settings, see mime.osInitMime)
- Changed swarm/api.Upload:
- simplify I/O throttling by semaphore primitive and use file name where possible
- f.Close() must be called in a defer - otherwise a panic or a future early return will cause a leak of file descriptors
- one error was suppressed
When CLI tests were spawning new nodes, the log level verbosity was
hard-coded as 6. So the Swarm process was always polluting the test
output with TRACE level logs.
Now `go test -v ./cmd/swarm -loglevel 0` works as expected.
* swarm/storage/mru: Adaptive Frequency
swarm/storage/mru/lookup: fixed getBaseTime
Added NewEpoch constructor
swarm/api/client: better error handling in GetResource()
swarm/storage/mru: Renamed structures.
Renamed ResourceMetadata to ResourceID.
Renamed ResourceID.Name to ResourceID.Topic
swarm/storage/mru: Added binarySerializer interface and test tools
swarm/storage/mru/lookup: Changed base time to time and + marshallers
swarm/storage/mru: Added ResourceID (former resourceMetadata)
swarm/storage/mru: Added ResourceViewId and serialization tests
swarm/storage/mru/lookup: fixed epoch unmarshaller. Added Epoch Equals
swarm/storage/mru: Fixes as per review comments
cmd/swarm: reworded resource create/update help text regarding topic
swarm/storage/mru: Added UpdateLookup and serializer tests
swarm/storage/mru: Added UpdateHeader, serializers and tests
swarm/storage/mru: changed UpdateAddr / epoch to Base()
swarm/storage/mru: Added resourceUpdate serializer and tests
swarm/storage/mru: Added SignedResourceUpdate tests and serializers
swarm/storage/mru/lookup: fixed GetFirstEpoch bug
swarm/storage/mru: refactor, comments, cleanup
Also added tests for Topic
swarm/storage/mru: handler tests pass
swarm/storage/mru: all resource package tests pass
swarm/storage/mru: resource test pass after adding
timestamp checking support
swarm/storage/mru: Added JSON serializers to ResourceIDView structures
swarm/storage/mru: Server, client, API test pass
swarm/storage/mru: server test pass
swarm/storage/mru: Added topic length check
swarm/storage/mru: removed some literals,
improved "previous lookup" test case
swarm/storage/mru: some fixes and comments as per review
swarm/storage/mru: first working version without metadata chunk
swarm/storage/mru: Various fixes as per review
swarm/storage/mru: client test pass
swarm/storage/mru: resource query strings and manifest-less queries
swarm/storage/mru: simplify naming
swarm/storage/mru: first autofreq working version
swarm/storage/mru: renamed ToValues to AppendValues
swarm/resource/mru: Added ToValues / FromValues for URL query strings
swarm/storage/mru: Changed POST resource to work with query strings.
No more JSON.
swarm/storage/mru: removed resourceid
swarm/storage/mru: Opened up structures
swarm/storage/mru: Merged Request and SignedResourceUpdate
swarm/storage/mru: removed initial data from CLI resource create
swarm/storage/mru: Refactor Topic as a direct fixed-length array
swarm/storage/mru/lookup: Comprehensive GetNextLevel tests
swarm/storage/mru: Added comments
Added length checks in Topic
swarm/storage/mru: fixes in tests and some code comments
swarm/storage/mru/lookup: new optimized lookup algorithm
swarm/api: moved getResourceView to api out of server
swarm/storage/mru: Lookup algorithm working
swarm/storage/mru: comments and renamed NewLookupParams
Deleted commented code
swarm/storage/mru/lookup: renamed Epoch.LaterThan to After
swarm/storage/mru/lookup: Comments and tidying naming
swarm/storage/mru: fix lookup algorithm
swarm/storage/mru: exposed lookup hint
removed updateheader
swarm/storage/mru/lookup: changed GetNextEpoch for initial values
swarm/storage/mru: resource tests pass
swarm/storage/mru: valueSerializer interface and tests
swarm/storage/mru/lookup: Comments, improvements, fixes, more tests
swarm/storage/mru: renamed UpdateLookup to ID, LookupParams to Query
swarm/storage/mru: renamed query receiver var
swarm/cmd: MRU CLI tests
* cmd/swarm: remove rogue fmt
* swarm/storage/mru: Add version / header for future use
* swarm/storage/mru: Fixes/comments as per review
cmd/swarm: remove rogue fmt
swarm/storage/mru: Add version / header for future use
* swarm/storage/mru: fix linter errors
* cmd/swarm: Sped up TestCLIResourceUpdate
* signer: remove local path disclosure from extapi
* signer: show more data in cli ui
* rpc: make http server forward UA and Origin via Context
* signer, clef/core: ui changes + display UA and Origin
* signer: cliui - indicate less trust in remote headers, see https://github.com/ethereum/go-ethereum/issues/17637
* signer: prevent possibility to swap KV-entries in aes_gcm storage, fixes #17635
* signer: remove ecrecover from external API
* signer,clef: default reject instead of warn + validate new passwords. fixes #17632 and #17631
* signer: check calldata length even if no ABI signature is present
* signer: fix failing testcase
* clef: remove account import from external api
* signer: allow space in passwords, improve error message
* signer/storage: fix typos
The contributing instructions in the README are not in the GitHub contributing
guide, which means that people coming from the GitHub issues are less likely to
see them.
Package p2p/enode provides a generalized representation of p2p nodes
which can contain arbitrary information in key/value pairs. It is also
the new home for the node database. The "v4" identity scheme is also
moved here from p2p/enr to remove the dependency on Ethereum crypto from
that package.
Record signature handling is changed significantly. The identity scheme
registry is removed and acceptable schemes must be passed to any method
that needs identity. This means records must now be validated explicitly
after decoding.
The enode API is designed to make signature handling easy and safe: most
APIs around the codebase work with enode.Node, which is a wrapper around
a valid record. Going from enr.Record to enode.Node requires a valid
signature.
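For example, decoding a record and turning it into a node goes through the signature check; a sketch using the public API (enr, enode, rlp packages):

```go
func nodeFromRecord(input []byte) (*enode.Node, error) {
	var r enr.Record
	if err := rlp.DecodeBytes(input, &r); err != nil {
		return nil, err
	}
	// enode.New verifies the record's signature against the accepted
	// schemes; only a valid record becomes an enode.Node.
	return enode.New(enode.ValidSchemes, &r)
}
```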
* p2p/discover: port to p2p/enode
This ports the discovery code to the new node representation in
p2p/enode. The wire protocol is unchanged, this can be considered a
refactoring change. The Kademlia table can now deal with nodes using an
arbitrary identity scheme. This requires a few incompatible API changes:
- Table.Lookup is not available anymore. It used to take a public key
as argument because v4 protocol requires one. Its replacement is
LookupRandom.
- Table.Resolve takes *enode.Node instead of NodeID. This is also for
v4 protocol compatibility because nodes cannot be looked up by ID
alone.
- Types Node and NodeID are gone. Further commits in the series
  fix uses all over the codebase to deal with those removals.
* p2p: port to p2p/enode and discovery changes
This adapts package p2p to the changes in p2p/discover. All uses of
discover.Node and discover.NodeID are replaced by their equivalents from
p2p/enode.
New API is added to retrieve the enode.Node instance of a peer. The
behavior of Server.Self with discovery disabled is improved. It now
tries much harder to report a working IP address, falling back to
127.0.0.1 if no suitable address can be determined through other means.
These changes were needed for tests of other packages later in the
series.
* p2p/simulations, p2p/testing: port to p2p/enode
No surprises here, mostly replacements of discover.Node, discover.NodeID
with their new equivalents. The 'interesting' API changes are:
- testing.ProtocolSession tracks complete nodes, not just their IDs.
- adapters.NodeConfig has a new method to create a complete node.
These changes were needed to make swarm tests work.
Note that the NodeID change makes the code incompatible with old
simulation snapshots.
* whisper/whisperv5, whisper/whisperv6: port to p2p/enode
This port was easy because whisper uses []byte for node IDs and
URL strings in the API.
* eth: port to p2p/enode
Again, easy to port because eth uses strings for node IDs and doesn't
care about node information in any way.
* les: port to p2p/enode
Apart from replacing discover.NodeID with enode.ID, most changes are in
the server pool code. It now deals with complete nodes instead
of (Pubkey, IP, Port) triples. The database format is unchanged for now,
but we should probably change it to use the node database later.
* node: port to p2p/enode
This change simply replaces discover.Node and discover.NodeID with their
new equivalents.
* swarm/network: port to p2p/enode
Swarm has its own node address representation, BzzAddr, containing both
an overlay address (the hash of a secp256k1 public key) and an underlay
address (enode:// URL).
There are no changes to the BzzAddr format in this commit, but certain
operations such as creating a BzzAddr from a node ID are now impossible
because node IDs aren't public keys anymore.
Most swarm-related changes in the series remove uses of
NewAddrFromNodeID, replacing it with NewAddr which takes a complete node
as argument. ToOverlayAddr is removed because we can just use the node
ID directly.
Interpreter initialization is left to the PRs implementing them.
Options for external interpreters are passed after a colon in the
`--vm.ewasm` and `--vm.evm` switches.
* ethdb: unified code comment style.
* puppeth: it is unnecessary to alloc pre-funded to 256 addresses
* Revert "puppeth: it is unnecessary to alloc pre-funded to 256 addresses"
This reverts commit 5e04fbccf0.
* puppeth: fix comment typo
* Revert "ethdb: unified code comment style."
This reverts commit a581efb3f0.
* swarm/network: simplify kademlia/hive; rid interfaces
* swarm, swarm/network/stream, swarm/network/simulations, swarm/pss: adapt to new Kad API
* swarm/network: minor changes re review; add missing lock to NeighbourhoodDepthC
* swarm/storage/encryption: async segmentwise encryption/decryption
* swarm/storage: adapt hasherstore to encryption API change
* swarm/api: adapt RefEncryption for AC to new Encryption API
* swarm/storage/encryption: address review comments
Makes the Interface interface a bit more stateless and abstract.
This change is dictated by the EVMC design: EVMC tries to keep the responsibility for EVM features entirely inside the VMs, where feasible. This makes a VM "stateless", since it does not need to pass any information between executions; all information is included in the parameters of the execute function.
This commit does a few things at once:
- Updates the tests to contain the latest data from ethereum/tests repo.
- Enables Constantinople state tests. This is needed to be able to
fuzz-test the evm with constantinople rules.
- Fixes the error in opSAR that we've known about for some time. I was
kind of saving it to see if we hit upon it with the random test
generator, but it's difficult to both enable the tests and have the
bug there -- we don't want to forget about it, so maybe it's better
to just fix it.
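For reference, the intended SAR semantics per EIP-145 boil down to an arithmetic shift right. A sketch using the helpers in common/math, not the literal opSAR code:

```go
// math here is github.com/ethereum/go-ethereum/common/math
func sar(shift, value *big.Int) *big.Int {
	v := math.S256(value) // reinterpret as signed two's-complement
	if shift.Cmp(big.NewInt(256)) >= 0 {
		if v.Sign() >= 0 {
			return new(big.Int) // non-negative values shift to zero
		}
		return math.U256(big.NewInt(-1)) // negative values shift to all ones
	}
	// big.Int.Rsh on a negative value rounds toward negative infinity,
	// which is exactly an arithmetic shift
	return math.U256(v.Rsh(v, uint(shift.Uint64())))
}
```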
* miner: commit state which is related to the sealing result
* consensus, core, miner, mobile: introduce sealHash interface
* miner: evict pending task with threshold
* miner: go fmt
* les: fix crasher in NodeInfo when running as server
The ProtocolManager computes CHT and Bloom trie roots by asking the
indexers for their current head. It tried to get the indexers from
LesOdr, but no LesOdr instance is created in server mode.
Attempt to fix this by moving the indexers, protocol creation and
NodeInfo to a new lesCommons struct which is embedded into both server
and client.
All this setup code should really be cleaned up, but this is just a
hotfix so we have to do that some other time.
* les: fix commons protocol maker
This PR enables the indexers to work in light client mode by
downloading a part of these tries (the Merkle proofs of the last
values of the last known section) in order to be able to add new
values and recalculate subsequent hashes. It also adds CHT data to
NodeInfo.
* backends: increase gaslimit in order to allow tests of large contracts
This PR implements les.freeClientPool. It also adds a simulated clock
in common/mclock, which enables time-sensitive tests to run quickly
and still produce accurate results, and package common/prque which is
a generalised variant of prque that enables removing elements other
than the top one from the queue.
les.freeClientPool implements a client database that limits the
connection time of each client and manages accepting/rejecting
incoming connections and even kicking out some connected clients. The
pool calculates recent usage time for each known client (a value that
increases linearly when the client is connected and decreases
exponentially when not connected). Clients with lower recent usage are
preferred, unknown nodes have the highest priority. Already connected
nodes receive a small bias in their favor in order to avoid accepting
and instantly kicking out clients.
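A sketch of the recent-usage bookkeeping described above (the names and the decay constant are illustrative, not the actual les.freeClientPool fields):

```go
type clientInfo struct {
	usage     float64 // recent usage as of 'updated'
	connected bool
	updated   time.Time
}

func (c *clientInfo) recentUsage(now time.Time) float64 {
	dt := now.Sub(c.updated).Seconds()
	if c.connected {
		return c.usage + dt // grows linearly while connected
	}
	return c.usage * math.Exp(-dt/3600) // decays exponentially while offline
}
```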
Note: the pool can use any string for client identification. Using
signature keys for that purpose would not make sense when being known
has a negative value for the client. Currently the LES protocol
manager uses IP addresses (without port address) to identify clients.
* swarm: Added lightnode flag
Added --lightnode command line parameter
Added LightNode to Handshake message
* swarm/config: Fixed variable naming
* cmd/swarm: Changed BoolTFlag to BoolFlag for SwarmLightNodeEnabled
* swarm/network: Changed logging
* swarm/network: Changed protocol version testing
* swarm/network: Renamed DefaultNetworkID variable to TestProtocolNetworkID
* swarm/network: Bumped protocol version
* swarm/network: Changed LightNode handshake test to table driven
* swarm/network: Changed back TestProtocolVersion to 5 for now
* swarm/network: Moved the test configuration inside the test function scope
* swarm/network/simulation: increase the sleep duration for TestRun
* cmd/swarm, swarm: fix failing tests on mac
* cmd/swarm: update TestCLISwarmFs skip comment
* swarm/network/simulation: adjust disconnections on simulation close
* swarm/network/simulation: call cleanups after net shutdown
* consensus/ethash: start remote goroutine to handle remote mining
* consensus/ethash: expose remote miner api
* consensus/ethash: expose submitHashrate api
* miner, ethash: push empty block to sealer without waiting execution
* consensus, internal: add getHashrate API for ethash
* consensus: add three methods for the consensus interface
* miner: expose consensus engine running status to miner
* eth, miner: specify etherbase when miner created
* miner: commit new work when consensus engine is started
* consensus, miner: fix some logic
* all: delete useless interfaces
* consensus: polish a bit
- Update benchmarks to use a pool of int pools.
Otherwise, benchmarks are aborted with a segmentation fault.
Signed-off-by: Hyung-Kyu Choi <hqueue@users.noreply.github.com>
* rpc: Make HTTP server timeout values configurable
* rpc: Remove flags for setting HTTP Timeouts, configuring via .toml is sufficient.
* rpc: Replace separate constants with a single default struct.
* rpc: Update HTTP Server Read and Write Timeouts to 30s.
* rpc: Remove redundant NewDefaultHTTPTimeouts function.
* rpc: document HTTPTimeouts.
* rpc: sanitize timeout values for library use
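The resulting knob, roughly (a sketch; rpc.DefaultHTTPTimeouts provides the sanitized defaults for library users):

```go
timeouts := rpc.HTTPTimeouts{
	ReadTimeout:  30 * time.Second, // new default
	WriteTimeout: 30 * time.Second, // new default
	IdleTimeout:  120 * time.Second,
}
```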
* build: add support for different package and binary names
* build: bump up copyright date
* build: change default PackageName to empty string
* build, internal, swarm: enhance build/release process
* build: hack ethereum-swarm as a "depends" in deb package
* build/ci: remove redundant variables
* build, cmd, mobile, params, swarm: remove VERSION file; rename Version to VersionMeta;
* internal: remove VERSION() method which reads VERSION file
* build: fix VersionFilePath to Version
* Makefile: remove clean_go_build_cache.sh until it works
* Makefile: revert removal of clean_go_build_cache.sh
- Define an Interpreter interface
- One contract can call contracts from other interpreter types.
- Pass the interpreter to the operands instead of the evm.
This is meant to prevent type assertions in operands.
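Approximately, the interface has this shape (a sketch; see core/vm for the authoritative definition):

```go
type Interpreter interface {
	// Run executes the given contract with the provided input.
	Run(contract *Contract, input []byte) ([]byte, error)
	// CanRun reports whether this interpreter can execute the given code,
	// which is what lets a contract call contracts of other interpreter types.
	CanRun(code []byte) bool
}
```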
Our original wrapper code had two parts. One taken from a third
party repository (who took it from upstream Go) licensed under
BSD-3. The second written by Jeff, Felix and Gustav, licensed
under LGPL. This made this package problematic to use from the
outside.
With the agreement of the original copyright holders, this commit
changes the license of the LGPL portions of the code to BSD-3:
---
I agree changing from LGPL to a BSD style license.
Jeff
---
Hey guys,
My preference would be to relicense to GNUBL, but I'm also OK with BSD.
Cheers,
Gustav
---
Felix Lange (fjl):
I would approve anything that makes our licensing less complicated
---
* swarm/storage/mru: Add embedded publickey and remove ENS dep
This commit breaks swarm, swarm/api...
but tests in swarm/storage/mru pass
* swarm: Refactor swarm, swarm/api to mru changes, make tests pass
* swarm/storage/mru: Remove self from recv, remove test ens vldtr
* swarm/storage/mru: Remove redundant test, expose ResourceHash mthd
* swarm/storage/mru: Make HeaderGetter mandatory + godoc fixes
* swarm/storage: Remove validator prefix for metadata chunk
* swarm/storage/mru: Use Address instead of PublicKey
* swarm/storage/mru: Change index from name to metadata chunk addr
* swarm/storage/mru: Refactor swarm/api/... to MRU index changes
* swarm/storage/mru: Refactor cleanup
* swarm/storage/mru: Rebase cleanup
* swarm: Use constructor for GenericSigner MRU in swarm.go
* swarm/storage: Change to BMTHash for MRU hashing
* swarm/storage: Reduce loglevel on chunk validator logs
* swarm/storage/mru: Delint
* swarm: MRU Rebase cleanup
* swarm/storage/mru: client-side mru signatures
Rebase to PR #668 and fix all conflicts
* swarm/storage/mru: refactor and documentation
* swarm/resource/mru: error-checking tests for parseUpdate/newUpdateChunk
* swarm/storage/mru: Added resourcemetadata tests
* swarm/storage/mru: Added tests for UpdateRequest
* swarm/storage/mru: more test coverage for UpdateRequest and comments
* swarm/storage/mru: Avoid fake chunks in parseUpdate()
* swarm/storage/mru: Documented resource.go extensively
moved some functions where they make most sense
* swarm/storage/mru: increase test coverage for UpdateRequest and
variable name changes throughout to increase consistency
* swarm/storage/mru: moved default timestamp to NewCreateRequest
* swarm/storage/mru: lookup refactor
* swarm/storage/mru: added comments and renamed raw flag to rawmru
* swarm/storage/mru: fix receiver typo
* swarm/storage/mru: refactored update chunk new/create
* swarm/storage/mru: refactored signature digest to avoid malleability
* swarm/storage/mru: optimize update data serialization
* swarm/storage/mru: refactor and cleanup
* swarm/storage/mru: add timestamp struct and serialization
* swarm/storage/mru: fix lint error and mark some old code for deletion
* swarm/storage/mru: remove unnecessary variable
* swarm/storage/mru: Added more comments throughout
* swarm/storage/mru: Refactored metadata chunk layout + extensive error...
* swarm/storage/mru: refactor cli parser
Changed resource info output to JSON
* swarm/storage/mru: refactor serialization for extensibility
refactored error messages to NewErrorf
* swarm/storage/mru: Moved Signature to resource_sign.
Check Sign errors in server tests
* swarm/storage/mru: Remove isSafeName() checks
* swarm/storage/mru: scrubbed off all references to "block" for time
* swarm/storage/mru: removed superfluous isSynced() call.
* swarm/storage/mru: remove isMultihash() and ToSafeName functions
* swarm/storage/mru: various fixes and comments
* swarm/storage/mru: decoupled cli for independent create/update
* Made resource name optional
* Removed unused LookupPrevious
* swarm/storage/mru: Decoupled resource create / update & refactor
* swarm/storage/mru: Fixed some comments as per issues raised in PR #743
* swarm/storage/mru: Cosmetic changes as per #743 comments
* swarm/storage/mru: refct request encoder/decoder > marshal/unmarshal
* swarm/storage/mru: Cosmetic changes as per review in #748
* swarm/storage/mru: removed timestamp proof placeholder
* swarm/storage/mru: cosmetic/doc/fixes changes as per comments in #704
* swarm/storage/mru: removed unnecessary check in Handler.update
* swarm/storage/mru: Implemented Marshaler/Unmarshaler iface in Request
* swarm/storage/mru: Fixed linter error
* swarm/storage/mru: removed redundant address in signature digest
* swarm/storage/mru: fixed bug: LookupLatestVersionInPeriod not working
* swarm/storage/mru: Unfold Request creation API for create or update+create
set common time source for mru package
* swarm/api/http: fix HandleGetResource error variable shadowed
when requesting a resource that does not exist
* swarm/storage/mru: Add simple check to detect duplicate updates
* swarm/storage/mru: moved Multihash() to the right place.
* cmd/swarm: remove unneeded clientaccountmanager.go
* swarm/storage/mru: Changed some comments as per reviews in #784
* swarm/storage/mru: Made SignedResourceUpdate.GetDigest() public
* swarm/storage/mru: cosmetic changes as per comments in #784
* cmd/swarm: Inverted --multihash flag default
* swarm/storage/mru: removed Verify from SignedResourceUpdate.fromChunk
* swarm/storage/mru: Moved validation code out of serializer
Cosmetic / comment changes
* swarm/storage/mru: Added unit tests for UpdateLookup
* swarm/storage/mru: Increased coverage of metadata serialization
* swarm/storage/mru: Increased test coverage of updateHeader serializers
* swarm/storage/mru: Add resourceUpdate serializer test
* cmd/swarm: minor cli flag text adjustments
* cmd/swarm, swarm/storage, swarm: fix mingw on windows test issues
* cmd/swarm: support for smoke tests on the production swarm cluster
* cmd/swarm/swarm-smoke: simplify cluster logic as per suggestion
* changed colour of landing page
* landing page reacts to enter keypress
* swarm/api/http: sticky footer for swarm landing page using flex
* swarm/api/http: sticky footer for error pages and fix for multiple choices
* swarm: propagate ctx to internal apis (#754)
* swarm/simnet: add basic node/service functions
* swarm/netsim: add buckets for global state and kademlia health check
* swarm/netsim: Use sync.Map as bucket and provide cleanup function for...
* swarm, swarm/netsim: adjust SwarmNetworkTest
* swarm/netsim: fix tests
* swarm: added visualization option to sim net redesign
* swarm/netsim: support multiple services per node
* swarm/netsim: remove redundant return statement
* swarm/netsim: add comments
* swarm: shutdown HTTP in Simulation.Close
* swarm: sim HTTP server timeout
* swarm/netsim: add more simulation methods and peer events examples
* swarm/netsim: add WaitKademlia example
* swarm/netsim: fix comments
* swarm/netsim: terminate peer events goroutines on simulation done
* swarm, swarm/netsim: naming updates
* swarm/netsim: return not healthy kademlias on WaitTillHealthy
* swarm: fix WaitTillHealthy call in testSwarmNetwork
* swarm/netsim: allow bucket to have any type for a key
* swarm: Added snapshots to new netsim
* swarm/netsim: add more tests for bucket
* swarm/netsim: move http related things into separate files
* swarm/netsim: add AddNodeWithService option
* swarm/netsim: add more tests and Start* methods
* swarm/netsim: add peer events and kademlia tests
* swarm/netsim: fix some tests flakiness
* swarm/netsim: improve random nodes selection, fix TestStartStop* tests
* swarm/netsim: remove time measurement from TestClose to avoid flakiness
* swarm/netsim: builder pattern for netsim HTTP server (#773)
* swarm/netsim: add connect related tests
* swarm/netsim: add comment for TestPeerEvents
* swarm: rename netsim package to network/simulation
- AsyncHasher implements AsyncWriter interface
- add extra level for zerohashes in pool to lookup empty data hash
- remove unused segment, hash and depth fields from Tree
- Hash pkg function -> syncHash moved to test
- add asyncHash helper func to tests using shuffle
- add TestAsyncCorrectness to tests
- add BenchmarkBMTAsync to tests
- refactor benchmarks using subbenchmarks
- improved comments
- preinitialise base hashers on the nodes
* cmd/swarm: minor cli flag text adjustments
* swarm/api/http: sticky footer for swarm landing page using flex
* swarm/api/http: sticky footer for error pages and fix for multiple choices
* cmd/swarm, swarm/storage, swarm: fix mingw on windows test issues
* cmd/swarm: update description of swarm cmd
* swarm: added network ID test
* cmd/swarm: support for smoke tests on the production swarm cluster
* cmd/swarm/swarm-smoke: simplify cluster logic as per suggestion
* swarm: propagate ctx to internal apis (#754)
* swarm/metrics: collect disk measurements
* swarm/bmt: fix io.Writer interface
* Write now tolerates arbitrary variable buffers
* added variable buffer tests
* Write loop and finalise optimisation
* refactor / rename
* add tests for empty input
* swarm/pss: (UPDATE) Generic notifications package (#744)
swarm/pss: Generic package for creating pss notification svcs
* swarm: Adding context to more functions
* swarm/api: change colour of landing page in templates
* swarm/api: change landing page to react to enter keypress
* core: fix func TxDifference
fix a typo in func comment;
change named return to unnamed as there's explicit return in the body
* fix another typo in TxDifference
* p2p/discover: move bond logic from table to transport
This commit moves node endpoint verification (bonding) from the table to
the UDP transport implementation. Previously, adding a node to the table
entailed pinging the node if needed. With this change, the ping-back
logic is embedded in the packet handler at a lower level.
It is easy to verify that the basic protocol is unchanged: we still
require a valid pong reply from the node before findnode is accepted.
The node database tracked the time of last ping sent to the node and
time of last valid pong received from the node. Node endpoints are
considered verified when a valid pong is received and the time of last
pong was called 'bond time'. The time of last ping sent was unused. In
this commit, the last ping database entry is repurposed to mean last
ping _received_. This entry is now used to track whether the node needs
to be pinged back.
The other big change is how nodes are added to the table. We used to add
nodes in Table.bond, which ran when a remote node pinged us or when we
encountered the node in a neighbors reply. The transport now adds to the
table directly after the endpoint is verified through ping. To ensure
that the Table can't be filled just by pinging the node repeatedly, we
retain the isInitDone check. During init, only nodes from neighbors
replies are added.
* p2p/discover: reduce findnode failure counter on success
* p2p/discover: remove unused parameter of loadSeedNodes
* p2p/discover: improve ping-back check and comments
* p2p/discover: add neighbors reply nodes always, not just during init
These RPC calls are analogous to Parity's parity_addReservedPeer and
parity_removeReservedPeer.
They are useful for adjusting the trusted peer set during runtime,
without requiring restarting the server.
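A usage sketch over the Go rpc client, assuming the calls are exposed in the admin namespace:

```go
func addTrusted(client *rpc.Client, url string) error {
	var ok bool
	// url is an enode URL, e.g. "enode://<pubkey>@10.0.0.1:30303"
	return client.Call(&ok, "admin_addTrustedPeer", url)
}
```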
The current trie memory database/cache that we do pruning on stores
trie nodes as binary rlp encoded blobs, and also stores the node
relationships/references for GC purposes. However, most of the trie
nodes (everything apart from a value node) is in essence just a
collection of references.
This PR switches out the RLP encoded trie blobs with the
collapsed-but-not-serialized trie nodes. This permits most of the
references to be recovered from within the node data structure,
avoiding the need to track them a second time (expensive memory wise).
* debug: Use pprof goroutine writer in debug.Stacks() to ensure all goroutines are captured.
* Up to a 64MB limit; the previous code only captured the first 1MB of goroutines.
* internal/debug: simplify stacks handler
* fix typo
* fix pointer receiver
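The pprof-based dump described above boils down to the goroutine profile from runtime/pprof; a sketch:

```go
func dumpGoroutines(w io.Writer) error {
	// debug mode 2 prints full, unabridged stacks for every goroutine
	return pprof.Lookup("goroutine").WriteTo(w, 2)
}
```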
This package was meant to hold an improved 256 bit integer library, but
the effort was abandoned in 2015. AFAIK nothing ever used this package.
Time to say goodbye.
Prior to this change, when geth was started with `geth -dev -rpc`,
it would report a network id of `1` in response to the `net_version` RPC
request. But the actual network id it used to verify transactions
was `1337`.
This change causes geth to instead respond with `1337` to the `net_version`
RPC when geth is started with `geth -dev -rpc`.
* vm/test: add tests+benchmarks for mstore
* core/vm: less alloc and copying for mstore
* core/vm: less allocs in sload
* vm: check for errors more correctly
This commit adds all changes needed for the merge of swarm-network-rewrite.
The changes:
- build: increase linter timeout
- contracts/ens: export ensNode
- log: add Output method and enable fractional seconds in format
- metrics: relax test timeout
- p2p: reduced some log levels, updates to simulation packages
- rpc: increased maxClientSubscriptionBuffer to 20000
Improves test portability by resolving 127.0.0.1:0
to get a random free port instead of the hard-coded one. Now
the test works if you already have a running node on the same
interface.
Fixes #15685
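The portable pattern is to let the kernel pick the port (sketch):

```go
func freePort(t *testing.T) int {
	l, err := net.Listen("tcp", "127.0.0.1:0") // port 0: kernel assigns one
	if err != nil {
		t.Fatal(err)
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port
}
```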
This PR fixes a retriever logic bug. When a peer had a soft timeout
and then a response arrived, it always assumed it was the same peer
even though it could have been a later requested one that did not time
out at all yet. In this case the logic went to an illegal state and
deadlocked, causing a goroutine leak.
Fixes #16243 and replaces #16359.
Thanks to @riceke for finding the bug in the logic.
ToECDSAPub was unsafe because it returned a non-nil key with nil X, Y in
case of invalid input. This change replaces ToECDSAPub with
UnmarshalPubkey across the codebase.
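The new call-site pattern, as a sketch:

```go
func parsePubkey(data []byte) (*ecdsa.PublicKey, error) {
	// UnmarshalPubkey rejects invalid curve points with an error, where
	// ToECDSAPub silently returned a key with nil X, Y
	return crypto.UnmarshalPubkey(data)
}
```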
The error produced when using a Parity RPC was the following:
ERROR: transaction did not get mined: failed to get tx for txid 0xbdeb094b3278019383c8da148ff1cb5b5dbd61bf8731bc2310ac1b8ed0235226: json: cannot unmarshal non-string into Go struct field txExtraInfo.blockHash of type common.Hash
Allow the --abi flag to be given '-' to indicate that it should read the
ABI information from standard input. It expects to read the solc output
with the --combined-json flag providing bin, abi, userdoc, devdoc, and
metadata, and works very similarly to the internal invocation of solc,
except it allows external invocation of solc.
This facilitates integration with more complex solc invocations, such
as invocations that require path remapping or --allow-paths tweaks.
Simple usage example:
solc --combined-json bin,abi,userdoc,devdoc,metadata *.sol | abigen --abi -
* metrics: expvar support for ResettingTimer
* metrics: use integers for percentiles; remove Overall
* metrics: fix edge-case panic for index-out-of-range
This removes a golint warning: type name will be used as trie.TrieSync by
other packages, and that stutters; consider calling this Sync.
In hexToKeybytes len(hex) is even and (even+1)/2 == even/2, remove the +1.
* core: use a wrapped `map` and `sync.RWMutex` for `TxPool.all` to remove contention in `TxPool.Get`.
* core: Remove redundant `txLookup.Find` and improve comments on txLookup methods.
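A sketch of the wrapper described above:

```go
type txLookup struct {
	all  map[common.Hash]*types.Transaction
	lock sync.RWMutex
}

// Get takes only a read lock, so concurrent readers no longer contend.
func (t *txLookup) Get(hash common.Hash) *types.Transaction {
	t.lock.RLock()
	defer t.lock.RUnlock()
	return t.all[hash]
}

func (t *txLookup) Add(tx *types.Transaction) {
	t.lock.Lock()
	defer t.lock.Unlock()
	t.all[tx.Hash()] = tx
}
```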
This applies spec changes from ethereum/EIPs#1049 and adds support for
pluggable identity schemes.
Some care has been taken to make the "v4" scheme standalone. It uses
public APIs only and could be moved out of package enr at any time.
A couple of minor changes were needed to make identity schemes work:
- The sequence number is now updated in Set instead of when signing.
- Record is now copy-safe, i.e. calling Set on a shallow copy doesn't
modify the record it was copied from.
Feed keeps active subscription channels in a slice called 'f.sendCases'.
The Send method tracks the active cases in a local variable 'cases'
whose value is f.sendCases initially. 'cases' shrinks to a shorter
prefix of f.sendCases every time a send succeeds, moving the successful
case out of range of the active case list.
This can be confusing because the two slices share a backing array. Add
more comments to document what is going on. Also add a test for removing
a case that is in 'f.sendCases' but not 'cases'.
* whisper/mailserver: pass init error to the caller
* whisper/mailserver: add returns to fmt.Errorf
* whisper/mailserver: check err in mailserver init test
This commit is meant to allow ecosystem projects such as ethersphere
to minimize CI build times by specifying an environment variable with
the packages to run tests on.
If the environment variable isn't defined the build script will test
all packages so this shouldn't affect the main go-ethereum repository.
The 'from' and 'to' methods on StateTransitions are reader methods and
shouldn't have inadvertent side effects on state.
It is safe to remove the check in 'from' because account existence is
implicitly checked by the nonce and balance checks. If the account has
non-zero balance or nonce, it must exist. Even if the sender account has
nonce zero at the start of the state transition or no balance, the nonce
is incremented before execution and the account will be created at that
time.
It is safe to remove the check in 'to' because the EVM creates the
account if necessary.
Fixes #15119
* common: delete StringToAddress, StringToHash
These functions are confusing because they don't parse hex, but use the
bytes of the string. This change removes them, replacing all uses of
StringToAddress(s) by BytesToAddress([]byte(s)).
* eth/filters: remove incorrect use of common.BytesToAddress
Most of these methods did not contain all the relevant information
inside the object and were not using a similar formatting type.
Moreover, the existence of a suboptimal String method breaks usage
with more advanced data dumping tools like go-spew.
- Fixes #16271. What was appended was a pointer to
an object that changes during the iteration.
- The topic is allocated as a 4-byte array, fill partial topics
with 0s. Partial topics are currently disabled, but would
crash as they rely on the presence of byte number 3.
* crypto/bn256: full switchover to cloudflare's code
* crypto/bn256: only use cloudflare for optimized architectures
* crypto/bn256: upstream fallback for non-optimized code
* .travis, build: drop support for Go 1.8 (need type aliases)
* crypto/bn256/cloudflare: enable curve mul lattice optimization
* core/vm, crypto/bn256: switch over to cloudflare library
* crypto/bn256: unmarshal constraint + start pure go impl
* crypto/bn256: combo cloudflare and google lib
* travis: drop 386 test job
whisper: refactoring go-routines workflow
Move the mailServer.Init() call down to the bottom of the function: if initialize() completes successfully, it will be followed by mailServer.Close() in shutdown(). The workflow of the corresponding goroutines is clearer now.
* accounts/abi/bind: support for multi-dim arrays
Also:
- reduce usage of regexes a bit.
- fix minor Java syntax problems
Fixes #15648
* accounts/abi/bind: Add some more documentation
* accounts/abi/bind: Improve code readability
* accounts/abi: bugfix for unpacking nested arrays
The code previously assumed the arrays/slices were always one level
deep, while the packing supports nested arrays (!!!).
The current code for unpacking doesn't return the "consumed" length, so
this fix had to work around that by calculating it (i.e. packing and
getting resulting length) after the unpacking of the array element.
It's far from ideal, but unpacking behaviour is fixed now.
* accounts/abi: Fix unpacking of nested arrays
Removed the temporary workaround of packing to calculate size, which was
incorrect for slice-like types anyway.
Full size of nested arrays is used now.
* accounts/abi: deeply nested array unpack test
Test unpacking of an array nested more than one level.
* accounts/abi: Add deeply nested array pack test
Same as the deep nested array unpack test, but the other way around.
* accounts/abi/bind: deeply nested arrays bind test
Test the usage of bindings that were generated
for methods with multi-dimensional (and not
just a single extra dimension, like foo[2][3])
array arguments and returns.
edit: trigger rebuild, CI failed to fetch linter module.
* accounts/abi/bind: improve array binding
wrapArray uses a regex now, and arrayBindingJava is improved.
* accounts/abi: Improve naming of element size func
The full step size for unpacking an array
is now retrieved with "getFullElemSize".
* accounts/abi: support nested nested array args
Previously, the code only considered the outer-size of the array,
ignoring the size of the contents. This was fine for most types,
but nested arrays are packed directly into it, and count towards
the total size. This resulted in arguments following a nested
array to replicate some of the binary contents of the array.
The fix: for arrays, calculate their complete contents size:
count the arg.Type.Elem.Size when Elem is an Array, and
repeat when their child is an array too, etc.
The count is the number of 32 byte elements, similar to how it
previously counted, but nested.
* accounts/abi: Test deep nested arr multi-arguments
Arguments with a deeply nested array should not cause the next arguments
to be read from the wrong position.
The diagnostic tool was saving the unencrypted version of the messages, which is an obvious
security flaw. As of this commit:
* encrypted messages are saved instead of plain text.
* all messages are stored, even those created by the user of wnode.
whisper: mailserver no longer supports signature validation
Mailserver is provided as an example, but client validation belongs to the upper-layer protocol and
need not be covered in this example. The check that was previously available hinders the switch
to libp2p, so we agreed not to include it in this example code anymore.
- added a case-error struct that contains information about certain error cases
in which we would like to output more information to the client
- added a validation method that iterates over the error cases and adds the
information stored in them
* swarm: began work on GetHandleFile method re: issue #155
* swarm: now able to serve landing page template
* swarm: added landing page template
* swarm: landing page has working input
* swarm: fixed CSS issue in template
* swarm: deleted extra lines
* swarm: deleted time header and made redirect a relative path
* swarm: removed code mistakenly left
This change removes a peer's information from the dialing history
when the peer is removed from the static list. It allows forcing a
server to re-dial a concrete peer if needed.
In our case we are running geth nodes on mobile devices, and
it is common for a network connection to flap on mobile.
Almost every time it flaps, or the network connection changes
from cellular to wifi, peers are disconnected with a read
timeout. And it usually takes 30 seconds (the default expiration
timeout) to recover the connection with static peers after
connectivity is restored.
This change allows us to reconnect with peers almost
immediately and it seems harmless enough.
I forgot to change the check in udp.go when I changed Table.bond to be
based on lastPong instead of node presence in db. Rename lastPong to
bondTime and add hasBond so it's clearer what this DB key is used for
now.
* swarm: added script to HTML header to create favicon addresses #153
* swarm: moved data blob directly into link tag, removed script
* swarm: added favicon info to other html templates
* swarm: fixing test errors
* swarm: fixing favicon test
* swarm: fixing travis tests
This is in preparation for the switch to libp2p: the ID generated
will be from a private key created with the help of libp2p's crypto
library, while Whisper will still use Go's default crypto libraries
for encrypting its messages. This change removes a conflict.
It shouldn't have any impact as the person receiving emails is
the user, not the node.
- According to the implementation of `IntrinsicGas`,
we could continue execution since the problem would be detected
later. However, the early return is future-proof against changes.
Talk about "state" instead of "trie timing", "trie memory" and remove
the overzealous warning when the limit is just reached. Since the time
limit is always reached on slow machines, move the message to info level
so users don't freak out about internal details.
* cmd,node,rpc: add allowedHosts to prevent dns rebinding attacks
* p2p,node: Fix bug with dumpconfig introduced in r54aeb8e4c0bb9f0e7a6c67258af67df3b266af3d
* rpc: add wildcard support for rpcallowedhosts + go fmt
* cmd/geth, cmd/utils, node, rpc: ignore direct ip(v4/6) addresses in rpc virtual hostnames check
* http, rpc, utils: make vhosts into map, address review concerns
* node: change log messages to use geth standard (not sprintf)
* rpc: fix spelling
* p2p: add DialRatio for configuration of inbound vs. dialed connections
* p2p: add connection flags to PeerInfo
* p2p/netutil: add SameNet, DistinctNetSet
* p2p/discover: improve revalidation and seeding
This changes node revalidation to be periodic instead of on-demand. This
should prevent issues where dead nodes get stuck in closer buckets
because no other node will ever come along to replace them.
Every 5 seconds (on average), the last node in a random bucket is
checked and moved to the front of the bucket if it is still responding.
If revalidation fails, the last node is replaced by an entry of the
'replacement list' containing recently-seen nodes.
Most close buckets are removed because it's very unlikely we'll ever
encounter a node that would fall into any of those buckets.
Table seeding is also improved: we now require a few minutes of table
membership before considering a node as a potential seed node. This
should make it less likely to store short-lived nodes as potential
seeds.
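A sketch of the revalidation step described above (method names approximate the Table internals):

```go
func (tab *Table) doRevalidate() {
	last, bucketIndex := tab.nodeToRevalidate() // last node of a random bucket
	if last == nil {
		return // no non-empty buckets
	}
	if err := tab.ping(last); err == nil {
		tab.bumpInBucket(bucketIndex, last) // still alive: move to front
		return
	}
	tab.replaceNode(bucketIndex, last) // dead: swap in a recently-seen replacement
}
```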
* p2p/discover: fix nits in UDP transport
We would skip sending neighbors replies if there were fewer than
maxNeighbors results and CheckRelayIP returned an error for the last
one. While here, also resolve a TODO about pong reply tokens.
* les, light: fix CHT trie retrievals
* les, light: minor polishes, test remote CHT retrievals
* les, light: deterministic nodeset rlp, bloombits test skeleton
* les: add an event emission to the les bloombits test
* les: drop dead tester code
The bulk of the issue was to adapt to the new requirement
that a v6 filter has to contain either a symmetric key or
an asymmetric one.
This commit reverts one of the fixes that I made to remove
a linter warning: unexporting NewSentMessage. This is not
really a problem as I have a cleanup in the pipe that will
solve this issue.
* Remove --fast flag and clarify default
`--fast` is no longer a flag; it's `--syncmode "fast"`, and that is the default
* Remove --cache flag
--cache=512 is no longer required as of 1.8 as the default has been increased
* README: Minor cache amount fix, mention Rinkeby
* whisper: fixes warnings from the code linter
* whisper: more non-API-breaking changes
The remaining lint errors are because of auto-generated
files and one is because an exported function has a non-
exported return type. Changing this would break the API,
and will be part of another commit for easier reversal.
* whisper: un-export NewSentMessage to please the linter
This is an API change, which is why it's in its own commit.
This change was initiated after the linter complained that
the returned type wasn't exported. I chose to un-export
the function instead of exporting the type, because that
type is an implementation detail that I would like to
change in the near future to make the code more
readable and with an increased coverage.
* whisper: update gencodec output after upgrading it to new lint standards
* rpc: Support specifying HTTP client in RPC dialing
Adds a minimal interface that captures http.Client and adds a new method
rpc.DialHTTPClient that takes a client using that interface. The existing
rpc.DialHTTP method is then alternatively implemented by using the new
rpc.DialHTTPClient method provided with a standard *http.Client.
* rpc: fix minor doc typos
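A usage sketch for the DialHTTPClient entry point described above:

```go
func dial() (*rpc.Client, error) {
	// any client satisfying the minimal Do(*http.Request) interface works;
	// a plain *http.Client does
	hc := &http.Client{Timeout: 10 * time.Second}
	return rpc.DialHTTPClient("http://localhost:8545", hc)
}
```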
* dashboard: footer, deep state update
* dashboard: resolve asset path
* dashboard: prevent state update on every reconnection
* dashboard: fix linter issue
* dashboard, cmd: minor UI fix, include commit hash
* dashboard: gitCommit renamed to commit
* dashboard: move the geth version to the right, make commit optional
* dashboard: memory, traffic and CPU on footer
* dashboard: fix merge
* dashboard: CPU, diskIO on footer
* dashboard: rename variables, use group declaration
* dashboard: docs
* consensus/ethash: add maxEpoch constant
* consensus/ethash: improve cache/dataset handling
There are two fixes in this commit:
Unmap the memory through a finalizer like the libethash wrapper did. The
release logic was incorrect and freed the memory while it was being
used, leading to crashes like in #14495 or #14943.
Track caches and datasets using simplelru instead of reinventing LRU
logic. This should make it easier to see whether it's correct.
* consensus/ethash: restore 'future item' logic in lru
* consensus/ethash: use mmap even in test mode
This makes it possible to shorten the time taken for TestCacheFileEvict.
* consensus/ethash: shuffle func calc*Size comments around
* consensus/ethash: ensure future cache/dataset is in the lru cache
* consensus/ethash: add issue link to the new test
* consensus/ethash: fix vet
* consensus/ethash: fix test
* consensus: tiny issue + nitpick fixes
This commit affects p2p/discv5 "topic discovery" by running it on
the same UDP port where the old discovery works. This is realized
by giving an "unhandled" packet channel to the old v4 discovery
packet handler where all invalid packets are sent. These packets
are then processed by v5. v5 packets are always invalid when
interpreted by v4 and vice versa. This is ensured by adding one
to the first byte of the packet hash in v5 packets.
DiscoveryV5Bootnodes is also changed to point to new bootnodes
that are implementing the changed packet format with modified
hash. Existing and new v5 bootnodes are both running on different
ports ATM.
This commit:
- Adds a --msgfile option to read the message to sign from a file
instead of command line argument.
- Adds a unit test for signing subcommands.
- Removes some weird whitespace in the code.
* dashboard: footer, deep state update
* dashboard: resolve asset path
* dashboard: remove bundle.js
* dashboard: prevent state update on every reconnection
* dashboard: fix linter issue
* dashboard, cmd: minor UI fix, include commit hash
* remove geth binary
* dashboard: gitCommit renamed to commit
* dashboard: move the geth version to the right, make commit optional
* dashboard: commit limited to 7 characters
* dashboard: limit commit length on client side
* dashboard: run go generate
* common/fdlimit: Move fdlimit files to separate package
When go-ethereum is used as a library, the calling program needs to set
the FD limit.
This commit extracts the fdlimit files to a separate package so it can be
used outside of go-ethereum.
* common/fdlimit: Remove FdLimit from functions signature
* common/fdlimit: Rename fdlimit functions
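A usage sketch from embedding code (signatures approximate):

```go
func raiseFDLimit() error {
	max, err := fdlimit.Maximum() // the process hard limit
	if err != nil {
		return err
	}
	return fdlimit.Raise(uint64(max)) // lift the soft limit up to it
}
```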
The first should address a long term issue where we recommend a gas
price that is greater than that required for 50% of transactions in
recent blocks, which can lead to gas price inflation as people take
this figure and add a margin to it, resulting in a positive feedback
loop.
* core/types, core/vm, eth, tests: regenerate gencodec files
* Makefile: update devtools target
Install protoc-gen-go and print reminders about npm, solc and protoc.
Also switch to github.com/kevinburke/go-bindata because it's more
maintained.
* contracts/ens: update contracts and regenerate with solidity v0.4.19
The newer upstream version of the FIFSRegistrar contract doesn't set the
resolver anymore. The resolver is now deployed separately.
* contracts/release: regenerate with solidity v0.4.19
* contracts/chequebook: fix fallback and regenerate with solidity v0.4.19
The contract didn't have a fallback function, payments would be rejected
when compiled with newer solidity. References to 'mortal' and 'owned'
use the local file system so we can compile without network access.
* p2p/discv5: regenerate with recent stringer
* cmd/faucet: regenerate
* dashboard: regenerate
* eth/tracers: regenerate
* internal/jsre/deps: regenerate
* dashboard: avoid sed -i because it's not portable
* accounts/usbwallet/internal/trezor: fix go generate warnings
* go-ethereum/dashboard: update assets.go
Use current rsc/go-bindata instead of jteeuwen/go-bindata, to get
balanced tree in very long string concatenations.
This works around problems in current Go distributions.
For golang/go#23222.
* dashboard: run last two go:generate steps for linter
+ The event slice unpacker doesn't correctly extract elements from the
slice. The indexed arguments are not ignored as they should be
(the data offset should not include the indexed arguments).
+ The `Elem()` call in the slice unpack doesn't work.
The slice-related tests fail because of that.
+ The checks in the loop were suboptimal and have been extracted
out of the loop.
+ Extracted common code from event and method tupleUnpack
* cmd/utils: Add check on hard limit, skip test if below target
* cmd/utils: Cross platform compatible fd limit test
* cmd/utils: Remove syscall.Rlimit in test
* cmd/utils: comment fd utility method
* swarm/api: url scheme bzz-hash to get hashes of swarm content (#15238)
Update URI to support bzz-hash scheme and handle such HTTP requests by
responding with hash of the content as a text/plain response.
* swarm/api: return hash of the content for bzz-hash:// requests
* swarm/api: revert "return hash of the content for bzz-hash:// requests"
Return hashes of the content that would be returned by bzz-raw
request.
* swarm/api/http: handle error in TestBzzGetPath
* swarm/api: remove extra blank line in comment
ethkey is a new tool that serves as a command line interface to
the basic key management functionalities of geth. It currently
supports:
- generating keyfiles
- inspecting keyfiles (print public and private key)
- signing messages
- verifying signed messages
This change inlines the logic of bytesAreProper at its sole
callsite, ABI.Unpack, and applies the multiple-of-32 test only in
the case of unpacking methods. Event data is not required to be a
multiple of 32 bytes long.
Changed the communication protocol for ordinary message,
according to EIP 627. Messages will be sent in bundles, i.e. an
array of messages will be sent instead of a single message.
* crypto: ensure that VerifySignature rejects malleable signatures
It already rejected them when using libsecp256k1, make sure the nocgo
version does the same thing.
* crypto: simplify check
* crypto: fix build
* swarm/api: url scheme bzz-list for getting list of files from manifest
Replace query parameter list=true for listing all files contained
in a swarm manifest with a new URL scheme bzz-list.
* swarm: replace bzzr and bzzi schemes with bzz-raw and bzz-immutable
New URI schemes are added and old ones are deprecated, but not removed.
The old schemes bzzr and bzzi remain functional for backward compatibility.
* swarm/api: completely remove bzzr and bzzi schemes
Remove old schemes in favour of bzz-raw and
bzz-immutable.
* swarm/api: revert "completely remove bzzr and bzzi schemes"
Keep bzzr and bzzi schemes for backward compatibility. At least
until 0.3 swarm release.
Specifying an ENS API CLI flag, env variable or configuration
field is required for ENS resolving. Backward compatibility is
preserved with the --ens-api="" CLI flag value.
Merged with the changes that implement config file PR #15548.
The field *EnsApi string* in swarm/api.Config is replaced with
*EnsAPIs []string*.
A new field *EnsDisabled bool* is added to swarm/api.Config as an
easy way to disable ENS resolving with the config file.
The signature of function swarm.NewSwarm is changed and simplified.
This commit adds a TOML configuration option to swarm. It reuses
the TOML configuration structure used in geth with swarm
customized items.
The commit:
* Adds a "dumpconfig" command to the swarm executable which
allows printing the (default) configuration to stdout, which
then can be redirected to a file in order to customize it.
* Adds a "--config <file>" option to the swarm executable which will
allow loading a configuration file in TOML format from the
specified location in order to initialize the Swarm node. The
override priority is as follows: environment variables
override command line arguments, which override the config file,
which overrides the default config.
With this change,
key, err := crypto.HexToECDSA("000000...")
returns nil key and an error instead of a non-nil key with nil X
and Y inside. Issue found by @guidovranken.
Now that the AES salt has been moved to the payload, padding must
be adjusted to hide it, lest an attacker guess that the packet
uses symmetric encryption.
We need those operations for p2p/enr.
Also upgrade github.com/btcsuite/btcd/btcec to the latest version
and improve BenchmarkSha3. The benchmark printed extra output
that confused tools like benchstat and ignored N.
Allow multiple --ens-api flags to be specified with value format
[tld:][contract-addr@]url.
Backward compatibility with only one --ens-api flag and --ens-addr
flag is preserved and conflict cases are handled:
- multiple --ens-api with --ens-addr returns an error
- single --ens-api with contract address and --ens-addr with
different contract address returns an error
Previously implemented --ens-endpoint is removed. Its functionality
is replaced with multiple --ens-api flags.
The generator in the current lib uses -2 as the y point when doing
ScalarBaseMult, which makes it so that points/signatures generated
by libs like py_ecc don't match/validate, as pretty much all
other libs (including libsnark) have (1, 2) as the standard
generator.
This does not affect consensus, as the generator is never used in
the VM: points are always explicitly defined and there is no
ScalarBaseMult op. It only makes it so that doing "import
github.com/ethereum/go-ethereum/crypto/bn256" doesn't generate
bad points in userland tools.
Updated use of Parallel and added some subtests to help isolate
them. Increased the timeout in RequestHeadersByNumber so it
doesn't time out and cause other tests to break.
p2p/simulations: introduce dialBan
- Refactor simulations/network connection getters to support
avoiding simultaneous dials between two peers. If two peers dial
simultaneously, the connection will be dropped; to help avoid
that, we essentially lock the connection object with a
timestamp which serves as a ban on dialing for a period of time
(dialBanTimeout).
- The connection getter InitConn can be wrapped and passed to the
nodes via the adapters.NodeConfig#Reachable field and then used by
the respective services when they initiate connections. This
massively stabilises the emerging connectivity when running
simulations with hundreds of nodes bootstrapping a network.
p2p: add Inbound public method to p2p.Peer
p2p/simulations: Add server id to logs to support debugging
in-memory network simulations when multiple peers are logging.
p2p: SetupConn now returns error. The dialer checks the error and
only calls resolve if the actual TCP dial fails.
* core/vm: track 63/64 call gas off stack
Gas calculations in gasCall* relayed the available gas for calls by
replacing it on the stack. This lead to inconsistent traces, which we
papered over by copying the pre-execution stack in trace mode.
This change relays available gas using a temporary variable, off the
stack, and allows removing the weird copy.
* core/vm: remove stackCopy
* core/vm: pop call gas into pool
* core/vm: to -> addr
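For reference, the 63/64 rule (EIP-150) that the first commit relays off-stack, as a simplified sketch of the call gas calculation:

```go
func availableCallGas(gasLeft, base, requested uint64) uint64 {
	gasLeft -= base             // gas charged for the call itself
	gas := gasLeft - gasLeft/64 // the callee may receive at most 63/64ths
	if requested < gas {
		return requested
	}
	return gas
}
```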
* trie: make fullnode children hash calculation concurrently
* trie: thread out only on topmost fullnode
* trie: clean up full node children hash calculation
* trie: minor code fixups
* cmd, consensus, eth: split ethash related config to it own
* eth, consensus: minor polish
* eth, consensus, console: compress pow testing config field to single one
* consensus, eth: document pow mode
* eth, internal: Implement using trie diffs
* eth, internal: Changes in response to review
* eth: More fixes to getModifiedAccountsBy*
* eth: minor polishes on error capitalization
* .dockerignore, internal/build: Read git information directly from file
This commit changes the way of retrieving git commit and branch for build
environment from running git command to reading git files directly.
This commit also adds required git files into Docker build context.
fixes: #15346
* .dockerignore: workaround for including some files in .git
github.com/stretchr/testify is a useful library for doing
assertions in tests. It makes assertions in tests less verbose and
more comfortable to read and use.
This changes behaviour: before if the owner/amount didn't match, it
resulted in a successful execution without doing anything; now it
results in a failed execution using the revert opcode (remainder gas
is not consumed).
* core: allow price bump at threshold
* core: test changes to allow price bump at threshold
* core: reinstate tx replacement test underneath threshold
* core: minor test failure message cleanups
This PR implements the new LES protocol version extensions:
* new and more efficient Merkle proofs reply format (when replying to
a multiple Merkle proofs request, we just send a single set of trie
nodes containing all necessary nodes)
* BBT (BloomBitsTrie) works similarly to the existing CHT and contains
the bloombits search data to speed up log searches
* GetTxStatusMsg returns the inclusion position or the
pending/queued/unknown state of a transaction referenced by hash
* an optional signature of new block data (number/hash/td) can be
included in AnnounceMsg to provide an option for "very light
clients" (mobile/embedded devices) to skip expensive Ethash check
and accept multiple signatures of somewhat trusted servers (still a
lot better than trusting a single server completely and retrieving
everything through RPC). The new client mode is not implemented in
this PR, just the protocol extension.
* cmd, consensus, core, miner: instatx clique for --dev
* cmd, consensus, clique: support configurable --dev block times
* cmd, core: allow --dev to use persistent storage too
If the file already existed, the WriteResponse.Size was being set
as the length of the entire file, not just the amount that was
written to the existing file.
Fixes #15216
The accountCache contains a file cache, and remembers from
scan to scan what files were present earlier. Thus, whenever
there's a change, the scan phase only bothers processing new
and removed files.
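An illustrative sketch of that scan-to-scan diff (not the actual accountCache code):
```go
// fileCache remembers the files seen in the previous scan so that a new
// scan only has to report what was added or removed since then.
type fileCache struct {
	all map[string]struct{}
}

func (fc *fileCache) scan(files []string) (added, removed []string) {
	current := make(map[string]struct{}, len(files))
	for _, f := range files {
		current[f] = struct{}{}
		if _, seen := fc.all[f]; !seen {
			added = append(added, f) // new since the last scan
		}
	}
	for f := range fc.all {
		if _, still := current[f]; !still {
			removed = append(removed, f) // deleted since the last scan
		}
	}
	fc.all = current
	return added, removed
}
```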
* core/types: make Signer derive address instead of public key
There are two reasons to do this now: The upcoming ethclient signer
doesn't know the public key, just the address. EIP 208 will introduce a
new signer which derives the 'entry point' address for transactions with
zero signature. The entry point has no public key.
Other changes to the interface ease the path to moving signature
crypto out of core/types later (see the usage sketch below).
* ethclient, mobile: add TransactionSender
The new method can get the right signer without any crypto, and without
knowledge of the signature scheme that was used when the transaction was
included.
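A hedged usage sketch of the new ethclient method (endpoint and error handling are illustrative):
```go
import (
	"context"
	"log"

	"github.com/ethereum/go-ethereum/ethclient"
)

func printFirstSender() {
	ctx := context.Background()
	client, err := ethclient.Dial("http://localhost:8545") // example endpoint
	if err != nil {
		log.Fatal(err)
	}
	block, err := client.BlockByNumber(ctx, nil) // nil means latest block
	if err != nil || len(block.Transactions()) == 0 {
		log.Fatal("no block or no transactions")
	}
	tx := block.Transactions()[0]
	// Recover the sender without any local crypto and without knowing
	// the signature scheme used when the transaction was included.
	sender, err := client.TransactionSender(ctx, tx, block.Hash(), 0)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("sender:", sender.Hex())
}
```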
When implementing the new bloombits based filter, I accidentally broke
null topics by removing the special casing of common.Hash{} filter rules,
which acted as the wildcard topic until now.
This PR fixes the regression, but instead of using the magic hash
common.Hash{} as the null wildcard, the PR reworks the code to handle nil
topics during parsing, converting a JSON null into nil []common.Hash topic.
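An illustrative matcher showing the nil-wildcard semantics (not the exact filter code):
```go
import "github.com/ethereum/go-ethereum/common"

// matchTopics reports whether a log's topics satisfy the filter: position
// i must contain one of the given hashes, and a nil sub-slice (a JSON
// null) matches anything at that position.
func matchTopics(filter [][]common.Hash, logTopics []common.Hash) bool {
	if len(filter) > len(logTopics) {
		return false
	}
	for i, want := range filter {
		if want == nil { // null topic: wildcard
			continue
		}
		ok := false
		for _, h := range want {
			if h == logTopics[i] {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}
```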
This patch fixes the following issues:
* The contract executes send() when it does not have enough balance.
* The contract always sends a total amount of zero.
This commit introduces a network simulation framework which
can be used to run simulated networks of devp2p nodes. The
intention is to use this for testing protocols, performing
benchmarks and visualising emergent network behaviour.
* params: Updated finalized gascosts for ECMUL/MODEXP
* core,tests: Updates pending new tests
* tests: Updated with new tests
* core: revert state transition bugfix
* tests: Add expected failures due to #15119
* ethdb: add Putter interface and Has method
* ethdb: improve docs and add IdealBatchSize
* ethdb: remove memory batch lock
Batches are not safe for concurrent use.
* core: use ethdb.Putter for Write* functions
This covers the easy cases.
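The shape of the interface, plus an illustrative Write* helper (the key below mirrors the old schema but should be treated as an example):
```go
import "github.com/ethereum/go-ethereum/common"

// Putter is the write-only access shared by the database and its
// batches, so helpers can target either one.
type Putter interface {
	Put(key []byte, value []byte) error
}

// WriteHeadBlockHash stores the head block's hash under a fixed key.
func WriteHeadBlockHash(db Putter, hash common.Hash) error {
	return db.Put([]byte("LastBlock"), hash.Bytes())
}
```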
* core/state: simplify StateSync
* trie: optimize local node check
* ethdb: add ValueSize to Batch
* core: optimize HasHeader check
This avoids one random database read to get the block number. For many uses
of HasHeader, the expectation is that it's actually there. Using Has
avoids a load + decode of the value.
* core: write fast sync block data in batches
Collect writes into batches up to the ideal size instead of issuing many
small, concurrent writes.
* eth/downloader: commit larger state batches
Collect nodes into a batch up to the ideal size instead of committing
whenever a node is received.
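The batching pattern behind the two items above, sketched against the ethdb interfaces (the flush-and-Reset behaviour is assumed as described):
```go
import "github.com/ethereum/go-ethereum/ethdb"

// writeAll flushes the batch whenever it reaches the ideal size instead
// of issuing one small write per item.
func writeAll(db ethdb.Database, items map[string][]byte) error {
	batch := db.NewBatch()
	for k, v := range items {
		if err := batch.Put([]byte(k), v); err != nil {
			return err
		}
		if batch.ValueSize() >= ethdb.IdealBatchSize {
			if err := batch.Write(); err != nil {
				return err
			}
			batch.Reset()
		}
	}
	return batch.Write() // flush the remainder
}
```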
* core: optimize HasBlock check
This avoids a random database read to get the number.
* core: use numberCache in HasHeader
numberCache has higher capacity, increasing the odds of finding the
header without a database lookup.
* core: write imported block data using a batch
Restore batch writes of state and add blocks, tx entries, receipts to
the same batch. The change also simplifies the miner.
This commit also removes posting of logs when a forked block is imported.
* core: fix DB write error handling
* ethdb: use RLock for Has
* core: fix HasBlock comment
bmt is a new package that provides hashers for binary merkle tree hashes on
size-limited chunks. The main motivation is that using the BMT hash as the chunk
hash of the swarm hash offers log-sized inclusion proofs for arbitrary files on a
32-byte resolution, making them completely viable to use in challenges on the blockchain.
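A minimal binary-Merkle-tree root over a chunk, for intuition only (the real bmt package fixes the segment count and handles padding differently):
```go
// bmtRoot splits the chunk into 32-byte segments and hashes pairs
// upward until a single root remains; h is any 32-byte hash function.
func bmtRoot(chunk []byte, h func([]byte) []byte) []byte {
	const segSize = 32
	var level [][]byte
	for i := 0; i < len(chunk); i += segSize {
		end := i + segSize
		if end > len(chunk) {
			end = len(chunk)
		}
		level = append(level, chunk[i:end])
	}
	if len(level) == 0 {
		return h(nil) // empty chunk
	}
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 < len(level) {
				pair := append(append([]byte(nil), level[i]...), level[i+1]...)
				next = append(next, h(pair))
			} else {
				next = append(next, level[i]) // odd node is promoted
			}
		}
		level = next
	}
	return level[0]
}
```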
* cmd/evm: adds ability to run individual state test file
* cmd/evm: Fix statetest runner to be more json friendly
* cmd/evm, tests: minor polishes, dump state on fail
Using a Timer instead of a Ticker seems to behave a lot better, though I cannot
fully account for why (a Ticker should be more bursty, but not necessarily more
active over time; that may depend on how long a window it uses to decide when
to tick next).
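The re-armed Timer pattern in question (the work and quit parameters are placeholders):
```go
import "time"

// loop re-arms the timer after each round of work, so the next firing
// is measured from when the work finished; a Ticker would keep firing
// on its fixed schedule regardless of how long the work takes.
func loop(interval time.Duration, work func(), quit <-chan struct{}) {
	timer := time.NewTimer(interval)
	defer timer.Stop()
	for {
		select {
		case <-timer.C:
			work()
			timer.Reset(interval)
		case <-quit:
			return
		}
	}
}
```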
This fixes a regression where the new Failed field in ReceiptForStorage
rejected previously stored receipts. Fix it by removing the new field
and storing the status in the PostState field. This also removes the
massive RLP hackery around the status field.
The lock file was ineffective because opening leveldb storage in
read-only mode doesn't really take the lock. Fix it by including a
dedicated flock library (which is actually split out of goleveldb).
* Fix STATICCALL so it is able to call precompiles too
* Fix write detection to use the correct value argument of CALL
* Fix write protection to ignore the value in CALLCODE
Blockchain tests now include the "network" which determines the chain
config to use. Remove config matching based on test name and share the
name-to-config index with state tests.
Byzantium/Constantinople tests are still skipped because most of them
fail anyway.
Previously, NewManifest was asynchronous so subsequent code which tried
to use the returned manifest could error as the manifest was not yet
persisted.
This patch updates the Address type in common/types.go so that the Hex
function provides an EIP55-compliant output string. The implementation is pretty lightweight;
on my laptop the benchmark gives 1100ns/op, with the majority of that value due to the Keccak hash.
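For reference, a compact EIP-55 checksum sketch (standalone, not the common/types.go code itself):
```go
import (
	"encoding/hex"

	"golang.org/x/crypto/sha3"
)

// checksumAddress hashes the lowercase hex form of the address and
// uppercases every hex letter whose corresponding hash nibble is >= 8,
// as EIP-55 prescribes.
func checksumAddress(addr [20]byte) string {
	lower := hex.EncodeToString(addr[:]) // 40 lowercase hex characters
	h := sha3.NewLegacyKeccak256()
	h.Write([]byte(lower))
	sum := h.Sum(nil)

	out := []byte(lower)
	for i := range out {
		nib := sum[i/2]
		if i%2 == 0 {
			nib >>= 4 // high nibble for even positions
		}
		if nib&0x0f >= 8 && out[i] >= 'a' {
			out[i] -= 'a' - 'A' // uppercase the letter
		}
	}
	return "0x" + string(out)
}
```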
* core: reduce txpool event loop goroutines and sync structs
* cmd, core, eth: journal local transactions to disk
* core: journal replacement pending transactions too
* core: separate transaction journal from pool
* mobile: don't retain transient []byte in CallMsg.SetData
Go mobile doesn't copy []byte parameters, for performance and to allow
writes to the byte array to be reflected in the native byte array.
Unfortunately, that means []byte arguments are only valid during the
call they are being passed into.
CallMsg.SetData retains such a byte array. Copy it instead (a sketch
follows this entry).
Fixes #14675
* mobile: copy all []byte arguments from gomobile
To avoid subtle errors when accidentally retaining an otherwise
transient byte slice coming from gomobile, copy all byte slices before
use.
* mobile: replace copySlice with common.CopyBytes
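The shape of the fix (the wrapper's field layout is assumed for illustration):
```go
// SetData copies the gomobile-provided slice before retaining it, since
// the caller's backing array may be reused once the call returns.
func (msg *CallMsg) SetData(data []byte) {
	msg.msg.Data = common.CopyBytes(data) // defensive copy
}
```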
* core: remove redundant storage of transactions and receipts
* core, eth, internal: new transaction schema usage polishes
* eth: implement upgrade mechanism for db deduplication
* core, eth: drop old sequential key db upgrader
* eth: close last iterator on successful db upgrade
* core: prefix the lookup entries to make their purpose clearer
This PR polishes the EIP 100 difficulty adjustment algorithm
to match the same mechanisms as the Homestead one, keeping
the code uniform. It also avoids a few memory allocs
by reusing big1 and big2, pulling them out of the common package
and into ethash (a sketch of the EIP 100 rule follows below).
The commit also fixes chain maker to forward the uncle hash
when creating a simulated chain (it wasn't needed until now,
so we just skipped a copy there).
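A hedged sketch of the EIP 100 rule (integer math per the spec; the exponential difficulty-bomb term is omitted):
```go
import "math/big"

// eip100Difficulty computes
//   x    = max((2 if parent has uncles else 1) - timeDelta/9, -99)
//   diff = parentDiff + parentDiff/2048 * x
func eip100Difficulty(parentDiff *big.Int, parentHasUncles bool, timeDelta int64) *big.Int {
	y := int64(1)
	if parentHasUncles {
		y = 2
	}
	x := y - timeDelta/9
	if x < -99 {
		x = -99
	}
	adj := new(big.Int).Div(parentDiff, big.NewInt(2048))
	adj.Mul(adj, big.NewInt(x))
	return adj.Add(adj, parentDiff)
}
```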
Before you file a feature request, please check and make sure that it isn't possible
through some other means. The JavaScript enabled console is a powerful feature
in the right hands. Please check our [Bitchin' tricks](https://github.com/ethereum/go-ethereum/wiki/bitchin-tricks) wiki page for more info
and help.
## Contributing
If you'd like to contribute to go-ethereum please fork, fix, commit and
send a pull request. Commits which do not comply with the coding standards
are ignored (use gofmt!).
Thank you for considering to help out with the source code! We welcome contributions from
anyone on the internet, and are grateful for even the smallest of fixes!
See [Developers' Guide](https://github.com/ethereum/go-ethereum/wiki/Developers'-Guide)
for more details on configuring your environment, testing, and
dependency management.
Please make sure your contributions adhere to our coding guidelines:
* Code must adhere to the official Go [formatting](https://golang.org/doc/effective_go.html#formatting) guidelines (i.e. uses [gofmt](https://golang.org/cmd/gofmt/)).
* Code must be documented adhering to the official Go [commentary](https://golang.org/doc/effective_go.html#commentary) guidelines.
* Pull requests need to be based on and opened against the `master` branch.
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions. It's helpful to search the existing GitHub issues first. It's likely that another user has already reported the issue you're facing, or it's a known issue that we're already aware of. Please note that this is an issue tracker reserved for bug reports and feature requests. For general questions please use the gitter channel https://gitter.im/ethereum/swarm or the ethereum stack exchange at https://ethereum.stackexchange.com. -->
- sed -i '.bak' 's/repo.join/!repo.join/g' $(dirname `gem which cocoapods`)/cocoapods/sources_manager.rb
- if [ "$TRAVIS_PULL_REQUEST" = "false" ]; then git clone --depth=1 https://github.com/CocoaPods/Specs.git ~/.cocoapods/repos/master && pod setup --verbose; fi
- xctool -version
- xcrun simctl list
- go run build/ci.go xcode -signer IOS_SIGNING_KEY -deploy trunk -upload gethstore/builds
# This builder does the Azure archive purges to avoid accumulating junk
- if: type = cron
  os: linux
  dist: trusty
  sudo: required
  go: 1.12.x
  env:
    - azure-purge
  script:
    - go run build/ci.go purge -store gethstore/builds -days 14
This release is not backward compatible with the previous versions of Swarm due to changes to the wire protocol of the Retrieve Request messages. Please update your nodes.
### Bug fixes and Improvements
* [#1503](https://github.com/ethersphere/swarm/pull/1503): network/simulation: add ExecAdapter capability to swarm simulations
* [#1495](https://github.com/ethersphere/swarm/pull/1495): build: enable ubuntu ppa disco (19.04) builds
* [#1395](https://github.com/ethersphere/swarm/pull/1395): swarm/storage: support for uploading 100gb files
* [#1344](https://github.com/ethersphere/swarm/pull/1344): swarm/network, swarm/storage: simplification of fetchers
* [#1488](https://github.com/ethersphere/swarm/pull/1488): docker: include git commit hash in swarm version
## v0.4.1 (June 13, 2019)
### Improvements
* [#1465](https://github.com/ethersphere/swarm/pull/1465): network: bump proto versions due to change in OfferedHashesMsg
* [#1428](https://github.com/ethersphere/swarm/pull/1428): swarm-smoke: add debug flag
* [#1422](https://github.com/ethersphere/swarm/pull/1422): swarm/network/stream: remove dead code
* [#1463](https://github.com/ethersphere/swarm/pull/1463): docker: create new dockerfiles that are context aware
* [#1466](https://github.com/ethersphere/swarm/pull/1466): changelog for releases
### Bug fixes
* [#1460](https://github.com/ethersphere/swarm/pull/1460): storage: fix alignment panics on 32 bit arch
* [#1422](https://github.com/ethersphere/swarm/pull/1422), [#19650](https://github.com/ethereum/go-ethereum/pull/19650): swarm/network/stream: remove dead code
* [#19599](https://github.com/ethereum/go-ethereum/pull/19599): swarm/storage: fix SubscribePull to not skip chunks
### Notes
* Swarm has split the codebase ([go-ethereum#19661](https://github.com/ethereum/go-ethereum/pull/19661), [#1405](https://github.com/ethersphere/swarm/pull/1405)) from [ethereum/go-ethereum](https://github.com/ethereum/go-ethereum). The code is now under [ethersphere/swarm](https://github.com/ethersphere/swarm)
* New docker images (>=0.4.0) can now be found under https://hub.docker.com/r/ethersphere/swarm
## v0.4.0 (May 17, 2019)
### Changes
* Implemented parallel feed lookups within Swarm Feeds
* Updated syncing protocol subscription algorithm
* Implemented EIP-1577 - Multiaddr support for ENS
* Improved LocalStore implementation
* Added support for syncing tags which provide the ability to measure how long it will take for an uploaded file to sync to the network
* Fixed data race bugs within PSS
* Improved end-to-end integration tests
* Various performance improvements and bug fixes
* Improved instrumentation - metrics and OpenTracing traces
### Notes
This release is not backward compatible with the previous versions of Swarm due to the new LocalStore implementation. If you wish to keep your data, you should run a data migration prior to running this version.
BZZ network ID has been updated to 4.
Swarm v0.4.0 introduces major changes to the existing codebase. Among other things, the storage layer has been rewritten to be more modular and flexible in a manner that will accommodate for our future needs. Since Swarm at this point does not provide any storage guarantees, we have made the decision to not impose any migrations on the nodes that we maintain as part of the public test network, nor on our users. We have provided a [manual](https://github.com/ethersphere/swarm/blob/master/docs/Migration-v0.3-to-v0.4.md) for those of you who are running private deployments and would like to migrate your data to the new local storage schema.
Swarm is a distributed storage platform and content distribution service, a native base layer service of the ethereum web3 stack. The primary objective of Swarm is to provide a decentralized and redundant store for dapp code and data as well as blockchain and state data. Swarm also sets out to provide various base layer services for web3, including node-to-node messaging, media streaming, decentralised database services and scalable state-channel infrastructure for decentralised service economies.
Automated builds are available for stable releases and the unstable master branch.
Binary archives are published at https://geth.ethereum.org/downloads/.
Building Swarm requires Go (version 1.11 or later).
Building geth requires both a Go (version 1.7 or later) and a C compiler.
You can install them using your favourite package manager.
Once the dependencies are installed, run
To simply compile the `swarm` binary without a `GOPATH`:
```bash
make geth
```
or, to build the full suite of utilities:
```bash
make all
```
## Executables
The go-ethereum project comes with several wrappers/executables found in the `cmd` directory.
| Command | Description |
|:----------:|-------------|
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. See `geth --help` and the [CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options) for command line options. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI) with expanded functionality if the contract bytecode is also available. However it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://github.com/ethereum/go-ethereum/wiki/Native-DApps:-Go-bindings-to-Ethereum-contracts) wiki page for details. |
| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. |
| `disasm` | Bytecode disassembler to convert EVM (Ethereum Virtual Machine) bytecode into more user friendly assembly-like opcodes (e.g. `echo "6001" | disasm`). For details on the individual opcodes, please see pages 22-30 of the [Ethereum Yellow Paper](http://gavwood.com/paper.pdf). |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug`). |
| `gethrpctest` | Developer utility tool to support our [ethereum/rpc-test](https://github.com/ethereum/rpc-tests) test suite which validates baseline conformity to the [Ethereum JSON RPC](https://github.com/ethereum/wiki/wiki/JSON-RPC) specs. Please see the [test suite's readme](https://github.com/ethereum/rpc-tests/blob/master/README.md) for details. |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://github.com/ethereum/wiki/wiki/RLP)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to a more user-friendly hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
| `swarm` | Swarm daemon and tools. This is the entry point for the Swarm network. Run `swarm --help` for command line options and subcommands. See https://swarm-guide.readthedocs.io for swarm documentation. |
## Running geth
Going through all the possible command line flags is out of scope here (please consult our
[CLI Wiki page](https://github.com/ethereum/go-ethereum/wiki/Command-Line-Options)), but we've
enumerated a few common parameter combos to get you up to speed quickly on how you can run your
own Geth instance.
### Full node on the main Ethereum network
By far the most common scenario is people wanting to simply interact with the Ethereum network:
create accounts; transfer funds; deploy and interact with contracts. For this particular use-case
the user doesn't care about years-old historical data, so we can fast-sync quickly to the current
state of the network. To do so:
```
$ geth --fast --cache=512 console
```
```bash
$ git clone https://github.com/ethersphere/swarm
$ cd swarm
$ make swarm
```
You will find the binary under `./build/bin/swarm`.
This command will:
* Start geth in fast sync mode (`--fast`), causing it to download more data in exchange for avoiding
processing the entire history of the Ethereum network, which is very CPU intensive.
* Bump the memory allowance of the database to 512MB (`--cache=512`), which can help significantly in
sync times especially for HDD users. This flag is optional and you can set it as high or as low as
you'd like, though we'd recommend the 512MB - 2GB range.
* Start up Geth's built-in interactive [JavaScript console](https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console),
(via the trailing `console` subcommand) through which you can invoke all official [`web3` methods](https://github.com/ethereum/wiki/wiki/JavaScript-API)
as well as Geth's own [management APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs).
This too is optional and if you leave it out you can always attach to an already running Geth instance
with `geth attach`.
To build a vendored `swarm` using `go get` you must have `GOPATH` set. Then run:
### Full node on the Ethereum test network
Transitioning towards developers, if you'd like to play around with creating Ethereum contracts, you
almost certainly would like to do that without any real money involved until you get the hang of the
entire system. In other words, instead of attaching to the main network, you want to join the **test**
network with your node, which is fully equivalent to the main network, but with play-Ether only.
```
$ geth --testnet --fast --cache=512 console
```
```bash
$ go get -d github.com/ethersphere/swarm
$ go install github.com/ethersphere/swarm/cmd/swarm
```
The `--fast`, `--cache` flags and `console` subcommand have the exact same meaning as above and they
are equally useful on the testnet too. Please see above for their explanations if you've skipped to
here.
## Running Swarm
Specifying the `--testnet` flag, however, will reconfigure your Geth instance a bit:
Going through all the possible command line flags is out of scope here, but we've enumerated a few common parameter combos to get you up to speed quickly on how you can run your own Swarm node.
* Instead of using the default data directory (`~/.ethereum` on Linux for example), Geth will nest
itself one level deeper into a `testnet` subfolder (`~/.ethereum/testnet` on Linux). Note, on OSX
and Linux this also means that attaching to a running testnet node requires the use of a custom
endpoint since `geth attach` will try to attach to a production node endpoint by default. E.g.
`geth attach <datadir>/testnet/geth.ipc`. Windows users are not affected by this.
* Instead of connecting to the main Ethereum network, the client will connect to the test network,
which uses different P2P bootnodes, different network IDs and genesis states.
*Note: Although there are some internal protective measures to prevent transactions from crossing
over between the main network and test network, you should make sure to always use separate accounts
for play-money and real-money. Unless you manually move accounts, Geth will by default correctly
separate the two networks and will not make any accounts available between them.*
To run Swarm you need an Ethereum account. Download and install [Geth](https://geth.ethereum.org) if you don't have it on your system. You can create a new Ethereum account by running the following command:
### Configuration
As an alternative to passing the numerous flags to the `geth` binary, you can also pass a configuration file via:
```
$ geth --config /path/to/your_config.toml
```
```bash
$ geth account new
```
To get an idea of how the file should look, you can use the `dumpconfig` subcommand to export your existing configuration:
```
$ geth --your-favourite-flags dumpconfig
```
You will be prompted for a password:
```
Your new account is locked with a password. Please give a password. Do not forget this password.
Passphrase:
Repeat passphrase:
```
*Note: This works only with geth v1.6.0 and above*
#### Docker quick start
One of the quickest ways to get Ethereum up and running on your machine is by using Docker:
Once you have specified the password, the output will be the Ethereum address representing that account. For example:
```
docker run -d --name ethereum-node -v /Users/alice/ethereum:/root \
           -p 8545:8545 -p 30303:30303 \
           ethereum/client-go --fast --cache=512
```
This will start geth in fast sync mode with a DB memory allowance of 512MB just as the above command does. It will also create a persistent volume in your home directory for saving your blockchain as well as map the default ports. There is also an `alpine` tag available for a slim version of the image.
Using this account, connect to Swarm with
```bash
$ swarm --bzzaccount <your-account-here>
```
### Programmatically interfacing Geth nodes
As a developer, sooner rather than later you'll want to start interacting with Geth and the Ethereum
network via your own programs and not manually through the console. To aid this, Geth has built in
support for JSON-RPC based APIs ([standard APIs](https://github.com/ethereum/wiki/wiki/JSON-RPC) and
[Geth specific APIs](https://github.com/ethereum/go-ethereum/wiki/Management-APIs)). These can be
exposed via HTTP, WebSockets and IPC (unix sockets on unix based platforms, and named pipes on Windows).
The IPC interface is enabled by default and exposes all the APIs supported by Geth, whereas the HTTP
and WS interfaces need to manually be enabled and only expose a subset of APIs due to security reasons.
These can be turned on/off and configured as you'd expect.
HTTP based JSON-RPC API options:
* `--rpc` Enable the HTTP-RPC server
* `--rpcaddr` HTTP-RPC server listening interface (default: "localhost")
* `--rpcport` HTTP-RPC server listening port (default: 8545)
* `--rpcapi` APIs offered over the HTTP-RPC interface (default: "eth,net,web3")
* `--rpccorsdomain` Comma separated list of domains from which to accept cross origin requests (browser enforced)
* `--ws` Enable the WS-RPC server
* `--wsaddr` WS-RPC server listening interface (default: "localhost")
* `--wsport` WS-RPC server listening port (default: 8546)
* `--wsapi` APIs offered over the WS-RPC interface (default: "eth,net,web3")
* `--wsorigins` Origins from which to accept websockets requests
* `--ipcdisable` Disable the IPC-RPC server
* `--ipcapi` APIs offered over the IPC-RPC interface (default: "admin,debug,eth,miner,net,personal,shh,txpool,web3")
* `--ipcpath` Filename for IPC socket/pipe within the datadir (explicit paths escape it)
You'll need to use your own programming environments' capabilities (libraries, tools, etc) to connect
via HTTP, WS or IPC to a Geth node configured with the above flags and you'll need to speak [JSON-RPC](http://www.jsonrpc.org/specification)
on all transports. You can reuse the same connection for multiple requests!
**Note: Please understand the security implications of opening up an HTTP/WS based transport before
doing so! Hackers on the internet are actively trying to subvert Ethereum nodes with exposed APIs!
Further, all browser tabs can access locally running webservers, so malicious webpages could try to
subvert locally available APIs!**
### Operating a private network
Maintaining your own private network is more involved as a lot of configurations taken for granted in
the official networks need to be manually set up.
#### Defining the private genesis state
First, you'll need to create the genesis state of your networks, which all nodes need to be aware of
and agree upon. This consists of a small JSON file (e.g. call it `genesis.json`):
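A minimal illustrative `genesis.json` (all values here are placeholders; pick your own `chainId`, `difficulty` and `gasLimit`):
```json
{
  "config": {
    "chainId": 1337,
    "homesteadBlock": 0,
    "eip155Block": 0,
    "eip158Block": 0
  },
  "alloc": {},
  "difficulty": "0x20000",
  "gasLimit": "0x2fefd8",
  "nonce": "0x0000000000000042",
  "extraData": "",
  "mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "timestamp": "0x00"
}
```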
When running, Swarm is accessible through an HTTP API on port 8500.
Confirm that it is up and running by pointing your browser to http://localhost:8500
### Ethereum Name Service resolution
The Ethereum Name Service is the Ethereum equivalent of DNS in the classic web. In order to use ENS to resolve names to Swarm content hashes (e.g. `bzz://theswarm.eth`), `swarm` has to connect to a `geth` instance, which is synced with the Ethereum mainnet. This is done using the `--ens-api` flag.
With the genesis state defined in the above JSON file, you'll need to initialize **every** Geth node
with it prior to starting it up to ensure all blockchain parameters are correctly set:
For more information on usage, features or command line flags, please consult the Documentation.
```
$ geth init path/to/genesis.json
```
## Documentation
Swarm documentation can be found at [https://swarm-guide.readthedocs.io](https://swarm-guide.readthedocs.io).
## Docker
Swarm container images are available at Docker Hub: [ethersphere/swarm](https://hub.docker.com/r/ethersphere/swarm)
### Docker tags
* `latest` - latest stable release
* `edge` - latest build from `master`
* `v0.x.y` - specific stable release
### Environment variables
* `PASSWORD` - *required* - Used to set up a sample Ethereum account in the data directory. If a data directory is mounted with a volume, the first Ethereum account from it is loaded, and Swarm will try to decrypt it non-interactively with `PASSWORD`
* `DATADIR` - *optional* - Defaults to `/root/.ethereum`
### Swarm command line arguments
All Swarm command line arguments are supported and can be sent as part of the CMD field to the Docker container.
**Examples:**
Running a Swarm container from the command line
```bash
$ docker run -e PASSWORD=password123 -t ethersphere/swarm \
--debug \
--verbosity 4
```
#### Creating the rendezvous point
With all nodes that you want to run initialized to the desired genesis state, you'll need to start a
bootstrap node that others can use to find each other in your network and/or over the internet. The
clean way is to configure and run a dedicated bootnode:
```
$ bootnode --genkey=boot.key
$ bootnode --nodekey=boot.key
```
Running a Swarm container with custom ENS endpoint
```bash
$ docker run -e PASSWORD=password123 -t ethersphere/swarm \
--ens-api http://1.2.3.4:8545 \
--debug \
--verbosity 4
```
With the bootnode online, it will display an [`enode` URL](https://github.com/ethereum/wiki/wiki/enode-url-format)
that other nodes can use to connect to it and exchange peer information. Make sure to replace the
displayed IP address information (most probably `[::]`) with your externally accessible IP to get the
actual `enode` URL.
Running a Swarm container with metrics enabled
*Note: You could also use a full-fledged Geth node as a bootnode, but it's the less recommended way.*
#### Starting up your member nodes
With the bootnode operational and externally reachable (you can try `telnet <ip> <port>` to ensure
it's indeed reachable), start every subsequent Geth node pointed to the bootnode for peer discovery
via the `--bootnodes` flag. It will probably also be desirable to keep the data directory of your
private network separated, so do also specify a custom `--datadir` flag.
All dependencies are tracked in the `vendor` directory. We use `govendor` to manage them.
If you want to add a new dependency, run `govendor fetch <import-path>`, then commit the result.
If you want to update all dependencies to their latest upstream version, run `govendor fetch +v`.
### Testing
This section explains how to run unit, integration, and end-to-end tests in your development sandbox.
Testing one library:
```bash
$ go test -v -cpu 4 ./api
```
Note: Using the options -cpu (number of cores allowed) and -v (verbose logging even when there is no error) is recommended.
Testing only some methods:
```bash
$ go test -v -cpu 4 ./api -run TestMethod
```
Note: all tests with the prefix TestMethod will be run, so if you have both TestMethod and TestMethod1, both will run.
Running benchmarks:
```bash
$ go test -v -cpu 4 -bench . -run BenchmarkJoin
```
### Profiling Swarm
This section explains how to add the Go `pprof` profiler to Swarm.
If `swarm` is started with the `--pprof` option, a debugging HTTP server is made available on port 6060.
You can bring up http://localhost:6060/debug/pprof to see the heap, running routines etc.
By clicking "full goroutine stack dump" (i.e. http://localhost:6060/debug/pprof/goroutine?debug=2) you can generate a trace that is useful for debugging.
### Metrics and Instrumentation in Swarm
This section explains how to visualize and use existing Swarm metrics and how to instrument Swarm with a new metric.
The Swarm metrics system is based on the `go-metrics` library.
The most common types of measurements we use in Swarm are `counters` and `resetting timers`. Consult the `go-metrics` documentation for full reference of available types.
Swarm supports an InfluxDB exporter. Consult the help section to learn about the command line arguments used to configure it:
```bash
$ swarm --help | grep metrics
```
We use Grafana and InfluxDB to visualise metrics reported by Swarm. We keep our Grafana dashboards under version control at https://github.com/ethersphere/grafana-dashboards. You could use them or design your own.
We have built a tool to help with automatic start of Grafana and InfluxDB and provisioning of dashboards at https://github.com/nonsense/stateth, which requires that you have Docker installed.
Once you have `stateth` installed and Docker running locally, you have to:
1. Run `stateth` and keep it running in the background
2. Run `swarm` with its metrics reporting flags enabled (see the `swarm --help | grep metrics` output above) so that it reports to the local InfluxDB
3. Open Grafana at http://localhost:3000 and view the dashboards to gain insight into Swarm.
## Public Gateways
Swarm offers a local HTTP proxy API that Dapps can use to interact with Swarm. The Ethereum Foundation is hosting a public gateway, which allows free access so that people can try Swarm without running their own node.
The Swarm public gateways are temporary and users should not rely on their existence for production services.
The Swarm public gateway can be found at https://swarm-gateways.net and is always running the latest `stable` Swarm release.
## Swarm Dapps
You can find a few reference Swarm decentralised applications at: https://swarm-gateways.net/bzz:/swarmapps.eth
Their source code can be found at: https://github.com/ethersphere/swarm-dapps
## Contributing
Thank you for considering to help out with the source code! We welcome contributions from
anyone on the internet, and are grateful for even the smallest of fixes!
If you'd like to contribute to Swarm, please fork, fix, commit and send a pull request
for the maintainers to review and merge into the main code base. If you wish to submit more
complex changes though, please check up with the core devs first on [our Swarm gitter channel](https://gitter.im/ethersphere/orange-lounge)
to ensure those changes are in line with the general philosophy of the project and/or get some
early feedback which can make both your efforts much lighter as well as our review and merge
procedures quick and simple.
Please make sure your contributions adhere to our coding guidelines:
* Code must adhere to the official Go [formatting](https://golang.org/doc/effective_go.html#formatting) guidelines (i.e. uses [gofmt](https://golang.org/cmd/gofmt/)).
* Code must be documented adhering to the official Go [commentary](https://golang.org/doc/effective_go.html#commentary) guidelines.
* Pull requests need to be based on and opened against the `master` branch.
return nil, fmt.Errorf("Ledger v%d.%d.%d doesn't support signing this transaction, please update to v1.0.3 at least", w.version[0], w.version[1], w.version[2])
}
// All infos gathered and metadata checks out, request signing
<-w.commsLock
defer func() { w.commsLock <- struct{}{} }()
// Ensure the device isn't screwed with while user confirmation is pending
// TODO(karalabe): remove if hotplug lands on Windows
respondError(w, r, fmt.Sprintf("invalid URI %q", r.URL.Path), http.StatusBadRequest)
return
}
if uri.Addr != "" && strings.HasPrefix(uri.Addr, "0x") {
	uri.Addr = strings.TrimPrefix(uri.Addr, "0x")
	msg := fmt.Sprintf(`The requested hash seems to be prefixed with '0x'. You will be redirected to the correct URL within 5 seconds.<br/>
Please click <a href='%[1]s'>here</a> if your browser does not redirect you within 5 seconds.<script>setTimeout("location.href='%[1]s';",5000);</script>`, "/"+uri.String())
msg += fmt.Sprintf("Disambiguation:<br/>Your request may refer to multiple choices.<br/>Click <a class=\"orange\" href='"+"/"+uri.String()+"'>here</a> if your browser does not redirect you within 5 seconds.<script>setTimeout(\"location.href='%s';\",5000);</script><br/>", "/"+uri.String())
// this cannot be in a switch since an Accept header can have multiple values: "Accept: */*, text/html, application/xhtml+xml, application/xml;q=0.9, */*;q=0.8"
if resp.StatusCode != 404 && !strings.Contains(string(respbody), "Invalid URI \"/this_should_fail_as_no_bzz_protocol_present\": unknown scheme") {
	t.Fatalf("Response body does not match, expected: %v, to contain: %v; received code %d, expected code: %d", string(respbody), "Invalid bzz URI: unknown scheme", 400, resp.StatusCode)