swarm: codebase split from go-ethereum (#1405)

Rafael Matias
2019-06-03 12:28:18 +02:00
committed by Anton Evangelatov
parent 7a22da98b9
commit b046760db1
1540 changed files with 4654 additions and 129393 deletions

network/README.md Normal file
@@ -0,0 +1,152 @@
## Streaming
Streaming is a new protocol in the swarm bzz bundle of protocols.
It provides the basic logic for chunk-based data flow and implements
simple retrieve requests and deliveries using a priority queue.
A data exchange stream is a directional flow of chunks between peers.
The source of the data chunks is the upstream peer, the receiver is the
downstream peer. Each streaming protocol defines an outgoing streamer
and an incoming streamer, the former installed on the upstream peer,
the latter on the downstream peer.
Calling Subscribe on a StreamerPeer launches an incoming streamer that sends
a subscribe msg upstream. The streamer on the upstream peer
handles the subscribe msg by installing the relevant outgoing streamer.
The two sides then engage in a process where the upstream peer sends a sequence of hashes of
chunks downstream (OfferedHashesMsg); the downstream peer evaluates which hashes are needed
and gets them delivered by sending back a msg (WantedHashesMsg).
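A rough sketch of the messages involved in this round trip is given below. The names OfferedHashesMsg and WantedHashesMsg are the ones used above; the field layouts are illustrative assumptions and do not claim to match the actual wire format in the stream package.

```go
// Illustrative message shapes for the subscribe / offer / want round trip.
// Field names and types are assumptions, not the actual wire format.
type SubscribeMsg struct {
	Stream   string // stream name, e.g. a sync stream for one proximity bin
	From, To uint64 // requested range of the stream; open-ended for live syncing
}

// OfferedHashesMsg: the upstream peer offers a batch of chunk hashes for a range.
type OfferedHashesMsg struct {
	Stream string
	From   uint64
	To     uint64
	Hashes []byte // concatenated chunk hashes
}

// WantedHashesMsg: the downstream peer answers with a bitvector marking which
// of the offered hashes it is missing and wants delivered.
type WantedHashesMsg struct {
	Stream string
	Want   []byte // bitvector over the offered hashes
	From   uint64
	To     uint64
}
```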
Historical syncing is supported (though currently not with the right abstraction):
state is kept across sessions by saving a series of intervals after their last
batch has actually arrived.
Live streaming is also supported, by starting the session from the first item
after the subscription.
Provable data exchange: in case a stream represents a swarm document's data layer
or higher level chunks, streaming up to a certain index is always provable. This saves on
sending intermediate chunks.
Using the streamer logic, various stream types are easy to implement:
* light node requests:
* url lookup with offset
* document download
* document upload
* syncing
* live session syncing
* historical syncing
* simple retrieve requests and deliveries
* swarm feeds streams
* receipting for finger pointing
## Syncing
Syncing is the process that makes sure storer nodes end up storing all and only the chunks that are requested from them.
### Requirements
- eventual consistency: every chunk, including historical ones, should be syncable
- since the same chunk can and will arrive from many peers, network traffic should be
optimised so that the data of each chunk is transferred only once
- explicit request deliveries should be prioritised higher than recent chunks received
during the ongoing session, which in turn should be prioritised higher than historical chunks
- insured chunks should be receipted for finger-pointing litigation; the receipt storage
should be organised efficiently, and the upstream peer should be able to find these
receipts for a deleted chunk easily in order to refute a challenge
- syncing should be resilient to cut connections; metadata should be persisted that
keeps track of syncing state across sessions, and historical syncing state should survive restarts
- extra data structures to support syncing should be kept to a minimum
- syncing is not organised separately per chunk type (Swarm feed updates vs regular content chunks)
- various types of streams should have their common logic abstracted
Syncing is now entirely mediated by the localstore, i.e., there are no processes or memory leaks due to network contention.
When a new chunk is stored, its hash is indexed by proximity bin, and
peers synchronise by fetching the chunks that are closer to the downstream peer than to the upstream one.
Consequently, peers simply sync all stored items for the kademlia bin the receiving peer falls into.
The special case of nearest neighbour sets is handled by the downstream peer
indicating that it wants to sync all kademlia bins with proximity order equal to or higher
than its depth.
This sync state represents the initial state of a sync connection session.
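To illustrate how a chunk is assigned to a bin, the proximity order between a chunk address and a peer's overlay address can be computed as the number of shared leading bits. This is a simplified sketch; the real implementation lives in the pot and kademlia packages.

```go
package main

import "fmt"

// proximityOrder returns the number of leading bits two addresses share,
// i.e. the kademlia bin a chunk falls into relative to a peer.
// Simplified sketch; swarm's pot package implements the real logic.
func proximityOrder(a, b []byte) int {
	for i := 0; i < len(a) && i < len(b); i++ {
		x := a[i] ^ b[i]
		if x == 0 {
			continue
		}
		// locate the highest differing bit within this byte
		for j := 7; j >= 0; j-- {
			if x&(1<<uint(j)) != 0 {
				return i*8 + (7 - j)
			}
		}
	}
	return 8 * len(a) // addresses are identical up to their length
}

func main() {
	peer := []byte{0xAA, 0x00}  // 1010 1010 ...
	chunk := []byte{0xA8, 0x00} // 1010 1000 ...
	fmt.Println(proximityOrder(peer, chunk)) // 6: the first six bits agree
}
```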
Retrieval is dictated by downstream peers, simply using a special streamer protocol.
Syncing chunks created during the session by the upstream peer is called live session syncing,
while syncing of earlier chunks is historical syncing.
Once the relevant hashes are received, the downstream peer looks up each hash segment in its localstore
and sends the upstream peer a message with a bitvector indicating which of the offered chunks are
missing. In turn, the upstream peer sends the relevant chunk data alongside their indexes.
Chunks are sent through a priority queue system. If, while looking up hashes in its localstore, the
downstream peer hits an open request, then a retrieve request is sent immediately to the upstream peer indicating
that no extra round of checks is needed. If another peer's syncer hits the same open request, it is slightly unsafe not to ask
that peer too: if the first one disconnects before delivering, or fails to deliver and therefore gets
disconnected, we should still be able to continue with the other. The minimal redundant traffic coming from such simultaneous
eventualities should be sufficiently rare not to warrant more complex treatment.
Session syncing involves the downstream peer requesting a new state on a bin from the upstream peer.
Using the new state, the range of chunks between the previous state and the new one is retrieved
and chunks are requested identically to the historical case. After receiving all the missing chunks
from the new hashes, the downstream peer requests a new range. If this happens before the upstream peer advances to a new state,
we say that session syncing is live, or that the two peers are in sync. In general, the time elapsed since the downstream peer's request up to the current session cursor is a good indication of a permanent (and probably increasing) lag.
If there is no historical backlog, and the downstream peer has an acceptable 'last synced' tag, then it is said to be fully synced with the upstream peer.
If a peer is fully synced with all its storer peers, it can advertise itself as globally fully synced.
The downstream peer persists the record of the last synced offset. When the two peers disconnect and
reconnect, syncing can resume from there.
This situation, however, can also happen while historical syncing is not yet complete.
Effectively this means that the peer needs to persist a record of an arbitrary array of covered offset ranges.
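A toy sketch of keeping such a record of covered ranges is shown below. The actual swarm implementation persists comparable per-stream intervals in its state store; the type and method names here are purely illustrative.

```go
package main

import "fmt"

// Intervals is a toy record of covered offset ranges, kept sorted and merged.
// Illustrative only; swarm persists a comparable structure per stream.
type Intervals struct {
	ranges [][2]uint64 // sorted, non-overlapping [start, end] pairs
}

// Add marks [start, end] as covered, merging adjacent or overlapping ranges.
func (iv *Intervals) Add(start, end uint64) {
	merged := [][2]uint64{}
	placed := false
	for _, r := range iv.ranges {
		switch {
		case r[1]+1 < start: // existing range lies strictly before the new one
			merged = append(merged, r)
		case end+1 < r[0]: // existing range lies strictly after the new one
			if !placed {
				merged = append(merged, [2]uint64{start, end})
				placed = true
			}
			merged = append(merged, r)
		default: // overlaps or touches: widen the new range
			if r[0] < start {
				start = r[0]
			}
			if r[1] > end {
				end = r[1]
			}
		}
	}
	if !placed {
		merged = append(merged, [2]uint64{start, end})
	}
	iv.ranges = merged
}

func main() {
	var iv Intervals
	iv.Add(1, 10)  // historical batch
	iv.Add(20, 30) // later session batch
	iv.Add(11, 19) // gap filled: the ranges merge into one
	fmt.Println(iv.ranges) // [[1 30]]
}
```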
### Delivery requests
Once the appropriate ranges of the hash stream are retrieved and buffered, the downstream peer simply scans the hashes, looks them up in the localstore and, if a hash is not found, creates a request entry.
The range is referenced by the chunk index. Alongside the name (indicating the stream, e.g., content chunks for bin 6) and the range,
the downstream peer sends a 128-bit-long bitvector indicating which chunks are needed.
Newly created requests are bound together in a waitgroup which, when done, prompts sending the next one.
To be able to check and store concurrently, we keep a buffer of one, so we start with two batches of hashes.
If there is nothing to give, the upstream peer's SetNextBatch call blocks. A subscription ends with an unsubscribe, which removes the syncer from the map.
Cancelling requests (for instance the late chunks of an erasure batch) should be done by closing a channel
on the request.
A simple request is also a subscribe.
Different streaming protocols are different p2p protocols with the same message types;
the constructor is the Run function itself, which takes a streamer peer as argument.
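As a rough sketch, building the want bitvector for one offered batch could look like the following. It uses the BitVector type added in this commit; the import path and the haveLocally lookup function are assumptions standing in for the localstore check.

```go
package example

import (
	// assumed import path for the bitvector package added in this commit
	"github.com/ethersphere/swarm/network/bitvector"
)

// buildWantVector marks which of the offered hashes the downstream peer
// still needs. haveLocally stands in for a localstore lookup (hypothetical).
func buildWantVector(offered [][]byte, haveLocally func([]byte) bool) (*bitvector.BitVector, error) {
	want, err := bitvector.New(len(offered)) // typically 128 hashes per batch
	if err != nil {
		return nil, err
	}
	for i, hash := range offered {
		if !haveLocally(hash) {
			want.Set(i, true) // bit set: please deliver this chunk
		}
	}
	return want, nil
}
```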
### Provable streams
The swarm hash over the hash stream has many advantages. It implements a provable data transfer
and provides efficient storage for receipts in the form of inclusion proofs usable for finger-pointing litigation.
When challenged on a missing chunk, the upstream peer will provide an inclusion proof of a chunk hash against the state of the
sync stream. In order to be able to generate such an inclusion proof, the upstream peer needs to store the hash index (counting consecutive hash-size segments) alongside the chunk data and preserve it, even after the chunk data is deleted, until the chunk is no longer insured.
If there is no valid insurance on the files, the entry may be deleted.
As long as the chunk is preserved, no takeover proof will be needed since the node can respond to any challenge.
However, once the node needs to delete an insured chunk for capacity reasons, a receipt should be available to
refute the challenge by finger pointing to a downstream peer.
As part of the deletion protocol then, hashes of insured chunks to be removed are pushed to an infinite stream for every bin.
The downstream peer, on the other hand, needs to make sure that it can only be finger pointed about a chunk it did receive and store.
For this, the check of a state should be exhaustive. If historical syncing finishes on one state, all hashes before it are covered, with no
surprises. In other words, historical syncing is self-verifying. With session syncing, however, it is not enough to check backwards, covering the range from the old offset to the new one. Continuity (i.e., that the new state is an extension of the old) needs to be verified: after the downstream peer reads the range into a buffer, it appends the buffer to the last known state at the last known offset and verifies that the resulting hash matches
the latest state. Past intervals of historical syncing are checked via the session root.
The upstream peer signs the states, which downstream peers can use as handover proofs.
Downstream peers sign off on a state together with an initial offset.
Once historical syncing is complete and the session does not lag, the downstream peer only preserves the latest upstream state and stores the signed version.
The upstream peer needs to keep the latest takeover states: each deleted chunk's hash should be covered by a takeover proof of at least one peer. If historical syncing is complete, the upstream peer will typically store only the latest takeover proof from the downstream peer.
Crucially, the structure is totally independent of the number of peers in the bin, so it scales extremely well.
## Implementation
The simplest protocol just requires the upstream peer to prefix the key with the kademlia proximity order (say 0-15 or 0-31)
and to simply iterate on the index per bin when syncing with a peer.
Priority queues are used for sending chunks so that user-triggered requests are responded to first, session syncing second, and historical syncing with lower priority.
The request for a chunk remains implemented as a dataless entry in the memory store.
The lifecycle of this object should be more carefully thought through, i.e., when it fails to retrieve, it should be removed.
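For illustration only, a per-bin key for such iteration might prefix the chunk address with its proximity order and an insertion counter. The real localstore maintains its own index schema, so everything below is an assumption.

```go
package example

import "encoding/binary"

// binKey builds a key that groups chunks by kademlia bin and preserves
// insertion order inside the bin, so syncing can iterate one bin at a time.
// Illustrative sketch only; not the actual localstore index layout.
func binKey(po uint8, index uint64, addr []byte) []byte {
	key := make([]byte, 1+8+len(addr))
	key[0] = po                                 // bin number, e.g. 0-31
	binary.BigEndian.PutUint64(key[1:9], index) // per-bin insertion counter
	copy(key[9:], addr)                         // chunk address
	return key
}
```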

@@ -0,0 +1,62 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package bitvector
import (
"errors"
)
var errInvalidLength = errors.New("invalid length")
// BitVector is a fixed-length bit array backed by a byte slice.
type BitVector struct {
len int
b []byte
}
// New creates a new bit vector of length l.
func New(l int) (bv *BitVector, err error) {
return NewFromBytes(make([]byte, l/8+1), l)
}
// NewFromBytes creates a bit vector of length l backed by the byte slice b.
func NewFromBytes(b []byte, l int) (bv *BitVector, err error) {
if l <= 0 {
return nil, errInvalidLength
}
if len(b)*8 < l {
return nil, errInvalidLength
}
return &BitVector{
len: l,
b: b,
}, nil
}
// Get returns the value of the i'th bit; it panics if i is out of range.
func (bv *BitVector) Get(i int) bool {
bi := i / 8
return bv.b[bi]&(0x1<<uint(i%8)) != 0
}
// Set sets the i'th bit to v.
func (bv *BitVector) Set(i int, v bool) {
bi := i / 8
cv := bv.Get(i)
if cv != v {
bv.b[bi] ^= 0x1 << uint8(i%8)
}
}
// Bytes returns the underlying byte slice.
func (bv *BitVector) Bytes() []byte {
return bv.b
}
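A small usage example for the type above (assuming the package is imported as bitvector and fmt is available):

```go
bv, _ := bitvector.New(128) // one bit per offered hash
bv.Set(3, true)             // mark the 4th chunk as wanted
fmt.Println(bv.Get(3))      // true
fmt.Println(bv.Get(4))      // false

// the raw bytes can travel in a message and be reconstructed on the other side
bv2, err := bitvector.NewFromBytes(bv.Bytes(), 128)
if err != nil {
	// handle the error
}
fmt.Println(bv2.Get(3)) // true
```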

@@ -0,0 +1,104 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package bitvector
import "testing"
func TestBitvectorNew(t *testing.T) {
_, err := New(0)
if err != errInvalidLength {
t.Errorf("expected err %v, got %v", errInvalidLength, err)
}
_, err = NewFromBytes(nil, 0)
if err != errInvalidLength {
t.Errorf("expected err %v, got %v", errInvalidLength, err)
}
_, err = NewFromBytes([]byte{0}, 9)
if err != errInvalidLength {
t.Errorf("expected err %v, got %v", errInvalidLength, err)
}
_, err = NewFromBytes(make([]byte, 8), 8)
if err != nil {
t.Error(err)
}
}
func TestBitvectorGetSet(t *testing.T) {
for _, length := range []int{
1,
2,
4,
8,
9,
15,
16,
} {
bv, err := New(length)
if err != nil {
t.Errorf("error for length %v: %v", length, err)
}
for i := 0; i < length; i++ {
if bv.Get(i) {
t.Errorf("expected false for element on index %v", i)
}
}
func() {
defer func() {
if err := recover(); err == nil {
t.Errorf("expecting panic")
}
}()
bv.Get(length + 8)
}()
for i := 0; i < length; i++ {
bv.Set(i, true)
for j := 0; j < length; j++ {
if j == i {
if !bv.Get(j) {
t.Errorf("element on index %v is not set to true", i)
}
} else {
if bv.Get(j) {
t.Errorf("element on index %v is not false", i)
}
}
}
bv.Set(i, false)
if bv.Get(i) {
t.Errorf("element on index %v is not set to false", i)
}
}
}
}
func TestBitvectorNewFromBytesGet(t *testing.T) {
bv, err := NewFromBytes([]byte{8}, 8)
if err != nil {
t.Error(err)
}
if !bv.Get(3) {
t.Fatalf("element 3 is not set to true: state %08b", bv.b[0])
}
}

network/common.go Normal file
@@ -0,0 +1,30 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"fmt"
"strings"
)
// LogAddrs formats overlay addresses as a comma-separated list of the hex encoding of their first four bytes, for logging.
func LogAddrs(nns [][]byte) string {
var nnsa []string
for _, nn := range nns {
nnsa = append(nnsa, fmt.Sprintf("%08x", nn[:4]))
}
return strings.Join(nnsa, ", ")
}
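A hypothetical call site, showing the output format:

```go
nns := [][]byte{
	{0xde, 0xad, 0xbe, 0xef, 0x01},
	{0xca, 0xfe, 0xba, 0xbe, 0x02},
}
s := LogAddrs(nns)
// s == "deadbeef, cafebabe"
```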

network/discovery.go Normal file
@@ -0,0 +1,220 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"context"
"fmt"
"sync"
"github.com/ethersphere/swarm/pot"
)
// discovery bzz extension for requesting and relaying node address records
var sortPeers = noSortPeers
// Peer wraps BzzPeer and embeds Kademlia overlay connectivity driver
type Peer struct {
*BzzPeer
kad *Kademlia
sentPeers bool // whether we have already sent peers closer to this address
mtx sync.RWMutex //
peers map[string]bool // tracks node records sent to the peer
depth uint8 // the proximity order advertised by remote as depth of saturation
}
// NewPeer constructs a discovery peer
func NewPeer(p *BzzPeer, kad *Kademlia) *Peer {
d := &Peer{
kad: kad,
BzzPeer: p,
peers: make(map[string]bool),
}
// record remote as seen so we never send a peer its own record
d.seen(p.BzzAddr)
return d
}
// HandleMsg is the message handler that delegates incoming messages
func (d *Peer) HandleMsg(ctx context.Context, msg interface{}) error {
switch msg := msg.(type) {
case *peersMsg:
return d.handlePeersMsg(msg)
case *subPeersMsg:
return d.handleSubPeersMsg(msg)
default:
return fmt.Errorf("unknown message type: %T", msg)
}
}
// NotifyDepth sends a message to all connections if depth of saturation is changed
func NotifyDepth(depth uint8, kad *Kademlia) {
f := func(val *Peer, po int) bool {
val.NotifyDepth(depth)
return true
}
kad.EachConn(nil, 255, f)
}
// NotifyPeer informs all peers about a newly added node
func NotifyPeer(p *BzzAddr, k *Kademlia) {
f := func(val *Peer, po int) bool {
val.NotifyPeer(p, uint8(po))
return true
}
k.EachConn(p.Address(), 255, f)
}
// NotifyPeer notifies the remote node (recipient) about a peer if
// the peer's PO is within the recipient's advertised depth
// OR the peer is closer to the recipient than we are,
// unless the recipient was already notified during the connection session
func (d *Peer) NotifyPeer(a *BzzAddr, po uint8) {
// return early if the address was already sent, or the peer is neither within the advertised depth nor closer to the recipient than we are
if (po < d.getDepth() && pot.ProxCmp(d.kad.BaseAddr(), d, a) != 1) || d.seen(a) {
return
}
resp := &peersMsg{
Peers: []*BzzAddr{a},
}
go d.Send(context.TODO(), resp)
}
// NotifyDepth sends a subPeers Msg to the receiver notifying them about
// a change in the depth of saturation
func (d *Peer) NotifyDepth(po uint8) {
go d.Send(context.TODO(), &subPeersMsg{Depth: po})
}
/*
peersMsg is the message to pass peer information
It is always a response to a peersRequestMsg
The encoding of a peer address is identical to the devp2p base protocol peers
messages: [IP, Port, NodeID],
Note that a node's FileStore address is not the NodeID but the hash of the NodeID.
TODO:
To mitigate against spurious peers messages, requests should be remembered
and correctness of responses should be checked
If the proxBin of peers in the response is incorrect the sender should be
disconnected
*/
// peersMsg encapsulates an array of peer addresses
// used for communicating about known peers
// relevant for bootstrapping connectivity and updating peersets
type peersMsg struct {
Peers []*BzzAddr
}
// String pretty prints a peersMsg
func (msg peersMsg) String() string {
return fmt.Sprintf("%T: %v", msg, msg.Peers)
}
// handlePeersMsg called by the protocol when receiving peerset (for target address)
// list of nodes ([]PeerAddr in peersMsg) is added to the overlay db using the
// Register interface method
func (d *Peer) handlePeersMsg(msg *peersMsg) error {
// register all addresses
if len(msg.Peers) == 0 {
return nil
}
for _, a := range msg.Peers {
d.seen(a)
NotifyPeer(a, d.kad)
}
return d.kad.Register(msg.Peers...)
}
// subPeersMsg communicates the depth of the overlay table of a peer
type subPeersMsg struct {
Depth uint8
}
// String returns the pretty printer
func (msg subPeersMsg) String() string {
return fmt.Sprintf("%T: request peers > PO%02d. ", msg, msg.Depth)
}
// handleSubPeersMsg handles incoming subPeersMsg
// this message represents the saturation depth of the remote peer
// saturation depth is the radius within which the peer subscribes to peers
// the first time this is received we send peer info on all
// our connected peers that fall within the peer's saturation depth
// otherwise this depth is just recorded on the peer, so that
// subsequent new connections are sent iff they fall within the radius
func (d *Peer) handleSubPeersMsg(msg *subPeersMsg) error {
d.setDepth(msg.Depth)
// only send peers after the initial subPeersMsg
if !d.sentPeers {
var peers []*BzzAddr
// iterate connections in ascending order of distance from the remote address
d.kad.EachConn(d.Over(), 255, func(p *Peer, po int) bool {
// terminate if we are beyond the radius
if uint8(po) < msg.Depth {
return false
}
if !d.seen(p.BzzAddr) { // seen() also records the address as sent
peers = append(peers, p.BzzAddr)
}
return true
})
// if useful peers are found, send them over
if len(peers) > 0 {
go d.Send(context.TODO(), &peersMsg{Peers: sortPeers(peers)})
}
}
d.sentPeers = true
return nil
}
// seen takes a peer address and checks if it was sent to a peer already
// if not, marks the peer as sent
func (d *Peer) seen(p *BzzAddr) bool {
d.mtx.Lock()
defer d.mtx.Unlock()
k := string(p.Address())
if d.peers[k] {
return true
}
d.peers[k] = true
return false
}
func (d *Peer) getDepth() uint8 {
d.mtx.RLock()
defer d.mtx.RUnlock()
return d.depth
}
func (d *Peer) setDepth(depth uint8) {
d.mtx.Lock()
defer d.mtx.Unlock()
d.depth = depth
}
func noSortPeers(peers []*BzzAddr) []*BzzAddr {
return peers
}

network/discovery_test.go Normal file
@@ -0,0 +1,264 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"crypto/ecdsa"
crand "crypto/rand"
"encoding/binary"
"fmt"
"math/rand"
"net"
"sort"
"testing"
"time"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/protocols"
p2ptest "github.com/ethereum/go-ethereum/p2p/testing"
"github.com/ethersphere/swarm/pot"
)
// TestSubPeersMsg checks that, after connect, an outgoing subPeersMsg is sent.
func TestSubPeersMsg(t *testing.T) {
params := NewHiveParams()
s, pp, err := newHiveTester(params, 1, nil)
if err != nil {
t.Fatal(err)
}
node := s.Nodes[0]
raddr := NewAddr(node)
pp.Register(raddr)
// start the hive and wait for the connection
pp.Start(s.Server)
defer pp.Stop()
// send subPeersMsg to the peer
err = s.TestExchanges(p2ptest.Exchange{
Label: "outgoing subPeersMsg",
Expects: []p2ptest.Expect{
{
Code: 1,
Msg: &subPeersMsg{Depth: 0},
Peer: node.ID(),
},
},
})
if err != nil {
t.Fatal(err)
}
}
const (
maxPO = 8 // PO of pivot and control; chosen to test enough cases but not run too long
maxPeerPO = 6 // pivot has no peers closer than this to the control peer
maxPeersPerPO = 3
)
// TestInitialPeersMsg tests if peersMsg response to incoming subPeersMsg is correct
func TestInitialPeersMsg(t *testing.T) {
for po := 0; po < maxPO; po++ {
for depth := 0; depth < maxPO; depth++ {
t.Run(fmt.Sprintf("PO=%d,advertised depth=%d", po, depth), func(t *testing.T) {
testInitialPeersMsg(t, po, depth)
})
}
}
}
// testInitialPeersMsg tests that the correct set of peer info is sent
// to another peer after receiving their subPeersMsg request
func testInitialPeersMsg(t *testing.T, peerPO, peerDepth int) {
// generate random pivot address
prvkey, err := crypto.GenerateKey()
if err != nil {
t.Fatal(err)
}
defer func(orig func([]*BzzAddr) []*BzzAddr) {
sortPeers = orig
}(sortPeers)
sortPeers = testSortPeers
pivotAddr := pot.NewAddressFromBytes(PrivateKeyToBzzKey(prvkey))
// generate control peers address at peerPO wrt pivot
peerAddr := pot.RandomAddressAt(pivotAddr, peerPO)
// construct kademlia and hive
to := NewKademlia(pivotAddr[:], NewKadParams())
hive := NewHive(NewHiveParams(), to, nil)
// expected addrs in peersMsg response
var expBzzAddrs []*BzzAddr
connect := func(a pot.Address, po int) (addrs []*BzzAddr) {
n := rand.Intn(maxPeersPerPO)
for i := 0; i < n; i++ {
peer, err := newDiscPeer(pot.RandomAddressAt(a, po))
if err != nil {
t.Fatal(err)
}
hive.On(peer)
addrs = append(addrs, peer.BzzAddr)
}
return addrs
}
register := func(a pot.Address, po int) {
addr := pot.RandomAddressAt(a, po)
hive.Register(&BzzAddr{OAddr: addr[:]})
}
// generate connected and just registered peers
for po := maxPeerPO; po >= 0; po-- {
// create a fake connected peer at po from peerAddr
ons := connect(peerAddr, po)
// create a fake registered address at po from peerAddr
register(peerAddr, po)
// we collect expected peer addresses only down to the advertised peer depth
if po < peerDepth {
continue
}
expBzzAddrs = append(expBzzAddrs, ons...)
}
// add extra connections closer to pivot than control
for po := peerPO + 1; po < maxPO; po++ {
ons := connect(pivotAddr, po)
if peerDepth <= peerPO {
expBzzAddrs = append(expBzzAddrs, ons...)
}
}
// create a special bzzBaseTester in which we can associate `enode.ID` to the `bzzAddr` we created above
s, _, err := newBzzBaseTesterWithAddrs(prvkey, [][]byte{peerAddr[:]}, DiscoverySpec, hive.Run)
if err != nil {
t.Fatal(err)
}
defer s.Stop()
// peerID to use in the protocol tester testExchange expect/trigger
peerID := s.Nodes[0].ID()
// block until control peer is found among hive peers
found := false
for attempts := 0; attempts < 2000; attempts++ {
found = hive.Peer(peerID) != nil
if found {
break
}
time.Sleep(1 * time.Millisecond)
}
if !found {
t.Fatal("timeout waiting for peer connection to start")
}
// pivotDepth is the advertised depth of the pivot node we expect in the outgoing subPeersMsg
pivotDepth := hive.Saturation()
// the test exchange is as follows:
// 1. pivot sends to the control peer a `subPeersMsg` advertising its depth (ignored)
// 2. peer sends to pivot a `subPeersMsg` advertising its own depth (arbitrarily chosen)
// 3. pivot responds with `peersMsg` with the set of expected peers
err = s.TestExchanges(
p2ptest.Exchange{
Label: "outgoing subPeersMsg",
Expects: []p2ptest.Expect{
{
Code: 1,
Msg: &subPeersMsg{Depth: uint8(pivotDepth)},
Peer: peerID,
},
},
},
p2ptest.Exchange{
Label: "trigger subPeersMsg and expect peersMsg",
Triggers: []p2ptest.Trigger{
{
Code: 1,
Msg: &subPeersMsg{Depth: uint8(peerDepth)},
Peer: peerID,
},
},
Expects: []p2ptest.Expect{
{
Code: 0,
Msg: &peersMsg{Peers: testSortPeers(expBzzAddrs)},
Peer: peerID,
Timeout: 100 * time.Millisecond,
},
},
})
// for values maxPeerPO < peerPO < maxPO the pivot has no peers to offer to the control peer
// in this case, no peersMsg will be sent out, and we would run into a timeout
if len(expBzzAddrs) == 0 {
if err != nil {
if err.Error() != "exchange #1 \"trigger subPeersMsg and expect peersMsg\": timed out" {
t.Fatalf("expected timeout, got %v", err)
}
return
}
t.Fatalf("expected timeout, got no error")
}
if err != nil {
t.Fatal(err)
}
}
func testSortPeers(peers []*BzzAddr) []*BzzAddr {
comp := func(i, j int) bool {
vi := binary.BigEndian.Uint64(peers[i].OAddr)
vj := binary.BigEndian.Uint64(peers[j].OAddr)
return vi < vj
}
sort.Slice(peers, comp)
return peers
}
// as we are not creating a real node via the protocol,
// we need to create the discovery peer objects for the additional kademlia
// nodes manually
func newDiscPeer(addr pot.Address) (*Peer, error) {
pKey, err := ecdsa.GenerateKey(crypto.S256(), crand.Reader)
if err != nil {
return nil, err
}
pubKey := pKey.PublicKey
nod := enode.NewV4(&pubKey, net.IPv4(127, 0, 0, 1), 0, 0)
bzzAddr := &BzzAddr{OAddr: addr[:], UAddr: []byte(nod.String())}
id := nod.ID()
p2pPeer := p2p.NewPeer(id, id.String(), nil)
return NewPeer(&BzzPeer{
Peer: protocols.NewPeer(p2pPeer, &dummyMsgRW{}, DiscoverySpec),
BzzAddr: bzzAddr,
}, nil), nil
}
type dummyMsgRW struct{}
func (d *dummyMsgRW) ReadMsg() (p2p.Msg, error) {
return p2p.Msg{}, nil
}
func (d *dummyMsgRW) WriteMsg(msg p2p.Msg) error {
return nil
}

network/enr.go Normal file
@@ -0,0 +1,93 @@
package network
import (
"fmt"
"io"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/protocols"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethersphere/swarm/log"
)
// ENRAddrEntry is the entry type to store the bzz key in the enode
type ENRAddrEntry struct {
data []byte
}
// NewENRAddrEntry creates an ENRAddrEntry from a bzz overlay address
func NewENRAddrEntry(addr []byte) *ENRAddrEntry {
return &ENRAddrEntry{
data: addr,
}
}
func (b ENRAddrEntry) Address() []byte {
return b.data
}
// ENRKey implements enr.Entry
func (b ENRAddrEntry) ENRKey() string {
return "bzzkey"
}
// EncodeRLP implements rlp.Encoder
func (b ENRAddrEntry) EncodeRLP(w io.Writer) error {
log.Debug("in encoderlp", "b", b, "p", fmt.Sprintf("%p", &b))
return rlp.Encode(w, &b.data)
}
// DecodeRLP implements rlp.Decoder
func (b *ENRAddrEntry) DecodeRLP(s *rlp.Stream) error {
byt, err := s.Bytes()
if err != nil {
return err
}
b.data = byt
log.Debug("in decoderlp", "b", b, "p", fmt.Sprintf("%p", &b))
return nil
}
type ENRLightNodeEntry bool
func (b ENRLightNodeEntry) ENRKey() string {
return "bzzlightnode"
}
type ENRBootNodeEntry bool
func (b ENRBootNodeEntry) ENRKey() string {
return "bzzbootnode"
}
func getENRBzzPeer(p *p2p.Peer, rw p2p.MsgReadWriter, spec *protocols.Spec) *BzzPeer {
var lightnode ENRLightNodeEntry
var bootnode ENRBootNodeEntry
// retrieve the ENR Record data
record := p.Node().Record()
record.Load(&lightnode)
record.Load(&bootnode)
// get the address; separate function as long as we need swarm/network:NewAddr() to call it
addr := getENRBzzAddr(p.Node())
// build the peer using the retrieved data
return &BzzPeer{
Peer: protocols.NewPeer(p, rw, spec),
LightNode: bool(lightnode),
BzzAddr: addr,
}
}
func getENRBzzAddr(nod *enode.Node) *BzzAddr {
var addr ENRAddrEntry
record := nod.Record()
record.Load(&addr)
return &BzzAddr{
OAddr: addr.data,
UAddr: []byte(nod.String()),
}
}
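A sketch of how these entries could be written into and read back from a record, using the enr package from go-ethereum; the function name and surrounding setup are illustrative and not part of this file.

```go
// assumes: import "github.com/ethereum/go-ethereum/p2p/enr"

// exampleENRRoundTrip (hypothetical) stores a bzz overlay address and the
// lightnode flag in an ENR record and reads them back.
func exampleENRRoundTrip() error {
	var record enr.Record

	record.Set(NewENRAddrEntry([]byte{0xde, 0xad, 0xbe, 0xef}))
	record.Set(ENRLightNodeEntry(true))

	var addr ENRAddrEntry
	var light ENRLightNodeEntry
	if err := record.Load(&addr); err != nil {
		return err
	}
	if err := record.Load(&light); err != nil {
		return err
	}
	log.Debug("enr round trip", "bzzkey", fmt.Sprintf("%x", addr.Address()), "lightnode", bool(light))
	return nil
}
```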

network/fetcher.go Normal file
@@ -0,0 +1,336 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"context"
"fmt"
"sync"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/tracing"
olog "github.com/opentracing/opentracing-go/log"
)
const (
defaultSearchTimeout = 1 * time.Second
// maximum number of forwarded requests (hops), to make sure requests are not
// forwarded forever in peer loops
maxHopCount uint8 = 20
)
// Time to consider peer to be skipped.
// Also used in stream delivery.
var RequestTimeout = 10 * time.Second
type RequestFunc func(context.Context, *Request) (*enode.ID, chan struct{}, error)
// Fetcher is created when a chunk is not found locally. It starts a request handler loop once and
// keeps it alive until all active requests are completed. This can happen:
// 1. either because the chunk is delivered
// 2. or because the requester cancelled/timed out
// The Fetcher destroys itself once it is complete.
// TODO: cancel all forward requests after termination
type Fetcher struct {
protoRequestFunc RequestFunc // request function fetcher calls to issue retrieve request for a chunk
addr storage.Address // the address of the chunk to be fetched
offerC chan *enode.ID // channel of sources (peer node id strings)
requestC chan uint8 // channel for incoming requests (with the hopCount value in it)
searchTimeout time.Duration
skipCheck bool
ctx context.Context
}
type Request struct {
Addr storage.Address // chunk address
Source *enode.ID // nodeID of peer to request from (can be nil)
SkipCheck bool // whether to offer the chunk first or deliver directly
peersToSkip *sync.Map // peers not to request chunk from (only makes sense if source is nil)
HopCount uint8 // number of forwarded requests (hops)
}
// NewRequest returns a new instance of Request based on chunk address skip check and
// a map of peers to skip.
func NewRequest(addr storage.Address, skipCheck bool, peersToSkip *sync.Map) *Request {
return &Request{
Addr: addr,
SkipCheck: skipCheck,
peersToSkip: peersToSkip,
}
}
// SkipPeer returns if the peer with nodeID should not be requested to deliver a chunk.
// Peers to skip are kept per Request and for a time period of RequestTimeout.
// This function is used in stream package in Delivery.RequestFromPeers to optimize
// requests for chunks.
func (r *Request) SkipPeer(nodeID string) bool {
val, ok := r.peersToSkip.Load(nodeID)
if !ok {
return false
}
t, ok := val.(time.Time)
if ok && time.Now().After(t.Add(RequestTimeout)) {
// deadline expired
r.peersToSkip.Delete(nodeID)
return false
}
return true
}
// FetcherFactory is initialised with a request function and can create fetchers
type FetcherFactory struct {
request RequestFunc
skipCheck bool
}
// NewFetcherFactory takes a request function and skip check parameter and creates a FetcherFactory
func NewFetcherFactory(request RequestFunc, skipCheck bool) *FetcherFactory {
return &FetcherFactory{
request: request,
skipCheck: skipCheck,
}
}
// New constructs a new Fetcher for the given chunk. Peers in peersToSkip
// are not requested to deliver the given chunk. peersToSkip should always
// contain the peers which are actively requesting this chunk, to make sure we
// don't request the chunk back from them.
// The created Fetcher is started and returned.
func (f *FetcherFactory) New(ctx context.Context, source storage.Address, peers *sync.Map) storage.NetFetcher {
fetcher := NewFetcher(ctx, source, f.request, f.skipCheck)
go fetcher.run(peers)
return fetcher
}
// NewFetcher creates a new Fetcher for the given chunk address using the given request function.
func NewFetcher(ctx context.Context, addr storage.Address, rf RequestFunc, skipCheck bool) *Fetcher {
return &Fetcher{
addr: addr,
protoRequestFunc: rf,
offerC: make(chan *enode.ID),
requestC: make(chan uint8),
searchTimeout: defaultSearchTimeout,
skipCheck: skipCheck,
ctx: ctx,
}
}
// Offer is called when an upstream peer offers the chunk via syncing as part of `OfferedHashesMsg` and the node does not have the chunk locally.
func (f *Fetcher) Offer(source *enode.ID) {
// First we need to have this select to make sure that we return if context is done
select {
case <-f.ctx.Done():
return
default:
}
// This select alone would not guarantee that we return if the context is done, it could potentially
// push to offerC instead if offerC is available (see number 2 in https://golang.org/ref/spec#Select_statements)
select {
case f.offerC <- source:
case <-f.ctx.Done():
}
}
// Request is called when an upstream peer requests the chunk as part of `RetrieveRequestMsg`, or from a local request through FileStore, and the node does not have the chunk locally.
func (f *Fetcher) Request(hopCount uint8) {
// First we need to have this select to make sure that we return if context is done
select {
case <-f.ctx.Done():
return
default:
}
if hopCount >= maxHopCount {
log.Debug("fetcher request hop count limit reached", "hops", hopCount)
return
}
// This select alone would not guarantee that we return if the context is done, it could potentially
// push to requestC instead if requestC is available (see number 2 in https://golang.org/ref/spec#Select_statements)
select {
case f.requestC <- hopCount + 1:
case <-f.ctx.Done():
}
}
// run is the request handler loop of the Fetcher;
// it keeps the Fetcher alive within the lifecycle of the passed context
func (f *Fetcher) run(peers *sync.Map) {
var (
doRequest bool // determines if retrieval is initiated in the current iteration
wait *time.Timer // timer for search timeout
waitC <-chan time.Time // timer channel
sources []*enode.ID // known sources, ie. peers that offered the chunk
requested bool // true if the chunk was actually requested
hopCount uint8
)
gone := make(chan *enode.ID) // channel to signal that a peer we requested from disconnected
// loop that keeps the fetching process alive
// after every request a timer is set. If this goes off we request again from another peer.
// Note that the previous request is still alive and has the chance to deliver, so
// requesting again extends the search rather than replacing it, i.e.,
// if a peer we requested from is gone we issue a new request, so the number of active
// requests never decreases
for {
select {
// incoming offer
case source := <-f.offerC:
log.Trace("new source", "peer addr", source, "request addr", f.addr)
// 1) the chunk is offered by a syncing peer
// add to known sources
sources = append(sources, source)
// launch a request to the source iff the chunk was requested (not just expected because it is offered by a syncing peer)
doRequest = requested
// incoming request
case hopCount = <-f.requestC:
// 2) chunk is requested, set requested flag
// launch a request iff none been launched yet
doRequest = !requested
log.Trace("new request", "request addr", f.addr, "doRequest", doRequest)
requested = true
// peer we requested from is gone. fall back to another
// and remove the peer from the peers map
case id := <-gone:
peers.Delete(id.String())
doRequest = requested
log.Trace("peer gone", "peer id", id.String(), "request addr", f.addr, "doRequest", doRequest)
// search timeout: too much time passed since the last request,
// extend the search to a new peer if we can find one
case <-waitC:
doRequest = requested
log.Trace("search timed out: requesting", "request addr", f.addr, "doRequest", doRequest)
// all Fetcher context closed, can quit
case <-f.ctx.Done():
log.Trace("terminate fetcher", "request addr", f.addr)
// TODO: send cancellations to all peers left over in peers map (i.e., those we requested from)
return
}
// need to issue a new request
if doRequest {
var err error
sources, err = f.doRequest(gone, peers, sources, hopCount)
if err != nil {
log.Info("unable to request", "request addr", f.addr, "err", err)
}
}
// if wait channel is not set, set it to a timer
if requested {
if wait == nil {
wait = time.NewTimer(f.searchTimeout)
defer wait.Stop()
waitC = wait.C
} else {
// stop the timer and drain the channel if it was not drained earlier
if !wait.Stop() {
select {
case <-wait.C:
default:
}
}
// reset the timer to go off after defaultSearchTimeout
wait.Reset(f.searchTimeout)
}
}
doRequest = false
}
}
// doRequest attempts to find a peer to request the chunk from
// * first it tries to request explicitly from peers that are known to have offered the chunk
// * if there are no such peers (available) it tries to request it from a peer closest to the chunk address
// excluding those in the peersToSkip map
// * if no such peer is found an error is returned
//
// if a request is successful,
// * the peer's address is added to the set of peers to skip
// * the peer's address is removed from prospective sources, and
// * a go routine is started that reports on the gone channel if the peer is disconnected (or terminated their streamer)
func (f *Fetcher) doRequest(gone chan *enode.ID, peersToSkip *sync.Map, sources []*enode.ID, hopCount uint8) ([]*enode.ID, error) {
var i int
var sourceID *enode.ID
var quit chan struct{}
req := &Request{
Addr: f.addr,
SkipCheck: f.skipCheck,
peersToSkip: peersToSkip,
HopCount: hopCount,
}
foundSource := false
// iterate over known sources
for i = 0; i < len(sources); i++ {
req.Source = sources[i]
var err error
log.Trace("fetcher.doRequest", "request addr", f.addr, "peer", req.Source.String())
sourceID, quit, err = f.protoRequestFunc(f.ctx, req)
if err == nil {
// remove the peer from known sources
// Note: we can modify the source although we are looping on it, because we break from the loop immediately
sources = append(sources[:i], sources[i+1:]...)
foundSource = true
break
}
}
// if there are no known sources, or none available, we try to request from the closest node
if !foundSource {
req.Source = nil
var err error
sourceID, quit, err = f.protoRequestFunc(f.ctx, req)
if err != nil {
// if no peers found to request from
return sources, err
}
}
// add peer to the set of peers to skip from now
peersToSkip.Store(sourceID.String(), time.Now())
// if the quit channel is closed, it indicates that the source peer we requested from
// disconnected or terminated its streamer
// here start a go routine that watches this channel and reports the source peer on the gone channel
// this go routine quits if the fetcher global context is done to prevent process leak
go func() {
select {
case <-quit:
gone <- sourceID
case <-f.ctx.Done():
}
// finish the request span
spanId := fmt.Sprintf("stream.send.request.%v.%v", *sourceID, req.Addr)
span := tracing.ShiftSpanByKey(spanId)
if span != nil {
span.LogFields(olog.String("finish", "from doRequest"))
span.Finish()
}
}()
return sources, nil
}

network/fetcher_test.go Normal file
@@ -0,0 +1,476 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"context"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/enode"
)
var requestedPeerID = enode.HexID("3431c3939e1ee2a6345e976a8234f9870152d64879f30bc272a074f6859e75e8")
var sourcePeerID = enode.HexID("99d8594b52298567d2ca3f4c441a5ba0140ee9245e26460d01102a52773c73b9")
// mockRequester pushes every request to the requestC channel when its doRequest function is called
type mockRequester struct {
// requests []Request
requestC chan *Request // when a request is coming it is pushed to requestC
waitTimes []time.Duration // with waitTimes[i] you can define how much to wait on the ith request (optional)
count int //counts the number of requests
quitC chan struct{}
}
func newMockRequester(waitTimes ...time.Duration) *mockRequester {
return &mockRequester{
requestC: make(chan *Request),
waitTimes: waitTimes,
quitC: make(chan struct{}),
}
}
func (m *mockRequester) doRequest(ctx context.Context, request *Request) (*enode.ID, chan struct{}, error) {
waitTime := time.Duration(0)
if m.count < len(m.waitTimes) {
waitTime = m.waitTimes[m.count]
m.count++
}
time.Sleep(waitTime)
m.requestC <- request
// if there is a Source in the request use that, if not use the global requestedPeerId
source := request.Source
if source == nil {
source = &requestedPeerID
}
return source, m.quitC, nil
}
// TestFetcherSingleRequest creates a Fetcher using mockRequester, and runs it with a sample set of peers to skip.
// mockRequester pushes a Request on a channel every time the request function is called. Using
// this channel we test if calling Fetcher.Request calls the request function, and whether it uses
// the correct peers to skip which we provided for the fetcher.run function.
func TestFetcherSingleRequest(t *testing.T) {
requester := newMockRequester()
addr := make([]byte, 32)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
fetcher := NewFetcher(ctx, addr, requester.doRequest, true)
peers := []string{"a", "b", "c", "d"}
peersToSkip := &sync.Map{}
for _, p := range peers {
peersToSkip.Store(p, time.Now())
}
go fetcher.run(peersToSkip)
fetcher.Request(0)
select {
case request := <-requester.requestC:
// request should contain all peers from peersToSkip provided to the fetcher
for _, p := range peers {
if _, ok := request.peersToSkip.Load(p); !ok {
t.Fatalf("request.peersToSkip misses peer")
}
}
// source peer should be also added to peersToSkip eventually
time.Sleep(100 * time.Millisecond)
if _, ok := request.peersToSkip.Load(requestedPeerID.String()); !ok {
t.Fatalf("request.peersToSkip does not contain peer returned by the request function")
}
// hopCount in the forwarded request should be incremented
if request.HopCount != 1 {
t.Fatalf("Expected request.HopCount 1 got %v", request.HopCount)
}
// fetch should trigger a request, if it doesn't happen in time, test should fail
case <-time.After(200 * time.Millisecond):
t.Fatalf("fetch timeout")
}
}
// TestFetcherCancelStopsFetcher tests that a cancelled fetcher does not initiate further requests even if its fetch function is called
func TestFetcherCancelStopsFetcher(t *testing.T) {
requester := newMockRequester()
addr := make([]byte, 32)
ctx, cancel := context.WithCancel(context.Background())
fetcher := NewFetcher(ctx, addr, requester.doRequest, true)
peersToSkip := &sync.Map{}
// we start the fetcher, and then we immediately cancel the context
go fetcher.run(peersToSkip)
cancel()
// we call Request with an active context
fetcher.Request(0)
// fetcher should not initiate request, we can only check by waiting a bit and making sure no request is happening
select {
case <-requester.requestC:
t.Fatalf("cancelled fetcher initiated request")
case <-time.After(200 * time.Millisecond):
}
}
// TestFetcherCancelStopsRequest tests that calling the Request function with a cancelled context does not initiate a request
func TestFetcherCancelStopsRequest(t *testing.T) {
t.Skip("since context is now per fetcher, this test is likely redundant")
requester := newMockRequester(100 * time.Millisecond)
addr := make([]byte, 32)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
fetcher := NewFetcher(ctx, addr, requester.doRequest, true)
peersToSkip := &sync.Map{}
// we start the fetcher with an active context
go fetcher.run(peersToSkip)
// we call Request with a cancelled context
fetcher.Request(0)
// fetcher should not initiate request, we can only check by waiting a bit and making sure no request is happening
select {
case <-requester.requestC:
t.Fatalf("cancelled fetch function initiated request")
case <-time.After(200 * time.Millisecond):
}
// if there is another Request with active context, there should be a request, because the fetcher itself is not cancelled
fetcher.Request(0)
select {
case <-requester.requestC:
case <-time.After(200 * time.Millisecond):
t.Fatalf("expected request")
}
}
// TestFetcherOfferUsesSource tests Fetcher Offer behavior.
// In this case there should be 1 (and only one) request initiated from the source peer, and the
// source nodeid should appear in the peersToSkip map.
func TestFetcherOfferUsesSource(t *testing.T) {
requester := newMockRequester(100 * time.Millisecond)
addr := make([]byte, 32)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
fetcher := NewFetcher(ctx, addr, requester.doRequest, true)
peersToSkip := &sync.Map{}
// start the fetcher
go fetcher.run(peersToSkip)
// call the Offer function with the source peer
fetcher.Offer(&sourcePeerID)
// fetcher should not initiate request
select {
case <-requester.requestC:
t.Fatalf("fetcher initiated request")
case <-time.After(200 * time.Millisecond):
}
// call Request after the Offer
fetcher.Request(0)
// there should be exactly 1 request coming from fetcher
var request *Request
select {
case request = <-requester.requestC:
if *request.Source != sourcePeerID {
t.Fatalf("Expected source id %v got %v", sourcePeerID, request.Source)
}
case <-time.After(200 * time.Millisecond):
t.Fatalf("fetcher did not initiate request")
}
select {
case <-requester.requestC:
t.Fatalf("Fetcher number of requests expected 1 got 2")
case <-time.After(200 * time.Millisecond):
}
// source peer should be added to peersToSkip eventually
time.Sleep(100 * time.Millisecond)
if _, ok := request.peersToSkip.Load(sourcePeerID.String()); !ok {
t.Fatalf("SourcePeerId not added to peersToSkip")
}
}
func TestFetcherOfferAfterRequestUsesSourceFromContext(t *testing.T) {
requester := newMockRequester(100 * time.Millisecond)
addr := make([]byte, 32)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
fetcher := NewFetcher(ctx, addr, requester.doRequest, true)
peersToSkip := &sync.Map{}
// start the fetcher
go fetcher.run(peersToSkip)
// call Request first
fetcher.Request(0)
// there should be a request coming from fetcher
var request *Request
select {
case request = <-requester.requestC:
if request.Source != nil {
t.Fatalf("Incorrect source peer id, expected nil got %v", request.Source)
}
case <-time.After(200 * time.Millisecond):
t.Fatalf("fetcher did not initiate request")
}
// after the Request call Offer
fetcher.Offer(&sourcePeerID)
// there should be a request coming from fetcher
select {
case request = <-requester.requestC:
if *request.Source != sourcePeerID {
t.Fatalf("Incorrect source peer id, expected %v got %v", sourcePeerID, request.Source)
}
case <-time.After(200 * time.Millisecond):
t.Fatalf("fetcher did not initiate request")
}
// source peer should be added to peersToSkip eventually
time.Sleep(100 * time.Millisecond)
if _, ok := request.peersToSkip.Load(sourcePeerID.String()); !ok {
t.Fatalf("SourcePeerId not added to peersToSkip")
}
}
// TestFetcherRetryOnTimeout tests that fetch retries after searchTimeOut has passed
func TestFetcherRetryOnTimeout(t *testing.T) {
requester := newMockRequester()
addr := make([]byte, 32)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
fetcher := NewFetcher(ctx, addr, requester.doRequest, true)
// set searchTimeOut to low value so the test is quicker
fetcher.searchTimeout = 250 * time.Millisecond
peersToSkip := &sync.Map{}
// start the fetcher
go fetcher.run(peersToSkip)
// call the fetch function with an active context
fetcher.Request(0)
// after 100ms the first request should be initiated
time.Sleep(100 * time.Millisecond)
select {
case <-requester.requestC:
default:
t.Fatalf("fetch did not initiate request")
}
// after another 100ms no new request should be initiated, because search timeout is 250ms
time.Sleep(100 * time.Millisecond)
select {
case <-requester.requestC:
t.Fatalf("unexpected request from fetcher")
default:
}
// after another 300ms search timeout is over, there should be a new request
time.Sleep(300 * time.Millisecond)
select {
case <-requester.requestC:
default:
t.Fatalf("fetch did not retry request")
}
}
// TestFetcherFactory creates a FetcherFactory and checks if the factory really creates and starts
// a Fetcher when it returns a fetch function. We test the fetching functionality just by checking if
// a request is initiated when the fetch function is called
func TestFetcherFactory(t *testing.T) {
requester := newMockRequester(100 * time.Millisecond)
addr := make([]byte, 32)
fetcherFactory := NewFetcherFactory(requester.doRequest, false)
peersToSkip := &sync.Map{}
fetcher := fetcherFactory.New(context.Background(), addr, peersToSkip)
fetcher.Request(0)
// check if the created fetchFunction really starts a fetcher and initiates a request
select {
case <-requester.requestC:
case <-time.After(200 * time.Millisecond):
t.Fatalf("fetch timeout")
}
}
func TestFetcherRequestQuitRetriesRequest(t *testing.T) {
requester := newMockRequester()
addr := make([]byte, 32)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
fetcher := NewFetcher(ctx, addr, requester.doRequest, true)
// make sure the searchTimeout is long so it is sure the request is not
// retried because of timeout
fetcher.searchTimeout = 10 * time.Second
peersToSkip := &sync.Map{}
go fetcher.run(peersToSkip)
fetcher.Request(0)
select {
case <-requester.requestC:
case <-time.After(200 * time.Millisecond):
t.Fatalf("request is not initiated")
}
close(requester.quitC)
select {
case <-requester.requestC:
case <-time.After(200 * time.Millisecond):
t.Fatalf("request is not initiated after failed request")
}
}
// TestRequestSkipPeer checks that the SkipPeer function skips a provided peer
// and does not skip an unknown one.
func TestRequestSkipPeer(t *testing.T) {
addr := make([]byte, 32)
peers := []enode.ID{
enode.HexID("3431c3939e1ee2a6345e976a8234f9870152d64879f30bc272a074f6859e75e8"),
enode.HexID("99d8594b52298567d2ca3f4c441a5ba0140ee9245e26460d01102a52773c73b9"),
}
peersToSkip := new(sync.Map)
peersToSkip.Store(peers[0].String(), time.Now())
r := NewRequest(addr, false, peersToSkip)
if !r.SkipPeer(peers[0].String()) {
t.Errorf("peer not skipped")
}
if r.SkipPeer(peers[1].String()) {
t.Errorf("peer skipped")
}
}
// TestRequestSkipPeerExpired checks if a peer to skip is not skipped
// after RequestTimeout has passed.
func TestRequestSkipPeerExpired(t *testing.T) {
addr := make([]byte, 32)
peer := enode.HexID("3431c3939e1ee2a6345e976a8234f9870152d64879f30bc272a074f6859e75e8")
// set RequestTimeout to a low value and reset it after the test
defer func(t time.Duration) { RequestTimeout = t }(RequestTimeout)
RequestTimeout = 250 * time.Millisecond
peersToSkip := new(sync.Map)
peersToSkip.Store(peer.String(), time.Now())
r := NewRequest(addr, false, peersToSkip)
if !r.SkipPeer(peer.String()) {
t.Errorf("peer not skipped")
}
time.Sleep(500 * time.Millisecond)
if r.SkipPeer(peer.String()) {
t.Errorf("peer skipped")
}
}
// TestRequestSkipPeerPermanent checks that a peer is still skipped
// after RequestTimeout has passed if it is marked for permanent skipping,
// i.e., the value stored in the peersToSkip map is not a time.Time.
func TestRequestSkipPeerPermanent(t *testing.T) {
addr := make([]byte, 32)
peer := enode.HexID("3431c3939e1ee2a6345e976a8234f9870152d64879f30bc272a074f6859e75e8")
// set RequestTimeout to a low value and reset it after the test
defer func(t time.Duration) { RequestTimeout = t }(RequestTimeout)
RequestTimeout = 250 * time.Millisecond
peersToSkip := new(sync.Map)
peersToSkip.Store(peer.String(), true)
r := NewRequest(addr, false, peersToSkip)
if !r.SkipPeer(peer.String()) {
t.Errorf("peer not skipped")
}
time.Sleep(500 * time.Millisecond)
if !r.SkipPeer(peer.String()) {
t.Errorf("peer not skipped")
}
}
func TestFetcherMaxHopCount(t *testing.T) {
requester := newMockRequester()
addr := make([]byte, 32)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
fetcher := NewFetcher(ctx, addr, requester.doRequest, true)
peersToSkip := &sync.Map{}
go fetcher.run(peersToSkip)
// a request with hopCount already at the maximum should not initiate a retrieve request
fetcher.Request(maxHopCount)
select {
case <-requester.requestC:
t.Fatalf("fetcher initiated request despite max hop count")
case <-time.After(200 * time.Millisecond):
}
}

network/hive.go Normal file
@@ -0,0 +1,251 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"fmt"
"sync"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/state"
)
/*
Hive is the logistic manager of the swarm.

When the hive is started, a forever loop is launched that
asks the kademlia nodetable to suggest peers to bootstrap connectivity.
*/
// HiveParams holds the config options to hive
type HiveParams struct {
Discovery bool // whether we want discovery or not
PeersBroadcastSetSize uint8 // how many peers to use when relaying
MaxPeersPerRequest uint8 // max size for peer address batches
KeepAliveInterval time.Duration
}
// NewHiveParams returns hive config with the default values
func NewHiveParams() *HiveParams {
return &HiveParams{
Discovery: true,
PeersBroadcastSetSize: 3,
MaxPeersPerRequest: 5,
KeepAliveInterval: 500 * time.Millisecond,
}
}
// Hive manages network connections of the swarm node
type Hive struct {
*HiveParams // settings
*Kademlia // the overlay connectivity driver
Store state.Store // storage interface to save peers across sessions
addPeer func(*enode.Node) // server callback to connect to a peer
// bookkeeping
lock sync.Mutex
peers map[enode.ID]*BzzPeer
ticker *time.Ticker
}
// NewHive constructs a new hive
// HiveParams: config parameters
// Kademlia: connectivity driver using a network topology
// StateStore: to save peers across sessions
func NewHive(params *HiveParams, kad *Kademlia, store state.Store) *Hive {
return &Hive{
HiveParams: params,
Kademlia: kad,
Store: store,
peers: make(map[enode.ID]*BzzPeer),
}
}
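// Illustrative sketch (not part of the original source): typical construction and
// startup of a hive. The p2p.Server value srv is assumed to come from the running
// node, and a nil state.Store means peers are not persisted across sessions.
//
//	params := NewHiveParams()
//	kad := NewKademlia(RandomAddr().OAddr, NewKadParams())
//	hive := NewHive(params, kad, nil)
//	if err := hive.Start(srv); err != nil {
//		log.Error("hive start failed", "err", err)
//	}
//	defer hive.Stop()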
// Start starts the hive, receives p2p.Server only at startup
// server is used to connect to a peer based on its NodeID or enode URL
// these are called on the p2p.Server which runs on the node
func (h *Hive) Start(server *p2p.Server) error {
log.Info("Starting hive", "baseaddr", fmt.Sprintf("%x", h.BaseAddr()[:4]))
// if state store is specified, load peers to prepopulate the overlay address book
if h.Store != nil {
log.Info("Detected an existing store. trying to load peers")
if err := h.loadPeers(); err != nil {
log.Error(fmt.Sprintf("%08x hive encoutered an error trying to load peers", h.BaseAddr()[:4]))
return err
}
}
// assigns the p2p.Server#AddPeer function to connect to peers
h.addPeer = server.AddPeer
// ticker to keep the hive alive
h.ticker = time.NewTicker(h.KeepAliveInterval)
// this loop does the bootstrapping and maintains a healthy table
go h.connect()
return nil
}
// Stop terminates the update loop and saves the peers
func (h *Hive) Stop() error {
log.Info(fmt.Sprintf("%08x hive stopping, saving peers", h.BaseAddr()[:4]))
h.ticker.Stop()
if h.Store != nil {
if err := h.savePeers(); err != nil {
return fmt.Errorf("could not save peers to persistence store: %v", err)
}
if err := h.Store.Close(); err != nil {
return fmt.Errorf("could not close file handle to persistence store: %v", err)
}
}
log.Info(fmt.Sprintf("%08x hive stopped, dropping peers", h.BaseAddr()[:4]))
h.EachConn(nil, 255, func(p *Peer, _ int) bool {
log.Info(fmt.Sprintf("%08x dropping peer %08x", h.BaseAddr()[:4], p.Address()[:4]))
p.Drop()
return true
})
log.Info(fmt.Sprintf("%08x all peers dropped", h.BaseAddr()[:4]))
return nil
}
// connect is a forever loop
// at each iteration it asks the overlay driver to suggest the most preferred peer to connect to
// and advertises the saturation depth if it changed
func (h *Hive) connect() {
for range h.ticker.C {
addr, depth, changed := h.SuggestPeer()
if h.Discovery && changed {
NotifyDepth(uint8(depth), h.Kademlia)
}
if addr == nil {
continue
}
log.Trace(fmt.Sprintf("%08x hive connect() suggested %08x", h.BaseAddr()[:4], addr.Address()[:4]))
under, err := enode.ParseV4(string(addr.Under()))
if err != nil {
log.Warn(fmt.Sprintf("%08x unable to connect to bee %08x: invalid node URL: %v", h.BaseAddr()[:4], addr.Address()[:4], err))
continue
}
log.Trace(fmt.Sprintf("%08x attempt to connect to bee %08x", h.BaseAddr()[:4], addr.Address()[:4]))
h.addPeer(under)
}
}
// Run protocol run function
func (h *Hive) Run(p *BzzPeer) error {
h.trackPeer(p)
defer h.untrackPeer(p)
dp := NewPeer(p, h.Kademlia)
depth, changed := h.On(dp)
// if we want discovery, advertise change of depth
if h.Discovery {
if changed {
// if depth changed, send to all peers
NotifyDepth(depth, h.Kademlia)
} else {
// otherwise just send depth to new peer
dp.NotifyDepth(depth)
}
NotifyPeer(p.BzzAddr, h.Kademlia)
}
defer h.Off(dp)
return dp.Run(dp.HandleMsg)
}
func (h *Hive) trackPeer(p *BzzPeer) {
h.lock.Lock()
h.peers[p.ID()] = p
h.lock.Unlock()
}
func (h *Hive) untrackPeer(p *BzzPeer) {
h.lock.Lock()
delete(h.peers, p.ID())
h.lock.Unlock()
}
// NodeInfo function is used by the p2p.server RPC interface to display
// protocol specific node information
func (h *Hive) NodeInfo() interface{} {
return h.String()
}
// PeerInfo function is used by the p2p.server RPC interface to display
// protocol specific information about any connected peer referred to by their NodeID
func (h *Hive) PeerInfo(id enode.ID) interface{} {
p := h.Peer(id)
if p == nil {
return nil
}
addr := NewAddr(p.Node())
return struct {
OAddr hexutil.Bytes
UAddr hexutil.Bytes
}{
OAddr: addr.OAddr,
UAddr: addr.UAddr,
}
}
// Peer returns a bzz peer from the Hive. If there is no peer
// with the provided enode id, a nil value is returned.
func (h *Hive) Peer(id enode.ID) *BzzPeer {
h.lock.Lock()
defer h.lock.Unlock()
return h.peers[id]
}
// loadPeers loads the persisted peer addresses from the state store and registers them with the kademlia
func (h *Hive) loadPeers() error {
var as []*BzzAddr
err := h.Store.Get("peers", &as)
if err != nil {
if err == state.ErrNotFound {
log.Info(fmt.Sprintf("hive %08x: no persisted peers found", h.BaseAddr()[:4]))
return nil
}
return err
}
log.Info(fmt.Sprintf("hive %08x: peers loaded", h.BaseAddr()[:4]))
return h.Register(as...)
}
// savePeers saves the known peer addresses to the state store
func (h *Hive) savePeers() error {
var peers []*BzzAddr
h.Kademlia.EachAddr(nil, 256, func(pa *BzzAddr, i int) bool {
if pa == nil {
log.Warn(fmt.Sprintf("empty addr: %v", i))
return true
}
log.Trace("saving peer", "peer", pa)
peers = append(peers, pa)
return true
})
if err := h.Store.Put("peers", peers); err != nil {
return fmt.Errorf("could not save peers: %v", err)
}
return nil
}

177
network/hive_test.go Normal file
View File

@ -0,0 +1,177 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"io/ioutil"
"os"
"testing"
"time"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p"
p2ptest "github.com/ethereum/go-ethereum/p2p/testing"
"github.com/ethersphere/swarm/state"
)
func newHiveTester(params *HiveParams, n int, store state.Store) (*bzzTester, *Hive, error) {
// setup
prvkey, err := crypto.GenerateKey()
if err != nil {
return nil, nil, err
}
addr := PrivateKeyToBzzKey(prvkey)
to := NewKademlia(addr, NewKadParams())
pp := NewHive(params, to, store) // hive
bt, err := newBzzBaseTester(n, prvkey, DiscoverySpec, pp.Run)
if err != nil {
return nil, nil, err
}
return bt, pp, nil
}
// TestRegisterAndConnect verifies that the protocol runs successfully
// and that the peer connection exists afterwards
func TestRegisterAndConnect(t *testing.T) {
params := NewHiveParams()
s, pp, err := newHiveTester(params, 1, nil)
if err != nil {
t.Fatal(err)
}
node := s.Nodes[0]
raddr := NewAddr(node)
pp.Register(raddr)
// start the hive
err = pp.Start(s.Server)
if err != nil {
t.Fatal(err)
}
defer pp.Stop()
// both the hive connect and disconnect checks have time delays
// therefore we need to verify that peer is connected
// so that we are sure that the disconnect timeout doesn't complete
// before the hive connect method is run at least once
timeout := time.After(time.Second)
for {
select {
case <-timeout:
t.Fatalf("expected connection")
default:
}
i := 0
pp.Kademlia.EachConn(nil, 256, func(addr *Peer, po int) bool {
i++
return true
})
if i > 0 {
break
}
time.Sleep(time.Millisecond)
}
// check that the connection actually exists
// the timeout error means no disconnection events
// were received within a certain timeout
err = s.TestDisconnected(&p2ptest.Disconnect{
Peer: s.Nodes[0].ID(),
Error: nil,
})
if err == nil || err.Error() != "timed out waiting for peers to disconnect" {
t.Fatalf("expected no disconnection event")
}
}
// TestHiveStatePersistance creates a protocol simulation with n peers for a node
// After protocols complete, the node is shut down and the state is stored.
// Another simulation is created, where 0 nodes are created, but where the stored state is passed
// The test succeeds if all the peers from the stored state are known after the protocols of the
// second simulation have completed
//
// Actual connectivity is not in scope for this test, as the peers loaded from state are not known to
// the simulation; the test only verifies that the peers are known to the node
func TestHiveStatePersistance(t *testing.T) {
dir, err := ioutil.TempDir("", "hive_test_store")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
const peersCount = 5
startHive := func(t *testing.T, dir string) (h *Hive, cleanupFunc func()) {
store, err := state.NewDBStore(dir)
if err != nil {
t.Fatal(err)
}
params := NewHiveParams()
params.Discovery = false
prvkey, err := crypto.GenerateKey()
if err != nil {
t.Fatal(err)
}
h = NewHive(params, NewKademlia(PrivateKeyToBzzKey(prvkey), NewKadParams()), store)
s := p2ptest.NewProtocolTester(prvkey, 0, func(p *p2p.Peer, rw p2p.MsgReadWriter) error { return nil })
if err := h.Start(s.Server); err != nil {
t.Fatal(err)
}
cleanupFunc = func() {
err := h.Stop()
if err != nil {
t.Fatal(err)
}
s.Stop()
}
return h, cleanupFunc
}
h1, cleanup1 := startHive(t, dir)
peers := make(map[string]bool)
for i := 0; i < peersCount; i++ {
raddr := RandomAddr()
h1.Register(raddr)
peers[raddr.String()] = true
}
cleanup1()
// start the hive and check that we know of all expected peers
h2, cleanup2 := startHive(t, dir)
cleanup2()
i := 0
h2.Kademlia.EachAddr(nil, 256, func(addr *BzzAddr, po int) bool {
delete(peers, addr.String())
i++
return true
})
if i != peersCount {
t.Fatalf("invalid number of entries: got %v, want %v", i, peersCount)
}
if len(peers) != 0 {
t.Fatalf("%d peers left over: %v", len(peers), peers)
}
}

911
network/kademlia.go Normal file
View File

@ -0,0 +1,911 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"bytes"
"fmt"
"math/rand"
"strings"
"sync"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/pot"
sv "github.com/ethersphere/swarm/version"
)
/*
Taking the proximity order relative to a fixed point x classifies the points in
the space (n byte long byte sequences) into bins. Items in each bin are at
most half as distant from x as items in the previous bin. Given a sample of
uniformly distributed items (a hash function over an arbitrary sequence) the
proximity scale maps onto a series of subsets with cardinalities on a negative
exponential scale.
It also has the property that any two items belonging to the same bin are at
most half as distant from each other as they are from x.
If we think of a random sample of items in the bins as connections in a network of
interconnected nodes then relative proximity can serve as the basis for local
decisions for graph traversal where the task is to find a route between two
points. Since in every hop, the finite distance halves, there is
a guaranteed constant maximum limit on the number of hops needed to reach one
node from the other.
*/
var Pof = pot.DefaultPof(256)
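// Illustrative sketch (not part of the original source): Pof yields the proximity
// order of two values relative to each other, i.e. the length of their common bit
// prefix, capped at 256. The addresses below are assumed to be the 32-byte overlay
// addresses used throughout this package.
//
//	a := RandomAddr().OAddr
//	b := RandomAddr().OAddr
//	po, eq := Pof(a, b, 0) // po: number of shared prefix bits, eq: whether a == b
//	_, _ = po, eq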
// KadParams holds the config params for Kademlia
type KadParams struct {
// adjustable parameters
MaxProxDisplay int // number of rows the table shows
NeighbourhoodSize int // nearest neighbour core minimum cardinality
MinBinSize int // minimum number of peers in a row
MaxBinSize int // maximum number of peers in a row before pruning
RetryInterval int64 // initial interval before a peer is first redialed
RetryExponent int // exponent to multiply retry intervals with
MaxRetries int // maximum number of redial attempts
// function to sanction or prevent suggesting a peer
Reachable func(*BzzAddr) bool `json:"-"`
}
// NewKadParams returns a params struct with default values
func NewKadParams() *KadParams {
return &KadParams{
MaxProxDisplay: 16,
NeighbourhoodSize: 2,
MinBinSize: 2,
MaxBinSize: 4,
RetryInterval: 4200000000, // 4.2 sec
MaxRetries: 42,
RetryExponent: 2,
}
}
// Kademlia is a table of live peers and a db of known peers (node records)
type Kademlia struct {
lock sync.RWMutex
*KadParams // Kademlia configuration parameters
base []byte // immutable base address of the table
addrs *pot.Pot // pots container for known peer addresses
conns *pot.Pot // pots container for live peer connections
depth uint8 // stores the last current depth of saturation
nDepth int // stores the last neighbourhood depth
nDepthMu sync.RWMutex // protects neighbourhood depth nDepth
nDepthSig []chan struct{} // signals when neighbourhood depth nDepth is changed
}
// NewKademlia creates a Kademlia table for base address addr
// with parameters as in params
// if params is nil, it uses default values
func NewKademlia(addr []byte, params *KadParams) *Kademlia {
if params == nil {
params = NewKadParams()
}
return &Kademlia{
base: addr,
KadParams: params,
addrs: pot.NewPot(nil, 0),
conns: pot.NewPot(nil, 0),
}
}
// entry represents a Kademlia table entry (an extension of BzzAddr)
type entry struct {
*BzzAddr
conn *Peer
seenAt time.Time
retries int
}
// newEntry creates a kademlia peer from a *Peer
func newEntry(p *BzzAddr) *entry {
return &entry{
BzzAddr: p,
seenAt: time.Now(),
}
}
// Label is a short tag for the entry for debug
func Label(e *entry) string {
return fmt.Sprintf("%s (%d)", e.Hex()[:4], e.retries)
}
// Hex is the hexadecimal serialisation of the entry address
func (e *entry) Hex() string {
return fmt.Sprintf("%x", e.Address())
}
// Register enters each address as kademlia peer record into the
// database of known peer addresses
func (k *Kademlia) Register(peers ...*BzzAddr) error {
k.lock.Lock()
defer k.lock.Unlock()
metrics.GetOrRegisterCounter("kad.register", nil).Inc(1)
var known, size int
for _, p := range peers {
log.Trace("kademlia trying to register", "addr", p)
// error if self received, peer should know better
// and should be punished for this
if bytes.Equal(p.Address(), k.base) {
return fmt.Errorf("add peers: %x is self", k.base)
}
var found bool
k.addrs, _, found, _ = pot.Swap(k.addrs, p, Pof, func(v pot.Val) pot.Val {
// if not found
if v == nil {
log.Trace("registering new peer", "addr", p)
// insert new offline peer into addrs
return newEntry(p)
}
e := v.(*entry)
// if underlay address is different, still add
if !bytes.Equal(e.BzzAddr.UAddr, p.UAddr) {
log.Trace("underlay addr is different, so add again", "new", p, "old", e.BzzAddr)
// insert new offline peer into addrs
return newEntry(p)
}
return v
})
if found {
known++
}
size++
}
k.setNeighbourhoodDepth()
return nil
}
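// Illustrative sketch (not part of the original source): the typical lifecycle of
// a peer record as driven by the hive in hive.go: addresses are Register-ed,
// SuggestPeer proposes one to dial, On marks it live once a connection is
// established and Off removes it again when the connection drops.
//
//	k := NewKademlia(RandomAddr().OAddr, NewKadParams())
//	k.Register(RandomAddr())                // known but offline
//	addr, depth, changed := k.SuggestPeer() // candidate to dial (may be nil)
//	_, _, _ = addr, depth, changed
//	// once connected the protocol calls k.On(peer), and k.Off(peer) on disconnect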
// SuggestPeer returns an unconnected peer address as a peer suggestion for connection
func (k *Kademlia) SuggestPeer() (suggestedPeer *BzzAddr, saturationDepth int, changed bool) {
k.lock.Lock()
defer k.lock.Unlock()
metrics.GetOrRegisterCounter("kad.suggestpeer", nil).Inc(1)
radius := neighbourhoodRadiusForPot(k.conns, k.NeighbourhoodSize, k.base)
// collect undersaturated bins in ascending order of number of connected peers
// and from shallow to deep (ascending order of PO)
// insert them in a map of bin arrays, keyed with the number of connected peers
saturation := make(map[int][]int)
var lastPO int // the last non-empty PO bin in the iteration
saturationDepth = -1 // the deepest PO such that all shallower bins have >= k.MinBinSize peers
var pastDepth bool // whether po of iteration >= depth
k.conns.EachBin(k.base, Pof, 0, func(po, size int, f func(func(val pot.Val) bool) bool) bool {
// process skipped empty bins
for ; lastPO < po; lastPO++ {
// find the lowest unsaturated bin
if saturationDepth == -1 {
saturationDepth = lastPO
}
// if there is an empty bin, depth is surely passed
pastDepth = true
saturation[0] = append(saturation[0], lastPO)
}
lastPO = po + 1
// past radius, depth is surely passed
if po >= radius {
pastDepth = true
}
// beyond depth the bin is treated as unsaturated even if size >= k.MinBinSize
// in order to achieve full connectivity to all neighbours
if pastDepth && size >= k.MinBinSize {
size = k.MinBinSize - 1
}
// process non-empty unsaturated bins
if size < k.MinBinSize {
// find the lowest unsaturated bin
if saturationDepth == -1 {
saturationDepth = po
}
saturation[size] = append(saturation[size], po)
}
return true
})
// to trigger peer requests for peers closer than closest connection, include
// all bins from the nearest connection up to the nearest address as unsaturated
var nearestAddrAt int
k.addrs.EachNeighbour(k.base, Pof, func(_ pot.Val, po int) bool {
nearestAddrAt = po
return false
})
// including bins as size 0 has the effect that requesting connection
// is prioritised over non-empty shallower bins
for ; lastPO <= nearestAddrAt; lastPO++ {
saturation[0] = append(saturation[0], lastPO)
}
// all PO bins are saturated, i.e., the smallest bin size >= k.MinBinSize, no peer suggested
if len(saturation) == 0 {
return nil, 0, false
}
// find the first callable peer in the address book
// starting from the bins with smallest size proceeding from shallow to deep
// for each bin (up until neighbourhood radius) we find callable candidate peers
for size := 0; size < k.MinBinSize && suggestedPeer == nil; size++ {
bins, ok := saturation[size]
if !ok {
// no bin with this size
continue
}
cur := 0
curPO := bins[0]
k.addrs.EachBin(k.base, Pof, curPO, func(po, _ int, f func(func(pot.Val) bool) bool) bool {
curPO = bins[cur]
// find the next bin that has size size
if curPO == po {
cur++
} else {
// skip bins that have no addresses
for ; cur < len(bins) && curPO < po; cur++ {
curPO = bins[cur]
}
if po < curPO {
cur--
return true
}
// stop if there are no addresses
if curPO < po {
return false
}
}
// curPO found
// find a callable peer out of the addresses in the unsaturated bin
// stop if found
f(func(val pot.Val) bool {
e := val.(*entry)
if k.callable(e) {
suggestedPeer = e.BzzAddr
return false
}
return true
})
return cur < len(bins) && suggestedPeer == nil
})
}
if uint8(saturationDepth) < k.depth {
k.depth = uint8(saturationDepth)
return suggestedPeer, saturationDepth, true
}
return suggestedPeer, 0, false
}
// On inserts the peer as a kademlia peer into the live peers
func (k *Kademlia) On(p *Peer) (uint8, bool) {
k.lock.Lock()
defer k.lock.Unlock()
metrics.GetOrRegisterCounter("kad.on", nil).Inc(1)
var ins bool
k.conns, _, _, _ = pot.Swap(k.conns, p, Pof, func(v pot.Val) pot.Val {
// if not found live
if v == nil {
ins = true
// insert new online peer into conns
return p
}
// found among live peers, do nothing
return v
})
if ins && !p.BzzPeer.LightNode {
a := newEntry(p.BzzAddr)
a.conn = p
// insert new online peer into addrs
k.addrs, _, _, _ = pot.Swap(k.addrs, p, Pof, func(v pot.Val) pot.Val {
return a
})
}
// calculate if depth of saturation changed
depth := uint8(k.saturation())
var changed bool
if depth != k.depth {
changed = true
k.depth = depth
}
k.setNeighbourhoodDepth()
return k.depth, changed
}
// setNeighbourhoodDepth calculates neighbourhood depth with depthForPot,
// sets it to the nDepth and sends a signal to every nDepthSig channel.
func (k *Kademlia) setNeighbourhoodDepth() {
nDepth := depthForPot(k.conns, k.NeighbourhoodSize, k.base)
var changed bool
k.nDepthMu.Lock()
if nDepth != k.nDepth {
k.nDepth = nDepth
changed = true
}
k.nDepthMu.Unlock()
if len(k.nDepthSig) > 0 && changed {
for _, c := range k.nDepthSig {
// Every nDepthSig channel has a buffer capacity of 1,
// so every receiver will get the signal even if the
// select statement has the default case to avoid blocking.
select {
case c <- struct{}{}:
default:
}
}
}
}
// NeighbourhoodDepth returns the value calculated by depthForPot function
// in setNeighbourhoodDepth method.
func (k *Kademlia) NeighbourhoodDepth() int {
k.nDepthMu.RLock()
defer k.nDepthMu.RUnlock()
return k.nDepth
}
// SubscribeToNeighbourhoodDepthChange returns the channel that signals
// when neighbourhood depth value is changed. The current neighbourhood depth
// is returned by NeighbourhoodDepth method. Returned function unsubscribes
// the channel from signaling and releases the resources. Returned function is safe
// to be called multiple times.
func (k *Kademlia) SubscribeToNeighbourhoodDepthChange() (c <-chan struct{}, unsubscribe func()) {
channel := make(chan struct{}, 1)
var closeOnce sync.Once
k.lock.Lock()
defer k.lock.Unlock()
k.nDepthSig = append(k.nDepthSig, channel)
unsubscribe = func() {
k.lock.Lock()
defer k.lock.Unlock()
for i, c := range k.nDepthSig {
if c == channel {
k.nDepthSig = append(k.nDepthSig[:i], k.nDepthSig[i+1:]...)
break
}
}
closeOnce.Do(func() { close(channel) })
}
return channel, unsubscribe
}
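// Illustrative sketch (not part of the original source): consuming depth change
// signals, mirroring how the subscription is exercised in kademlia_test.go.
//
//	c, unsubscribe := k.SubscribeToNeighbourhoodDepthChange()
//	defer unsubscribe()
//	go func() {
//		for range c {
//			depth := k.NeighbourhoodDepth()
//			_ = depth // react to the new depth here
//		}
//	}()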
// Off removes a peer from among live peers
func (k *Kademlia) Off(p *Peer) {
k.lock.Lock()
defer k.lock.Unlock()
var del bool
if !p.BzzPeer.LightNode {
k.addrs, _, _, _ = pot.Swap(k.addrs, p, Pof, func(v pot.Val) pot.Val {
// v cannot be nil, must check otherwise we overwrite entry
if v == nil {
panic(fmt.Sprintf("connected peer not found %v", p))
}
del = true
return newEntry(p.BzzAddr)
})
} else {
del = true
}
if del {
k.conns, _, _, _ = pot.Swap(k.conns, p, Pof, func(_ pot.Val) pot.Val {
// v cannot be nil, but no need to check
return nil
})
k.setNeighbourhoodDepth()
}
}
func (k *Kademlia) ListKnown() []*BzzAddr {
res := []*BzzAddr{}
k.addrs.Each(func(val pot.Val) bool {
e := val.(*entry)
res = append(res, e.BzzAddr)
return true
})
return res
}
// EachConn is an iterator with args (base, po, f) that applies f to each live peer
// that has proximity order po or less as measured from the base
// if base is nil, kademlia base address is used
func (k *Kademlia) EachConn(base []byte, o int, f func(*Peer, int) bool) {
k.lock.RLock()
defer k.lock.RUnlock()
k.eachConn(base, o, f)
}
func (k *Kademlia) eachConn(base []byte, o int, f func(*Peer, int) bool) {
if len(base) == 0 {
base = k.base
}
k.conns.EachNeighbour(base, Pof, func(val pot.Val, po int) bool {
if po > o {
return true
}
return f(val.(*Peer), po)
})
}
// EachAddr called with (base, po, f) is an iterator applying f to each known peer
// that has proximity order o or less as measured from the base
// if base is nil, kademlia base address is used
func (k *Kademlia) EachAddr(base []byte, o int, f func(*BzzAddr, int) bool) {
k.lock.RLock()
defer k.lock.RUnlock()
k.eachAddr(base, o, f)
}
func (k *Kademlia) eachAddr(base []byte, o int, f func(*BzzAddr, int) bool) {
if len(base) == 0 {
base = k.base
}
k.addrs.EachNeighbour(base, Pof, func(val pot.Val, po int) bool {
if po > o {
return true
}
return f(val.(*entry).BzzAddr, po)
})
}
// neighbourhoodRadiusForPot returns the neighbourhood radius of the kademlia
// neighbourhood radius encloses the nearest neighbour set with size >= neighbourhoodSize
// i.e., neighbourhood radius is the deepest PO such that all bins not shallower altogether
// contain at least neighbourhoodSize connected peers
// if there are altogether fewer than neighbourhoodSize peers connected, it returns 0
// caller must hold the lock
func neighbourhoodRadiusForPot(p *pot.Pot, neighbourhoodSize int, pivotAddr []byte) (depth int) {
if p.Size() <= neighbourhoodSize {
return 0
}
// total number of peers in iteration
var size int
f := func(v pot.Val, i int) bool {
// po == 256 means that addr is the pivot address (self)
if i == 256 {
return true
}
size++
// this means we have all nn-peers.
// depth is by default set to the bin of the farthest nn-peer
if size == neighbourhoodSize {
depth = i
return false
}
return true
}
p.EachNeighbour(pivotAddr, Pof, f)
return depth
}
// depthForPot returns the depth for the pot
// depth is the radius of the minimal extension of nearest neighbourhood that
// includes all empty PO bins. I.e., depth is the deepest PO such that
// - it is not deeper than neighbourhood radius
// - all bins shallower than depth are not empty
// caller must hold the lock
func depthForPot(p *pot.Pot, neighbourhoodSize int, pivotAddr []byte) (depth int) {
if p.Size() <= neighbourhoodSize {
return 0
}
// determining the depth is a two-step process
// first we find the proximity bin of the shallowest of the neighbourhoodSize peers
// the numeric value of depth cannot be higher than this
maxDepth := neighbourhoodRadiusForPot(p, neighbourhoodSize, pivotAddr)
// the second step is to test for empty bins in order from shallowest to deepest
// if an empty bin is found, this will be the actual depth
// we stop iterating if we hit the maxDepth determined in the first step
p.EachBin(pivotAddr, Pof, 0, func(po int, _ int, f func(func(pot.Val) bool) bool) bool {
if po == depth {
if maxDepth == depth {
return false
}
depth++
return true
}
return false
})
return depth
}
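// Worked example (illustrative, not part of the original source): with
// NeighbourhoodSize = 2 and connected peers in bins 0, 1, 3 and 3, the
// neighbourhood radius is 3 (the bin of the second nearest neighbour), but bin 2
// is empty, so depthForPot stops there and returns a depth of 2.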
// callable decides if an address entry represents a callable peer
func (k *Kademlia) callable(e *entry) bool {
// not callable if peer is live or exceeded maxRetries
if e.conn != nil || e.retries > k.MaxRetries {
return false
}
// calculate the allowed number of retries based on time lapsed since last seen
timeAgo := int64(time.Since(e.seenAt))
div := int64(k.RetryExponent)
div += (150000 - rand.Int63n(300000)) * div / 1000000
var retries int
for delta := timeAgo; delta > k.RetryInterval; delta /= div {
retries++
}
// this is never called concurrently, so safe to increment
// peer can be retried again
if retries < e.retries {
log.Trace(fmt.Sprintf("%08x: %v long time since last try (at %v) needed before retry %v, wait only warrants %v", k.BaseAddr()[:4], e, timeAgo, e.retries, retries))
return false
}
// function to sanction or prevent suggesting a peer
if k.Reachable != nil && !k.Reachable(e.BzzAddr) {
log.Trace(fmt.Sprintf("%08x: peer %v is temporarily not callable", k.BaseAddr()[:4], e))
return false
}
e.retries++
log.Trace(fmt.Sprintf("%08x: peer %v is callable", k.BaseAddr()[:4], e))
return true
}
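// Worked example (illustrative, not part of the original source): with the default
// RetryInterval of 4.2s and RetryExponent of 2, a peer last seen 20s ago warrants
// up to 3 retries (20s > 4.2s, 10s > 4.2s, 5s > 4.2s, then 2.5s ends the loop), so
// an entry whose retries counter is already above 3 is not yet callable again.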
// BaseAddr returns the kademlia base address
func (k *Kademlia) BaseAddr() []byte {
return k.base
}
// String returns kademlia table + kaddb table displayed with ascii
func (k *Kademlia) String() string {
k.lock.RLock()
defer k.lock.RUnlock()
return k.string()
}
// string returns kademlia table + kaddb table displayed with ascii
// caller must hold the lock
func (k *Kademlia) string() string {
wsrow := " "
var rows []string
rows = append(rows, "=========================================================================")
if len(sv.GitCommit) > 0 {
rows = append(rows, fmt.Sprintf("commit hash: %s", sv.GitCommit))
}
rows = append(rows, fmt.Sprintf("%v KΛÐΞMLIΛ hive: queen's address: %x", time.Now().UTC().Format(time.UnixDate), k.BaseAddr()))
rows = append(rows, fmt.Sprintf("population: %d (%d), NeighbourhoodSize: %d, MinBinSize: %d, MaxBinSize: %d", k.conns.Size(), k.addrs.Size(), k.NeighbourhoodSize, k.MinBinSize, k.MaxBinSize))
liverows := make([]string, k.MaxProxDisplay)
peersrows := make([]string, k.MaxProxDisplay)
depth := depthForPot(k.conns, k.NeighbourhoodSize, k.base)
rest := k.conns.Size()
k.conns.EachBin(k.base, Pof, 0, func(po, size int, f func(func(val pot.Val) bool) bool) bool {
var rowlen int
if po >= k.MaxProxDisplay {
po = k.MaxProxDisplay - 1
}
row := []string{fmt.Sprintf("%2d", size)}
rest -= size
f(func(val pot.Val) bool {
e := val.(*Peer)
row = append(row, fmt.Sprintf("%x", e.Address()[:2]))
rowlen++
return rowlen < 4
})
r := strings.Join(row, " ")
r = r + wsrow
liverows[po] = r[:31]
return true
})
k.addrs.EachBin(k.base, Pof, 0, func(po, size int, f func(func(val pot.Val) bool) bool) bool {
var rowlen int
if po >= k.MaxProxDisplay {
po = k.MaxProxDisplay - 1
}
if size < 0 {
panic("wtf")
}
row := []string{fmt.Sprintf("%2d", size)}
// we are displaying live peers too
f(func(val pot.Val) bool {
e := val.(*entry)
row = append(row, Label(e))
rowlen++
return rowlen < 4
})
peersrows[po] = strings.Join(row, " ")
return true
})
for i := 0; i < k.MaxProxDisplay; i++ {
if i == depth {
rows = append(rows, fmt.Sprintf("============ DEPTH: %d ==========================================", i))
}
left := liverows[i]
right := peersrows[i]
if len(left) == 0 {
left = " 0 "
}
if len(right) == 0 {
right = " 0"
}
rows = append(rows, fmt.Sprintf("%03d %v | %v", i, left, right))
}
rows = append(rows, "=========================================================================")
return "\n" + strings.Join(rows, "\n")
}
// PeerPot keeps info about expected nearest neighbours
// used for testing only
// TODO move to separate testing tools file
type PeerPot struct {
NNSet [][]byte
PeersPerBin []int
}
// NewPeerPotMap creates a map of pot record of *BzzAddr with keys
// as hexadecimal representations of the address.
// the NeighbourhoodSize of the passed kademlia is used
// used for testing only
// TODO move to separate testing tools file
func NewPeerPotMap(neighbourhoodSize int, addrs [][]byte) map[string]*PeerPot {
// create a table of all nodes for health check
np := pot.NewPot(nil, 0)
for _, addr := range addrs {
np, _, _ = pot.Add(np, addr, Pof)
}
ppmap := make(map[string]*PeerPot)
// generate an allknowing source of truth for connections
// for every kademlia passed
for i, a := range addrs {
// actual kademlia depth
depth := depthForPot(np, neighbourhoodSize, a)
// all nn-peers
var nns [][]byte
peersPerBin := make([]int, depth)
// iterate through the neighbours, going from the deepest to the shallowest
np.EachNeighbour(a, Pof, func(val pot.Val, po int) bool {
addr := val.([]byte)
// po == 256 means that addr is the pivot address (self)
// we do not include self in the map
if po == 256 {
return true
}
// append any neighbors found
// a neighbor is any peer in or deeper than the depth
if po >= depth {
nns = append(nns, addr)
} else {
// for peers < depth, we just count the number in each bin
// the bin is the index of the slice
peersPerBin[po]++
}
return true
})
log.Trace(fmt.Sprintf("%x PeerPotMap NNS: %s, peersPerBin", addrs[i][:4], LogAddrs(nns)))
ppmap[common.Bytes2Hex(a)] = &PeerPot{
NNSet: nns,
PeersPerBin: peersPerBin,
}
}
return ppmap
}
// Saturation returns the smallest po value in which the node has less than MinBinSize peers
// if the iterator reaches neighbourhood radius, then the last bin + 1 is returned
func (k *Kademlia) Saturation() int {
k.lock.RLock()
defer k.lock.RUnlock()
return k.saturation()
}
func (k *Kademlia) saturation() int {
prev := -1
radius := neighbourhoodRadiusForPot(k.conns, k.NeighbourhoodSize, k.base)
k.conns.EachBin(k.base, Pof, 0, func(po, size int, f func(func(val pot.Val) bool) bool) bool {
prev++
if po >= radius {
return false
}
return prev == po && size >= k.MinBinSize
})
if prev < 0 {
return 0
}
return prev
}
// isSaturated returns true if the kademlia is considered saturated, or false if not.
// It does so by building an array of ints called unsaturatedBins; each item in that array corresponds
// to a bin which is unsaturated (number of connections < k.MinBinSize).
// The bin is considered unsaturated only if there are actual peers in that PeerPot's bin (peersPerBin)
// (if there is no peer for a given bin, then no connection could ever be established;
// in a God's view this is relevant as no more peers will ever appear on that bin)
func (k *Kademlia) isSaturated(peersPerBin []int, depth int) bool {
// depth could be calculated from k but as this is called from `GetHealthInfo()`,
// the depth has already been calculated so we can require it as a parameter
// early check for depth
if depth != len(peersPerBin) {
return false
}
unsaturatedBins := make([]int, 0)
k.conns.EachBin(k.base, Pof, 0, func(po, size int, f func(func(val pot.Val) bool) bool) bool {
if po >= depth {
return false
}
log.Trace("peers per bin", "peersPerBin[po]", peersPerBin[po], "po", po)
// if there are actually peers in the PeerPot who can fulfill k.MinBinSize
if size < k.MinBinSize && size < peersPerBin[po] {
log.Trace("connections for po", "po", po, "size", size)
unsaturatedBins = append(unsaturatedBins, po)
}
return true
})
log.Trace("list of unsaturated bins", "unsaturatedBins", unsaturatedBins)
return len(unsaturatedBins) == 0
}
// knowNeighbours tests if all neighbours in the peerpot
// are found among the peers known to the kademlia
// It is used in Healthy function for testing only
// TODO move to separate testing tools file
func (k *Kademlia) knowNeighbours(addrs [][]byte) (got bool, n int, missing [][]byte) {
pm := make(map[string]bool)
depth := depthForPot(k.conns, k.NeighbourhoodSize, k.base)
// create a map with all peers at depth and deeper known in the kademlia
k.eachAddr(nil, 255, func(p *BzzAddr, po int) bool {
// in order deepest to shallowest compared to the kademlia base address
// all bins (except self) are included (0 <= bin <= 255)
if po < depth {
return false
}
pk := common.Bytes2Hex(p.Address())
pm[pk] = true
return true
})
// iterate through nearest neighbors in the peerpot map
// if we can't find the neighbor in the map we created above
// then we don't know all our neighbors
// (which sadly is all too common in modern society)
var gots int
var culprits [][]byte
for _, p := range addrs {
pk := common.Bytes2Hex(p)
if pm[pk] {
gots++
} else {
log.Trace(fmt.Sprintf("%08x: known nearest neighbour %s not found", k.base, pk))
culprits = append(culprits, p)
}
}
return gots == len(addrs), gots, culprits
}
// connectedNeighbours tests if all neighbours in the peerpot
// are currently connected in the kademlia
// It is used in Healthy function for testing only
func (k *Kademlia) connectedNeighbours(peers [][]byte) (got bool, n int, missing [][]byte) {
pm := make(map[string]bool)
// create a map with all peers at depth and deeper that are connected in the kademlia
// in order deepest to shallowest compared to the kademlia base address
// all bins (except self) are included (0 <= bin <= 255)
depth := depthForPot(k.conns, k.NeighbourhoodSize, k.base)
k.eachConn(nil, 255, func(p *Peer, po int) bool {
if po < depth {
return false
}
pk := common.Bytes2Hex(p.Address())
pm[pk] = true
return true
})
// iterate through nearest neighbors in the peerpot map
// if we can't find the neighbor in the map we created above
// then we don't know all our neighbors
var gots int
var culprits [][]byte
for _, p := range peers {
pk := common.Bytes2Hex(p)
if pm[pk] {
gots++
} else {
log.Trace(fmt.Sprintf("%08x: ExpNN: %s not found", k.base, pk))
culprits = append(culprits, p)
}
}
return gots == len(peers), gots, culprits
}
// Health state of the Kademlia
// used for testing only
type Health struct {
KnowNN bool // whether node knows all its neighbours
CountKnowNN int // amount of neighbors known
MissingKnowNN [][]byte // which neighbours we should have known but we don't
ConnectNN bool // whether node is connected to all its neighbours
CountConnectNN int // amount of neighbours connected to
MissingConnectNN [][]byte // which neighbours we should have been connected to but we're not
// Saturated: whether in all bins < depth the number of connections is >= MinBinSize,
// or, if fewer than MinBinSize peers are available in a bin, equal to the number of available peers
Saturated bool
Hive string
}
// GetHealthInfo reports the health state of the kademlia connectivity
//
// The PeerPot argument provides an all-knowing view of the network
// The resulting Health object is a result of comparisons between
// what is the actual composition of the kademlia in question (the receiver), and
// what SHOULD it have been when we take all we know about the network into consideration.
//
// used for testing only
func (k *Kademlia) GetHealthInfo(pp *PeerPot) *Health {
k.lock.RLock()
defer k.lock.RUnlock()
if len(pp.NNSet) < k.NeighbourhoodSize {
log.Warn("peerpot NNSet < NeighbourhoodSize")
}
gotnn, countgotnn, culpritsgotnn := k.connectedNeighbours(pp.NNSet)
knownn, countknownn, culpritsknownn := k.knowNeighbours(pp.NNSet)
depth := depthForPot(k.conns, k.NeighbourhoodSize, k.base)
// check saturation
saturated := k.isSaturated(pp.PeersPerBin, depth)
log.Trace(fmt.Sprintf("%08x: healthy: knowNNs: %v, gotNNs: %v, saturated: %v\n", k.base, knownn, gotnn, saturated))
return &Health{
KnowNN: knownn,
CountKnowNN: countknownn,
MissingKnowNN: culpritsknownn,
ConnectNN: gotnn,
CountConnectNN: countgotnn,
MissingConnectNN: culpritsgotnn,
Saturated: saturated,
Hive: k.string(),
}
}
// Healthy returns the strict interpretation of `Healthy` given a `Health` struct
// definition of strict health: all conditions must be true:
// - we at least know one peer
// - we know all neighbors
// - we are connected to all known neighbors
// - it is saturated
func (h *Health) Healthy() bool {
return h.KnowNN && h.ConnectNN && h.CountKnowNN > 0 && h.Saturated
}
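// Illustrative sketch (not part of the original source): the health-check flow as
// used by the tests: build the all-knowing PeerPot map from every address in the
// network, then compare a node's kademlia against its own entry.
//
//	addrs := [][]byte{k.BaseAddr()}
//	k.EachAddr(nil, 255, func(a *BzzAddr, _ int) bool {
//		addrs = append(addrs, a.Address())
//		return true
//	})
//	pp := NewPeerPotMap(k.NeighbourhoodSize, addrs)
//	health := k.GetHealthInfo(pp[common.Bytes2Hex(k.BaseAddr())])
//	if health.Healthy() {
//		// the node knows and is connected to all its neighbours and is saturated
//	}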

672
network/kademlia_test.go Normal file
View File

@ -0,0 +1,672 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"fmt"
"os"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/protocols"
"github.com/ethersphere/swarm/pot"
)
func init() {
h := log.LvlFilterHandler(log.LvlWarn, log.StreamHandler(os.Stderr, log.TerminalFormat(true)))
log.Root().SetHandler(h)
}
func testKadPeerAddr(s string) *BzzAddr {
a := pot.NewAddressFromString(s)
return &BzzAddr{OAddr: a, UAddr: a}
}
func newTestKademliaParams() *KadParams {
params := NewKadParams()
params.MinBinSize = 2
params.NeighbourhoodSize = 2
return params
}
type testKademlia struct {
*Kademlia
t *testing.T
}
func newTestKademlia(t *testing.T, b string) *testKademlia {
base := pot.NewAddressFromString(b)
return &testKademlia{
Kademlia: NewKademlia(base, newTestKademliaParams()),
t: t,
}
}
func (tk *testKademlia) newTestKadPeer(s string, lightNode bool) *Peer {
return NewPeer(&BzzPeer{BzzAddr: testKadPeerAddr(s), LightNode: lightNode}, tk.Kademlia)
}
func (tk *testKademlia) On(ons ...string) {
for _, s := range ons {
tk.Kademlia.On(tk.newTestKadPeer(s, false))
}
}
func (tk *testKademlia) Off(offs ...string) {
for _, s := range offs {
tk.Kademlia.Off(tk.newTestKadPeer(s, false))
}
}
func (tk *testKademlia) Register(regs ...string) {
var as []*BzzAddr
for _, s := range regs {
as = append(as, testKadPeerAddr(s))
}
err := tk.Kademlia.Register(as...)
if err != nil {
panic(err.Error())
}
}
// tests the validity of neighborhood depth calculations
//
// in particular, it tests that if there are one or more consecutive
// empty bins above the farthest "nearest neighbor-peer" then
// the depth should be set at the farthest of those empty bins
//
// TODO: Make test adapt to change in NeighbourhoodSize
func TestNeighbourhoodDepth(t *testing.T) {
baseAddressBytes := RandomAddr().OAddr
kad := NewKademlia(baseAddressBytes, NewKadParams())
baseAddress := pot.NewAddressFromBytes(baseAddressBytes)
// generate the peers
var peers []*Peer
for i := 0; i < 7; i++ {
addr := pot.RandomAddressAt(baseAddress, i)
peers = append(peers, newTestDiscoveryPeer(addr, kad))
}
var sevenPeers []*Peer
for i := 0; i < 2; i++ {
addr := pot.RandomAddressAt(baseAddress, 7)
sevenPeers = append(sevenPeers, newTestDiscoveryPeer(addr, kad))
}
testNum := 0
// first try with empty kademlia
depth := kad.NeighbourhoodDepth()
if depth != 0 {
t.Fatalf("%d expected depth 0, was %d", testNum, depth)
}
testNum++
// add one peer on 7
kad.On(sevenPeers[0])
depth = kad.NeighbourhoodDepth()
if depth != 0 {
t.Fatalf("%d expected depth 0, was %d", testNum, depth)
}
testNum++
// add a second on 7
kad.On(sevenPeers[1])
depth = kad.NeighbourhoodDepth()
if depth != 0 {
t.Fatalf("%d expected depth 0, was %d", testNum, depth)
}
testNum++
// add from 0 to 6
for i, p := range peers {
kad.On(p)
depth = kad.NeighbourhoodDepth()
if depth != i+1 {
t.Fatalf("%d.%d expected depth %d, was %d", i+1, testNum, i, depth)
}
}
testNum++
kad.Off(sevenPeers[1])
depth = kad.NeighbourhoodDepth()
if depth != 6 {
t.Fatalf("%d expected depth 6, was %d", testNum, depth)
}
testNum++
kad.Off(peers[4])
depth = kad.NeighbourhoodDepth()
if depth != 4 {
t.Fatalf("%d expected depth 4, was %d", testNum, depth)
}
testNum++
kad.Off(peers[3])
depth = kad.NeighbourhoodDepth()
if depth != 3 {
t.Fatalf("%d expected depth 3, was %d", testNum, depth)
}
testNum++
}
// TestHighMinBinSize tests that the saturation function also works
// if MinBinSize is > 2, the connection count is < k.MinBinSize
// and there are more peers available than connected
func TestHighMinBinSize(t *testing.T) {
// a function to test for different MinBinSize values
testKad := func(minBinSize int) {
// create a test kademlia
tk := newTestKademlia(t, "11111111")
// set its MinBinSize to desired value
tk.KadParams.MinBinSize = minBinSize
// add a couple of peers (so we have NN and depth)
tk.On("00000000") // bin 0
tk.On("11100000") // bin 3
tk.On("11110000") // bin 4
first := "10000000" // add a first peer at bin 1
tk.Register(first) // register it
// we now have one registered peer at bin 1;
// iterate and connect one peer at each iteration;
// should be unhealthy until at minBinSize - 1
// we connect the unconnected but registered peer
for i := 1; i < minBinSize; i++ {
peer := fmt.Sprintf("1000%b", 8|i)
tk.On(peer)
if i == minBinSize-1 {
tk.On(first)
tk.checkHealth(true)
return
}
tk.checkHealth(false)
}
}
// test MinBinSizes of 3 to 5
testMinBinSizes := []int{3, 4, 5}
for _, k := range testMinBinSizes {
testKad(k)
}
}
// TestHealthStrict tests the simplest definition of health,
// which means whether we are connected to all neighbors we know of
func TestHealthStrict(t *testing.T) {
// base address is all zeros
// no peers
// unhealthy (and lonely)
tk := newTestKademlia(t, "11111111")
tk.checkHealth(false)
// know one peer but not connected
// unhealthy
tk.Register("11100000")
tk.checkHealth(false)
// know one peer and connected
// healthy
tk.On("11100000")
tk.checkHealth(true)
// know two peers, only one connected
// unhealthy
tk.Register("11111100")
tk.checkHealth(false)
// know two peers and connected to both
// healthy
tk.On("11111100")
tk.checkHealth(true)
// know three peers, connected to the two deepest
// unhealthy
tk.Register("00000000")
tk.checkHealth(false)
// know three peers, connected to all three
// healthy
tk.On("00000000")
tk.checkHealth(true)
// add fourth peer deeper than current depth
// unhealthy
tk.Register("11110000")
tk.checkHealth(false)
// connected to three deepest peers
// healthy
tk.On("11110000")
tk.checkHealth(true)
// add additional peer in same bin as deepest peer
// unhealthy
tk.Register("11111101")
tk.checkHealth(false)
// four deepest of five peers connected
// healthy
tk.On("11111101")
tk.checkHealth(true)
// add additional peer in bin 0
// unhealthy: unsaturated bin 0, 2 known but 1 connected
tk.Register("00000001")
tk.checkHealth(false)
// Connect second in bin 0
// healthy
tk.On("00000001")
tk.checkHealth(true)
// add peer in bin 1
// unhealthy, as it is known but not connected
tk.Register("10000000")
tk.checkHealth(false)
// connect peer in bin 1
// depth change, is now 1
// healthy, 1 peer in bin 1 known and connected
tk.On("10000000")
tk.checkHealth(true)
// add second peer in bin 1
// unhealthy, as it is known but not connected
tk.Register("10000001")
tk.checkHealth(false)
// connect second peer in bin 1
// healthy,
tk.On("10000001")
tk.checkHealth(true)
// connect third peer in bin 1
// healthy,
tk.On("10000011")
tk.checkHealth(true)
// add peer in bin 2
// unhealthy, no depth change
tk.Register("11000000")
tk.checkHealth(false)
// connect peer in bin 2
// depth change - as we already have peers in bin 3 and 4,
// we have contiguous bins, no bin < po 5 is empty -> depth 5
// healthy, every bin < depth has the max available peers,
// even if they are < MinBinSize
tk.On("11000000")
tk.checkHealth(true)
// add peer in bin 2
// unhealthy, peer bin is below depth 5 but
// has more available peers (2) than connected ones (1)
// --> unsaturated
tk.Register("11000011")
tk.checkHealth(false)
}
func (tk *testKademlia) checkHealth(expectHealthy bool) {
tk.t.Helper()
kid := common.Bytes2Hex(tk.BaseAddr())
addrs := [][]byte{tk.BaseAddr()}
tk.EachAddr(nil, 255, func(addr *BzzAddr, po int) bool {
addrs = append(addrs, addr.Address())
return true
})
pp := NewPeerPotMap(tk.NeighbourhoodSize, addrs)
healthParams := tk.GetHealthInfo(pp[kid])
// definition of health, all conditions must be true:
// - we at least know one peer
// - we know all neighbors
// - we are connected to all known neighbors
health := healthParams.Healthy()
if expectHealthy != health {
tk.t.Fatalf("expected kademlia health %v, is %v\n%v", expectHealthy, health, tk.String())
}
}
func (tk *testKademlia) checkSuggestPeer(expAddr string, expDepth int, expChanged bool) {
tk.t.Helper()
addr, depth, changed := tk.SuggestPeer()
log.Trace("suggestPeer return", "addr", addr, "depth", depth, "changed", changed)
if binStr(addr) != expAddr {
tk.t.Fatalf("incorrect peer address suggested. expected %v, got %v", expAddr, binStr(addr))
}
if depth != expDepth {
tk.t.Fatalf("incorrect saturation depth suggested. expected %v, got %v", expDepth, depth)
}
if changed != expChanged {
tk.t.Fatalf("expected depth change = %v, got %v", expChanged, changed)
}
}
func binStr(a *BzzAddr) string {
if a == nil {
return "<nil>"
}
return pot.ToBin(a.Address())[:8]
}
func TestSuggestPeerFindPeers(t *testing.T) {
tk := newTestKademlia(t, "00000000")
tk.On("00100000")
tk.checkSuggestPeer("<nil>", 0, false)
tk.On("00010000")
tk.checkSuggestPeer("<nil>", 0, false)
tk.On("10000000", "10000001")
tk.checkSuggestPeer("<nil>", 0, false)
tk.On("01000000")
tk.Off("10000001")
tk.checkSuggestPeer("10000001", 0, true)
tk.On("00100001")
tk.Off("01000000")
tk.checkSuggestPeer("01000000", 0, false)
// second time disconnected peer not callable
// with reasonably set Interval
tk.checkSuggestPeer("<nil>", 0, false)
// on and off again, peer callable again
tk.On("01000000")
tk.Off("01000000")
tk.checkSuggestPeer("01000000", 0, false)
tk.On("01000000", "10000001")
tk.checkSuggestPeer("<nil>", 0, false)
tk.Register("00010001")
tk.checkSuggestPeer("00010001", 0, false)
tk.On("00010001")
tk.Off("01000000")
tk.checkSuggestPeer("01000000", 0, false)
tk.On("01000000")
tk.checkSuggestPeer("<nil>", 0, false)
tk.Register("01000001")
tk.checkSuggestPeer("01000001", 0, false)
tk.On("01000001")
tk.checkSuggestPeer("<nil>", 0, false)
tk.Register("10000010", "01000010", "00100010")
tk.checkSuggestPeer("<nil>", 0, false)
tk.Register("00010010")
tk.checkSuggestPeer("00010010", 0, false)
tk.Off("00100001")
tk.checkSuggestPeer("00100010", 2, true)
tk.Off("01000001")
tk.checkSuggestPeer("01000010", 1, true)
tk.checkSuggestPeer("01000001", 0, false)
tk.checkSuggestPeer("00100001", 0, false)
tk.checkSuggestPeer("<nil>", 0, false)
tk.On("01000001", "00100001")
tk.Register("10000100", "01000100", "00100100")
tk.Register("00000100", "00000101", "00000110")
tk.Register("00000010", "00000011", "00000001")
tk.checkSuggestPeer("00000110", 0, false)
tk.checkSuggestPeer("00000101", 0, false)
tk.checkSuggestPeer("00000100", 0, false)
tk.checkSuggestPeer("00000011", 0, false)
tk.checkSuggestPeer("00000010", 0, false)
tk.checkSuggestPeer("00000001", 0, false)
tk.checkSuggestPeer("<nil>", 0, false)
}
// a node should stay in the address book if it's removed from the kademlia
func TestOffEffectingAddressBookNormalNode(t *testing.T) {
tk := newTestKademlia(t, "00000000")
// peer added to kademlia
tk.On("01000000")
// peer should be in the address book
if tk.addrs.Size() != 1 {
t.Fatal("known peer addresses should contain 1 entry")
}
// peer should be among live connections
if tk.conns.Size() != 1 {
t.Fatal("live peers should contain 1 entry")
}
// remove peer from kademlia
tk.Off("01000000")
// peer should be in the address book
if tk.addrs.Size() != 1 {
t.Fatal("known peer addresses should contain 1 entry")
}
// peer should not be among live connections
if tk.conns.Size() != 0 {
t.Fatal("live peers should contain 0 entry")
}
}
// a light node should not be in the address book
func TestOffEffectingAddressBookLightNode(t *testing.T) {
tk := newTestKademlia(t, "00000000")
// light node peer added to kademlia
tk.Kademlia.On(tk.newTestKadPeer("01000000", true))
// peer should not be in the address book
if tk.addrs.Size() != 0 {
t.Fatal("known peer addresses should contain 0 entry")
}
// peer should be among live connections
if tk.conns.Size() != 1 {
t.Fatal("live peers should contain 1 entry")
}
// remove peer from kademlia
tk.Kademlia.Off(tk.newTestKadPeer("01000000", true))
// peer should not be in the address book
if tk.addrs.Size() != 0 {
t.Fatal("known peer addresses should contain 0 entry")
}
// peer should not be among live connections
if tk.conns.Size() != 0 {
t.Fatal("live peers should contain 0 entry")
}
}
func TestSuggestPeerRetries(t *testing.T) {
tk := newTestKademlia(t, "00000000")
tk.RetryInterval = int64(300 * time.Millisecond) // cycle
tk.MaxRetries = 50
tk.RetryExponent = 2
sleep := func(n int) {
ts := tk.RetryInterval
for i := 1; i < n; i++ {
ts *= int64(tk.RetryExponent)
}
time.Sleep(time.Duration(ts))
}
tk.Register("01000000")
tk.On("00000001", "00000010")
tk.checkSuggestPeer("01000000", 0, false)
tk.checkSuggestPeer("<nil>", 0, false)
sleep(1)
tk.checkSuggestPeer("01000000", 0, false)
tk.checkSuggestPeer("<nil>", 0, false)
sleep(1)
tk.checkSuggestPeer("01000000", 0, false)
tk.checkSuggestPeer("<nil>", 0, false)
sleep(2)
tk.checkSuggestPeer("01000000", 0, false)
tk.checkSuggestPeer("<nil>", 0, false)
sleep(2)
tk.checkSuggestPeer("<nil>", 0, false)
}
func TestKademliaHiveString(t *testing.T) {
tk := newTestKademlia(t, "00000000")
tk.On("01000000", "00100000")
tk.Register("10000000", "10000001")
tk.MaxProxDisplay = 8
h := tk.String()
expH := "\n=========================================================================\nMon Feb 27 12:10:28 UTC 2017 KΛÐΞMLIΛ hive: queen's address: 0000000000000000000000000000000000000000000000000000000000000000\npopulation: 2 (4), NeighbourhoodSize: 2, MinBinSize: 2, MaxBinSize: 4\n============ DEPTH: 0 ==========================================\n000 0 | 2 8100 (0) 8000 (0)\n001 1 4000 | 1 4000 (0)\n002 1 2000 | 1 2000 (0)\n003 0 | 0\n004 0 | 0\n005 0 | 0\n006 0 | 0\n007 0 | 0\n========================================================================="
if expH[104:] != h[104:] {
t.Fatalf("incorrect hive output. expected %v, got %v", expH, h)
}
}
func newTestDiscoveryPeer(addr pot.Address, kad *Kademlia) *Peer {
rw := &p2p.MsgPipeRW{}
p := p2p.NewPeer(enode.ID{}, "foo", []p2p.Cap{})
pp := protocols.NewPeer(p, rw, &protocols.Spec{})
bp := &BzzPeer{
Peer: pp,
BzzAddr: &BzzAddr{
OAddr: addr.Bytes(),
UAddr: []byte(fmt.Sprintf("%x", addr[:])),
},
}
return NewPeer(bp, kad)
}
// TestKademlia_SubscribeToNeighbourhoodDepthChange checks if correct
// signaling over SubscribeToNeighbourhoodDepthChange channels is made
// when neighbourhood depth is changed.
func TestKademlia_SubscribeToNeighbourhoodDepthChange(t *testing.T) {
testSignal := func(t *testing.T, k *testKademlia, prevDepth int, c <-chan struct{}) (newDepth int) {
t.Helper()
select {
case _, ok := <-c:
if !ok {
t.Error("closed signal channel")
}
newDepth = k.NeighbourhoodDepth()
if prevDepth == newDepth {
t.Error("depth not changed")
}
return newDepth
case <-time.After(2 * time.Second):
t.Error("timeout")
}
return newDepth
}
t.Run("single subscription", func(t *testing.T) {
k := newTestKademlia(t, "00000000")
c, u := k.SubscribeToNeighbourhoodDepthChange()
defer u()
depth := k.NeighbourhoodDepth()
k.On("11111101", "01000000", "10000000", "00000010")
testSignal(t, k, depth, c)
})
t.Run("multiple subscriptions", func(t *testing.T) {
k := newTestKademlia(t, "00000000")
c1, u1 := k.SubscribeToNeighbourhoodDepthChange()
defer u1()
c2, u2 := k.SubscribeToNeighbourhoodDepthChange()
defer u2()
depth := k.NeighbourhoodDepth()
k.On("11111101", "01000000", "10000000", "00000010")
testSignal(t, k, depth, c1)
testSignal(t, k, depth, c2)
})
t.Run("multiple changes", func(t *testing.T) {
k := newTestKademlia(t, "00000000")
c, u := k.SubscribeToNeighbourhoodDepthChange()
defer u()
depth := k.NeighbourhoodDepth()
k.On("11111101", "01000000", "10000000", "00000010")
depth = testSignal(t, k, depth, c)
k.On("11111101", "01000010", "10000010", "00000110")
testSignal(t, k, depth, c)
})
t.Run("no depth change", func(t *testing.T) {
k := newTestKademlia(t, "00000000")
c, u := k.SubscribeToNeighbourhoodDepthChange()
defer u()
// does not trigger the depth change
k.On("11111101")
select {
case _, ok := <-c:
if !ok {
t.Error("closed signal channel")
}
t.Error("signal received")
case <-time.After(1 * time.Second):
// all fine
}
})
t.Run("no new peers", func(t *testing.T) {
k := newTestKademlia(t, "00000000")
changeC, unsubscribe := k.SubscribeToNeighbourhoodDepthChange()
defer unsubscribe()
select {
case _, ok := <-changeC:
if !ok {
t.Error("closed signal channel")
}
t.Error("signal received")
case <-time.After(1 * time.Second):
// all fine
}
})
}

105
network/network.go Normal file
View File

@ -0,0 +1,105 @@
package network
import (
"crypto/ecdsa"
"fmt"
"net"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
)
// BzzAddr implements the PeerAddr interface
type BzzAddr struct {
OAddr []byte
UAddr []byte
}
// Address implements OverlayPeer interface to be used in Overlay.
func (a *BzzAddr) Address() []byte {
return a.OAddr
}
// Over returns the overlay address.
func (a *BzzAddr) Over() []byte {
return a.OAddr
}
// Under returns the underlay address.
func (a *BzzAddr) Under() []byte {
return a.UAddr
}
// ID returns the node identifier in the underlay.
func (a *BzzAddr) ID() enode.ID {
n, err := enode.ParseV4(string(a.UAddr))
if err != nil {
return enode.ID{}
}
return n.ID()
}
// Update updates the underlay address of a peer record
func (a *BzzAddr) Update(na *BzzAddr) *BzzAddr {
return &BzzAddr{a.OAddr, na.UAddr}
}
// String pretty prints the address
func (a *BzzAddr) String() string {
return fmt.Sprintf("%x <%s>", a.OAddr, a.UAddr)
}
// RandomAddr is a utility method generating an address from a public key
func RandomAddr() *BzzAddr {
key, err := crypto.GenerateKey()
if err != nil {
panic("unable to generate key")
}
node := enode.NewV4(&key.PublicKey, net.IP{127, 0, 0, 1}, 30303, 30303)
return NewAddr(node)
}
// NewAddr constructs a BzzAddr from a node record.
func NewAddr(node *enode.Node) *BzzAddr {
return &BzzAddr{OAddr: node.ID().Bytes(), UAddr: []byte(node.String())}
}
func PrivateKeyToBzzKey(prvKey *ecdsa.PrivateKey) []byte {
pubkeyBytes := crypto.FromECDSAPub(&prvKey.PublicKey)
return crypto.Keccak256Hash(pubkeyBytes).Bytes()
}
type EnodeParams struct {
PrivateKey *ecdsa.PrivateKey
EnodeKey *ecdsa.PrivateKey
Lightnode bool
Bootnode bool
}
func NewEnodeRecord(params *EnodeParams) (*enr.Record, error) {
if params.PrivateKey == nil {
return nil, fmt.Errorf("all param private keys must be defined")
}
bzzkeybytes := PrivateKeyToBzzKey(params.PrivateKey)
var record enr.Record
record.Set(NewENRAddrEntry(bzzkeybytes))
record.Set(ENRLightNodeEntry(params.Lightnode))
record.Set(ENRBootNodeEntry(params.Bootnode))
return &record, nil
}
func NewEnode(params *EnodeParams) (*enode.Node, error) {
record, err := NewEnodeRecord(params)
if err != nil {
return nil, err
}
err = enode.SignV4(record, params.EnodeKey)
if err != nil {
return nil, fmt.Errorf("ENR create fail: %v", err)
}
return enode.New(enode.V4ID{}, record)
}
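// exampleDeriveAddresses is an illustrative sketch added by the editor, not
// part of the original file. It shows how the helpers above compose: a freshly
// generated key yields the 32-byte bzz overlay key, and the same key (reused
// here as the enode signing key purely for brevity) yields a signed ENR-based enode.
func exampleDeriveAddresses() (*enode.Node, []byte, error) {
	prvKey, err := crypto.GenerateKey()
	if err != nil {
		return nil, nil, err
	}
	// overlay (bzz) key: keccak256 hash of the uncompressed public key
	bzzKey := PrivateKeyToBzzKey(prvKey)
	// enode whose record carries the bzz key and the light/boot node flags
	node, err := NewEnode(&EnodeParams{PrivateKey: prvKey, EnodeKey: prvKey})
	if err != nil {
		return nil, nil, err
	}
	return node, bzzKey, nil
}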

263
network/networkid_test.go Normal file

@ -0,0 +1,263 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"bytes"
"context"
"flag"
"fmt"
"math/rand"
"strings"
"testing"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethereum/go-ethereum/rpc"
)
var (
currentNetworkID int
cnt int
nodeMap map[int][]enode.ID
kademlias map[enode.ID]*Kademlia
)
const (
NumberOfNets = 4
MaxTimeout = 15 * time.Second
)
func init() {
flag.Parse()
rand.Seed(time.Now().Unix())
}
/*
Run the network ID test.
The test creates one simulations.Network instance,
a number of nodes, then connects the nodes with each other in this network.
Each node gets a network ID assigned according to the number of networks.
Using more than one network ID is arbitrary; it simply helps to exclude
false positives.
Nodes should only connect to other nodes with the same network ID.
After the setup phase, the test checks on each node whether it has the
expected node connections (excluding those not sharing the network ID).
*/
func TestNetworkID(t *testing.T) {
log.Debug("Start test")
//arbitrarily set the number of nodes. It could be any number
numNodes := 24
//the nodeMap maps all nodes (slice value) with the same network ID (key)
nodeMap = make(map[int][]enode.ID)
//set up the network and connect nodes
net, err := setupNetwork(numNodes)
if err != nil {
t.Fatalf("Error setting up network: %v", err)
}
//let's sleep to ensure all nodes are connected
time.Sleep(1 * time.Second)
// shutdown the network to avoid race conditions
// on accessing kademlias global map while network nodes
// are accepting messages
net.Shutdown()
//for each group sharing the same network ID...
for _, netIDGroup := range nodeMap {
log.Trace("netIDGroup size", "size", len(netIDGroup))
//...check that the size of each node's kademlia matches the expected size
//the assumption is that it should be the size of the group minus 1 (the node itself)
for _, node := range netIDGroup {
if kademlias[node].addrs.Size() != len(netIDGroup)-1 {
t.Fatalf("Kademlia size does not match expected peer count. Kademlia size: %d, expected size: %d", kademlias[node].addrs.Size(), len(netIDGroup)-1)
}
kademlias[node].EachAddr(nil, 0, func(addr *BzzAddr, _ int) bool {
found := false
for _, nd := range netIDGroup {
if bytes.Equal(kademlias[nd].BaseAddr(), addr.Address()) {
found = true
}
}
if !found {
t.Fatalf("Expected node not found for node %s", node.String())
}
return true
})
}
}
log.Info("Test terminated successfully")
}
// setupNetwork creates a simulated network running the bzz service
// and connects every node with all previously created nodes (full mesh)
func setupNetwork(numnodes int) (net *simulations.Network, err error) {
log.Debug("Setting up network")
quitC := make(chan struct{})
errc := make(chan error)
nodes := make([]*simulations.Node, numnodes)
if numnodes < 16 {
return nil, fmt.Errorf("Minimum sixteen nodes in network")
}
adapter := adapters.NewSimAdapter(newServices())
//create the network
net = simulations.NewNetwork(adapter, &simulations.NetworkConfig{
ID: "NetworkIdTestNet",
DefaultService: "bzz",
})
log.Debug("Creating networks and nodes")
var connCount int
//create nodes and connect them to each other
for i := 0; i < numnodes; i++ {
log.Trace("iteration: ", "i", i)
nodeconf := adapters.RandomNodeConfig()
nodes[i], err = net.NewNodeWithConfig(nodeconf)
if err != nil {
return nil, fmt.Errorf("error creating node %d: %v", i, err)
}
err = net.Start(nodes[i].ID())
if err != nil {
return nil, fmt.Errorf("error starting node %d: %v", i, err)
}
client, err := nodes[i].Client()
if err != nil {
return nil, fmt.Errorf("create node %d rpc client fail: %v", i, err)
}
//now set up and start event watching in order to know when the nodes are connected
ctx, watchCancel := context.WithTimeout(context.Background(), MaxTimeout)
defer watchCancel()
watchSubscriptionEvents(ctx, nodes[i].ID(), client, errc, quitC)
//on every iteration we connect to all previous ones
for k := i - 1; k >= 0; k-- {
connCount++
log.Debug(fmt.Sprintf("Connecting node %d with node %d; connection count is %d", i, k, connCount))
err = net.Connect(nodes[i].ID(), nodes[k].ID())
if err != nil {
if !strings.Contains(err.Error(), "already connected") {
return nil, fmt.Errorf("error connecting nodes: %v", err)
}
}
}
}
//now wait until the expected number of connection events has been received
//`watchSubscriptionEvents` writes a `nil` value to errc for every peer-add event
for err := range errc {
if err != nil {
return nil, err
}
//`nil` received, decrement count
connCount--
log.Trace("count down", "cnt", connCount)
//all subscriptions received
if connCount == 0 {
close(quitC)
break
}
}
log.Debug("Network setup phase terminated")
return net, nil
}
func newServices() adapters.Services {
kademlias = make(map[enode.ID]*Kademlia)
kademlia := func(id enode.ID) *Kademlia {
if k, ok := kademlias[id]; ok {
return k
}
params := NewKadParams()
params.NeighbourhoodSize = 2
params.MaxBinSize = 3
params.MinBinSize = 1
params.MaxRetries = 1000
params.RetryExponent = 2
params.RetryInterval = 1000000
kademlias[id] = NewKademlia(id[:], params)
return kademlias[id]
}
return adapters.Services{
"bzz": func(ctx *adapters.ServiceContext) (node.Service, error) {
addr := NewAddr(ctx.Config.Node())
hp := NewHiveParams()
hp.Discovery = false
cnt++
//assign the network ID
currentNetworkID = cnt % NumberOfNets
if ok := nodeMap[currentNetworkID]; ok == nil {
nodeMap[currentNetworkID] = make([]enode.ID, 0)
}
//add this node to the group sharing the same network ID
nodeMap[currentNetworkID] = append(nodeMap[currentNetworkID], ctx.Config.ID)
log.Debug("current network ID:", "id", currentNetworkID)
config := &BzzConfig{
OverlayAddr: addr.Over(),
UnderlayAddr: addr.Under(),
HiveParams: hp,
NetworkID: uint64(currentNetworkID),
}
return NewBzz(config, kademlia(ctx.Config.ID), nil, nil, nil), nil
},
}
}
func watchSubscriptionEvents(ctx context.Context, id enode.ID, client *rpc.Client, errc chan error, quitC chan struct{}) {
events := make(chan *p2p.PeerEvent)
sub, err := client.Subscribe(context.Background(), "admin", events, "peerEvents")
if err != nil {
log.Error(err.Error())
errc <- fmt.Errorf("error getting peer events for node %v: %s", id, err)
return
}
go func() {
defer func() {
sub.Unsubscribe()
log.Trace("watch subscription events: unsubscribe", "id", id)
}()
for {
select {
case <-quitC:
return
case <-ctx.Done():
select {
case errc <- ctx.Err():
case <-quitC:
}
return
case e := <-events:
if e.Type == p2p.PeerEventTypeAdd {
errc <- nil
}
case err := <-sub.Err():
if err != nil {
select {
case errc <- fmt.Errorf("error getting peer events for node %v: %v", id, err):
case <-quitC:
}
return
}
}
}
}()
}


@ -0,0 +1,118 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package priorityqueue implements a channel-based priority queue
// over arbitrary types. It provides
// an autopop loop applying a function to the items, always respecting
// their priority. The structure is only quasi-consistent, ie. if a lower
// priority item is autopopped, it is guaranteed that there was a point
// when no higher priority item was present; it is not guaranteed
// that there was any point where the lower priority item was present
// but the higher was not.
package priorityqueue
import (
"context"
"errors"
"time"
"github.com/ethereum/go-ethereum/metrics"
)
var (
ErrContention = errors.New("contention")
errBadPriority = errors.New("bad priority")
wakey = struct{}{}
)
// PriorityQueue is the basic structure
type PriorityQueue struct {
Queues []chan interface{}
wakeup chan struct{}
}
// New is the constructor for PriorityQueue
func New(n int, l int) *PriorityQueue {
var queues = make([]chan interface{}, n)
for i := range queues {
queues[i] = make(chan interface{}, l)
}
return &PriorityQueue{
Queues: queues,
wakeup: make(chan struct{}, 1),
}
}
// Run is a forever loop popping items from the queues
func (pq *PriorityQueue) Run(ctx context.Context, f func(interface{})) {
top := len(pq.Queues) - 1
p := top
READ:
for {
q := pq.Queues[p]
select {
case <-ctx.Done():
return
case x := <-q:
val := x.(struct {
v interface{}
t time.Time
})
f(val.v)
metrics.GetOrRegisterResettingTimer("pq.run", nil).UpdateSince(val.t)
p = top
default:
if p > 0 {
p--
continue READ
}
p = top
select {
case <-ctx.Done():
return
case <-pq.wakeup:
}
}
}
}
// Push pushes an item to the queue specified by the priority argument.
// It returns errBadPriority if the priority is out of range and ErrContention if that queue is full.
func (pq *PriorityQueue) Push(x interface{}, p int) error {
if p < 0 || p >= len(pq.Queues) {
return errBadPriority
}
val := struct {
v interface{}
t time.Time
}{
x,
time.Now(),
}
select {
case pq.Queues[p] <- val:
default:
return ErrContention
}
select {
case pq.wakeup <- wakey:
default:
}
return nil
}
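// examplePriorityQueueUsage is an illustrative sketch added by the editor, not
// part of the original file. It shows the intended call pattern of the queue
// above: Run consumes items in a goroutine, Push enqueues with a priority and
// reports contention when the corresponding queue is full.
func examplePriorityQueueUsage(ctx context.Context, handle func(interface{})) error {
	pq := New(4, 16) // 4 priority levels, 16 slots per level
	go pq.Run(ctx, handle)
	// the highest priority is len(pq.Queues)-1, the lowest is 0
	if err := pq.Push("urgent item", 3); err != nil {
		// ErrContention means the priority-3 queue is currently full
		return err
	}
	return pq.Push("background item", 0)
}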


@ -0,0 +1,97 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package priorityqueue
import (
"context"
"sync"
"testing"
)
func TestPriorityQueue(t *testing.T) {
var results []string
wg := sync.WaitGroup{}
pq := New(3, 2)
wg.Add(1)
go pq.Run(context.Background(), func(v interface{}) {
results = append(results, v.(string))
wg.Done()
})
pq.Push("2.0", 2)
wg.Wait()
if results[0] != "2.0" {
t.Errorf("expected first result %q, got %q", "2.0", results[0])
}
Loop:
for i, tc := range []struct {
priorities []int
values []string
results []string
errors []error
}{
{
priorities: []int{0},
values: []string{""},
results: []string{""},
},
{
priorities: []int{0, 1},
values: []string{"0.0", "1.0"},
results: []string{"1.0", "0.0"},
},
{
priorities: []int{1, 0},
values: []string{"1.0", "0.0"},
results: []string{"1.0", "0.0"},
},
{
priorities: []int{0, 1, 1},
values: []string{"0.0", "1.0", "1.1"},
results: []string{"1.0", "1.1", "0.0"},
},
{
priorities: []int{0, 0, 0},
values: []string{"0.0", "0.0", "0.1"},
errors: []error{nil, nil, ErrContention},
},
} {
var results []string
wg := sync.WaitGroup{}
pq := New(3, 2)
wg.Add(len(tc.values))
for j, value := range tc.values {
err := pq.Push(value, tc.priorities[j])
if tc.errors != nil && err != tc.errors[j] {
t.Errorf("expected push error %v, got %v", tc.errors[j], err)
continue Loop
}
if err != nil {
continue Loop
}
}
go pq.Run(context.Background(), func(v interface{}) {
results = append(results, v.(string))
wg.Done()
})
wg.Wait()
for k, result := range tc.results {
if results[k] != result {
t.Errorf("test case %v: expected %v element %q, got %q", i, k, result, results[k])
}
}
}
}

335
network/protocol.go Normal file

@ -0,0 +1,335 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"context"
"errors"
"fmt"
"math/rand"
"sync"
"time"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/protocols"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/state"
)
const (
DefaultNetworkID = 4
// timeout for waiting for the bzz handshake to complete
bzzHandshakeTimeout = 3000 * time.Millisecond
)
var DefaultTestNetworkID = rand.Uint64()
// BzzSpec is the spec of the generic swarm handshake
var BzzSpec = &protocols.Spec{
Name: "bzz",
Version: 9,
MaxMsgSize: 10 * 1024 * 1024,
Messages: []interface{}{
HandshakeMsg{},
},
}
// DiscoverySpec is the spec for the bzz discovery subprotocols
var DiscoverySpec = &protocols.Spec{
Name: "hive",
Version: 8,
MaxMsgSize: 10 * 1024 * 1024,
Messages: []interface{}{
peersMsg{},
subPeersMsg{},
},
}
// BzzConfig captures the config params used by the hive
type BzzConfig struct {
OverlayAddr []byte // base address of the overlay network
UnderlayAddr []byte // node's underlay address
HiveParams *HiveParams
NetworkID uint64
LightNode bool
BootnodeMode bool
}
// Bzz is the swarm protocol bundle
type Bzz struct {
*Hive
NetworkID uint64
LightNode bool
localAddr *BzzAddr
mtx sync.Mutex
handshakes map[enode.ID]*HandshakeMsg
streamerSpec *protocols.Spec
streamerRun func(*BzzPeer) error
}
// NewBzz is the swarm protocol constructor
// arguments
// * bzz config
// * kademlia overlay driver
// * state store for peers
// * optional streamer spec and run function for the streamer subprotocol
func NewBzz(config *BzzConfig, kad *Kademlia, store state.Store, streamerSpec *protocols.Spec, streamerRun func(*BzzPeer) error) *Bzz {
bzz := &Bzz{
Hive: NewHive(config.HiveParams, kad, store),
NetworkID: config.NetworkID,
LightNode: config.LightNode,
localAddr: &BzzAddr{config.OverlayAddr, config.UnderlayAddr},
handshakes: make(map[enode.ID]*HandshakeMsg),
streamerRun: streamerRun,
streamerSpec: streamerSpec,
}
if config.BootnodeMode {
bzz.streamerRun = nil
bzz.streamerSpec = nil
}
return bzz
}
// UpdateLocalAddr updates the underlay address of the running node
func (b *Bzz) UpdateLocalAddr(byteaddr []byte) *BzzAddr {
b.localAddr = b.localAddr.Update(&BzzAddr{
UAddr: byteaddr,
OAddr: b.localAddr.OAddr,
})
return b.localAddr
}
// NodeInfo returns the node's overlay address
func (b *Bzz) NodeInfo() interface{} {
return b.localAddr.Address()
}
// Protocols returns the protocols swarm offers
// Bzz implements the node.Service interface
// * handshake/hive
// * discovery
func (b *Bzz) Protocols() []p2p.Protocol {
protocol := []p2p.Protocol{
{
Name: BzzSpec.Name,
Version: BzzSpec.Version,
Length: BzzSpec.Length(),
Run: b.runBzz,
NodeInfo: b.NodeInfo,
},
{
Name: DiscoverySpec.Name,
Version: DiscoverySpec.Version,
Length: DiscoverySpec.Length(),
Run: b.RunProtocol(DiscoverySpec, b.Hive.Run),
NodeInfo: b.Hive.NodeInfo,
PeerInfo: b.Hive.PeerInfo,
},
}
if b.streamerSpec != nil && b.streamerRun != nil {
protocol = append(protocol, p2p.Protocol{
Name: b.streamerSpec.Name,
Version: b.streamerSpec.Version,
Length: b.streamerSpec.Length(),
Run: b.RunProtocol(b.streamerSpec, b.streamerRun),
})
}
return protocol
}
// APIs returns the APIs offered by bzz
// * hive
// Bzz implements the node.Service interface
func (b *Bzz) APIs() []rpc.API {
return []rpc.API{{
Namespace: "hive",
Version: "3.0",
Service: b.Hive,
}}
}
// RunProtocol is a wrapper for swarm subprotocols
// returns a p2p protocol run function that can be assigned to p2p.Protocol#Run field
// arguments:
// * p2p protocol spec
// * run function taking BzzPeer as argument
// this run function is meant to block for the duration of the protocol session
// on return the session is terminated and the peer is disconnected
// the protocol waits until the bzz handshake is negotiated
// the overlay address on the BzzPeer is set from the remote handshake
func (b *Bzz) RunProtocol(spec *protocols.Spec, run func(*BzzPeer) error) func(*p2p.Peer, p2p.MsgReadWriter) error {
return func(p *p2p.Peer, rw p2p.MsgReadWriter) error {
// wait for the bzz protocol to perform the handshake
handshake, _ := b.GetOrCreateHandshake(p.ID())
defer b.removeHandshake(p.ID())
select {
case <-handshake.done:
case <-time.After(bzzHandshakeTimeout):
return fmt.Errorf("%08x: %s protocol timeout waiting for handshake on %08x", b.BaseAddr()[:4], spec.Name, p.ID().Bytes()[:4])
}
if handshake.err != nil {
return fmt.Errorf("%08x: %s protocol closed: %v", b.BaseAddr()[:4], spec.Name, handshake.err)
}
// the handshake has succeeded so construct the BzzPeer and run the protocol
peer := &BzzPeer{
Peer: protocols.NewPeer(p, rw, spec),
BzzAddr: handshake.peerAddr,
lastActive: time.Now(),
LightNode: handshake.LightNode,
}
log.Debug("peer created", "addr", handshake.peerAddr.String())
return run(peer)
}
}
// performHandshake implements the negotiation of the bzz handshake
// shared among swarm subprotocols
func (b *Bzz) performHandshake(p *protocols.Peer, handshake *HandshakeMsg) error {
ctx, cancel := context.WithTimeout(context.Background(), bzzHandshakeTimeout)
defer func() {
close(handshake.done)
cancel()
}()
rsh, err := p.Handshake(ctx, handshake, b.checkHandshake)
if err != nil {
handshake.err = err
return err
}
handshake.peerAddr = rsh.(*HandshakeMsg).Addr
handshake.LightNode = rsh.(*HandshakeMsg).LightNode
return nil
}
// runBzz is the p2p protocol run function for the bzz base protocol
// that negotiates the bzz handshake
func (b *Bzz) runBzz(p *p2p.Peer, rw p2p.MsgReadWriter) error {
handshake, _ := b.GetOrCreateHandshake(p.ID())
if !<-handshake.init {
return fmt.Errorf("%08x: bzz already started on peer %08x", b.localAddr.Over()[:4], p.ID().Bytes()[:4])
}
close(handshake.init)
defer b.removeHandshake(p.ID())
peer := protocols.NewPeer(p, rw, BzzSpec)
err := b.performHandshake(peer, handshake)
if err != nil {
log.Warn(fmt.Sprintf("%08x: handshake failed with remote peer %08x: %v", b.localAddr.Over()[:4], p.ID().Bytes()[:4], err))
return err
}
// fail if we get another handshake
msg, err := rw.ReadMsg()
if err != nil {
return err
}
msg.Discard()
return errors.New("received multiple handshakes")
}
// BzzPeer is the bzz protocol view of a protocols.Peer (itself an extension of p2p.Peer)
// implements the Peer interface and all interfaces Peer implements: Addr, OverlayPeer
type BzzPeer struct {
*protocols.Peer // represents the connection for online peers
*BzzAddr // remote address -> implements Addr interface = protocols.Peer
lastActive time.Time // time is updated whenever mutexes are releasing
LightNode bool
}
func NewBzzPeer(p *protocols.Peer) *BzzPeer {
return &BzzPeer{Peer: p, BzzAddr: NewAddr(p.Node())}
}
// ID returns the peer's underlay node identifier.
func (p *BzzPeer) ID() enode.ID {
// This is here to resolve a method tie: both protocols.Peer and BzzAddr are embedded
// into the struct and provide ID(). The protocols.Peer version is faster, ensure it
// gets used.
return p.Peer.ID()
}
/*
Handshake
* Version: 8 byte integer version of the protocol
* NetworkID: 8 byte integer network identifier
* Addr: the address advertised by the node including underlay and overlay connections
*/
type HandshakeMsg struct {
Version uint64
NetworkID uint64
Addr *BzzAddr
LightNode bool
// peerAddr is the address received in the peer handshake
peerAddr *BzzAddr
init chan bool
done chan struct{}
err error
}
// String pretty prints the handshake
func (bh *HandshakeMsg) String() string {
return fmt.Sprintf("Handshake: Version: %v, NetworkID: %v, Addr: %v, LightNode: %v, peerAddr: %v", bh.Version, bh.NetworkID, bh.Addr, bh.LightNode, bh.peerAddr)
}
// checkHandshake validates the remote handshake message against the local network ID and protocol version
func (b *Bzz) checkHandshake(hs interface{}) error {
rhs := hs.(*HandshakeMsg)
if rhs.NetworkID != b.NetworkID {
return fmt.Errorf("network id mismatch %d (!= %d)", rhs.NetworkID, b.NetworkID)
}
if rhs.Version != uint64(BzzSpec.Version) {
return fmt.Errorf("version mismatch %d (!= %d)", rhs.Version, BzzSpec.Version)
}
return nil
}
// removeHandshake removes handshake for peer with peerID
// from the bzz handshake store
func (b *Bzz) removeHandshake(peerID enode.ID) {
b.mtx.Lock()
defer b.mtx.Unlock()
delete(b.handshakes, peerID)
}
// GetOrCreateHandshake returns the bzz handshake for the peer with peerID, creating it if it does not exist yet
func (b *Bzz) GetOrCreateHandshake(peerID enode.ID) (*HandshakeMsg, bool) {
b.mtx.Lock()
defer b.mtx.Unlock()
handshake, found := b.handshakes[peerID]
if !found {
handshake = &HandshakeMsg{
Version: uint64(BzzSpec.Version),
NetworkID: b.NetworkID,
Addr: b.localAddr,
LightNode: b.LightNode,
init: make(chan bool, 1),
done: make(chan struct{}),
}
// when the handshake is first created for a remote peer
// it is initialised by sending a value on the init channel
handshake.init <- true
b.handshakes[peerID] = handshake
}
return handshake, found
}

343
network/protocol_test.go Normal file

@ -0,0 +1,343 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package network
import (
"crypto/ecdsa"
"flag"
"fmt"
"os"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/ethereum/go-ethereum/p2p/protocols"
p2ptest "github.com/ethereum/go-ethereum/p2p/testing"
"github.com/ethersphere/swarm/pot"
)
const (
TestProtocolVersion = 9
)
var TestProtocolNetworkID = DefaultTestNetworkID
var (
loglevel = flag.Int("loglevel", 2, "verbosity of logs")
)
func init() {
flag.Parse()
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(*loglevel), log.StreamHandler(os.Stderr, log.TerminalFormat(true))))
}
func HandshakeMsgExchange(lhs, rhs *HandshakeMsg, id enode.ID) []p2ptest.Exchange {
return []p2ptest.Exchange{
{
Expects: []p2ptest.Expect{
{
Code: 0,
Msg: lhs,
Peer: id,
},
},
},
{
Triggers: []p2ptest.Trigger{
{
Code: 0,
Msg: rhs,
Peer: id,
},
},
},
}
}
func newBzzBaseTester(n int, prvkey *ecdsa.PrivateKey, spec *protocols.Spec, run func(*BzzPeer) error) (*bzzTester, error) {
var addrs [][]byte
for i := 0; i < n; i++ {
addr := pot.RandomAddress()
addrs = append(addrs, addr[:])
}
pt, _, err := newBzzBaseTesterWithAddrs(prvkey, addrs, spec, run)
return pt, err
}
func newBzzBaseTesterWithAddrs(prvkey *ecdsa.PrivateKey, addrs [][]byte, spec *protocols.Spec, run func(*BzzPeer) error) (*bzzTester, [][]byte, error) {
n := len(addrs)
cs := make(map[enode.ID]chan bool)
var csMu sync.Mutex
srv := func(p *BzzPeer) error {
defer func() {
csMu.Lock()
defer csMu.Unlock()
if cs[p.ID()] != nil {
close(cs[p.ID()])
}
}()
return run(p)
}
mu := &sync.Mutex{}
nodeToAddr := make(map[enode.ID][]byte)
protocol := func(p *p2p.Peer, rw p2p.MsgReadWriter) error {
mu.Lock()
nodeToAddr[p.ID()] = addrs[0]
mu.Unlock()
bzzAddr := &BzzAddr{addrs[0], []byte(p.Node().String())}
addrs = addrs[1:]
return srv(&BzzPeer{Peer: protocols.NewPeer(p, rw, spec), BzzAddr: bzzAddr})
}
s := p2ptest.NewProtocolTester(prvkey, n, protocol)
var record enr.Record
bzzKey := PrivateKeyToBzzKey(prvkey)
record.Set(NewENRAddrEntry(bzzKey))
err := enode.SignV4(&record, prvkey)
if err != nil {
return nil, nil, fmt.Errorf("unable to generate ENR: %v", err)
}
nod, err := enode.New(enode.V4ID{}, &record)
if err != nil {
return nil, nil, fmt.Errorf("unable to create enode: %v", err)
}
addr := getENRBzzAddr(nod)
csMu.Lock()
for _, node := range s.Nodes {
log.Warn("node", "node", node)
cs[node.ID()] = make(chan bool)
}
csMu.Unlock()
var nodeAddrs [][]byte
pt := &bzzTester{
addr: addr,
ProtocolTester: s,
cs: cs,
}
mu.Lock()
for _, n := range pt.Nodes {
nodeAddrs = append(nodeAddrs, nodeToAddr[n.ID()])
}
mu.Unlock()
return pt, nodeAddrs, nil
}
type bzzTester struct {
*p2ptest.ProtocolTester
addr *BzzAddr
cs map[enode.ID]chan bool
bzz *Bzz
}
func newBzz(addr *BzzAddr, lightNode bool) *Bzz {
config := &BzzConfig{
OverlayAddr: addr.Over(),
UnderlayAddr: addr.Under(),
HiveParams: NewHiveParams(),
NetworkID: DefaultTestNetworkID,
LightNode: lightNode,
}
kad := NewKademlia(addr.OAddr, NewKadParams())
bzz := NewBzz(config, kad, nil, nil, nil)
return bzz
}
func newBzzHandshakeTester(n int, prvkey *ecdsa.PrivateKey, lightNode bool) (*bzzTester, error) {
var record enr.Record
bzzkey := PrivateKeyToBzzKey(prvkey)
record.Set(NewENRAddrEntry(bzzkey))
record.Set(ENRLightNodeEntry(lightNode))
err := enode.SignV4(&record, prvkey)
if err != nil {
return nil, err
}
nod, err := enode.New(enode.V4ID{}, &record)
addr := getENRBzzAddr(nod)
bzz := newBzz(addr, lightNode)
pt := p2ptest.NewProtocolTester(prvkey, n, bzz.runBzz)
return &bzzTester{
addr: addr,
ProtocolTester: pt,
bzz: bzz,
}, nil
}
// should test handshakes in one exchange? parallelisation
func (s *bzzTester) testHandshake(lhs, rhs *HandshakeMsg, disconnects ...*p2ptest.Disconnect) error {
if err := s.TestExchanges(HandshakeMsgExchange(lhs, rhs, rhs.Addr.ID())...); err != nil {
return err
}
if len(disconnects) > 0 {
return s.TestDisconnected(disconnects...)
}
// If we don't expect disconnect, ensure peers remain connected
err := s.TestDisconnected(&p2ptest.Disconnect{
Peer: s.Nodes[0].ID(),
Error: nil,
})
if err == nil {
return fmt.Errorf("Unexpected peer disconnect")
}
if err.Error() != "timed out waiting for peers to disconnect" {
return err
}
return nil
}
func correctBzzHandshake(addr *BzzAddr, lightNode bool) *HandshakeMsg {
return &HandshakeMsg{
Version: TestProtocolVersion,
NetworkID: TestProtocolNetworkID,
Addr: addr,
LightNode: lightNode,
}
}
func TestBzzHandshakeNetworkIDMismatch(t *testing.T) {
lightNode := false
prvkey, err := crypto.GenerateKey()
if err != nil {
t.Fatal(err)
}
s, err := newBzzHandshakeTester(1, prvkey, lightNode)
if err != nil {
t.Fatal(err)
}
defer s.Stop()
node := s.Nodes[0]
err = s.testHandshake(
correctBzzHandshake(s.addr, lightNode),
&HandshakeMsg{Version: TestProtocolVersion, NetworkID: 321, Addr: NewAddr(node)},
&p2ptest.Disconnect{Peer: node.ID(), Error: fmt.Errorf("Handshake error: Message handler error: (msg code 0): network id mismatch 321 (!= %v)", TestProtocolNetworkID)},
)
if err != nil {
t.Fatal(err)
}
}
func TestBzzHandshakeVersionMismatch(t *testing.T) {
lightNode := false
prvkey, err := crypto.GenerateKey()
if err != nil {
t.Fatal(err)
}
s, err := newBzzHandshakeTester(1, prvkey, lightNode)
if err != nil {
t.Fatal(err)
}
defer s.Stop()
node := s.Nodes[0]
err = s.testHandshake(
correctBzzHandshake(s.addr, lightNode),
&HandshakeMsg{Version: 0, NetworkID: TestProtocolNetworkID, Addr: NewAddr(node)},
&p2ptest.Disconnect{Peer: node.ID(), Error: fmt.Errorf("Handshake error: Message handler error: (msg code 0): version mismatch 0 (!= %d)", TestProtocolVersion)},
)
if err != nil {
t.Fatal(err)
}
}
func TestBzzHandshakeSuccess(t *testing.T) {
lightNode := false
prvkey, err := crypto.GenerateKey()
if err != nil {
t.Fatal(err)
}
s, err := newBzzHandshakeTester(1, prvkey, lightNode)
if err != nil {
t.Fatal(err)
}
defer s.Stop()
node := s.Nodes[0]
err = s.testHandshake(
correctBzzHandshake(s.addr, lightNode),
&HandshakeMsg{Version: TestProtocolVersion, NetworkID: TestProtocolNetworkID, Addr: NewAddr(node)},
)
if err != nil {
t.Fatal(err)
}
}
func TestBzzHandshakeLightNode(t *testing.T) {
var lightNodeTests = []struct {
name string
lightNode bool
}{
{"on", true},
{"off", false},
}
for _, test := range lightNodeTests {
t.Run(test.name, func(t *testing.T) {
prvkey, err := crypto.GenerateKey()
if err != nil {
t.Fatal(err)
}
pt, err := newBzzHandshakeTester(1, prvkey, false)
if err != nil {
t.Fatal(err)
}
defer pt.Stop()
node := pt.Nodes[0]
addr := NewAddr(node)
err = pt.testHandshake(
correctBzzHandshake(pt.addr, false),
&HandshakeMsg{Version: TestProtocolVersion, NetworkID: TestProtocolNetworkID, Addr: addr, LightNode: test.lightNode},
)
if err != nil {
t.Fatal(err)
}
select {
case <-pt.bzz.handshakes[node.ID()].done:
if pt.bzz.handshakes[node.ID()].LightNode != test.lightNode {
t.Fatalf("peer LightNode flag is %v, should be %v", pt.bzz.handshakes[node.ID()].LightNode, test.lightNode)
}
case <-time.After(10 * time.Second):
t.Fatal("test timeout")
}
})
}
}


@ -0,0 +1,79 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import "github.com/ethereum/go-ethereum/p2p/enode"
// BucketKey is the type that should be used for keys in simulation buckets.
type BucketKey string
// NodeItem returns an item set in the ServiceFunc function for a particular node.
func (s *Simulation) NodeItem(id enode.ID, key interface{}) (value interface{}, ok bool) {
s.mu.Lock()
defer s.mu.Unlock()
if _, ok := s.buckets[id]; !ok {
return nil, false
}
return s.buckets[id].Load(key)
}
// SetNodeItem sets a new item associated with the node with provided NodeID.
// Buckets should be used to avoid managing separate simulation global state.
func (s *Simulation) SetNodeItem(id enode.ID, key interface{}, value interface{}) {
s.mu.Lock()
defer s.mu.Unlock()
s.buckets[id].Store(key, value)
}
// NodesItems returns a map of items from all nodes that are all set under the
// same BucketKey.
func (s *Simulation) NodesItems(key interface{}) (values map[enode.ID]interface{}) {
s.mu.RLock()
defer s.mu.RUnlock()
ids := s.NodeIDs()
values = make(map[enode.ID]interface{}, len(ids))
for _, id := range ids {
if _, ok := s.buckets[id]; !ok {
continue
}
if v, ok := s.buckets[id].Load(key); ok {
values[id] = v
}
}
return values
}
// UpNodesItems returns a map of items with the same BucketKey from all nodes that are up.
func (s *Simulation) UpNodesItems(key interface{}) (values map[enode.ID]interface{}) {
s.mu.RLock()
defer s.mu.RUnlock()
ids := s.UpNodeIDs()
values = make(map[enode.ID]interface{})
for _, id := range ids {
if _, ok := s.buckets[id]; !ok {
continue
}
if v, ok := s.buckets[id].Load(key); ok {
values[id] = v
}
}
return values
}


@ -0,0 +1,155 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"sync"
"testing"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
)
// TestServiceBucket tests all bucket functionality using subtests.
// It constructs a simulation of two nodes by adding items to their buckets
// in the ServiceFunc constructor, then by SetNodeItem. Testing UpNodesItems
// is done by stopping one node and validating availability of its items.
func TestServiceBucket(t *testing.T) {
testKey := "Key"
testValue := "Value"
sim := New(map[string]ServiceFunc{
"noop": func(ctx *adapters.ServiceContext, b *sync.Map) (node.Service, func(), error) {
b.Store(testKey, testValue+ctx.Config.ID.String())
return newNoopService(), nil, nil
},
})
defer sim.Close()
id1, err := sim.AddNode()
if err != nil {
t.Fatal(err)
}
id2, err := sim.AddNode()
if err != nil {
t.Fatal(err)
}
t.Run("ServiceFunc bucket Store", func(t *testing.T) {
v, ok := sim.NodeItem(id1, testKey)
if !ok {
t.Fatal("bucket item not found")
}
s, ok := v.(string)
if !ok {
t.Fatal("bucket item value is not string")
}
if s != testValue+id1.String() {
t.Fatalf("expected %q, got %q", testValue+id1.String(), s)
}
v, ok = sim.NodeItem(id2, testKey)
if !ok {
t.Fatal("bucket item not found")
}
s, ok = v.(string)
if !ok {
t.Fatal("bucket item value is not string")
}
if s != testValue+id2.String() {
t.Fatalf("expected %q, got %q", testValue+id2.String(), s)
}
})
customKey := "anotherKey"
customValue := "anotherValue"
t.Run("SetNodeItem", func(t *testing.T) {
sim.SetNodeItem(id1, customKey, customValue)
v, ok := sim.NodeItem(id1, customKey)
if !ok {
t.Fatal("bucket item not found")
}
s, ok := v.(string)
if !ok {
t.Fatal("bucket item value is not string")
}
if s != customValue {
t.Fatalf("expected %q, got %q", customValue, s)
}
_, ok = sim.NodeItem(id2, customKey)
if ok {
t.Fatal("bucket item should not be found")
}
})
if err := sim.StopNode(id2); err != nil {
t.Fatal(err)
}
t.Run("UpNodesItems", func(t *testing.T) {
items := sim.UpNodesItems(testKey)
v, ok := items[id1]
if !ok {
t.Errorf("node 1 item not found")
}
s, ok := v.(string)
if !ok {
t.Fatal("node 1 item value is not string")
}
if s != testValue+id1.String() {
t.Fatalf("expected %q, got %q", testValue+id1.String(), s)
}
_, ok = items[id2]
if ok {
t.Errorf("node 2 item should not be found")
}
})
t.Run("NodeItems", func(t *testing.T) {
items := sim.NodesItems(testKey)
v, ok := items[id1]
if !ok {
t.Errorf("node 1 item not found")
}
s, ok := v.(string)
if !ok {
t.Fatal("node 1 item value is not string")
}
if s != testValue+id1.String() {
t.Fatalf("expected %q, got %q", testValue+id1.String(), s)
}
v, ok = items[id2]
if !ok {
t.Errorf("node 2 item not found")
}
s, ok = v.(string)
if !ok {
t.Fatal("node 1 item value is not string")
}
if s != testValue+id2.String() {
t.Fatalf("expected %q, got %q", testValue+id2.String(), s)
}
})
}


@ -0,0 +1,217 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"context"
"sync"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations"
)
// PeerEvent is the type of the channel returned by Simulation.PeerEvents.
type PeerEvent struct {
// NodeID is the ID of node that the event is caught on.
NodeID enode.ID
// PeerID is the ID of the peer node that the event is caught on.
PeerID enode.ID
// Event is the event that is caught.
Event *simulations.Event
// Error is the error that may have happened during event watching.
Error error
}
// PeerEventsFilter defines a filter on PeerEvents to pass through only events
// with the defined properties. Use PeerEventsFilter methods to set the required options.
type PeerEventsFilter struct {
eventType simulations.EventType
connUp *bool
msgReceive *bool
protocol *string
msgCode *uint64
}
// NewPeerEventsFilter returns a new PeerEventsFilter instance.
func NewPeerEventsFilter() *PeerEventsFilter {
return &PeerEventsFilter{}
}
// Connect sets the filter to events when two nodes connect.
func (f *PeerEventsFilter) Connect() *PeerEventsFilter {
f.eventType = simulations.EventTypeConn
b := true
f.connUp = &b
return f
}
// Drop sets the filter to events when two nodes disconnect.
func (f *PeerEventsFilter) Drop() *PeerEventsFilter {
f.eventType = simulations.EventTypeConn
b := false
f.connUp = &b
return f
}
// ReceivedMessages sets the filter to only messages that are received.
func (f *PeerEventsFilter) ReceivedMessages() *PeerEventsFilter {
f.eventType = simulations.EventTypeMsg
b := true
f.msgReceive = &b
return f
}
// SentMessages sets the filter to only messages that are sent.
func (f *PeerEventsFilter) SentMessages() *PeerEventsFilter {
f.eventType = simulations.EventTypeMsg
b := false
f.msgReceive = &b
return f
}
// Protocol sets the filter to only one message protocol.
func (f *PeerEventsFilter) Protocol(p string) *PeerEventsFilter {
f.eventType = simulations.EventTypeMsg
f.protocol = &p
return f
}
// MsgCode sets the filter to only one msg code.
func (f *PeerEventsFilter) MsgCode(c uint64) *PeerEventsFilter {
f.eventType = simulations.EventTypeMsg
f.msgCode = &c
return f
}
// PeerEvents returns a channel of events that are captured by the admin peerEvents
// subscription on nodes with the provided NodeIDs. Additional filters can be set to ignore
// events that are not relevant.
func (s *Simulation) PeerEvents(ctx context.Context, ids []enode.ID, filters ...*PeerEventsFilter) <-chan PeerEvent {
eventC := make(chan PeerEvent)
// wait group to make sure all subscriptions to admin peerEvents are established
// before this function returns.
var subsWG sync.WaitGroup
for _, id := range ids {
s.shutdownWG.Add(1)
subsWG.Add(1)
go func(id enode.ID) {
defer s.shutdownWG.Done()
events := make(chan *simulations.Event)
sub := s.Net.Events().Subscribe(events)
defer sub.Unsubscribe()
subsWG.Done()
for {
select {
case <-ctx.Done():
if err := ctx.Err(); err != nil {
select {
case eventC <- PeerEvent{NodeID: id, Error: err}:
case <-s.Done():
}
}
return
case <-s.Done():
return
case e := <-events:
// ignore control events
if e.Control {
continue
}
match := len(filters) == 0 // if there are no filters match all events
for _, f := range filters {
if f.eventType == simulations.EventTypeConn && e.Conn != nil {
if *f.connUp != e.Conn.Up {
continue
}
// all connection filter parameters matched, break the loop
match = true
break
}
if f.eventType == simulations.EventTypeMsg && e.Msg != nil {
if f.msgReceive != nil && *f.msgReceive != e.Msg.Received {
continue
}
if f.protocol != nil && *f.protocol != e.Msg.Protocol {
continue
}
if f.msgCode != nil && *f.msgCode != e.Msg.Code {
continue
}
// all message filter parameters matched, break the loop
match = true
break
}
}
var peerID enode.ID
switch e.Type {
case simulations.EventTypeConn:
peerID = e.Conn.One
if peerID == id {
peerID = e.Conn.Other
}
case simulations.EventTypeMsg:
peerID = e.Msg.One
if peerID == id {
peerID = e.Msg.Other
}
}
if match {
select {
case eventC <- PeerEvent{NodeID: id, PeerID: peerID, Event: e}:
case <-ctx.Done():
if err := ctx.Err(); err != nil {
select {
case eventC <- PeerEvent{NodeID: id, PeerID: peerID, Error: err}:
case <-s.Done():
}
}
return
case <-s.Done():
return
}
}
case err := <-sub.Err():
if err != nil {
select {
case eventC <- PeerEvent{NodeID: id, Error: err}:
case <-ctx.Done():
if err := ctx.Err(); err != nil {
select {
case eventC <- PeerEvent{NodeID: id, Error: err}:
case <-s.Done():
}
}
return
case <-s.Done():
return
}
}
}
}
}(id)
}
// wait for all subscriptions to be established
subsWG.Wait()
return eventC
}


@ -0,0 +1,107 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"context"
"sync"
"testing"
"time"
)
// TestPeerEvents creates simulation, adds two nodes,
// register for peer events, connects nodes in a chain
// and waits for the number of connection events to
// be received.
func TestPeerEvents(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
_, err := sim.AddNodes(2)
if err != nil {
t.Fatal(err)
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
events := sim.PeerEvents(ctx, sim.NodeIDs())
// two nodes -> two connection events
expectedEventCount := 2
var wg sync.WaitGroup
wg.Add(expectedEventCount)
go func() {
for e := range events {
if e.Error != nil {
if e.Error == context.Canceled {
return
}
t.Error(e.Error)
continue
}
wg.Done()
}
}()
err = sim.Net.ConnectNodesChain(sim.NodeIDs())
if err != nil {
t.Fatal(err)
}
wg.Wait()
}
func TestPeerEventsTimeout(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
_, err := sim.AddNodes(2)
if err != nil {
t.Fatal(err)
}
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()
events := sim.PeerEvents(ctx, sim.NodeIDs())
done := make(chan struct{})
errC := make(chan error)
go func() {
for e := range events {
if e.Error == context.Canceled {
return
}
if e.Error == context.DeadlineExceeded {
close(done)
return
} else {
errC <- e.Error
}
}
}()
select {
case <-time.After(time.Second):
t.Fatal("no context deadline received")
case err := <-errC:
t.Fatal(err)
case <-done:
// all good, context deadline detected
}
}


@ -0,0 +1,141 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation_test
import (
"context"
"fmt"
"sync"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/network"
"github.com/ethersphere/swarm/network/simulation"
)
// Every node can have a Kademlia associated with it using the node bucket under the
// BucketKeyKademlia key. This makes it possible to use WaitTillHealthy to block until
// all nodes have their Kademlias healthy.
func ExampleSimulation_WaitTillHealthy() {
sim := simulation.New(map[string]simulation.ServiceFunc{
"bzz": func(ctx *adapters.ServiceContext, b *sync.Map) (node.Service, func(), error) {
addr := network.NewAddr(ctx.Config.Node())
hp := network.NewHiveParams()
hp.Discovery = false
config := &network.BzzConfig{
OverlayAddr: addr.Over(),
UnderlayAddr: addr.Under(),
HiveParams: hp,
}
kad := network.NewKademlia(addr.Over(), network.NewKadParams())
// store kademlia in node's bucket under BucketKeyKademlia
// so that it can be found by WaitTillHealthy method.
b.Store(simulation.BucketKeyKademlia, kad)
return network.NewBzz(config, kad, nil, nil, nil), nil, nil
},
})
defer sim.Close()
_, err := sim.AddNodesAndConnectRing(10)
if err != nil {
// handle error properly...
panic(err)
}
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
ill, err := sim.WaitTillHealthy(ctx)
if err != nil {
// inspect the latest detected not healthy kademlias
for id, kad := range ill {
fmt.Println("Node", id)
fmt.Println(kad.String())
}
// handle error...
}
// continue with the test
}
// Watch all peer events in the simulation network, by receiving from a channel.
func ExampleSimulation_PeerEvents() {
sim := simulation.New(nil)
defer sim.Close()
events := sim.PeerEvents(context.Background(), sim.NodeIDs())
go func() {
for e := range events {
if e.Error != nil {
log.Error("peer event", "err", e.Error)
continue
}
log.Info("peer event", "node", e.NodeID, "peer", e.PeerID, "type", e.Event.Type)
}
}()
}
// Detect when a node drops a peer.
func ExampleSimulation_PeerEvents_disconnections() {
sim := simulation.New(nil)
defer sim.Close()
disconnections := sim.PeerEvents(
context.Background(),
sim.NodeIDs(),
simulation.NewPeerEventsFilter().Drop(),
)
go func() {
for d := range disconnections {
if d.Error != nil {
log.Error("peer drop", "err", d.Error)
continue
}
log.Warn("peer drop", "node", d.NodeID, "peer", d.PeerID)
}
}()
}
// Watch multiple types of events or messages. In this case, they differ only
// by MsgCode, but filters can be set for different types or protocols, too.
func ExampleSimulation_PeerEvents_multipleFilters() {
sim := simulation.New(nil)
defer sim.Close()
msgs := sim.PeerEvents(
context.Background(),
sim.NodeIDs(),
// Watch when bzz messages 1 and 4 are received.
simulation.NewPeerEventsFilter().ReceivedMessages().Protocol("bzz").MsgCode(1),
simulation.NewPeerEventsFilter().ReceivedMessages().Protocol("bzz").MsgCode(4),
)
go func() {
for m := range msgs {
if m.Error != nil {
log.Error("bzz message", "err", m.Error)
continue
}
log.Info("bzz message", "node", m.NodeID, "peer", m.PeerID)
}
}()
}


@ -0,0 +1,68 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"fmt"
"net/http"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/simulations"
)
// Package defaults.
var (
DefaultHTTPSimAddr = ":8888"
)
//WithServer implements the builder pattern constructor for Simulation to
//start with an HTTP server
func (s *Simulation) WithServer(addr string) *Simulation {
//assign default addr if nothing provided
if addr == "" {
addr = DefaultHTTPSimAddr
}
log.Info(fmt.Sprintf("Initializing simulation server on %s...", addr))
//initialize the HTTP server
s.handler = simulations.NewServer(s.Net)
s.runC = make(chan struct{})
//add swarm specific routes to the HTTP server
s.addSimulationRoutes()
s.httpSrv = &http.Server{
Addr: addr,
Handler: s.handler,
}
go func() {
err := s.httpSrv.ListenAndServe()
if err != nil {
log.Error("Error starting the HTTP server", "error", err)
}
}()
return s
}
//register additional HTTP routes
func (s *Simulation) addSimulationRoutes() {
s.handler.POST("/runsim", s.RunSimulation)
}
// RunSimulation is the actual POST endpoint runner
func (s *Simulation) RunSimulation(w http.ResponseWriter, req *http.Request) {
log.Debug("RunSimulation endpoint running")
s.runC <- struct{}{}
w.WriteHeader(http.StatusOK)
}


@ -0,0 +1,110 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"context"
"fmt"
"net/http"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
)
func TestSimulationWithHTTPServer(t *testing.T) {
log.Debug("Init simulation")
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
sim := New(
map[string]ServiceFunc{
"noop": func(_ *adapters.ServiceContext, b *sync.Map) (node.Service, func(), error) {
return newNoopService(), nil, nil
},
}).WithServer(DefaultHTTPSimAddr)
defer sim.Close()
log.Debug("Done.")
_, err := sim.AddNode()
if err != nil {
t.Fatal(err)
}
log.Debug("Starting sim round and let it time out...")
//first test that running without sending to the channel will actually
//block the simulation, so let it time out
result := sim.Run(ctx, func(ctx context.Context, sim *Simulation) error {
log.Debug("Just start the sim without any action and wait for the timeout")
//ensure with a Sleep that simulation doesn't terminate before the timeout
time.Sleep(2 * time.Second)
return nil
})
if result.Error != nil {
if result.Error.Error() == "context deadline exceeded" {
log.Debug("Expected timeout error received")
} else {
t.Fatal(result.Error)
}
}
//now run it again and send the expected signal on the waiting channel,
//then close the simulation
log.Debug("Starting sim round and wait for frontend signal...")
//this time the timeout should be long enough so that it doesn't kick in too early
ctx, cancel2 := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel2()
errC := make(chan error, 1)
go triggerSimulationRun(t, errC)
result = sim.Run(ctx, func(ctx context.Context, sim *Simulation) error {
log.Debug("This run waits for the run signal from `frontend`...")
//ensure with a Sleep that simulation doesn't terminate before the signal is received
time.Sleep(2 * time.Second)
return nil
})
if result.Error != nil {
t.Fatal(result.Error)
}
if err := <-errC; err != nil {
t.Fatal(err)
}
log.Debug("Test terminated successfully")
}
func triggerSimulationRun(t *testing.T, errC chan error) {
//We need to first wait for the sim HTTP server to start running...
time.Sleep(2 * time.Second)
//then we can send the signal
log.Debug("Sending run signal to simulation: POST /runsim...")
resp, err := http.Post(fmt.Sprintf("http://localhost%s/runsim", DefaultHTTPSimAddr), "application/json", nil)
if err != nil {
errC <- fmt.Errorf("Request failed: %v", err)
return
}
log.Debug("Signal sent")
if resp.StatusCode != http.StatusOK {
errC <- fmt.Errorf("err %s", resp.Status)
return
}
errC <- resp.Body.Close()
}


@ -0,0 +1,203 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"context"
"encoding/binary"
"encoding/hex"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations"
"github.com/ethersphere/swarm/network"
)
// BucketKeyKademlia is the key to be used for storing the kademlia
// instance for a particular node, usually inside the ServiceFunc function.
var BucketKeyKademlia BucketKey = "kademlia"
// WaitTillHealthy blocks until the health of all kademlias is true.
// If the error is not nil, a map of the kademlias that were found not healthy is returned.
// TODO: Check correctness since change in kademlia depth calculation logic
func (s *Simulation) WaitTillHealthy(ctx context.Context) (ill map[enode.ID]*network.Kademlia, err error) {
// Prepare PeerPot map for checking Kademlia health
var ppmap map[string]*network.PeerPot
kademlias := s.kademlias()
addrs := make([][]byte, 0, len(kademlias))
// TODO verify that all kademlias have same params
for _, k := range kademlias {
addrs = append(addrs, k.BaseAddr())
}
ppmap = network.NewPeerPotMap(s.neighbourhoodSize, addrs)
// Wait for healthy Kademlia on every node before checking files
ticker := time.NewTicker(200 * time.Millisecond)
defer ticker.Stop()
ill = make(map[enode.ID]*network.Kademlia)
for {
select {
case <-ctx.Done():
return ill, ctx.Err()
case <-ticker.C:
for k := range ill {
delete(ill, k)
}
log.Debug("kademlia health check", "addr count", len(addrs), "kad len", len(kademlias))
for id, k := range kademlias {
//PeerPot for this node
addr := common.Bytes2Hex(k.BaseAddr())
pp := ppmap[addr]
//call Healthy RPC
h := k.GetHealthInfo(pp)
//print info
log.Debug(k.String())
log.Debug("kademlia", "connectNN", h.ConnectNN, "knowNN", h.KnowNN)
log.Debug("kademlia", "health", h.ConnectNN && h.KnowNN, "addr", hex.EncodeToString(k.BaseAddr()), "node", id)
log.Debug("kademlia", "ill condition", !h.ConnectNN, "addr", hex.EncodeToString(k.BaseAddr()), "node", id)
if !h.Healthy() {
ill[id] = k
}
}
if len(ill) == 0 {
return nil, nil
}
}
}
}
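// Illustrative usage sketch, not from the original file (the function name is
// hypothetical): callers typically bound the wait with a context timeout and
// log the kademlia tables of the nodes that are still unhealthy on failure.
func exampleWaitTillHealthy(s *Simulation) error {
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
defer cancel()
ill, err := s.WaitTillHealthy(ctx)
if err != nil {
for id, k := range ill {
log.Debug("unhealthy kademlia", "node", id, "table", k.String())
}
}
return err
}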
// kademlias returns all Kademlia instances that are set
// in simulation bucket.
func (s *Simulation) kademlias() (ks map[enode.ID]*network.Kademlia) {
items := s.UpNodesItems(BucketKeyKademlia)
log.Debug("kademlia len items", "len", len(items))
ks = make(map[enode.ID]*network.Kademlia, len(items))
for id, v := range items {
k, ok := v.(*network.Kademlia)
if !ok {
continue
}
ks[id] = k
}
return ks
}
// WaitTillSnapshotRecreated blocks until all the connections specified
// in the snapshot are registered in the kademlia.
// It differs from WaitTillHealthy, which waits only until all the kademlias are
// healthy (which may happen even before all the connections are established).
func (s *Simulation) WaitTillSnapshotRecreated(ctx context.Context, snap *simulations.Snapshot) error {
expected := getSnapshotConnections(snap.Conns)
ticker := time.NewTicker(150 * time.Millisecond)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return ctx.Err()
case <-ticker.C:
actual := s.getActualConnections()
if isAllDeployed(expected, actual) {
return nil
}
}
}
}
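// Usage sketch, not from the original file (the function name is hypothetical):
// after loading a snapshot into a simulation, WaitTillSnapshotRecreated blocks
// until every connection recorded in the snapshot is visible on both peers.
func exampleWaitTillSnapshotRecreated(s *Simulation, snap *simulations.Snapshot) error {
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
if err := s.Net.Load(snap); err != nil {
return err
}
return s.WaitTillSnapshotRecreated(ctx, snap)
}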
// getActualConnections returns identifiers of connections that both peers report as active.
func (s *Simulation) getActualConnections() (res []uint64) {
kademlias := s.kademlias()
for base, k := range kademlias {
k.EachConn(base[:], 256, func(p *network.Peer, _ int) bool {
res = append(res, getConnectionHash(base, p.ID()))
return true
})
}
// only list those connections that appear twice (both peers should recognize connection as active)
res = removeDuplicatesAndSingletons(res)
return res
}
func getSnapshotConnections(conns []simulations.Conn) (res []uint64) {
for _, c := range conns {
res = append(res, getConnectionHash(c.One, c.Other))
}
return res
}
// returns a symmetric connection identifier: an 8-byte XOR of the two node IDs
func getConnectionHash(a, b enode.ID) uint64 {
var h [8]byte
for i := 0; i < 8; i++ {
h[i] = a[i] ^ b[i]
}
res := binary.LittleEndian.Uint64(h[:])
return res
}
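// Illustrative sketch, not from the original file (the function name and IDs
// are arbitrary): because the identifier XORs the first 8 bytes of both node
// IDs, it is symmetric, so both peers of a connection derive the same value.
func exampleConnectionHashIsSymmetric() bool {
var a, b enode.ID
a[0], b[0] = 0x01, 0x02
return getConnectionHash(a, b) == getConnectionHash(b, a) // always true
}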
// returns true if all connections in expected are listed in actual
func isAllDeployed(expected []uint64, actual []uint64) bool {
if len(expected) == 0 {
return true
}
exp := make([]uint64, len(expected))
copy(exp, expected)
for _, c := range actual {
// remove value c from exp
for i := 0; i < len(exp); i++ {
if exp[i] == c {
exp = removeListElement(exp, i)
if len(exp) == 0 {
return true
}
}
}
}
return len(exp) == 0
}
// removeListElement removes the element at index i by swapping in the last
// element; it is O(1) but does not preserve order.
func removeListElement(arr []uint64, i int) []uint64 {
last := len(arr) - 1
arr[i] = arr[last]
arr = arr[:last]
return arr
}
// removeDuplicatesAndSingletons keeps exactly one copy of every value that
// appears at least twice and drops values that appear only once.
func removeDuplicatesAndSingletons(arr []uint64) []uint64 {
for i := 0; i < len(arr); {
found := false
for j := i + 1; j < len(arr); j++ {
if arr[i] == arr[j] {
arr = removeListElement(arr, j) // remove duplicate
found = true
break
}
}
if found {
i++
} else {
arr = removeListElement(arr, i) // remove singleton
}
}
return arr
}
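// Minimal sketch, not from the original file (the function name and values are
// arbitrary): a connection counts only when both peers report it, so values
// appearing once are dropped while duplicated values are kept exactly once.
func exampleRemoveDuplicatesAndSingletons() []uint64 {
// 1 appears twice (a mutually acknowledged connection), 2 only once.
return removeDuplicatesAndSingletons([]uint64{1, 2, 1}) // returns [1]
}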

View File

@ -0,0 +1,310 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"context"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/network"
)
/*
TestWaitTillHealthy tests that we indeed get a healthy network after we wait for it.
For this to be tested, a bit of a snake tail bite needs to happen:
* First we create a first simulation
* Run it as nodes connected in a ring
* Wait until the network is healthy
* Then we create a snapshot
* With this snapshot we create a new simulation
* This simulation is expected to have a healthy configuration, as it uses the snapshot
* Thus we just iterate all nodes and check that their kademlias are healthy
* If all kademlias are healthy, the test succeeded, otherwise it failed
*/
func TestWaitTillHealthy(t *testing.T) {
t.Skip("this test is flaky; disabling till underlying problem is solved")
testNodesNum := 10
// create the first simulation
sim := New(createSimServiceMap(true))
// connect and...
nodeIDs, err := sim.AddNodesAndConnectRing(testNodesNum)
if err != nil {
t.Fatal(err)
}
// array of all overlay addresses
var addrs [][]byte
// iterate once to be able to build the peer map
for _, node := range nodeIDs {
//get the kademlia overlay address from this ID
a := node.Bytes()
//append it to the array of all overlay addresses
addrs = append(addrs, a)
}
// build a PeerPot only once
pp := network.NewPeerPotMap(network.NewKadParams().NeighbourhoodSize, addrs)
ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()
// ...wait until healthy
ill, err := sim.WaitTillHealthy(ctx)
if err != nil {
for id, kad := range ill {
t.Log("Node", id)
t.Log(kad.String())
}
t.Fatal(err)
}
// now create a snapshot of this network
snap, err := sim.Net.Snapshot()
if err != nil {
t.Fatal(err)
}
// close the initial simulation
sim.Close()
// create a control simulation
controlSim := New(createSimServiceMap(false))
defer controlSim.Close()
// load the snapshot into this control simulation
err = controlSim.Net.Load(snap)
if err != nil {
t.Fatal(err)
}
_, err = controlSim.WaitTillHealthy(ctx)
if err != nil {
t.Fatal(err)
}
for _, node := range nodeIDs {
// ...get its kademlia
item, ok := controlSim.NodeItem(node, BucketKeyKademlia)
if !ok {
t.Fatal("No kademlia bucket item")
}
kad := item.(*network.Kademlia)
// get its base address
kid := common.Bytes2Hex(kad.BaseAddr())
//get the health info
info := kad.GetHealthInfo(pp[kid])
log.Trace("Health info", "info", info)
// check that it is healthy
healthy := info.Healthy()
if !healthy {
t.Fatalf("Expected node %v of control simulation to be healthy, but it is not, unhealthy kademlias: %v", node, kad.String())
}
}
}
// createSimServiceMap returns the services map;
// it creates the sim services with or without discovery enabled,
// depending on the flag passed
func createSimServiceMap(discovery bool) map[string]ServiceFunc {
return map[string]ServiceFunc{
"bzz": func(ctx *adapters.ServiceContext, b *sync.Map) (node.Service, func(), error) {
addr := network.NewAddr(ctx.Config.Node())
hp := network.NewHiveParams()
hp.Discovery = discovery
config := &network.BzzConfig{
OverlayAddr: addr.Over(),
UnderlayAddr: addr.Under(),
HiveParams: hp,
}
kad := network.NewKademlia(addr.Over(), network.NewKadParams())
// store kademlia in node's bucket under BucketKeyKademlia
// so that it can be found by WaitTillHealthy method.
b.Store(BucketKeyKademlia, kad)
return network.NewBzz(config, kad, nil, nil, nil), nil, nil
},
}
}
// TestWaitTillSnapshotRecreated tests that we indeed have a network
// configuration specified in the snapshot file, after we wait for it.
//
// First we create a first simulation
// Run it as nodes connected in a ring
// Wait until the network is healthy
// Then we create a snapshot
// With this snapshot we create a new simulation
// Call WaitTillSnapshotRecreated() function and wait until it returns
// Iterate the nodes and check if all the connections are successfully recreated
func TestWaitTillSnapshotRecreated(t *testing.T) {
t.Skip("test is flaky. disabling until underlying problem is addressed")
var err error
sim := New(createSimServiceMap(true))
_, err = sim.AddNodesAndConnectRing(16)
if err != nil {
t.Fatal(err)
}
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
_, err = sim.WaitTillHealthy(ctx)
if err != nil {
t.Fatal(err)
}
originalConnections := sim.getActualConnections()
snap, err := sim.Net.Snapshot()
sim.Close()
if err != nil {
t.Fatal(err)
}
controlSim := New(createSimServiceMap(false))
defer controlSim.Close()
err = controlSim.Net.Load(snap)
if err != nil {
t.Fatal(err)
}
err = controlSim.WaitTillSnapshotRecreated(ctx, snap)
if err != nil {
t.Fatal(err)
}
controlConnections := controlSim.getActualConnections()
for _, c := range originalConnections {
if !exist(controlConnections, c) {
t.Fatal("connection was not recreated")
}
}
}
// exist returns true if val is found in arr
func exist(arr []uint64, val uint64) bool {
for _, c := range arr {
if c == val {
return true
}
}
return false
}
func TestRemoveDuplicatesAndSingletons(t *testing.T) {
singletons := []uint64{
0x3c127c6f6cb026b0,
0x0f45190d72e71fc5,
0xb0184c02449e0bb6,
0xa85c7b84239c54d3,
0xe3b0c44298fc1c14,
0x9afbf4c8996fb924,
0x27ae41e4649b934c,
0xa495991b7852b855,
}
doubles := []uint64{
0x1b879f878de7fc7a,
0xc6791470521bdab4,
0xdd34b0ee39bbccc6,
0x4d904fbf0f31da10,
0x6403c2560432c8f8,
0x18954e33cf3ad847,
0x90db00e98dc7a8a6,
0x92886b0dfcc1809b,
}
var arr []uint64
arr = append(arr, doubles...)
arr = append(arr, singletons...)
arr = append(arr, doubles...)
arr = removeDuplicatesAndSingletons(arr)
for _, i := range singletons {
if exist(arr, i) {
t.Fatalf("singleton not removed: %d", i)
}
}
for _, i := range doubles {
if !exist(arr, i) {
t.Fatalf("wrong value removed: %d", i)
}
}
for j := 0; j < len(doubles); j++ {
v := doubles[j] + singletons[j]
if exist(arr, v) {
t.Fatalf("non-existing value found, index: %d", j)
}
}
}
func TestIsAllDeployed(t *testing.T) {
a := []uint64{
0x3c127c6f6cb026b0,
0x0f45190d72e71fc5,
0xb0184c02449e0bb6,
0xa85c7b84239c54d3,
0xe3b0c44298fc1c14,
0x9afbf4c8996fb924,
0x27ae41e4649b934c,
0xa495991b7852b855,
}
b := []uint64{
0x1b879f878de7fc7a,
0xc6791470521bdab4,
0xdd34b0ee39bbccc6,
0x4d904fbf0f31da10,
0x6403c2560432c8f8,
0x18954e33cf3ad847,
0x90db00e98dc7a8a6,
0x92886b0dfcc1809b,
}
var c []uint64
c = append(c, a...)
c = append(c, b...)
if !isAllDeployed(a, c) {
t.Fatal("isAllDeployed failed")
}
if !isAllDeployed(b, c) {
t.Fatal("isAllDeployed failed")
}
if isAllDeployed(c, a) {
t.Fatal("isAllDeployed failed: false positive")
}
if isAllDeployed(c, b) {
t.Fatal("isAllDeployed failed: false positive")
}
c = c[2:]
if isAllDeployed(a, c) {
t.Fatal("isAllDeployed failed: false positive")
}
if !isAllDeployed(b, c) {
t.Fatal("isAllDeployed failed")
}
}

341
network/simulation/node.go Normal file
View File

@ -0,0 +1,341 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"bytes"
"context"
"crypto/ecdsa"
"encoding/json"
"errors"
"io/ioutil"
"math/rand"
"os"
"sync"
"time"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/network"
)
var (
BucketKeyBzzPrivateKey BucketKey = "bzzprivkey"
)
// NodeIDs returns NodeIDs for all nodes in the network.
func (s *Simulation) NodeIDs() (ids []enode.ID) {
nodes := s.Net.GetNodes()
ids = make([]enode.ID, len(nodes))
for i, node := range nodes {
ids[i] = node.ID()
}
return ids
}
// UpNodeIDs returns NodeIDs for nodes that are up in the network.
func (s *Simulation) UpNodeIDs() (ids []enode.ID) {
nodes := s.Net.GetNodes()
for _, node := range nodes {
if node.Up() {
ids = append(ids, node.ID())
}
}
return ids
}
// DownNodeIDs returns NodeIDs for nodes that are stopped in the network.
func (s *Simulation) DownNodeIDs() (ids []enode.ID) {
nodes := s.Net.GetNodes()
for _, node := range nodes {
if !node.Up() {
ids = append(ids, node.ID())
}
}
return ids
}
// AddNodeOption defines the option that can be passed
// to Simulation.AddNode method.
type AddNodeOption func(*adapters.NodeConfig)
// AddNodeWithMsgEvents sets the EnableMsgEvents option
// to NodeConfig.
func AddNodeWithMsgEvents(enable bool) AddNodeOption {
return func(o *adapters.NodeConfig) {
o.EnableMsgEvents = enable
}
}
// AddNodeWithService specifies a service that should be
// started on a node. This option can be repeated as a variadic
// argument to AddNode and other add-node related methods.
// If AddNodeWithService is not specified, all services will be started.
func AddNodeWithService(serviceName string) AddNodeOption {
return func(o *adapters.NodeConfig) {
o.Services = append(o.Services, serviceName)
}
}
// AddNode creates a new node with random configuration,
// applies provided options to the config and adds the node to network.
// By default all services will be started on a node. If one or more
// AddNodeWithService option are provided, only specified services will be started.
func (s *Simulation) AddNode(opts ...AddNodeOption) (id enode.ID, err error) {
conf := adapters.RandomNodeConfig()
for _, o := range opts {
o(conf)
}
if len(conf.Services) == 0 {
conf.Services = s.serviceNames
}
// add ENR records to the underlying node
// most importantly the bzz overlay address
//
// for now we have no way of setting bootnodes or lightnodes in sims
// so we just let them be set to false
// it should perhaps be possible to override them with an AddNodeOption
bzzPrivateKey, err := BzzPrivateKeyFromConfig(conf)
if err != nil {
return enode.ID{}, err
}
enodeParams := &network.EnodeParams{
PrivateKey: bzzPrivateKey,
}
record, err := network.NewEnodeRecord(enodeParams)
if err != nil {
return enode.ID{}, err
}
conf.Record = *record
// Add the bzz address to the node config
node, err := s.Net.NewNodeWithConfig(conf)
if err != nil {
return id, err
}
s.buckets[node.ID()] = new(sync.Map)
s.SetNodeItem(node.ID(), BucketKeyBzzPrivateKey, bzzPrivateKey)
return node.ID(), s.Net.Start(node.ID())
}
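// Usage sketch, not from the original file (the function name and the "bzz"
// service name are assumptions): add a single node with message events enabled
// and only one named service started.
func exampleAddNodeWithOptions(sim *Simulation) (enode.ID, error) {
return sim.AddNode(AddNodeWithMsgEvents(true), AddNodeWithService("bzz"))
}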
// AddNodes creates new nodes with random configurations,
// applies provided options to the config and adds nodes to network.
func (s *Simulation) AddNodes(count int, opts ...AddNodeOption) (ids []enode.ID, err error) {
ids = make([]enode.ID, 0, count)
for i := 0; i < count; i++ {
id, err := s.AddNode(opts...)
if err != nil {
return nil, err
}
ids = append(ids, id)
}
return ids, nil
}
// AddNodesAndConnectFull is a helper method that combines
// AddNodes and ConnectNodesFull. Only new nodes will be connected.
func (s *Simulation) AddNodesAndConnectFull(count int, opts ...AddNodeOption) (ids []enode.ID, err error) {
if count < 2 {
return nil, errors.New("count of nodes must be at least 2")
}
ids, err = s.AddNodes(count, opts...)
if err != nil {
return nil, err
}
err = s.Net.ConnectNodesFull(ids)
if err != nil {
return nil, err
}
return ids, nil
}
// AddNodesAndConnectChain is a helper method that combines
// AddNodes and ConnectNodesChain. The chain will be continued from the last
// added node, if there is one in simulation using ConnectToLastNode method.
func (s *Simulation) AddNodesAndConnectChain(count int, opts ...AddNodeOption) (ids []enode.ID, err error) {
if count < 2 {
return nil, errors.New("count of nodes must be at least 2")
}
id, err := s.AddNode(opts...)
if err != nil {
return nil, err
}
err = s.Net.ConnectToLastNode(id)
if err != nil {
return nil, err
}
ids, err = s.AddNodes(count-1, opts...)
if err != nil {
return nil, err
}
ids = append([]enode.ID{id}, ids...)
err = s.Net.ConnectNodesChain(ids)
if err != nil {
return nil, err
}
return ids, nil
}
// AddNodesAndConnectRing is a helper method that combines
// AddNodes and ConnectNodesRing.
func (s *Simulation) AddNodesAndConnectRing(count int, opts ...AddNodeOption) (ids []enode.ID, err error) {
if count < 2 {
return nil, errors.New("count of nodes must be at least 2")
}
ids, err = s.AddNodes(count, opts...)
if err != nil {
return nil, err
}
err = s.Net.ConnectNodesRing(ids)
if err != nil {
return nil, err
}
return ids, nil
}
// AddNodesAndConnectStar is a helper method that combines
// AddNodes and ConnectNodesStar.
func (s *Simulation) AddNodesAndConnectStar(count int, opts ...AddNodeOption) (ids []enode.ID, err error) {
if count < 2 {
return nil, errors.New("count of nodes must be at least 2")
}
ids, err = s.AddNodes(count, opts...)
if err != nil {
return nil, err
}
err = s.Net.ConnectNodesStar(ids[1:], ids[0])
if err != nil {
return nil, err
}
return ids, nil
}
// UploadSnapshot uploads a snapshot to the simulation
// This method tries to open the json file provided, applies the config to all nodes
// and then loads the snapshot into the Simulation network
func (s *Simulation) UploadSnapshot(ctx context.Context, snapshotFile string, opts ...AddNodeOption) error {
f, err := os.Open(snapshotFile)
if err != nil {
return err
}
jsonbyte, err := ioutil.ReadAll(f)
f.Close()
if err != nil {
return err
}
var snap simulations.Snapshot
if err := json.Unmarshal(jsonbyte, &snap); err != nil {
return err
}
//the snapshot probably has the property EnableMsgEvents not set
//set it to true (we need this to wait for messages before uploading)
for i := range snap.Nodes {
snap.Nodes[i].Node.Config.EnableMsgEvents = true
snap.Nodes[i].Node.Config.Services = s.serviceNames
for _, o := range opts {
o(snap.Nodes[i].Node.Config)
}
}
if err := s.Net.Load(&snap); err != nil {
return err
}
return s.WaitTillSnapshotRecreated(ctx, &snap)
}
// StartNode starts a node by NodeID.
func (s *Simulation) StartNode(id enode.ID) (err error) {
return s.Net.Start(id)
}
// StartRandomNode starts a random node.
func (s *Simulation) StartRandomNode() (id enode.ID, err error) {
n := s.Net.GetRandomDownNode()
if n == nil {
return id, ErrNodeNotFound
}
return n.ID(), s.Net.Start(n.ID())
}
// StartRandomNodes starts random nodes.
func (s *Simulation) StartRandomNodes(count int) (ids []enode.ID, err error) {
ids = make([]enode.ID, 0, count)
for i := 0; i < count; i++ {
n := s.Net.GetRandomDownNode()
if n == nil {
return nil, ErrNodeNotFound
}
err = s.Net.Start(n.ID())
if err != nil {
return nil, err
}
ids = append(ids, n.ID())
}
return ids, nil
}
// StopNode stops a node by NodeID.
func (s *Simulation) StopNode(id enode.ID) (err error) {
return s.Net.Stop(id)
}
// StopRandomNode stops a random node.
func (s *Simulation) StopRandomNode() (id enode.ID, err error) {
n := s.Net.GetRandomUpNode()
if n == nil {
return id, ErrNodeNotFound
}
return n.ID(), s.Net.Stop(n.ID())
}
// StopRandomNodes stops random nodes.
func (s *Simulation) StopRandomNodes(count int) (ids []enode.ID, err error) {
ids = make([]enode.ID, 0, count)
for i := 0; i < count; i++ {
n := s.Net.GetRandomUpNode()
if n == nil {
return nil, ErrNodeNotFound
}
err = s.Net.Stop(n.ID())
if err != nil {
return nil, err
}
ids = append(ids, n.ID())
}
return ids, nil
}
// seed the random generator for Simulation.randomNode.
func init() {
rand.Seed(time.Now().UnixNano())
}
// BzzPrivateKeyFromConfig derives a private key for swarm from the node key.
// It returns the private key used to generate the bzz key.
func BzzPrivateKeyFromConfig(conf *adapters.NodeConfig) (*ecdsa.PrivateKey, error) {
// pad the seed key with some arbitrary data, as ecdsa.GenerateKey takes 40 bytes of seed data
privKeyBuf := append(crypto.FromECDSA(conf.PrivateKey), []byte{0x62, 0x7a, 0x7a, 0x62, 0x7a, 0x7a, 0x62, 0x7a}...)
bzzPrivateKey, err := ecdsa.GenerateKey(crypto.S256(), bytes.NewReader(privKeyBuf))
if err != nil {
return nil, err
}
return bzzPrivateKey, nil
}

View File

@ -0,0 +1,446 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"context"
"fmt"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/network"
)
func TestUpDownNodeIDs(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
ids, err := sim.AddNodes(10)
if err != nil {
t.Fatal(err)
}
gotIDs := sim.NodeIDs()
if !equalNodeIDs(ids, gotIDs) {
t.Error("returned nodes are not equal to added ones")
}
stoppedIDs, err := sim.StopRandomNodes(3)
if err != nil {
t.Fatal(err)
}
gotIDs = sim.UpNodeIDs()
for _, id := range gotIDs {
if !sim.Net.GetNode(id).Up() {
t.Errorf("node %s should not be down", id)
}
}
if !equalNodeIDs(ids, append(gotIDs, stoppedIDs...)) {
t.Error("returned nodes are not equal to added ones")
}
gotIDs = sim.DownNodeIDs()
for _, id := range gotIDs {
if sim.Net.GetNode(id).Up() {
t.Errorf("node %s should not be up", id)
}
}
if !equalNodeIDs(stoppedIDs, gotIDs) {
t.Error("returned nodes are not equal to the stopped ones")
}
}
func equalNodeIDs(one, other []enode.ID) bool {
if len(one) != len(other) {
return false
}
var count int
for _, a := range one {
var found bool
for _, b := range other {
if a == b {
found = true
break
}
}
if found {
count++
} else {
return false
}
}
return count == len(one)
}
func TestAddNode(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
id, err := sim.AddNode()
if err != nil {
t.Fatal(err)
}
n := sim.Net.GetNode(id)
if n == nil {
t.Fatal("node not found")
}
if !n.Up() {
t.Error("node not started")
}
}
func TestAddNodeWithMsgEvents(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
id, err := sim.AddNode(AddNodeWithMsgEvents(true))
if err != nil {
t.Fatal(err)
}
if !sim.Net.GetNode(id).Config.EnableMsgEvents {
t.Error("EnableMsgEvents is false")
}
id, err = sim.AddNode(AddNodeWithMsgEvents(false))
if err != nil {
t.Fatal(err)
}
if sim.Net.GetNode(id).Config.EnableMsgEvents {
t.Error("EnableMsgEvents is true")
}
}
func TestAddNodeWithService(t *testing.T) {
sim := New(map[string]ServiceFunc{
"noop1": noopServiceFunc,
"noop2": noopServiceFunc,
})
defer sim.Close()
id, err := sim.AddNode(AddNodeWithService("noop1"))
if err != nil {
t.Fatal(err)
}
n := sim.Net.GetNode(id).Node.(*adapters.SimNode)
if n.Service("noop1") == nil {
t.Error("service noop1 not found on node")
}
if n.Service("noop2") != nil {
t.Error("service noop2 should not be found on node")
}
}
func TestAddNodeMultipleServices(t *testing.T) {
sim := New(map[string]ServiceFunc{
"noop1": noopServiceFunc,
"noop2": noopService2Func,
})
defer sim.Close()
id, err := sim.AddNode()
if err != nil {
t.Fatal(err)
}
n := sim.Net.GetNode(id).Node.(*adapters.SimNode)
if n.Service("noop1") == nil {
t.Error("service noop1 not found on node")
}
if n.Service("noop2") == nil {
t.Error("service noop2 not found on node")
}
}
func TestAddNodeDuplicateServiceError(t *testing.T) {
sim := New(map[string]ServiceFunc{
"noop1": noopServiceFunc,
"noop2": noopServiceFunc,
})
defer sim.Close()
wantErr := "duplicate service: *simulation.noopService"
_, err := sim.AddNode()
if err.Error() != wantErr {
t.Errorf("got error %q, want %q", err, wantErr)
}
}
func TestAddNodes(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
nodesCount := 12
ids, err := sim.AddNodes(nodesCount)
if err != nil {
t.Fatal(err)
}
count := len(ids)
if count != nodesCount {
t.Errorf("expected %v nodes, got %v", nodesCount, count)
}
count = len(sim.Net.GetNodes())
if count != nodesCount {
t.Errorf("expected %v nodes, got %v", nodesCount, count)
}
}
func TestAddNodesAndConnectFull(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
n := 12
ids, err := sim.AddNodesAndConnectFull(n)
if err != nil {
t.Fatal(err)
}
simulations.VerifyFull(t, sim.Net, ids)
}
func TestAddNodesAndConnectChain(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
_, err := sim.AddNodesAndConnectChain(12)
if err != nil {
t.Fatal(err)
}
// add another set of nodes to test
// if two chains are connected
_, err = sim.AddNodesAndConnectChain(7)
if err != nil {
t.Fatal(err)
}
simulations.VerifyChain(t, sim.Net, sim.UpNodeIDs())
}
func TestAddNodesAndConnectRing(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
ids, err := sim.AddNodesAndConnectRing(12)
if err != nil {
t.Fatal(err)
}
simulations.VerifyRing(t, sim.Net, ids)
}
func TestAddNodesAndConnectStar(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
ids, err := sim.AddNodesAndConnectStar(12)
if err != nil {
t.Fatal(err)
}
simulations.VerifyStar(t, sim.Net, ids, 0)
}
// TestUploadSnapshot tests that uploading a snapshot works.
func TestUploadSnapshot(t *testing.T) {
log.Debug("Creating simulation")
s := New(map[string]ServiceFunc{
"bzz": func(ctx *adapters.ServiceContext, b *sync.Map) (node.Service, func(), error) {
addr := network.NewAddr(ctx.Config.Node())
hp := network.NewHiveParams()
hp.Discovery = false
config := &network.BzzConfig{
OverlayAddr: addr.Over(),
UnderlayAddr: addr.Under(),
HiveParams: hp,
}
kad := network.NewKademlia(addr.Over(), network.NewKadParams())
b.Store(BucketKeyKademlia, kad)
return network.NewBzz(config, kad, nil, nil, nil), nil, nil
},
})
defer s.Close()
nodeCount := 16
log.Debug("Uploading snapshot")
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
err := s.UploadSnapshot(ctx, fmt.Sprintf("../stream/testing/snapshot_%d.json", nodeCount))
if err != nil {
t.Fatalf("Error uploading snapshot to simulation network: %v", err)
}
log.Debug("Starting simulation...")
s.Run(ctx, func(ctx context.Context, sim *Simulation) error {
log.Debug("Checking")
nodes := sim.UpNodeIDs()
if len(nodes) != nodeCount {
t.Fatal("Simulation network node number doesn't match snapshot node number")
}
return nil
})
log.Debug("Done.")
}
func TestStartStopNode(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
id, err := sim.AddNode()
if err != nil {
t.Fatal(err)
}
n := sim.Net.GetNode(id)
if n == nil {
t.Fatal("node not found")
}
if !n.Up() {
t.Error("node not started")
}
err = sim.StopNode(id)
if err != nil {
t.Fatal(err)
}
if n.Up() {
t.Error("node not stopped")
}
waitForPeerEventPropagation()
err = sim.StartNode(id)
if err != nil {
t.Fatal(err)
}
if !n.Up() {
t.Error("node not started")
}
}
func TestStartStopRandomNode(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
_, err := sim.AddNodes(3)
if err != nil {
t.Fatal(err)
}
id, err := sim.StopRandomNode()
if err != nil {
t.Fatal(err)
}
n := sim.Net.GetNode(id)
if n == nil {
t.Fatal("node not found")
}
if n.Up() {
t.Error("node not stopped")
}
id2, err := sim.StopRandomNode()
if err != nil {
t.Fatal(err)
}
waitForPeerEventPropagation()
idStarted, err := sim.StartRandomNode()
if err != nil {
t.Fatal(err)
}
if idStarted != id && idStarted != id2 {
t.Error("unexpected started node ID")
}
}
func TestStartStopRandomNodes(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
_, err := sim.AddNodes(10)
if err != nil {
t.Fatal(err)
}
ids, err := sim.StopRandomNodes(3)
if err != nil {
t.Fatal(err)
}
for _, id := range ids {
n := sim.Net.GetNode(id)
if n == nil {
t.Fatal("node not found")
}
if n.Up() {
t.Error("node not stopped")
}
}
waitForPeerEventPropagation()
ids, err = sim.StartRandomNodes(2)
if err != nil {
t.Fatal(err)
}
for _, id := range ids {
n := sim.Net.GetNode(id)
if n == nil {
t.Fatal("node not found")
}
if !n.Up() {
t.Error("node not started")
}
}
}
func waitForPeerEventPropagation() {
// Sleep here to ensure that Network.watchPeerEvents defer function
// has set the `node.Up() = false` before we start the node again.
//
// The same node is stopped and started again, and upon start
// watchPeerEvents is started in a goroutine. If the node is stopped
// and then very quickly started, that goroutine may be scheduled after the
// restart and force `node.Up() = false` in its defer function.
// That would make this test unreliable.
time.Sleep(1 * time.Second)
}

View File

@ -0,0 +1,65 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
)
// Service returns a single Service by name on a particular node
// with provided id.
func (s *Simulation) Service(name string, id enode.ID) node.Service {
simNode, ok := s.Net.GetNode(id).Node.(*adapters.SimNode)
if !ok {
return nil
}
services := simNode.ServiceMap()
if len(services) == 0 {
return nil
}
return services[name]
}
// RandomService returns a single Service by name on a
// randomly chosen node that is up.
func (s *Simulation) RandomService(name string) node.Service {
randomNode := s.Net.GetRandomUpNode()
if randomNode == nil {
return nil
}
return randomNode.Node.(*adapters.SimNode).Service(name)
}
// Services returns all services with a provided name
// from nodes that are up.
func (s *Simulation) Services(name string) (services map[enode.ID]node.Service) {
nodes := s.Net.GetNodes()
services = make(map[enode.ID]node.Service)
for _, node := range nodes {
if !node.Up() {
continue
}
simNode, ok := node.Node.(*adapters.SimNode)
if !ok {
continue
}
services[node.ID()] = simNode.Service(name)
}
return services
}

View File

@ -0,0 +1,46 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"testing"
)
func TestService(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
id, err := sim.AddNode()
if err != nil {
t.Fatal(err)
}
_, ok := sim.Service("noop", id).(*noopService)
if !ok {
t.Fatalf("service is not of %T type", &noopService{})
}
_, ok = sim.RandomService("noop").(*noopService)
if !ok {
t.Fatalf("service is not of %T type", &noopService{})
}
_, ok = sim.Services("noop")[id].(*noopService)
if !ok {
t.Fatalf("service is not of %T type", &noopService{})
}
}

View File

@ -0,0 +1,218 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"context"
"errors"
"net/http"
"sync"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/network"
)
// Common errors that are returned by functions in this package.
var (
ErrNodeNotFound = errors.New("node not found")
)
// Simulation provides methods on network, nodes and services
// to manage them.
type Simulation struct {
// Net is exposed as a way to access lower level functionalities
// of p2p/simulations.Network.
Net *simulations.Network
serviceNames []string
cleanupFuncs []func()
buckets map[enode.ID]*sync.Map
shutdownWG sync.WaitGroup
done chan struct{}
mu sync.RWMutex
neighbourhoodSize int
httpSrv *http.Server //attach a HTTP server via SimulationOptions
handler *simulations.Server //HTTP handler for the server
runC chan struct{} //channel where frontend signals it is ready
}
// ServiceFunc is used in New to declare a new service constructor.
// The first argument provides the ServiceContext from the adapters package,
// giving for example access to the NodeID. The second argument is the sync.Map
// where all "global" state related to the service should be kept.
// All cleanups needed for the constructed service and any other constructed
// objects should be provided in a single returned cleanup function.
// Returned cleanup function will be called by Close function
// after network shutdown.
type ServiceFunc func(ctx *adapters.ServiceContext, bucket *sync.Map) (s node.Service, cleanup func(), err error)
// New creates a new simulation instance.
// The services map must have unique keys as service names, and
// every ServiceFunc must return a node.Service of a unique type.
// This restriction is required by the node.Node.Start() function
// which is used to start the node.Service returned by ServiceFunc.
func New(services map[string]ServiceFunc) (s *Simulation) {
s = &Simulation{
buckets: make(map[enode.ID]*sync.Map),
done: make(chan struct{}),
neighbourhoodSize: network.NewKadParams().NeighbourhoodSize,
}
adapterServices := make(map[string]adapters.ServiceFunc, len(services))
for name, serviceFunc := range services {
// Scope these variables correctly
// as they will be in the adapterServices[name] function accessed later.
name, serviceFunc := name, serviceFunc
s.serviceNames = append(s.serviceNames, name)
adapterServices[name] = func(ctx *adapters.ServiceContext) (node.Service, error) {
s.mu.Lock()
defer s.mu.Unlock()
b, ok := s.buckets[ctx.Config.ID]
if !ok {
b = new(sync.Map)
}
service, cleanup, err := serviceFunc(ctx, b)
if err != nil {
return nil, err
}
if cleanup != nil {
s.cleanupFuncs = append(s.cleanupFuncs, cleanup)
}
s.buckets[ctx.Config.ID] = b
return service, nil
}
}
s.Net = simulations.NewNetwork(
adapters.NewTCPAdapter(adapterServices),
&simulations.NetworkConfig{ID: "0"},
)
return s
}
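// Minimal usage sketch, not from the original file (the function name and the
// inline noop service are illustrative only): construct a Simulation with a
// single service, add a few nodes and execute a RunFunc.
func exampleSimulationUsage() error {
sim := New(map[string]ServiceFunc{
"noop": func(ctx *adapters.ServiceContext, b *sync.Map) (node.Service, func(), error) {
return &simulations.NoopService{}, nil, nil
},
})
defer sim.Close()
if _, err := sim.AddNodes(3); err != nil {
return err
}
r := sim.Run(context.Background(), func(ctx context.Context, s *Simulation) error {
return nil
})
return r.Error
}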
// RunFunc is the function that will be called
// on Simulation.Run method call.
type RunFunc func(context.Context, *Simulation) error
// Result is the returned value of Simulation.Run method.
type Result struct {
Duration time.Duration
Error error
}
// Run calls the RunFunc function while taking care of
// cancellation provided through the Context.
func (s *Simulation) Run(ctx context.Context, f RunFunc) (r Result) {
//if the simulation is configured with an HTTP server,
//wait for the signal from the frontend before starting the run
start := time.Now()
if s.httpSrv != nil {
log.Info("Waiting for frontend to be ready...(send POST /runsim to HTTP server)")
//wait for the frontend to connect
select {
case <-s.runC:
case <-ctx.Done():
return Result{
Duration: time.Since(start),
Error: ctx.Err(),
}
}
log.Info("Received signal from frontend - starting simulation run.")
}
errc := make(chan error)
quit := make(chan struct{})
defer close(quit)
go func() {
select {
case errc <- f(ctx, s):
case <-quit:
}
}()
var err error
select {
case <-ctx.Done():
err = ctx.Err()
case err = <-errc:
}
return Result{
Duration: time.Since(start),
Error: err,
}
}
// Maximal number of parallel calls to cleanup functions on
// Simulation.Close.
var maxParallelCleanups = 10
// Close calls all cleanup functions that are returned by
// ServiceFunc, waits for all of them to finish, as well as for other
// functions that explicitly block on shutdownWG
// (like Simulation.PeerEvents), and shuts down the network
// at the end. It is used to clean up all resources from the
// simulation.
func (s *Simulation) Close() {
close(s.done)
sem := make(chan struct{}, maxParallelCleanups)
s.mu.RLock()
cleanupFuncs := make([]func(), len(s.cleanupFuncs))
for i, f := range s.cleanupFuncs {
if f != nil {
cleanupFuncs[i] = f
}
}
s.mu.RUnlock()
var cleanupWG sync.WaitGroup
for _, cleanup := range cleanupFuncs {
cleanupWG.Add(1)
sem <- struct{}{}
go func(cleanup func()) {
defer cleanupWG.Done()
defer func() { <-sem }()
cleanup()
}(cleanup)
}
cleanupWG.Wait()
if s.httpSrv != nil {
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
err := s.httpSrv.Shutdown(ctx)
if err != nil {
log.Error("Error shutting down HTTP server!", "err", err)
}
close(s.runC)
}
s.shutdownWG.Wait()
s.Net.Shutdown()
}
// Done returns a channel that is closed when the simulation
// is closed by Close method. It is useful for signaling termination
// of all possible goroutines that are created within the test.
func (s *Simulation) Done() <-chan struct{} {
return s.done
}
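// Sketch, not from the original file (the function name and ticker work are
// placeholders): goroutines spawned within a test can select on Done to
// terminate when the simulation is closed.
func exampleWatchUntilClosed(s *Simulation) {
go func() {
ticker := time.NewTicker(time.Second)
defer ticker.Stop()
for {
select {
case <-s.Done():
return
case <-ticker.C:
log.Debug("simulation still running")
}
}
}()
}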

View File

@ -0,0 +1,203 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package simulation
import (
"context"
"errors"
"flag"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/simulations"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/mattn/go-colorable"
)
var (
loglevel = flag.Int("loglevel", 2, "verbosity of logs")
)
func init() {
flag.Parse()
log.PrintOrigins(true)
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(*loglevel), log.StreamHandler(colorable.NewColorableStderr(), log.TerminalFormat(true))))
}
// TestRun tests if Run method calls RunFunc and if it handles context properly.
func TestRun(t *testing.T) {
sim := New(noopServiceFuncMap)
defer sim.Close()
t.Run("call", func(t *testing.T) {
expect := "something"
var got string
r := sim.Run(context.Background(), func(ctx context.Context, sim *Simulation) error {
got = expect
return nil
})
if r.Error != nil {
t.Errorf("unexpected error: %v", r.Error)
}
if got != expect {
t.Errorf("expected %q, got %q", expect, got)
}
})
t.Run("cancellation", func(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
defer cancel()
r := sim.Run(ctx, func(ctx context.Context, sim *Simulation) error {
time.Sleep(time.Second)
return nil
})
if r.Error != context.DeadlineExceeded {
t.Errorf("unexpected error: %v", r.Error)
}
})
t.Run("context value and duration", func(t *testing.T) {
ctx := context.WithValue(context.Background(), "hey", "there")
sleep := 50 * time.Millisecond
r := sim.Run(ctx, func(ctx context.Context, sim *Simulation) error {
if ctx.Value("hey") != "there" {
return errors.New("expected context value not passed")
}
time.Sleep(sleep)
return nil
})
if r.Error != nil {
t.Errorf("unexpected error: %v", r.Error)
}
if r.Duration < sleep {
t.Errorf("reported run duration less then expected: %s", r.Duration)
}
})
}
// TestClose tests that the Close method triggers all cleanup functions and that no nodes remain up afterwards.
func TestClose(t *testing.T) {
var mu sync.Mutex
var cleanupCount int
sleep := 50 * time.Millisecond
sim := New(map[string]ServiceFunc{
"noop": func(ctx *adapters.ServiceContext, b *sync.Map) (node.Service, func(), error) {
return newNoopService(), func() {
time.Sleep(sleep)
mu.Lock()
defer mu.Unlock()
cleanupCount++
}, nil
},
})
nodeCount := 30
_, err := sim.AddNodes(nodeCount)
if err != nil {
t.Fatal(err)
}
var upNodeCount int
for _, n := range sim.Net.GetNodes() {
if n.Up() {
upNodeCount++
}
}
if upNodeCount != nodeCount {
t.Errorf("all nodes should be up, insted only %v are up", upNodeCount)
}
sim.Close()
if cleanupCount != nodeCount {
t.Errorf("number of cleanups expected %v, got %v", nodeCount, cleanupCount)
}
upNodeCount = 0
for _, n := range sim.Net.GetNodes() {
if n.Up() {
upNodeCount++
}
}
if upNodeCount != 0 {
t.Errorf("all nodes should be down, insted %v are up", upNodeCount)
}
}
// TestDone checks if Close method triggers the closing of done channel.
func TestDone(t *testing.T) {
sim := New(noopServiceFuncMap)
sleep := 50 * time.Millisecond
timeout := 2 * time.Second
start := time.Now()
go func() {
time.Sleep(sleep)
sim.Close()
}()
select {
case <-time.After(timeout):
t.Error("done channel closing timed out")
case <-sim.Done():
if d := time.Since(start); d < sleep {
t.Errorf("done channel closed sooner then expected: %s", d)
}
}
}
// a helper map for the usual case: a single service that does not do anything
var noopServiceFuncMap = map[string]ServiceFunc{
"noop": noopServiceFunc,
}
// a helper function for the most basic noop service
func noopServiceFunc(_ *adapters.ServiceContext, _ *sync.Map) (node.Service, func(), error) {
return newNoopService(), nil, nil
}
func newNoopService() node.Service {
return &noopService{}
}
// a helper function for the most basic noop service
// of a different type than noopService, to test
// multiple services on one node.
func noopService2Func(_ *adapters.ServiceContext, _ *sync.Map) (node.Service, func(), error) {
return new(noopService2), nil, nil
}
// noopService2 is a service that does not do anything
// but implements the node.Service interface.
type noopService2 struct {
simulations.NoopService
}
type noopService struct {
simulations.NoopService
}

View File

@ -0,0 +1,17 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package discovery

View File

@ -0,0 +1,536 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package discovery
import (
"context"
"flag"
"fmt"
"io/ioutil"
"os"
"path"
"strings"
"testing"
"time"
"github.com/ethersphere/swarm/testutil"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/network"
"github.com/ethersphere/swarm/state"
colorable "github.com/mattn/go-colorable"
)
// serviceName is used with the exec adapter so the exec'd binary knows which
// service to execute
const serviceName = "discovery"
const testNeighbourhoodSize = 2
const discoveryPersistenceDatadir = "discovery_persistence_test_store"
var discoveryPersistencePath = path.Join(os.TempDir(), discoveryPersistenceDatadir)
var discoveryEnabled = true
var persistenceEnabled = false
var services = adapters.Services{
serviceName: newService,
}
func cleanDbStores() error {
entries, err := ioutil.ReadDir(os.TempDir())
if err != nil {
return err
}
for _, f := range entries {
if strings.HasPrefix(f.Name(), discoveryPersistenceDatadir) {
os.RemoveAll(path.Join(os.TempDir(), f.Name()))
}
}
return nil
}
func getDbStore(nodeID string) (*state.DBStore, error) {
if _, err := os.Stat(discoveryPersistencePath + "_" + nodeID); os.IsNotExist(err) {
log.Info(fmt.Sprintf("directory for nodeID %s does not exist. creating...", nodeID))
if err := os.MkdirAll(discoveryPersistencePath+"_"+nodeID, 0755); err != nil {
return nil, err
}
}
log.Info(fmt.Sprintf("opening storage directory for nodeID %s", nodeID))
store, err := state.NewDBStore(discoveryPersistencePath + "_" + nodeID)
if err != nil {
return nil, err
}
return store, nil
}
var (
nodeCount = flag.Int("nodes", defaultNodeCount(), "number of nodes to create (default 32)")
initCount = flag.Int("conns", 1, "number of originally connected peers (default 1)")
loglevel = flag.Int("loglevel", 3, "verbosity of logs")
rawlog = flag.Bool("rawlog", false, "remove terminal formatting from logs")
)
func defaultNodeCount() int {
if testutil.RaceEnabled {
return 8
}
return 32
}
func init() {
flag.Parse()
// register the discovery service which will run as a devp2p
// protocol when using the exec adapter
adapters.RegisterServices(services)
log.PrintOrigins(true)
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(*loglevel), log.StreamHandler(colorable.NewColorableStderr(), log.TerminalFormat(!*rawlog))))
}
// Benchmarks to test the average time it takes for an N-node ring
// to reach a healthy kademlia topology
func BenchmarkDiscovery_8_1(b *testing.B) { benchmarkDiscovery(b, 8, 1) }
func BenchmarkDiscovery_16_1(b *testing.B) { benchmarkDiscovery(b, 16, 1) }
func BenchmarkDiscovery_32_1(b *testing.B) { benchmarkDiscovery(b, 32, 1) }
func BenchmarkDiscovery_64_1(b *testing.B) { benchmarkDiscovery(b, 64, 1) }
func BenchmarkDiscovery_128_1(b *testing.B) { benchmarkDiscovery(b, 128, 1) }
func BenchmarkDiscovery_256_1(b *testing.B) { benchmarkDiscovery(b, 256, 1) }
func BenchmarkDiscovery_8_2(b *testing.B) { benchmarkDiscovery(b, 8, 2) }
func BenchmarkDiscovery_16_2(b *testing.B) { benchmarkDiscovery(b, 16, 2) }
func BenchmarkDiscovery_32_2(b *testing.B) { benchmarkDiscovery(b, 32, 2) }
func BenchmarkDiscovery_64_2(b *testing.B) { benchmarkDiscovery(b, 64, 2) }
func BenchmarkDiscovery_128_2(b *testing.B) { benchmarkDiscovery(b, 128, 2) }
func BenchmarkDiscovery_256_2(b *testing.B) { benchmarkDiscovery(b, 256, 2) }
func BenchmarkDiscovery_8_4(b *testing.B) { benchmarkDiscovery(b, 8, 4) }
func BenchmarkDiscovery_16_4(b *testing.B) { benchmarkDiscovery(b, 16, 4) }
func BenchmarkDiscovery_32_4(b *testing.B) { benchmarkDiscovery(b, 32, 4) }
func BenchmarkDiscovery_64_4(b *testing.B) { benchmarkDiscovery(b, 64, 4) }
func BenchmarkDiscovery_128_4(b *testing.B) { benchmarkDiscovery(b, 128, 4) }
func BenchmarkDiscovery_256_4(b *testing.B) { benchmarkDiscovery(b, 256, 4) }
func TestDiscoverySimulationExecAdapter(t *testing.T) {
testDiscoverySimulationExecAdapter(t, *nodeCount, *initCount)
}
func testDiscoverySimulationExecAdapter(t *testing.T, nodes, conns int) {
baseDir, err := ioutil.TempDir("", "swarm-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(baseDir)
testDiscoverySimulation(t, nodes, conns, adapters.NewExecAdapter(baseDir))
}
func TestDiscoverySimulationSimAdapter(t *testing.T) {
testDiscoverySimulationSimAdapter(t, *nodeCount, *initCount)
}
func TestDiscoveryPersistenceSimulationSimAdapter(t *testing.T) {
testDiscoveryPersistenceSimulationSimAdapter(t, *nodeCount, *initCount)
}
func testDiscoveryPersistenceSimulationSimAdapter(t *testing.T, nodes, conns int) {
testDiscoveryPersistenceSimulation(t, nodes, conns, adapters.NewSimAdapter(services))
}
func testDiscoverySimulationSimAdapter(t *testing.T, nodes, conns int) {
testDiscoverySimulation(t, nodes, conns, adapters.NewSimAdapter(services))
}
func testDiscoverySimulation(t *testing.T, nodes, conns int, adapter adapters.NodeAdapter) {
startedAt := time.Now()
result, err := discoverySimulation(nodes, conns, adapter)
if err != nil {
t.Fatalf("Setting up simulation failed: %v", err)
}
if result.Error != nil {
t.Fatalf("Simulation failed: %s", result.Error)
}
t.Logf("Simulation with %d nodes passed in %s", nodes, result.FinishedAt.Sub(result.StartedAt))
var min, max time.Duration
var sum int
for _, pass := range result.Passes {
duration := pass.Sub(result.StartedAt)
if sum == 0 || duration < min {
min = duration
}
if duration > max {
max = duration
}
sum += int(duration.Nanoseconds())
}
t.Logf("Min: %s, Max: %s, Average: %s", min, max, time.Duration(sum/len(result.Passes))*time.Nanosecond)
finishedAt := time.Now()
t.Logf("Setup: %s, shutdown: %s", result.StartedAt.Sub(startedAt), finishedAt.Sub(result.FinishedAt))
}
func testDiscoveryPersistenceSimulation(t *testing.T, nodes, conns int, adapter adapters.NodeAdapter) map[int][]byte {
persistenceEnabled = true
discoveryEnabled = true
result, err := discoveryPersistenceSimulation(nodes, conns, adapter)
if err != nil {
t.Fatalf("Setting up simulation failed: %v", err)
}
if result.Error != nil {
t.Fatalf("Simulation failed: %s", result.Error)
}
t.Logf("Simulation with %d nodes passed in %s", nodes, result.FinishedAt.Sub(result.StartedAt))
// set the discovery and persistence flags again to default so other
// tests will not be affected
discoveryEnabled = true
persistenceEnabled = false
return nil
}
func benchmarkDiscovery(b *testing.B, nodes, conns int) {
for i := 0; i < b.N; i++ {
result, err := discoverySimulation(nodes, conns, adapters.NewSimAdapter(services))
if err != nil {
b.Fatalf("setting up simulation failed: %v", err)
}
if result.Error != nil {
b.Logf("simulation failed: %s", result.Error)
}
}
}
func discoverySimulation(nodes, conns int, adapter adapters.NodeAdapter) (*simulations.StepResult, error) {
// create network
net := simulations.NewNetwork(adapter, &simulations.NetworkConfig{
ID: "0",
DefaultService: serviceName,
})
defer net.Shutdown()
trigger := make(chan enode.ID)
ids := make([]enode.ID, nodes)
for i := 0; i < nodes; i++ {
conf := adapters.RandomNodeConfig()
node, err := net.NewNodeWithConfig(conf)
if err != nil {
return nil, fmt.Errorf("error starting node: %s", err)
}
if err := net.Start(node.ID()); err != nil {
return nil, fmt.Errorf("error starting node %s: %s", node.ID().TerminalString(), err)
}
if err := triggerChecks(trigger, net, node.ID()); err != nil {
return nil, fmt.Errorf("error triggering checks for node %s: %s", node.ID().TerminalString(), err)
}
ids[i] = node.ID()
}
// run a simulation which connects the nodes in a chain and waits
// for full peer discovery
var addrs [][]byte
action := func(ctx context.Context) error {
return nil
}
for i := range ids {
// collect the overlay addresses, to construct the peer pot below
addrs = append(addrs, ids[i].Bytes())
}
err := net.ConnectNodesChain(nil)
if err != nil {
return nil, err
}
log.Debug(fmt.Sprintf("nodes: %v", len(addrs)))
// construct the peer pot, so that kademlia health can be checked
ppmap := network.NewPeerPotMap(network.NewKadParams().NeighbourhoodSize, addrs)
check := func(ctx context.Context, id enode.ID) (bool, error) {
select {
case <-ctx.Done():
return false, ctx.Err()
default:
}
node := net.GetNode(id)
if node == nil {
return false, fmt.Errorf("unknown node: %s", id)
}
client, err := node.Client()
if err != nil {
return false, fmt.Errorf("error getting node client: %s", err)
}
healthy := &network.Health{}
if err := client.Call(&healthy, "hive_getHealthInfo", ppmap[common.Bytes2Hex(id.Bytes())]); err != nil {
return false, fmt.Errorf("error getting node health: %s", err)
}
log.Debug(fmt.Sprintf("node %4s healthy: connected nearest neighbours: %v, know nearest neighbours: %v,\n\n%v", id, healthy.ConnectNN, healthy.KnowNN, healthy.Hive))
return healthy.KnowNN && healthy.ConnectNN, nil
}
// 64 nodes ~ 1min
// 128 nodes ~
timeout := 300 * time.Second
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
result := simulations.NewSimulation(net).Run(ctx, &simulations.Step{
Action: action,
Trigger: trigger,
Expect: &simulations.Expectation{
Nodes: ids,
Check: check,
},
})
if result.Error != nil {
return result, nil
}
return result, nil
}
func discoveryPersistenceSimulation(nodes, conns int, adapter adapters.NodeAdapter) (*simulations.StepResult, error) {
cleanDbStores()
defer cleanDbStores()
// create network
net := simulations.NewNetwork(adapter, &simulations.NetworkConfig{
ID: "0",
DefaultService: serviceName,
})
defer net.Shutdown()
trigger := make(chan enode.ID)
ids := make([]enode.ID, nodes)
var addrs [][]byte
for i := 0; i < nodes; i++ {
conf := adapters.RandomNodeConfig()
node, err := net.NewNodeWithConfig(conf)
if err != nil {
return nil, fmt.Errorf("error creating node: %s", err)
}
if err := net.Start(node.ID()); err != nil {
return nil, fmt.Errorf("error starting node %s: %s", node.ID().TerminalString(), err)
}
if err := triggerChecks(trigger, net, node.ID()); err != nil {
return nil, fmt.Errorf("error triggering checks for node %s: %s", node.ID().TerminalString(), err)
}
// TODO we shouldn't be equating underaddr and overaddr like this, as they are not the same in production
ids[i] = node.ID()
a := ids[i].Bytes()
addrs = append(addrs, a)
}
// run a simulation which connects the nodes in a chain and waits
// for full peer discovery
var restartTime time.Time
action := func(ctx context.Context) error {
ticker := time.NewTicker(500 * time.Millisecond)
for range ticker.C {
isHealthy := true
for _, id := range ids {
//call Healthy RPC
node := net.GetNode(id)
if node == nil {
return fmt.Errorf("unknown node: %s", id)
}
client, err := node.Client()
if err != nil {
return fmt.Errorf("error getting node client: %s", err)
}
healthy := &network.Health{}
addr := id.String()
ppmap := network.NewPeerPotMap(network.NewKadParams().NeighbourhoodSize, addrs)
if err := client.Call(&healthy, "hive_getHealthInfo", ppmap[common.Bytes2Hex(id.Bytes())]); err != nil {
return fmt.Errorf("error getting node health: %s", err)
}
log.Info(fmt.Sprintf("NODE: %s, IS HEALTHY: %t", addr, healthy.ConnectNN && healthy.KnowNN && healthy.CountKnowNN > 0))
var nodeStr string
if err := client.Call(&nodeStr, "hive_string"); err != nil {
return fmt.Errorf("error getting node string %s", err)
}
log.Info(nodeStr)
if !healthy.ConnectNN || healthy.CountKnowNN == 0 {
isHealthy = false
break
}
}
if isHealthy {
break
}
}
ticker.Stop()
log.Info("reached healthy kademlia. starting to shutdown nodes.")
shutdownStarted := time.Now()
// stop all ids, then start them again
for _, id := range ids {
node := net.GetNode(id)
if err := net.Stop(node.ID()); err != nil {
return fmt.Errorf("error stopping node %s: %s", node.ID().TerminalString(), err)
}
}
log.Info(fmt.Sprintf("shutting down nodes took: %s", time.Since(shutdownStarted)))
persistenceEnabled = true
discoveryEnabled = false
restartTime = time.Now()
for _, id := range ids {
node := net.GetNode(id)
if err := net.Start(node.ID()); err != nil {
return fmt.Errorf("error starting node %s: %s", node.ID().TerminalString(), err)
}
if err := triggerChecks(trigger, net, node.ID()); err != nil {
return fmt.Errorf("error triggering checks for node %s: %s", node.ID().TerminalString(), err)
}
}
log.Info(fmt.Sprintf("restarting nodes took: %s", time.Since(restartTime)))
return nil
}
net.ConnectNodesChain(nil)
log.Debug(fmt.Sprintf("nodes: %v", len(addrs)))
// construct the peer pot, so that kademlia health can be checked
check := func(ctx context.Context, id enode.ID) (bool, error) {
select {
case <-ctx.Done():
return false, ctx.Err()
default:
}
node := net.GetNode(id)
if node == nil {
return false, fmt.Errorf("unknown node: %s", id)
}
client, err := node.Client()
if err != nil {
return false, fmt.Errorf("error getting node client: %s", err)
}
healthy := &network.Health{}
ppmap := network.NewPeerPotMap(network.NewKadParams().NeighbourhoodSize, addrs)
if err := client.Call(&healthy, "hive_getHealthInfo", ppmap[common.Bytes2Hex(id.Bytes())]); err != nil {
return false, fmt.Errorf("error getting node health: %s", err)
}
log.Info(fmt.Sprintf("node %4s healthy: got nearest neighbours: %v, know nearest neighbours: %v", id, healthy.ConnectNN, healthy.KnowNN))
return healthy.KnowNN && healthy.ConnectNN, nil
}
// 64 nodes ~ 1min
// 128 nodes ~
timeout := 300 * time.Second
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
result := simulations.NewSimulation(net).Run(ctx, &simulations.Step{
Action: action,
Trigger: trigger,
Expect: &simulations.Expectation{
Nodes: ids,
Check: check,
},
})
return result, nil
}
// triggerChecks triggers a simulation step check whenever a peer is added or
// removed from the given node, and also every second to avoid a race between
// peer events and kademlia becoming healthy
func triggerChecks(trigger chan enode.ID, net *simulations.Network, id enode.ID) error {
node := net.GetNode(id)
if node == nil {
return fmt.Errorf("unknown node: %s", id)
}
client, err := node.Client()
if err != nil {
return err
}
events := make(chan *p2p.PeerEvent)
sub, err := client.Subscribe(context.Background(), "admin", events, "peerEvents")
if err != nil {
return fmt.Errorf("error getting peer events for node %v: %s", id, err)
}
go func() {
defer sub.Unsubscribe()
tick := time.NewTicker(time.Second)
defer tick.Stop()
for {
select {
case <-events:
trigger <- id
case <-tick.C:
trigger <- id
case err := <-sub.Err():
if err != nil {
log.Error(fmt.Sprintf("error getting peer events for node %v", id), "err", err)
}
return
}
}
}()
return nil
}
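// newService builds the bzz service used by simulation nodes: a kademlia with the test neighbourhood
// size and a hive with a short keep-alive interval; when persistenceEnabled is set, the service is
// backed by an on-disk state store so that kademlia/hive state survives node restarts.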
func newService(ctx *adapters.ServiceContext) (node.Service, error) {
addr := network.NewAddr(ctx.Config.Node())
kp := network.NewKadParams()
kp.NeighbourhoodSize = testNeighbourhoodSize
if ctx.Config.Reachable != nil {
kp.Reachable = func(o *network.BzzAddr) bool {
return ctx.Config.Reachable(o.ID())
}
}
kad := network.NewKademlia(addr.Over(), kp)
hp := network.NewHiveParams()
hp.KeepAliveInterval = time.Duration(200) * time.Millisecond
hp.Discovery = discoveryEnabled
log.Info(fmt.Sprintf("discovery for nodeID %s is %t", ctx.Config.ID.String(), hp.Discovery))
config := &network.BzzConfig{
OverlayAddr: addr.Over(),
UnderlayAddr: addr.Under(),
HiveParams: hp,
}
if persistenceEnabled {
log.Info(fmt.Sprintf("persistence enabled for nodeID %s", ctx.Config.ID.String()))
store, err := getDbStore(ctx.Config.ID.String())
if err != nil {
return nil, err
}
return network.NewBzz(config, kad, store, nil, nil), nil
}
return network.NewBzz(config, kad, nil, nil, nil), nil
}

View File

@ -0,0 +1 @@
{"nodes":[{"node":{"config":null,"up":false}},{"node":{"config":null,"up":false}},{"node":{"config":null,"up":false}},{"node":{"config":null,"up":false}},{"node":{"config":null,"up":false}},{"node":{"config":null,"up":false}},{"node":{"config":null,"up":false}},{"node":{"config":null,"up":false}},{"node":{"config":null,"up":false}},{"node":{"config":null,"up":false}}],"conns":[{"one":"c04a0c47cb0c522ecf28d8841e93721e73f58790b30e92382816a4b453be2988","other":"d9283e5247a18d6564b3581217e9f4d9c93a4359944894c00bb2b22c690faadc","up":true},{"one":"dd99c11abe2abae112d64d902b96fe0c75243ea67eca759a2769058a30cc0e77","other":"c04a0c47cb0c522ecf28d8841e93721e73f58790b30e92382816a4b453be2988","up":true},{"one":"4f5dad2aa4f26ac5a23d4fbcc807296b474eab77761db6594debd60ef4287aed","other":"dd99c11abe2abae112d64d902b96fe0c75243ea67eca759a2769058a30cc0e77","up":true},{"one":"4f47f4e176d1c9f78d9a7e19723689ffe2a0603004a3d4506a2349e55a56fc17","other":"4f5dad2aa4f26ac5a23d4fbcc807296b474eab77761db6594debd60ef4287aed","up":true},{"one":"20b6a1be2cb8f966151682350e029d4f8da8ee92de10a2a1cb1727d110acebfa","other":"4f47f4e176d1c9f78d9a7e19723689ffe2a0603004a3d4506a2349e55a56fc17","up":true},{"one":"50cb92e77710582fa9cbee7a54cf25c95fd27d8d54b13ba5520a50139c309a22","other":"20b6a1be2cb8f966151682350e029d4f8da8ee92de10a2a1cb1727d110acebfa","up":true},{"one":"319dc901f99940f1339c540bc36fbabb10a96d326b13b9d7f53e7496980e2996","other":"50cb92e77710582fa9cbee7a54cf25c95fd27d8d54b13ba5520a50139c309a22","up":true},{"one":"dc285b6436a8bfd4d2e586d478b18d3fe7b705ce0b4fb27a651adcf6d27984f1","other":"319dc901f99940f1339c540bc36fbabb10a96d326b13b9d7f53e7496980e2996","up":true},{"one":"974dbe511377280f945a53a194b4bb397875b10b1ecb119a92425bbb16db68f1","other":"dc285b6436a8bfd4d2e586d478b18d3fe7b705ce0b4fb27a651adcf6d27984f1","up":true},{"one":"d9283e5247a18d6564b3581217e9f4d9c93a4359944894c00bb2b22c690faadc","other":"974dbe511377280f945a53a194b4bb397875b10b1ecb119a92425bbb16db68f1","up":true}]}

View File

@ -0,0 +1,144 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// You can run this simulation using
//
// go run ./swarm/network/simulations/overlay.go
package main
import (
"flag"
"fmt"
"net/http"
"runtime"
"sync"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/network"
"github.com/ethersphere/swarm/state"
colorable "github.com/mattn/go-colorable"
)
var (
noDiscovery = flag.Bool("no-discovery", false, "disable discovery (useful if you want to load a snapshot)")
vmodule = flag.String("vmodule", "", "log filters for logger via Vmodule")
	verbosity   = flag.Int("verbosity", 0, "verbosity of logs")
httpSimPort = 8888
)
func init() {
flag.Parse()
//initialize the logger
//this is a demonstration of how to use Vmodule for filtering logs
//provide -vmodule as param, and comma-separated values, e.g.:
//-vmodule overlay_test.go=4,simulations=3
//the example above sets overlay_test.go logs to level 4, and packages ending with "simulations" to level 3
if *vmodule != "" {
//only enable the pattern matching handler if the flag has been provided
glogger := log.NewGlogHandler(log.StreamHandler(colorable.NewColorableStderr(), log.TerminalFormat(true)))
if *verbosity > 0 {
glogger.Verbosity(log.Lvl(*verbosity))
}
glogger.Vmodule(*vmodule)
log.Root().SetHandler(glogger)
}
}
type Simulation struct {
mtx sync.Mutex
stores map[enode.ID]state.Store
}
func NewSimulation() *Simulation {
return &Simulation{
stores: make(map[enode.ID]state.Store),
}
}
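// NewService returns the bzz service for a simulation node. The per-node in-memory state store is
// created once and reused, so a node restarted within the same simulation keeps its hive state.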
func (s *Simulation) NewService(ctx *adapters.ServiceContext) (node.Service, error) {
node := ctx.Config.Node()
s.mtx.Lock()
store, ok := s.stores[node.ID()]
if !ok {
store = state.NewInmemoryStore()
s.stores[node.ID()] = store
}
s.mtx.Unlock()
addr := network.NewAddr(node)
kp := network.NewKadParams()
kp.NeighbourhoodSize = 2
kp.MaxBinSize = 4
kp.MinBinSize = 1
kp.MaxRetries = 1000
kp.RetryExponent = 2
kp.RetryInterval = 1000000
kad := network.NewKademlia(addr.Over(), kp)
hp := network.NewHiveParams()
hp.Discovery = !*noDiscovery
hp.KeepAliveInterval = 300 * time.Millisecond
config := &network.BzzConfig{
OverlayAddr: addr.Over(),
UnderlayAddr: addr.Under(),
HiveParams: hp,
}
return network.NewBzz(config, kad, store, nil, nil), nil
}
//create the simulation network
func newSimulationNetwork() *simulations.Network {
s := NewSimulation()
services := adapters.Services{
"overlay": s.NewService,
}
adapter := adapters.NewSimAdapter(services)
simNetwork := simulations.NewNetwork(adapter, &simulations.NetworkConfig{
DefaultService: "overlay",
})
return simNetwork
}
//return a new http server
func newOverlaySim(sim *simulations.Network) *simulations.Server {
return simulations.NewServer(sim)
}
// var server
func main() {
//cpu optimization
runtime.GOMAXPROCS(runtime.NumCPU())
//run the sim
runOverlaySim()
}
func runOverlaySim() {
//create the simulation network
net := newSimulationNetwork()
//create a http server with it
sim := newOverlaySim(net)
log.Info(fmt.Sprintf("starting simulation server on 0.0.0.0:%d...", httpSimPort))
//start the HTTP server
http.ListenAndServe(fmt.Sprintf(":%d", httpSimPort), sim)
}

View File

@ -0,0 +1,194 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"context"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations"
"github.com/ethersphere/swarm/log"
)
var (
nodeCount = 10
)
//This test exercises the overlay simulation.
//As the simulation is normally run via a main function, breakage is easily missed when code changes;
//an automated test prevents that.
//The test just connects to the simulation server, starts the network,
//starts the mocker, gets the number of nodes, and stops it again.
//It also documents the steps a frontend needs
//to take to use the simulations
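//The HTTP endpoints exercised below are, in order:
//  POST /start         - initialize the simulation network
//  POST /mocker/start  - start a mocker (form values: node-count, mocker-type)
//  GET  /nodes         - list the currently known nodes
//  POST /stop          - stop the network
//  POST /reset         - remove all nodes and connections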
func TestOverlaySim(t *testing.T) {
//start the simulation
log.Info("Start simulation backend")
//get the simulation network; needed to subscribe for up events
net := newSimulationNetwork()
//create the overlay simulation
sim := newOverlaySim(net)
//create a http test server with it
srv := httptest.NewServer(sim)
defer srv.Close()
log.Debug("Http simulation server started. Start simulation network")
//start the simulation network (initialization of simulation)
resp, err := http.Post(srv.URL+"/start", "application/json", nil)
if err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Fatalf("Expected Status Code %d, got %d", http.StatusOK, resp.StatusCode)
}
log.Debug("Start mocker")
//start the mocker, needs a node count and an ID
resp, err = http.PostForm(srv.URL+"/mocker/start",
url.Values{
"node-count": {fmt.Sprintf("%d", nodeCount)},
"mocker-type": {simulations.GetMockerList()[0]},
})
if err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
reason, err := ioutil.ReadAll(resp.Body)
if err != nil {
t.Fatal(err)
}
t.Fatalf("Expected Status Code %d, got %d, response body %s", http.StatusOK, resp.StatusCode, string(reason))
}
//variables needed to wait for nodes being up
var upCount int
trigger := make(chan enode.ID)
//wait for all nodes to be up
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
//start watching node up events...
go watchSimEvents(net, ctx, trigger)
//...and wait until all expected up events (nodeCount) have been received
LOOP:
for {
select {
case <-trigger:
//new node up event received, increase counter
upCount++
//all expected node up events received
if upCount == nodeCount {
break LOOP
}
case <-ctx.Done():
t.Fatalf("Timed out waiting for up events")
}
}
//at this point we can query the server
log.Info("Get number of nodes")
//get the number of nodes
resp, err = http.Get(srv.URL + "/nodes")
if err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Fatalf("err %s", resp.Status)
}
b, err := ioutil.ReadAll(resp.Body)
if err != nil {
t.Fatal(err)
}
//unmarshal number of nodes from JSON response
var nodesArr []simulations.Node
err = json.Unmarshal(b, &nodesArr)
if err != nil {
t.Fatal(err)
}
//check if number of nodes received is same as sent
if len(nodesArr) != nodeCount {
t.Fatal(fmt.Errorf("Expected %d number of nodes, got %d", nodeCount, len(nodesArr)))
}
//need to let it run for a little while, otherwise stopping it immediately can crash due to running nodes
//wanting to connect to already stopped nodes
time.Sleep(1 * time.Second)
log.Info("Stop the network")
//stop the network
resp, err = http.Post(srv.URL+"/stop", "application/json", nil)
if err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Fatalf("err %s", resp.Status)
}
log.Info("Reset the network")
//reset the network (removes all nodes and connections)
resp, err = http.Post(srv.URL+"/reset", "application/json", nil)
if err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Fatalf("err %s", resp.Status)
}
}
//watch for events so we know when all nodes are up
func watchSimEvents(net *simulations.Network, ctx context.Context, trigger chan enode.ID) {
events := make(chan *simulations.Event)
sub := net.Events().Subscribe(events)
defer sub.Unsubscribe()
for {
select {
case ev := <-events:
//only catch node up events
if ev.Type == simulations.EventTypeNode {
if ev.Node.Up() {
log.Debug("got node up event", "event", ev, "node", ev.Node.Config.ID)
select {
case trigger <- ev.Node.Config.ID:
case <-ctx.Done():
return
}
}
}
case <-ctx.Done():
return
}
}
}

View File

@ -0,0 +1,401 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"context"
"errors"
"flag"
"fmt"
"io"
"io/ioutil"
"math/rand"
"os"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
p2ptest "github.com/ethereum/go-ethereum/p2p/testing"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/network"
"github.com/ethersphere/swarm/network/simulation"
"github.com/ethersphere/swarm/state"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/storage/localstore"
"github.com/ethersphere/swarm/storage/mock"
"github.com/ethersphere/swarm/testutil"
colorable "github.com/mattn/go-colorable"
)
var (
loglevel = flag.Int("loglevel", 2, "verbosity of logs")
nodes = flag.Int("nodes", 0, "number of nodes")
chunks = flag.Int("chunks", 0, "number of chunks")
	useMockStore   = flag.Bool("mockstore", false, "use mock global store (default: disabled)")
longrunning = flag.Bool("longrunning", false, "do run long-running tests")
bucketKeyStore = simulation.BucketKey("store")
bucketKeyFileStore = simulation.BucketKey("filestore")
bucketKeyNetStore = simulation.BucketKey("netstore")
bucketKeyDelivery = simulation.BucketKey("delivery")
bucketKeyRegistry = simulation.BucketKey("registry")
chunkSize = 4096
pof = network.Pof
)
func init() {
flag.Parse()
rand.Seed(time.Now().UnixNano())
log.PrintOrigins(true)
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(*loglevel), log.StreamHandler(colorable.NewColorableStderr(), log.TerminalFormat(true))))
}
// newNetStoreAndDelivery is a default constructor for BzzAddr, NetStore and Delivery, used in Simulations
func newNetStoreAndDelivery(ctx *adapters.ServiceContext, bucket *sync.Map) (*network.BzzAddr, *storage.NetStore, *Delivery, func(), error) {
addr := network.NewAddr(ctx.Config.Node())
netStore, delivery, cleanup, err := netStoreAndDeliveryWithAddr(ctx, bucket, addr)
if err != nil {
return nil, nil, nil, nil, err
}
netStore.NewNetFetcherFunc = network.NewFetcherFactory(delivery.RequestFromPeers, true).New
return addr, netStore, delivery, cleanup, nil
}
// newNetStoreAndDeliveryWithBzzAddr is a constructor for NetStore and Delivery, used in Simulations, accepting any BzzAddr
func newNetStoreAndDeliveryWithBzzAddr(ctx *adapters.ServiceContext, bucket *sync.Map, addr *network.BzzAddr) (*storage.NetStore, *Delivery, func(), error) {
netStore, delivery, cleanup, err := netStoreAndDeliveryWithAddr(ctx, bucket, addr)
if err != nil {
return nil, nil, nil, err
}
netStore.NewNetFetcherFunc = network.NewFetcherFactory(delivery.RequestFromPeers, true).New
return netStore, delivery, cleanup, nil
}
// newNetStoreAndDeliveryWithRequestFunc is a constructor for NetStore and Delivery, used in Simulations, accepting any NetStore.RequestFunc
func newNetStoreAndDeliveryWithRequestFunc(ctx *adapters.ServiceContext, bucket *sync.Map, rf network.RequestFunc) (*network.BzzAddr, *storage.NetStore, *Delivery, func(), error) {
addr := network.NewAddr(ctx.Config.Node())
netStore, delivery, cleanup, err := netStoreAndDeliveryWithAddr(ctx, bucket, addr)
if err != nil {
return nil, nil, nil, nil, err
}
netStore.NewNetFetcherFunc = network.NewFetcherFactory(rf, true).New
return addr, netStore, delivery, cleanup, nil
}
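// netStoreAndDeliveryWithAddr wires up a localstore-backed NetStore, a FileStore, a Kademlia and a
// Delivery for one simulation node and stores them in the node's bucket; the returned cleanup closes
// the NetStore and removes the temporary localstore directory.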
func netStoreAndDeliveryWithAddr(ctx *adapters.ServiceContext, bucket *sync.Map, addr *network.BzzAddr) (*storage.NetStore, *Delivery, func(), error) {
n := ctx.Config.Node()
localStore, localStoreCleanup, err := newTestLocalStore(n.ID(), addr, nil)
if err != nil {
return nil, nil, nil, err
}
netStore, err := storage.NewNetStore(localStore, nil)
if err != nil {
localStore.Close()
localStoreCleanup()
return nil, nil, nil, err
}
fileStore := storage.NewFileStore(netStore, storage.NewFileStoreParams(), chunk.NewTags())
kad := network.NewKademlia(addr.Over(), network.NewKadParams())
delivery := NewDelivery(kad, netStore)
bucket.Store(bucketKeyStore, localStore)
bucket.Store(bucketKeyDelivery, delivery)
bucket.Store(bucketKeyFileStore, fileStore)
// for the kademlia object, we use the global key from the simulation package,
// as the simulation will try to access it in the WaitTillHealthy with that key
bucket.Store(simulation.BucketKeyKademlia, kad)
cleanup := func() {
netStore.Close()
localStoreCleanup()
}
return netStore, delivery, cleanup, nil
}
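// newStreamerTester sets up a single-peer protocol tester around a Registry backed by a temporary
// localstore; the returned teardown stops the tester, closes the stores and removes the data directory.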
func newStreamerTester(registryOptions *RegistryOptions) (*p2ptest.ProtocolTester, *Registry, *localstore.DB, func(), error) {
// setup
addr := network.RandomAddr() // tested peers peer address
to := network.NewKademlia(addr.OAddr, network.NewKadParams())
// temp datadir
datadir, err := ioutil.TempDir("", "streamer")
if err != nil {
return nil, nil, nil, nil, err
}
removeDataDir := func() {
os.RemoveAll(datadir)
}
localStore, err := localstore.New(datadir, addr.Over(), nil)
if err != nil {
removeDataDir()
return nil, nil, nil, nil, err
}
netStore, err := storage.NewNetStore(localStore, nil)
if err != nil {
localStore.Close()
removeDataDir()
return nil, nil, nil, nil, err
}
delivery := NewDelivery(to, netStore)
netStore.NewNetFetcherFunc = network.NewFetcherFactory(delivery.RequestFromPeers, true).New
intervalsStore := state.NewInmemoryStore()
streamer := NewRegistry(addr.ID(), delivery, netStore, intervalsStore, registryOptions, nil)
prvkey, err := crypto.GenerateKey()
if err != nil {
removeDataDir()
return nil, nil, nil, nil, err
}
protocolTester := p2ptest.NewProtocolTester(prvkey, 1, streamer.runProtocol)
teardown := func() {
protocolTester.Stop()
streamer.Close()
intervalsStore.Close()
netStore.Close()
removeDataDir()
}
err = waitForPeers(streamer, 10*time.Second, 1)
if err != nil {
teardown()
return nil, nil, nil, nil, errors.New("timeout: peer is not created")
}
return protocolTester, streamer, localStore, teardown, nil
}
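// waitForPeers polls the registry every 10ms until it has at least expectedPeers peers or the timeout expires.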
func waitForPeers(streamer *Registry, timeout time.Duration, expectedPeers int) error {
ticker := time.NewTicker(10 * time.Millisecond)
timeoutTimer := time.NewTimer(timeout)
for {
select {
case <-ticker.C:
if streamer.peersCount() >= expectedPeers {
return nil
}
case <-timeoutTimer.C:
return errors.New("timeout")
}
}
}
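// roundRobinStore is a test ChunkStore that distributes Put calls across its underlying stores in
// round-robin order; the read/set methods are stubs that exist only to satisfy the ChunkStore interface.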
type roundRobinStore struct {
index uint32
stores []storage.ChunkStore
}
func newRoundRobinStore(stores ...storage.ChunkStore) *roundRobinStore {
return &roundRobinStore{
stores: stores,
}
}
// not used in this context, only to fulfill ChunkStore interface
func (rrs *roundRobinStore) Has(_ context.Context, _ storage.Address) (bool, error) {
return false, errors.New("roundRobinStore doesn't support Has")
}
func (rrs *roundRobinStore) Get(_ context.Context, _ chunk.ModeGet, _ storage.Address) (storage.Chunk, error) {
return nil, errors.New("roundRobinStore doesn't support Get")
}
func (rrs *roundRobinStore) Put(ctx context.Context, mode chunk.ModePut, ch storage.Chunk) (bool, error) {
i := atomic.AddUint32(&rrs.index, 1)
idx := int(i) % len(rrs.stores)
return rrs.stores[idx].Put(ctx, mode, ch)
}
func (rrs *roundRobinStore) Set(ctx context.Context, mode chunk.ModeSet, addr chunk.Address) (err error) {
return errors.New("roundRobinStore doesn't support Set")
}
func (rrs *roundRobinStore) LastPullSubscriptionBinID(bin uint8) (id uint64, err error) {
return 0, errors.New("roundRobinStore doesn't support LastPullSubscriptionBinID")
}
func (rrs *roundRobinStore) SubscribePull(ctx context.Context, bin uint8, since, until uint64) (c <-chan chunk.Descriptor, stop func()) {
return nil, nil
}
func (rrs *roundRobinStore) Close() error {
for _, store := range rrs.stores {
store.Close()
}
return nil
}
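// readAll retrieves the content behind hash from the given FileStore in 1KiB reads and returns the
// total number of bytes read.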
func readAll(fileStore *storage.FileStore, hash []byte) (int64, error) {
r, _ := fileStore.Retrieve(context.TODO(), hash)
buf := make([]byte, 1024)
var n int
var total int64
var err error
for (total == 0 || n > 0) && err == nil {
n, err = r.ReadAt(buf, total)
total += int64(n)
}
if err != nil && err != io.EOF {
return total, err
}
return total, nil
}
func uploadFilesToNodes(sim *simulation.Simulation) ([]storage.Address, []string, error) {
nodes := sim.UpNodeIDs()
nodeCnt := len(nodes)
log.Debug(fmt.Sprintf("Uploading %d files to nodes", nodeCnt))
//array holding generated files
rfiles := make([]string, nodeCnt)
//array holding the root hashes of the files
rootAddrs := make([]storage.Address, nodeCnt)
var err error
//for every node, generate a file and upload
for i, id := range nodes {
item, ok := sim.NodeItem(id, bucketKeyFileStore)
if !ok {
return nil, nil, fmt.Errorf("Error accessing localstore")
}
fileStore := item.(*storage.FileStore)
//generate a file
rfiles[i], err = generateRandomFile()
if err != nil {
return nil, nil, err
}
//store it (upload it) on the FileStore
ctx := context.TODO()
rk, wait, err := fileStore.Store(ctx, strings.NewReader(rfiles[i]), int64(len(rfiles[i])), false)
log.Debug("Uploaded random string file to node")
if err != nil {
return nil, nil, err
}
err = wait(ctx)
if err != nil {
return nil, nil, err
}
rootAddrs[i] = rk
}
return rootAddrs, rfiles, nil
}
//generate a random file (string)
func generateRandomFile() (string, error) {
//generate a random file size between minFileSize and maxFileSize
fileSize := rand.Intn(maxFileSize-minFileSize) + minFileSize
log.Debug(fmt.Sprintf("Generated file with filesize %d kB", fileSize))
b := testutil.RandomBytes(1, fileSize*1024)
return string(b), nil
}
func newTestLocalStore(id enode.ID, addr *network.BzzAddr, globalStore mock.GlobalStorer) (localStore *localstore.DB, cleanup func(), err error) {
dir, err := ioutil.TempDir("", "swarm-stream-")
if err != nil {
return nil, nil, err
}
cleanup = func() {
os.RemoveAll(dir)
}
var mockStore *mock.NodeStore
if globalStore != nil {
mockStore = globalStore.NewNodeStore(common.BytesToAddress(id.Bytes()))
}
localStore, err = localstore.New(dir, addr.Over(), &localstore.Options{
MockStore: mockStore,
})
if err != nil {
cleanup()
return nil, nil, err
}
return localStore, cleanup, nil
}
// watchDisconnections receives simulation peer events in a new goroutine and sets atomic value
// disconnected to true in case of a disconnect event.
func watchDisconnections(ctx context.Context, sim *simulation.Simulation) (disconnected *boolean) {
log.Debug("Watching for disconnections")
disconnections := sim.PeerEvents(
ctx,
sim.NodeIDs(),
simulation.NewPeerEventsFilter().Drop(),
)
disconnected = new(boolean)
go func() {
for {
select {
case <-ctx.Done():
return
case d := <-disconnections:
if d.Error != nil {
log.Error("peer drop event error", "node", d.NodeID, "peer", d.PeerID, "err", d.Error)
} else {
log.Error("peer drop", "node", d.NodeID, "peer", d.PeerID)
}
disconnected.set(true)
}
}
}()
return disconnected
}
// boolean is used to concurrently set
// and read a boolean value.
type boolean struct {
v bool
mu sync.RWMutex
}
// set sets the value.
func (b *boolean) set(v bool) {
b.mu.Lock()
defer b.mu.Unlock()
b.v = v
}
// bool reads the value.
func (b *boolean) bool() bool {
b.mu.RLock()
defer b.mu.RUnlock()
return b.v
}

245
network/stream/delivery.go Normal file
View File

@ -0,0 +1,245 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"context"
"errors"
"fmt"
"time"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/network"
"github.com/ethersphere/swarm/spancontext"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/tracing"
opentracing "github.com/opentracing/opentracing-go"
olog "github.com/opentracing/opentracing-go/log"
)
var (
processReceivedChunksCount = metrics.NewRegisteredCounter("network.stream.received_chunks.count", nil)
handleRetrieveRequestMsgCount = metrics.NewRegisteredCounter("network.stream.handle_retrieve_request_msg.count", nil)
retrieveChunkFail = metrics.NewRegisteredCounter("network.stream.retrieve_chunks_fail.count", nil)
requestFromPeersCount = metrics.NewRegisteredCounter("network.stream.request_from_peers.count", nil)
requestFromPeersEachCount = metrics.NewRegisteredCounter("network.stream.request_from_peers_each.count", nil)
lastReceivedChunksMsg = metrics.GetOrRegisterGauge("network.stream.received_chunks", nil)
)
type Delivery struct {
netStore *storage.NetStore
kad *network.Kademlia
getPeer func(enode.ID) *Peer
quit chan struct{}
}
func NewDelivery(kad *network.Kademlia, netStore *storage.NetStore) *Delivery {
return &Delivery{
netStore: netStore,
kad: kad,
quit: make(chan struct{}),
}
}
// RetrieveRequestMsg is the protocol msg for chunk retrieve requests
type RetrieveRequestMsg struct {
Addr storage.Address
SkipCheck bool
HopCount uint8
}
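// handleRetrieveRequestMsg handles an incoming RetrieveRequestMsg: it fetches the requested chunk from
// the local NetStore (bounded by the request timeout) and, if found, delivers it back to the requesting
// peer on the Top priority queue.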
func (d *Delivery) handleRetrieveRequestMsg(ctx context.Context, sp *Peer, req *RetrieveRequestMsg) error {
log.Trace("received request", "peer", sp.ID(), "hash", req.Addr)
handleRetrieveRequestMsgCount.Inc(1)
var osp opentracing.Span
ctx, osp = spancontext.StartSpan(
ctx,
"stream.handle.retrieve")
osp.LogFields(olog.String("ref", req.Addr.String()))
var cancel func()
// TODO: do something with this hardcoded timeout, maybe use TTL in the future
ctx = context.WithValue(ctx, "peer", sp.ID().String())
ctx = context.WithValue(ctx, "hopcount", req.HopCount)
ctx, cancel = context.WithTimeout(ctx, network.RequestTimeout)
go func() {
select {
case <-ctx.Done():
case <-d.quit:
}
cancel()
}()
go func() {
defer osp.Finish()
ch, err := d.netStore.Get(ctx, chunk.ModeGetRequest, req.Addr)
if err != nil {
retrieveChunkFail.Inc(1)
log.Debug("ChunkStore.Get can not retrieve chunk", "peer", sp.ID().String(), "addr", req.Addr, "hopcount", req.HopCount, "err", err)
return
}
syncing := false
err = sp.Deliver(ctx, ch, Top, syncing)
if err != nil {
log.Warn("ERROR in handleRetrieveRequestMsg", "err", err)
}
osp.LogFields(olog.Bool("delivered", true))
}()
return nil
}
//Chunk delivery always uses the same message type....
type ChunkDeliveryMsg struct {
Addr storage.Address
SData []byte // the stored chunk Data (incl size)
peer *Peer // set in handleChunkDeliveryMsg
}
//...but swap accounting needs to disambiguate if it is a delivery for syncing or for retrieval
//as it decides based on message type if it needs to account for this message or not
//defines a chunk delivery for retrieval (with accounting)
type ChunkDeliveryMsgRetrieval ChunkDeliveryMsg
//defines a chunk delivery for syncing (without accounting)
type ChunkDeliveryMsgSyncing ChunkDeliveryMsg
// chunk delivery msg is response to retrieverequest msg
func (d *Delivery) handleChunkDeliveryMsg(ctx context.Context, sp *Peer, req interface{}) error {
var osp opentracing.Span
ctx, osp = spancontext.StartSpan(
ctx,
"handle.chunk.delivery")
processReceivedChunksCount.Inc(1)
// record the last time we received a chunk delivery message
lastReceivedChunksMsg.Update(time.Now().UnixNano())
var msg *ChunkDeliveryMsg
var mode chunk.ModePut
switch r := req.(type) {
case *ChunkDeliveryMsgRetrieval:
msg = (*ChunkDeliveryMsg)(r)
peerPO := chunk.Proximity(sp.BzzAddr.Over(), msg.Addr)
po := chunk.Proximity(d.kad.BaseAddr(), msg.Addr)
depth := d.kad.NeighbourhoodDepth()
// chunks within the area of responsibility should always sync
// https://github.com/ethersphere/go-ethereum/pull/1282#discussion_r269406125
if po >= depth || peerPO < po {
mode = chunk.ModePutSync
} else {
// do not sync if the peer that is sending us a chunk is closer to the chunk than we are
mode = chunk.ModePutRequest
}
case *ChunkDeliveryMsgSyncing:
msg = (*ChunkDeliveryMsg)(r)
mode = chunk.ModePutSync
case *ChunkDeliveryMsg:
msg = r
mode = chunk.ModePutSync
}
log.Trace("handle.chunk.delivery", "ref", msg.Addr, "from peer", sp.ID())
go func() {
defer osp.Finish()
msg.peer = sp
log.Trace("handle.chunk.delivery", "put", msg.Addr)
_, err := d.netStore.Put(ctx, mode, storage.NewChunk(msg.Addr, msg.SData))
if err != nil {
if err == storage.ErrChunkInvalid {
// we removed this log because it spams the logs
// TODO: Enable this log line
// log.Warn("invalid chunk delivered", "peer", sp.ID(), "chunk", msg.Addr, )
msg.peer.Drop()
}
}
log.Trace("handle.chunk.delivery", "done put", msg.Addr, "err", err)
}()
return nil
}
func (d *Delivery) Close() {
close(d.quit)
}
// RequestFromPeers sends a chunk retrieve request to a peer
// The most eligible peer that hasn't already been sent to is chosen
// TODO: define "eligible"
func (d *Delivery) RequestFromPeers(ctx context.Context, req *network.Request) (*enode.ID, chan struct{}, error) {
requestFromPeersCount.Inc(1)
var sp *Peer
spID := req.Source
if spID != nil {
sp = d.getPeer(*spID)
if sp == nil {
return nil, nil, fmt.Errorf("source peer %v not found", spID.String())
}
} else {
d.kad.EachConn(req.Addr[:], 255, func(p *network.Peer, po int) bool {
id := p.ID()
if p.LightNode {
// skip light nodes
return true
}
if req.SkipPeer(id.String()) {
log.Trace("Delivery.RequestFromPeers: skip peer", "peer id", id)
return true
}
sp = d.getPeer(id)
// sp is nil when we encounter a peer that is not registered for delivery, i.e. doesn't support the `stream` protocol
if sp == nil {
return true
}
spID = &id
return false
})
if sp == nil {
return nil, nil, errors.New("no peer found")
}
}
// setting this value in the context creates a new span that can persist across the sendpriority queue and the network roundtrip
// this span will finish only when delivery is handled (or times out)
ctx = context.WithValue(ctx, tracing.StoreLabelId, "stream.send.request")
ctx = context.WithValue(ctx, tracing.StoreLabelMeta, fmt.Sprintf("%v.%v", sp.ID(), req.Addr))
log.Trace("request.from.peers", "peer", sp.ID(), "ref", req.Addr)
err := sp.SendPriority(ctx, &RetrieveRequestMsg{
Addr: req.Addr,
SkipCheck: req.SkipCheck,
HopCount: req.HopCount,
}, Top)
if err != nil {
return nil, nil, err
}
requestFromPeersEachCount.Inc(1)
return spID, sp.quit, nil
}

View File

@ -0,0 +1,593 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"bytes"
"context"
"errors"
"fmt"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/protocols"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
p2ptest "github.com/ethereum/go-ethereum/p2p/testing"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/network"
pq "github.com/ethersphere/swarm/network/priorityqueue"
"github.com/ethersphere/swarm/network/simulation"
"github.com/ethersphere/swarm/state"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/testutil"
)
//Test requesting a chunk from a peer, then issuing an "empty" OfferedHashesMsg (no hashes available yet)
//Should time out as the peer does not have the chunk (no syncing happened previously)
func TestStreamerUpstreamRetrieveRequestMsgExchangeWithoutStore(t *testing.T) {
tester, _, _, teardown, err := newStreamerTester(&RegistryOptions{
Syncing: SyncingDisabled, //do no syncing
})
if err != nil {
t.Fatal(err)
}
defer teardown()
node := tester.Nodes[0]
chunk := storage.NewChunk(storage.Address(hash0[:]), nil)
//test the exchange
err = tester.TestExchanges(p2ptest.Exchange{
Label: "RetrieveRequestMsg",
Triggers: []p2ptest.Trigger{
{ //then the actual RETRIEVE_REQUEST....
Code: 5,
Msg: &RetrieveRequestMsg{
Addr: chunk.Address()[:],
},
Peer: node.ID(),
},
},
Expects: []p2ptest.Expect{
{ //to which the peer responds with offered hashes
Code: 1,
Msg: &OfferedHashesMsg{
HandoverProof: nil,
Hashes: nil,
From: 0,
To: 0,
},
Peer: node.ID(),
},
},
})
//should fail with a timeout as the peer we are requesting
//the chunk from does not have the chunk
expectedError := `exchange #0 "RetrieveRequestMsg": timed out`
if err == nil || err.Error() != expectedError {
t.Fatalf("Expected error %v, got %v", expectedError, err)
}
}
// upstream request server receives a retrieve Request and responds with
// offered hashes or delivery if skipHash is set to true
func TestStreamerUpstreamRetrieveRequestMsgExchange(t *testing.T) {
tester, _, localStore, teardown, err := newStreamerTester(&RegistryOptions{
Syncing: SyncingDisabled,
})
if err != nil {
t.Fatal(err)
}
defer teardown()
node := tester.Nodes[0]
hash := storage.Address(hash1[:])
ch := storage.NewChunk(hash, hash1[:])
_, err = localStore.Put(context.TODO(), chunk.ModePutUpload, ch)
if err != nil {
t.Fatalf("Expected no err got %v", err)
}
err = tester.TestExchanges(p2ptest.Exchange{
Label: "RetrieveRequestMsg",
Triggers: []p2ptest.Trigger{
{
Code: 5,
Msg: &RetrieveRequestMsg{
Addr: hash,
},
Peer: node.ID(),
},
},
Expects: []p2ptest.Expect{
{
Code: 6,
Msg: &ChunkDeliveryMsg{
Addr: ch.Address(),
SData: ch.Data(),
},
Peer: node.ID(),
},
},
})
if err != nil {
t.Fatal(err)
}
}
// if there is one peer in the Kademlia, RequestFromPeers should return it
func TestRequestFromPeers(t *testing.T) {
dummyPeerID := enode.HexID("3431c3939e1ee2a6345e976a8234f9870152d64879f30bc272a074f6859e75e8")
addr := network.RandomAddr()
to := network.NewKademlia(addr.OAddr, network.NewKadParams())
delivery := NewDelivery(to, nil)
protocolsPeer := protocols.NewPeer(p2p.NewPeer(dummyPeerID, "dummy", nil), nil, nil)
peer := network.NewPeer(&network.BzzPeer{
BzzAddr: network.RandomAddr(),
LightNode: false,
Peer: protocolsPeer,
}, to)
to.On(peer)
r := NewRegistry(addr.ID(), delivery, nil, nil, nil, nil)
// an empty priorityQueue has to be created to prevent a goroutine being called after the test has finished
sp := &Peer{
BzzPeer: &network.BzzPeer{Peer: protocolsPeer, BzzAddr: addr},
pq: pq.New(int(PriorityQueue), PriorityQueueCap),
streamer: r,
}
r.setPeer(sp)
req := network.NewRequest(
storage.Address(hash0[:]),
true,
&sync.Map{},
)
ctx := context.Background()
id, _, err := delivery.RequestFromPeers(ctx, req)
if err != nil {
t.Fatal(err)
}
if *id != dummyPeerID {
t.Fatalf("Expected an id, got %v", id)
}
}
// RequestFromPeers should not return light nodes
func TestRequestFromPeersWithLightNode(t *testing.T) {
dummyPeerID := enode.HexID("3431c3939e1ee2a6345e976a8234f9870152d64879f30bc272a074f6859e75e8")
addr := network.RandomAddr()
to := network.NewKademlia(addr.OAddr, network.NewKadParams())
delivery := NewDelivery(to, nil)
protocolsPeer := protocols.NewPeer(p2p.NewPeer(dummyPeerID, "dummy", nil), nil, nil)
// setting up a lightnode
peer := network.NewPeer(&network.BzzPeer{
BzzAddr: network.RandomAddr(),
LightNode: true,
Peer: protocolsPeer,
}, to)
to.On(peer)
r := NewRegistry(addr.ID(), delivery, nil, nil, nil, nil)
// an empty priorityQueue has to be created to prevent a goroutine being called after the test has finished
sp := &Peer{
BzzPeer: &network.BzzPeer{Peer: protocolsPeer, BzzAddr: addr},
pq: pq.New(int(PriorityQueue), PriorityQueueCap),
streamer: r,
}
r.setPeer(sp)
req := network.NewRequest(
storage.Address(hash0[:]),
true,
&sync.Map{},
)
ctx := context.Background()
// making a request which should return with "no peer found"
_, _, err := delivery.RequestFromPeers(ctx, req)
expectedError := "no peer found"
if err.Error() != expectedError {
t.Fatalf("expected '%v', got %v", expectedError, err)
}
}
func TestStreamerDownstreamChunkDeliveryMsgExchange(t *testing.T) {
tester, streamer, localStore, teardown, err := newStreamerTester(&RegistryOptions{
Syncing: SyncingDisabled,
})
if err != nil {
t.Fatal(err)
}
defer teardown()
streamer.RegisterClientFunc("foo", func(p *Peer, t string, live bool) (Client, error) {
return &testClient{
t: t,
}, nil
})
node := tester.Nodes[0]
//subscribe to custom stream
stream := NewStream("foo", "", true)
err = streamer.Subscribe(node.ID(), stream, NewRange(5, 8), Top)
if err != nil {
t.Fatalf("Expected no error, got %v", err)
}
chunkKey := hash0[:]
chunkData := hash1[:]
err = tester.TestExchanges(p2ptest.Exchange{
Label: "Subscribe message",
Expects: []p2ptest.Expect{
{ //first expect subscription to the custom stream...
Code: 4,
Msg: &SubscribeMsg{
Stream: stream,
History: NewRange(5, 8),
Priority: Top,
},
Peer: node.ID(),
},
},
},
p2ptest.Exchange{
Label: "ChunkDelivery message",
Triggers: []p2ptest.Trigger{
{ //...then trigger a chunk delivery for the given chunk from peer in order for
//local node to get the chunk delivered
Code: 6,
Msg: &ChunkDeliveryMsg{
Addr: chunkKey,
SData: chunkData,
},
Peer: node.ID(),
},
},
})
if err != nil {
t.Fatalf("Expected no error, got %v", err)
}
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
// wait for the chunk to get stored
storedChunk, err := localStore.Get(ctx, chunk.ModeGetRequest, chunkKey)
for err != nil {
select {
case <-ctx.Done():
t.Fatalf("Chunk is not in localstore after timeout, err: %v", err)
default:
}
storedChunk, err = localStore.Get(ctx, chunk.ModeGetRequest, chunkKey)
time.Sleep(50 * time.Millisecond)
}
if err != nil {
t.Fatalf("Expected no error, got %v", err)
}
if !bytes.Equal(storedChunk.Data(), chunkData) {
t.Fatal("Retrieved chunk has different data than original")
}
}
func TestDeliveryFromNodes(t *testing.T) {
testDeliveryFromNodes(t, 2, dataChunkCount, true)
testDeliveryFromNodes(t, 2, dataChunkCount, false)
testDeliveryFromNodes(t, 4, dataChunkCount, true)
testDeliveryFromNodes(t, 4, dataChunkCount, false)
if testutil.RaceEnabled {
// Travis cannot handle more nodes with -race; would time out.
return
}
testDeliveryFromNodes(t, 8, dataChunkCount, true)
testDeliveryFromNodes(t, 8, dataChunkCount, false)
testDeliveryFromNodes(t, 16, dataChunkCount, true)
testDeliveryFromNodes(t, 16, dataChunkCount, false)
}
func testDeliveryFromNodes(t *testing.T, nodes, chunkCount int, skipCheck bool) {
t.Helper()
t.Run(fmt.Sprintf("testDeliveryFromNodes_%d_%d_skipCheck_%t", nodes, chunkCount, skipCheck), func(t *testing.T) {
sim := simulation.New(map[string]simulation.ServiceFunc{
"streamer": func(ctx *adapters.ServiceContext, bucket *sync.Map) (s node.Service, cleanup func(), err error) {
addr, netStore, delivery, clean, err := newNetStoreAndDelivery(ctx, bucket)
if err != nil {
return nil, nil, err
}
r := NewRegistry(addr.ID(), delivery, netStore, state.NewInmemoryStore(), &RegistryOptions{
SkipCheck: skipCheck,
Syncing: SyncingDisabled,
}, nil)
bucket.Store(bucketKeyRegistry, r)
cleanup = func() {
r.Close()
clean()
}
return r, cleanup, nil
},
})
defer sim.Close()
log.Info("Adding nodes to simulation")
_, err := sim.AddNodesAndConnectChain(nodes)
if err != nil {
t.Fatal(err)
}
log.Info("Starting simulation")
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
result := sim.Run(ctx, func(ctx context.Context, sim *simulation.Simulation) (err error) {
nodeIDs := sim.UpNodeIDs()
//determine the pivot node to be the first node of the simulation
pivot := nodeIDs[0]
//distribute chunks of a random file into the Stores of all nodes except the pivot
//we will do this by creating a file store with an underlying round-robin store:
//the file store will create a hash for the uploaded file, but every chunk will be
//distributed to different nodes via round-robin scheduling
log.Debug("Writing file to round-robin file store")
//to do this, we create an array of chunkstores (one per node, minus the pivot node)
stores := make([]storage.ChunkStore, len(nodeIDs)-1)
//we then need to get all stores from the sim....
lStores := sim.NodesItems(bucketKeyStore)
i := 0
//...iterate the buckets...
for id, bucketVal := range lStores {
//...and remove the one which is the pivot node
if id == pivot {
continue
}
//the other ones are added to the array...
stores[i] = bucketVal.(storage.ChunkStore)
i++
}
//...which then gets passed to the round-robin file store
roundRobinFileStore := storage.NewFileStore(newRoundRobinStore(stores...), storage.NewFileStoreParams(), chunk.NewTags())
//now we can actually upload a (random) file to the round-robin store
size := chunkCount * chunkSize
log.Debug("Storing data to file store")
fileHash, wait, err := roundRobinFileStore.Store(ctx, testutil.RandomReader(1, size), int64(size), false)
// wait until all chunks stored
if err != nil {
return err
}
err = wait(ctx)
if err != nil {
return err
}
//get the pivot node's filestore
item, ok := sim.NodeItem(pivot, bucketKeyFileStore)
if !ok {
return fmt.Errorf("No filestore")
}
pivotFileStore := item.(*storage.FileStore)
log.Debug("Starting retrieval routine")
retErrC := make(chan error)
go func() {
// start the retrieval on the pivot node - this will spawn retrieve requests for missing chunks
// we must wait for the peer connections to have started before requesting
n, err := readAll(pivotFileStore, fileHash)
log.Info(fmt.Sprintf("retrieved %v", fileHash), "read", n, "err", err)
retErrC <- err
}()
disconnected := watchDisconnections(ctx, sim)
defer func() {
if err != nil && disconnected.bool() {
err = errors.New("disconnect events received")
}
}()
//finally check that the pivot node gets all chunks via the root hash
log.Debug("Check retrieval")
success := true
var total int64
total, err = readAll(pivotFileStore, fileHash)
if err != nil {
return err
}
log.Info(fmt.Sprintf("check if %08x is available locally: number of bytes read %v/%v (error: %v)", fileHash, total, size, err))
if err != nil || total != int64(size) {
success = false
}
if !success {
return fmt.Errorf("Test failed, chunks not available on all nodes")
}
if err := <-retErrC; err != nil {
return fmt.Errorf("requesting chunks: %v", err)
}
log.Debug("Test terminated successfully")
return nil
})
if result.Error != nil {
t.Fatal(result.Error)
}
})
}
func BenchmarkDeliveryFromNodesWithoutCheck(b *testing.B) {
for chunks := 32; chunks <= 128; chunks *= 2 {
for i := 2; i < 32; i *= 2 {
b.Run(
fmt.Sprintf("nodes=%v,chunks=%v", i, chunks),
func(b *testing.B) {
benchmarkDeliveryFromNodes(b, i, chunks, true)
},
)
}
}
}
func BenchmarkDeliveryFromNodesWithCheck(b *testing.B) {
for chunks := 32; chunks <= 128; chunks *= 2 {
for i := 2; i < 32; i *= 2 {
b.Run(
fmt.Sprintf("nodes=%v,chunks=%v", i, chunks),
func(b *testing.B) {
benchmarkDeliveryFromNodes(b, i, chunks, false)
},
)
}
}
}
func benchmarkDeliveryFromNodes(b *testing.B, nodes, chunkCount int, skipCheck bool) {
sim := simulation.New(map[string]simulation.ServiceFunc{
"streamer": func(ctx *adapters.ServiceContext, bucket *sync.Map) (s node.Service, cleanup func(), err error) {
addr, netStore, delivery, clean, err := newNetStoreAndDelivery(ctx, bucket)
if err != nil {
return nil, nil, err
}
r := NewRegistry(addr.ID(), delivery, netStore, state.NewInmemoryStore(), &RegistryOptions{
SkipCheck: skipCheck,
Syncing: SyncingDisabled,
SyncUpdateDelay: 0,
}, nil)
bucket.Store(bucketKeyRegistry, r)
cleanup = func() {
r.Close()
clean()
}
return r, cleanup, nil
},
})
defer sim.Close()
log.Info("Initializing test config")
_, err := sim.AddNodesAndConnectChain(nodes)
if err != nil {
b.Fatal(err)
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
result := sim.Run(ctx, func(ctx context.Context, sim *simulation.Simulation) (err error) {
nodeIDs := sim.UpNodeIDs()
node := nodeIDs[len(nodeIDs)-1]
item, ok := sim.NodeItem(node, bucketKeyFileStore)
if !ok {
return errors.New("No filestore")
}
remoteFileStore := item.(*storage.FileStore)
pivotNode := nodeIDs[0]
item, ok = sim.NodeItem(pivotNode, bucketKeyNetStore)
if !ok {
return errors.New("No netstore")
}
netStore := item.(*storage.NetStore)
if _, err := sim.WaitTillHealthy(ctx); err != nil {
return err
}
disconnected := watchDisconnections(ctx, sim)
defer func() {
if err != nil && disconnected.bool() {
err = errors.New("disconnect events received")
}
}()
// benchmark loop
b.ResetTimer()
b.StopTimer()
Loop:
for i := 0; i < b.N; i++ {
// uploading chunkCount random chunks to the last node
hashes := make([]storage.Address, chunkCount)
for i := 0; i < chunkCount; i++ {
// create actual size real chunks
ctx := context.TODO()
hash, wait, err := remoteFileStore.Store(ctx, testutil.RandomReader(i, chunkSize), int64(chunkSize), false)
if err != nil {
return fmt.Errorf("store: %v", err)
}
// wait until all chunks stored
err = wait(ctx)
if err != nil {
return fmt.Errorf("wait store: %v", err)
}
// collect the hashes
hashes[i] = hash
}
// now benchmark the actual retrieval
// netstore.Get is called for each hash in a go routine and errors are collected
b.StartTimer()
errs := make(chan error)
for _, hash := range hashes {
go func(h storage.Address) {
_, err := netStore.Get(ctx, chunk.ModeGetRequest, h)
log.Warn("test check netstore get", "hash", h, "err", err)
errs <- err
}(hash)
}
// count and report retrieval errors
// if there are misses then chunk timeout is too low for the distance and volume (?)
var total, misses int
for err := range errs {
if err != nil {
log.Warn(err.Error())
misses++
}
total++
if total == chunkCount {
break
}
}
b.StopTimer()
if misses > 0 {
err = fmt.Errorf("%v chunk not found out of %v", misses, total)
break Loop
}
}
return err
})
if result.Error != nil {
b.Fatal(result.Error)
}
}

View File

@ -0,0 +1,42 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package intervals
import (
"io/ioutil"
"os"
"testing"
"github.com/ethersphere/swarm/state"
)
// TestDBStore tests basic functionality of DBStore.
func TestDBStore(t *testing.T) {
dir, err := ioutil.TempDir("", "intervals_test_db_store")
if err != nil {
panic(err)
}
defer os.RemoveAll(dir)
store, err := state.NewDBStore(dir)
if err != nil {
t.Fatal(err)
}
defer store.Close()
testStore(t, store)
}

View File

@ -0,0 +1,206 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package intervals
import (
"bytes"
"fmt"
"strconv"
"sync"
)
// Intervals stores a list of intervals. Its purpose is to provide
// methods to add new intervals and retrieve missing intervals that
// need to be added.
// It may be used in synchronization of streaming data to persist
// retrieved data ranges between sessions.
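// An illustrative usage sketch:
//
//  i := NewIntervals(0)
//  i.Add(0, 5)
//  i.Add(15, 20) // ranges are now [[0 5] [15 20]]
//  i.Add(6, 10)  // adjacent ranges are merged: [[0 10] [15 20]]
//  i.Next()      // returns (11, 14), the first missing range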
type Intervals struct {
start uint64
ranges [][2]uint64
mu sync.RWMutex
}
// NewIntervals creates a new instance of Intervals.
// Start argument limits the lower bound of intervals.
// No range below the start bound will be added by the Add method or
// returned by Next method. This limit may be used for
// tracking "live" synchronization, where the sync session
// starts from a specific value, and if "live" sync intervals
// need to be merged with historical ones, it can be safely done.
func NewIntervals(start uint64) *Intervals {
return &Intervals{
start: start,
}
}
// Add adds a new range to intervals. Range start and end values
// are both inclusive.
func (i *Intervals) Add(start, end uint64) {
i.mu.Lock()
defer i.mu.Unlock()
i.add(start, end)
}
func (i *Intervals) add(start, end uint64) {
if start < i.start {
start = i.start
}
if end < i.start {
return
}
minStartJ := -1
maxEndJ := -1
j := 0
for ; j < len(i.ranges); j++ {
if minStartJ < 0 {
if (start <= i.ranges[j][0] && end+1 >= i.ranges[j][0]) || (start <= i.ranges[j][1]+1 && end+1 >= i.ranges[j][1]) {
if i.ranges[j][0] < start {
start = i.ranges[j][0]
}
minStartJ = j
}
}
if (start <= i.ranges[j][1] && end+1 >= i.ranges[j][1]) || (start <= i.ranges[j][0] && end+1 >= i.ranges[j][0]) {
if i.ranges[j][1] > end {
end = i.ranges[j][1]
}
maxEndJ = j
}
if end+1 <= i.ranges[j][0] {
break
}
}
if minStartJ < 0 && maxEndJ < 0 {
i.ranges = append(i.ranges[:j], append([][2]uint64{{start, end}}, i.ranges[j:]...)...)
return
}
if minStartJ >= 0 {
i.ranges[minStartJ][0] = start
}
if maxEndJ >= 0 {
i.ranges[maxEndJ][1] = end
}
if minStartJ >= 0 && maxEndJ >= 0 && minStartJ != maxEndJ {
i.ranges[maxEndJ][0] = start
i.ranges = append(i.ranges[:minStartJ], i.ranges[maxEndJ:]...)
}
}
// Merge adds all the intervals from the m Intervals to the current one.
func (i *Intervals) Merge(m *Intervals) {
m.mu.RLock()
defer m.mu.RUnlock()
i.mu.Lock()
defer i.mu.Unlock()
for _, r := range m.ranges {
i.add(r[0], r[1])
}
}
// Next returns the first range interval that is not fulfilled. Returned
// start and end values are both inclusive, meaning that the whole range
// including start and end needs to be added in order to fill the gap
// in intervals.
// Returned value for end is 0 if the next interval is after the whole
// range that is stored in Intervals. Zero end value represents no limit
// on the next interval length.
func (i *Intervals) Next() (start, end uint64) {
i.mu.RLock()
defer i.mu.RUnlock()
l := len(i.ranges)
if l == 0 {
return i.start, 0
}
if i.ranges[0][0] != i.start {
return i.start, i.ranges[0][0] - 1
}
if l == 1 {
return i.ranges[0][1] + 1, 0
}
return i.ranges[0][1] + 1, i.ranges[1][0] - 1
}
// Last returns the value that is at the end of the last interval.
func (i *Intervals) Last() (end uint64) {
i.mu.RLock()
defer i.mu.RUnlock()
l := len(i.ranges)
if l == 0 {
return 0
}
return i.ranges[l-1][1]
}
// String returns a descriptive representation of range intervals
// in [] notation, as a list of two element vectors.
func (i *Intervals) String() string {
return fmt.Sprint(i.ranges)
}
// MarshalBinary encodes Intervals parameters into a semicolon separated list.
// The first element in the list is base36-encoded start value. The following
// elements are two base36-encoded value ranges separated by comma.
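// For example, an Intervals with start 0 and ranges [[0 10] [15 20]] is encoded as "0;0,a;f,k".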
func (i *Intervals) MarshalBinary() (data []byte, err error) {
d := make([][]byte, len(i.ranges)+1)
d[0] = []byte(strconv.FormatUint(i.start, 36))
for j := range i.ranges {
r := i.ranges[j]
d[j+1] = []byte(strconv.FormatUint(r[0], 36) + "," + strconv.FormatUint(r[1], 36))
}
return bytes.Join(d, []byte(";")), nil
}
// UnmarshalBinary decodes data according to the Intervals.MarshalBinary format.
func (i *Intervals) UnmarshalBinary(data []byte) (err error) {
d := bytes.Split(data, []byte(";"))
l := len(d)
if l == 0 {
return nil
}
if l >= 1 {
i.start, err = strconv.ParseUint(string(d[0]), 36, 64)
if err != nil {
return err
}
}
if l == 1 {
return nil
}
i.ranges = make([][2]uint64, 0, l-1)
for j := 1; j < l; j++ {
r := bytes.SplitN(d[j], []byte(","), 2)
if len(r) < 2 {
return fmt.Errorf("range %d has less then 2 elements", j)
}
start, err := strconv.ParseUint(string(r[0]), 36, 64)
if err != nil {
return fmt.Errorf("parsing the first element in range %d: %v", j, err)
}
end, err := strconv.ParseUint(string(r[1]), 36, 64)
if err != nil {
return fmt.Errorf("parsing the second element in range %d: %v", j, err)
}
i.ranges = append(i.ranges, [2]uint64{start, end})
}
return nil
}
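
To make the bookkeeping above concrete, here is a minimal sketch (not part of the commit) of how the Add, Next and MarshalBinary implementations in this file behave. It assumes the snippet is placed in a file of the intervals package that imports fmt.

```go
// ExampleIntervals is an illustrative sketch, assumed to live alongside
// the intervals package code above.
func ExampleIntervals() {
	i := NewIntervals(0)
	i.Add(0, 5)
	i.Add(10, 20)
	fmt.Println(i) // the two ranges stay separate: [[0 5] [10 20]]

	// Next reports the first gap that still has to be synced.
	from, to := i.Next()
	fmt.Println(from, to) // 6 9

	// MarshalBinary produces the semicolon separated, base36-encoded form.
	data, _ := i.MarshalBinary()
	fmt.Println(string(data)) // 0;0,5;a,k

	// Output:
	// [[0 5] [10 20]]
	// 6 9
	// 0;0,5;a,k
}
```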

View File

@ -0,0 +1,395 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package intervals
import "testing"
// Test tests Intervals methods Add, Next and Last for various
// initial states.
func Test(t *testing.T) {
for i, tc := range []struct {
startLimit uint64
initial [][2]uint64
start uint64
end uint64
expected string
nextStart uint64
nextEnd uint64
last uint64
}{
{
initial: nil,
start: 0,
end: 0,
expected: "[[0 0]]",
nextStart: 1,
nextEnd: 0,
last: 0,
},
{
initial: nil,
start: 0,
end: 10,
expected: "[[0 10]]",
nextStart: 11,
nextEnd: 0,
last: 10,
},
{
initial: nil,
start: 5,
end: 15,
expected: "[[5 15]]",
nextStart: 0,
nextEnd: 4,
last: 15,
},
{
initial: [][2]uint64{{0, 0}},
start: 0,
end: 0,
expected: "[[0 0]]",
nextStart: 1,
nextEnd: 0,
last: 0,
},
{
initial: [][2]uint64{{0, 0}},
start: 5,
end: 15,
expected: "[[0 0] [5 15]]",
nextStart: 1,
nextEnd: 4,
last: 15,
},
{
initial: [][2]uint64{{5, 15}},
start: 5,
end: 15,
expected: "[[5 15]]",
nextStart: 0,
nextEnd: 4,
last: 15,
},
{
initial: [][2]uint64{{5, 15}},
start: 5,
end: 20,
expected: "[[5 20]]",
nextStart: 0,
nextEnd: 4,
last: 20,
},
{
initial: [][2]uint64{{5, 15}},
start: 10,
end: 20,
expected: "[[5 20]]",
nextStart: 0,
nextEnd: 4,
last: 20,
},
{
initial: [][2]uint64{{5, 15}},
start: 0,
end: 20,
expected: "[[0 20]]",
nextStart: 21,
nextEnd: 0,
last: 20,
},
{
initial: [][2]uint64{{5, 15}},
start: 2,
end: 10,
expected: "[[2 15]]",
nextStart: 0,
nextEnd: 1,
last: 15,
},
{
initial: [][2]uint64{{5, 15}},
start: 2,
end: 4,
expected: "[[2 15]]",
nextStart: 0,
nextEnd: 1,
last: 15,
},
{
initial: [][2]uint64{{5, 15}},
start: 2,
end: 5,
expected: "[[2 15]]",
nextStart: 0,
nextEnd: 1,
last: 15,
},
{
initial: [][2]uint64{{5, 15}},
start: 2,
end: 3,
expected: "[[2 3] [5 15]]",
nextStart: 0,
nextEnd: 1,
last: 15,
},
{
initial: [][2]uint64{{5, 15}},
start: 2,
end: 4,
expected: "[[2 15]]",
nextStart: 0,
nextEnd: 1,
last: 15,
},
{
initial: [][2]uint64{{0, 1}, {5, 15}},
start: 2,
end: 4,
expected: "[[0 15]]",
nextStart: 16,
nextEnd: 0,
last: 15,
},
{
initial: [][2]uint64{{0, 5}, {15, 20}},
start: 2,
end: 10,
expected: "[[0 10] [15 20]]",
nextStart: 11,
nextEnd: 14,
last: 20,
},
{
initial: [][2]uint64{{0, 5}, {15, 20}},
start: 8,
end: 18,
expected: "[[0 5] [8 20]]",
nextStart: 6,
nextEnd: 7,
last: 20,
},
{
initial: [][2]uint64{{0, 5}, {15, 20}},
start: 2,
end: 17,
expected: "[[0 20]]",
nextStart: 21,
nextEnd: 0,
last: 20,
},
{
initial: [][2]uint64{{0, 5}, {15, 20}},
start: 2,
end: 25,
expected: "[[0 25]]",
nextStart: 26,
nextEnd: 0,
last: 25,
},
{
initial: [][2]uint64{{0, 5}, {15, 20}},
start: 5,
end: 14,
expected: "[[0 20]]",
nextStart: 21,
nextEnd: 0,
last: 20,
},
{
initial: [][2]uint64{{0, 5}, {15, 20}},
start: 6,
end: 14,
expected: "[[0 20]]",
nextStart: 21,
nextEnd: 0,
last: 20,
},
{
initial: [][2]uint64{{0, 5}, {15, 20}, {30, 40}},
start: 6,
end: 29,
expected: "[[0 40]]",
nextStart: 41,
nextEnd: 0,
last: 40,
},
{
initial: [][2]uint64{{0, 5}, {15, 20}, {30, 40}, {50, 60}},
start: 3,
end: 55,
expected: "[[0 60]]",
nextStart: 61,
nextEnd: 0,
last: 60,
},
{
initial: [][2]uint64{{0, 5}, {15, 20}, {30, 40}, {50, 60}},
start: 21,
end: 49,
expected: "[[0 5] [15 60]]",
nextStart: 6,
nextEnd: 14,
last: 60,
},
{
initial: [][2]uint64{{0, 5}, {15, 20}, {30, 40}, {50, 60}},
start: 0,
end: 100,
expected: "[[0 100]]",
nextStart: 101,
nextEnd: 0,
last: 100,
},
{
startLimit: 100,
initial: nil,
start: 0,
end: 0,
expected: "[]",
nextStart: 100,
nextEnd: 0,
last: 0,
},
{
startLimit: 100,
initial: nil,
start: 20,
end: 30,
expected: "[]",
nextStart: 100,
nextEnd: 0,
last: 0,
},
{
startLimit: 100,
initial: nil,
start: 50,
end: 100,
expected: "[[100 100]]",
nextStart: 101,
nextEnd: 0,
last: 100,
},
{
startLimit: 100,
initial: nil,
start: 50,
end: 110,
expected: "[[100 110]]",
nextStart: 111,
nextEnd: 0,
last: 110,
},
{
startLimit: 100,
initial: nil,
start: 120,
end: 130,
expected: "[[120 130]]",
nextStart: 100,
nextEnd: 119,
last: 130,
},
{
startLimit: 100,
initial: nil,
start: 120,
end: 130,
expected: "[[120 130]]",
nextStart: 100,
nextEnd: 119,
last: 130,
},
} {
intervals := NewIntervals(tc.startLimit)
intervals.ranges = tc.initial
intervals.Add(tc.start, tc.end)
got := intervals.String()
if got != tc.expected {
t.Errorf("interval #%d: expected %s, got %s", i, tc.expected, got)
}
nextStart, nextEnd := intervals.Next()
if nextStart != tc.nextStart {
t.Errorf("interval #%d, expected next start %d, got %d", i, tc.nextStart, nextStart)
}
if nextEnd != tc.nextEnd {
t.Errorf("interval #%d, expected next end %d, got %d", i, tc.nextEnd, nextEnd)
}
last := intervals.Last()
if last != tc.last {
t.Errorf("interval #%d, expected last %d, got %d", i, tc.last, last)
}
}
}
func TestMerge(t *testing.T) {
for i, tc := range []struct {
initial [][2]uint64
merge [][2]uint64
expected string
}{
{
initial: nil,
merge: nil,
expected: "[]",
},
{
initial: [][2]uint64{{10, 20}},
merge: nil,
expected: "[[10 20]]",
},
{
initial: nil,
merge: [][2]uint64{{15, 25}},
expected: "[[15 25]]",
},
{
initial: [][2]uint64{{0, 100}},
merge: [][2]uint64{{150, 250}},
expected: "[[0 100] [150 250]]",
},
{
initial: [][2]uint64{{0, 100}},
merge: [][2]uint64{{101, 250}},
expected: "[[0 250]]",
},
{
initial: [][2]uint64{{0, 10}, {30, 40}},
merge: [][2]uint64{{20, 25}, {41, 50}},
expected: "[[0 10] [20 25] [30 50]]",
},
{
initial: [][2]uint64{{0, 5}, {15, 20}, {30, 40}, {50, 60}},
merge: [][2]uint64{{6, 25}},
expected: "[[0 25] [30 40] [50 60]]",
},
} {
intervals := NewIntervals(0)
intervals.ranges = tc.initial
m := NewIntervals(0)
m.ranges = tc.merge
intervals.Merge(m)
got := intervals.String()
if got != tc.expected {
t.Errorf("interval #%d: expected %s, got %s", i, tc.expected, got)
}
}
}

View File

@ -0,0 +1,77 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package intervals
import (
"testing"
"github.com/ethersphere/swarm/state"
)
// TestInmemoryStore tests basic functionality of InmemoryStore.
func TestInmemoryStore(t *testing.T) {
testStore(t, state.NewInmemoryStore())
}
// testStore is a helper function to test various Store implementations.
func testStore(t *testing.T, s state.Store) {
key1 := "key1"
i1 := NewIntervals(0)
i1.Add(10, 20)
if err := s.Put(key1, i1); err != nil {
t.Fatal(err)
}
i := &Intervals{}
err := s.Get(key1, i)
if err != nil {
t.Fatal(err)
}
if i.String() != i1.String() {
t.Errorf("expected interval %s, got %s", i1, i)
}
key2 := "key2"
i2 := NewIntervals(0)
i2.Add(10, 20)
if err := s.Put(key2, i2); err != nil {
t.Fatal(err)
}
err = s.Get(key2, i)
if err != nil {
t.Fatal(err)
}
if i.String() != i2.String() {
t.Errorf("expected interval %s, got %s", i2, i)
}
if err := s.Delete(key1); err != nil {
t.Fatal(err)
}
if err := s.Get(key1, i); err != state.ErrNotFound {
t.Errorf("expected error %v, got %s", state.ErrNotFound, err)
}
if err := s.Get(key2, i); err != nil {
t.Errorf("expected error %v, got %s", nil, err)
}
if err := s.Delete(key2); err != nil {
t.Fatal(err)
}
if err := s.Get(key2, i); err != state.ErrNotFound {
t.Errorf("expected error %v, got %s", state.ErrNotFound, err)
}
}

View File

@ -0,0 +1,361 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"context"
"encoding/binary"
"errors"
"fmt"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/network/simulation"
"github.com/ethersphere/swarm/state"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/testutil"
)
func TestIntervalsLive(t *testing.T) {
testIntervals(t, true, nil, false)
testIntervals(t, true, nil, true)
}
func TestIntervalsHistory(t *testing.T) {
testIntervals(t, false, NewRange(9, 26), false)
testIntervals(t, false, NewRange(9, 26), true)
}
func TestIntervalsLiveAndHistory(t *testing.T) {
testIntervals(t, true, NewRange(9, 26), false)
testIntervals(t, true, NewRange(9, 26), true)
}
func testIntervals(t *testing.T, live bool, history *Range, skipCheck bool) {
nodes := 2
chunkCount := dataChunkCount
externalStreamName := "externalStream"
externalStreamSessionAt := uint64(50)
externalStreamMaxKeys := uint64(100)
sim := simulation.New(map[string]simulation.ServiceFunc{
"intervalsStreamer": func(ctx *adapters.ServiceContext, bucket *sync.Map) (node.Service, func(), error) {
addr, netStore, delivery, clean, err := newNetStoreAndDelivery(ctx, bucket)
if err != nil {
return nil, nil, err
}
r := NewRegistry(addr.ID(), delivery, netStore, state.NewInmemoryStore(), &RegistryOptions{
Syncing: SyncingRegisterOnly,
SkipCheck: skipCheck,
}, nil)
bucket.Store(bucketKeyRegistry, r)
r.RegisterClientFunc(externalStreamName, func(p *Peer, t string, live bool) (Client, error) {
return newTestExternalClient(netStore), nil
})
r.RegisterServerFunc(externalStreamName, func(p *Peer, t string, live bool) (Server, error) {
return newTestExternalServer(t, externalStreamSessionAt, externalStreamMaxKeys, nil), nil
})
cleanup := func() {
r.Close()
clean()
}
return r, cleanup, nil
},
})
defer sim.Close()
log.Info("Adding nodes to simulation")
_, err := sim.AddNodesAndConnectChain(nodes)
if err != nil {
t.Fatal(err)
}
ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
defer cancel()
if _, err := sim.WaitTillHealthy(ctx); err != nil {
t.Fatal(err)
}
result := sim.Run(ctx, func(ctx context.Context, sim *simulation.Simulation) (err error) {
nodeIDs := sim.UpNodeIDs()
storer := nodeIDs[0]
checker := nodeIDs[1]
item, ok := sim.NodeItem(storer, bucketKeyFileStore)
if !ok {
return fmt.Errorf("No filestore")
}
fileStore := item.(*storage.FileStore)
size := chunkCount * chunkSize
_, wait, err := fileStore.Store(ctx, testutil.RandomReader(1, size), int64(size), false)
if err != nil {
return fmt.Errorf("store: %v", err)
}
err = wait(ctx)
if err != nil {
return fmt.Errorf("wait store: %v", err)
}
item, ok = sim.NodeItem(checker, bucketKeyRegistry)
if !ok {
return fmt.Errorf("No registry")
}
registry := item.(*Registry)
liveErrC := make(chan error)
historyErrC := make(chan error)
err = registry.Subscribe(storer, NewStream(externalStreamName, "", live), history, Top)
if err != nil {
return err
}
disconnected := watchDisconnections(ctx, sim)
defer func() {
if err != nil && disconnected.bool() {
err = errors.New("disconnect events received")
}
}()
go func() {
if !live {
close(liveErrC)
return
}
var err error
defer func() {
liveErrC <- err
}()
// live stream
var liveHashesChan chan []byte
liveHashesChan, err = getHashes(ctx, registry, storer, NewStream(externalStreamName, "", true))
if err != nil {
log.Error("get hashes", "err", err)
return
}
i := externalStreamSessionAt
// we have subscribed, enable notifications
err = enableNotifications(registry, storer, NewStream(externalStreamName, "", true))
if err != nil {
return
}
for {
select {
case hash := <-liveHashesChan:
h := binary.BigEndian.Uint64(hash)
if h != i {
err = fmt.Errorf("expected live hash %d, got %d", i, h)
return
}
i++
if i > externalStreamMaxKeys {
return
}
case <-ctx.Done():
return
}
}
}()
go func() {
if live && history == nil {
close(historyErrC)
return
}
var err error
defer func() {
historyErrC <- err
}()
// history stream
var historyHashesChan chan []byte
historyHashesChan, err = getHashes(ctx, registry, storer, NewStream(externalStreamName, "", false))
if err != nil {
log.Error("get hashes", "err", err)
return
}
var i uint64
historyTo := externalStreamMaxKeys
if history != nil {
i = history.From
if history.To != 0 {
historyTo = history.To
}
}
// we have subscribed, enable notifications
err = enableNotifications(registry, storer, NewStream(externalStreamName, "", false))
if err != nil {
return
}
for {
select {
case hash := <-historyHashesChan:
h := binary.BigEndian.Uint64(hash)
if h != i {
err = fmt.Errorf("expected history hash %d, got %d", i, h)
return
}
i++
if i > historyTo {
return
}
case <-ctx.Done():
return
}
}
}()
if err := <-liveErrC; err != nil {
return err
}
if err := <-historyErrC; err != nil {
return err
}
return nil
})
if result.Error != nil {
t.Fatal(result.Error)
}
}
func getHashes(ctx context.Context, r *Registry, peerID enode.ID, s Stream) (chan []byte, error) {
peer := r.getPeer(peerID)
client, err := peer.getClient(ctx, s)
if err != nil {
return nil, err
}
c := client.Client.(*testExternalClient)
return c.hashes, nil
}
func enableNotifications(r *Registry, peerID enode.ID, s Stream) error {
peer := r.getPeer(peerID)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
client, err := peer.getClient(ctx, s)
if err != nil {
return err
}
close(client.Client.(*testExternalClient).enableNotificationsC)
return nil
}
type testExternalClient struct {
hashes chan []byte
netStore *storage.NetStore
enableNotificationsC chan struct{}
}
func newTestExternalClient(netStore *storage.NetStore) *testExternalClient {
return &testExternalClient{
hashes: make(chan []byte),
netStore: netStore,
enableNotificationsC: make(chan struct{}),
}
}
func (c *testExternalClient) NeedData(ctx context.Context, hash []byte) func(context.Context) error {
wait := c.netStore.FetchFunc(ctx, storage.Address(hash))
if wait == nil {
return nil
}
select {
case c.hashes <- hash:
case <-ctx.Done():
log.Warn("testExternalClient NeedData context", "err", ctx.Err())
return func(_ context.Context) error {
return ctx.Err()
}
}
return wait
}
func (c *testExternalClient) BatchDone(Stream, uint64, []byte, []byte) func() (*TakeoverProof, error) {
return nil
}
func (c *testExternalClient) Close() {}
type testExternalServer struct {
t string
keyFunc func(key []byte, index uint64)
sessionAt uint64
maxKeys uint64
}
func newTestExternalServer(t string, sessionAt, maxKeys uint64, keyFunc func(key []byte, index uint64)) *testExternalServer {
if keyFunc == nil {
keyFunc = binary.BigEndian.PutUint64
}
return &testExternalServer{
t: t,
keyFunc: keyFunc,
sessionAt: sessionAt,
maxKeys: maxKeys,
}
}
func (s *testExternalServer) SessionIndex() (uint64, error) {
return s.sessionAt, nil
}
func (s *testExternalServer) SetNextBatch(from uint64, to uint64) ([]byte, uint64, uint64, *HandoverProof, error) {
if to > s.maxKeys {
to = s.maxKeys
}
b := make([]byte, HashSize*(to-from+1))
for i := from; i <= to; i++ {
s.keyFunc(b[(i-from)*HashSize:(i-from+1)*HashSize], i)
}
return b, from, to, nil, nil
}
func (s *testExternalServer) GetData(context.Context, []byte) ([]byte, error) {
return make([]byte, 4096), nil
}
func (s *testExternalServer) Close() {}

View File

@ -0,0 +1,129 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"testing"
p2ptest "github.com/ethereum/go-ethereum/p2p/testing"
)
// This test checks the default behavior of the server, that is
// when syncing is enabled.
func TestLightnodeRequestSubscriptionWithSync(t *testing.T) {
registryOptions := &RegistryOptions{
Syncing: SyncingRegisterOnly,
}
tester, _, _, teardown, err := newStreamerTester(registryOptions)
if err != nil {
t.Fatal(err)
}
defer teardown()
node := tester.Nodes[0]
syncStream := NewStream("SYNC", FormatSyncBinKey(1), false)
err = tester.TestExchanges(
p2ptest.Exchange{
Label: "RequestSubscription",
Triggers: []p2ptest.Trigger{
{
Code: 8,
Msg: &RequestSubscriptionMsg{
Stream: syncStream,
},
Peer: node.ID(),
},
},
Expects: []p2ptest.Expect{
{
Code: 4,
Msg: &SubscribeMsg{
Stream: syncStream,
},
Peer: node.ID(),
},
},
})
if err != nil {
t.Fatalf("Got %v", err)
}
}
// This test checks the Lightnode behavior of the server, that is
// when syncing is disabled.
func TestLightnodeRequestSubscriptionWithoutSync(t *testing.T) {
registryOptions := &RegistryOptions{
Syncing: SyncingDisabled,
}
tester, _, _, teardown, err := newStreamerTester(registryOptions)
if err != nil {
t.Fatal(err)
}
defer teardown()
node := tester.Nodes[0]
syncStream := NewStream("SYNC", FormatSyncBinKey(1), false)
err = tester.TestExchanges(p2ptest.Exchange{
Label: "RequestSubscription",
Triggers: []p2ptest.Trigger{
{
Code: 8,
Msg: &RequestSubscriptionMsg{
Stream: syncStream,
},
Peer: node.ID(),
},
},
Expects: []p2ptest.Expect{
{
Code: 7,
Msg: &SubscribeErrorMsg{
Error: "stream SYNC not registered",
},
Peer: node.ID(),
},
},
}, p2ptest.Exchange{
Label: "RequestSubscription",
Triggers: []p2ptest.Trigger{
{
Code: 4,
Msg: &SubscribeMsg{
Stream: syncStream,
},
Peer: node.ID(),
},
},
Expects: []p2ptest.Expect{
{
Code: 7,
Msg: &SubscribeErrorMsg{
Error: "stream SYNC not registered",
},
Peer: node.ID(),
},
},
})
if err != nil {
t.Fatalf("Got %v", err)
}
}

417
network/stream/messages.go Normal file
View File

@ -0,0 +1,417 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"context"
"fmt"
"time"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethersphere/swarm/log"
bv "github.com/ethersphere/swarm/network/bitvector"
"github.com/ethersphere/swarm/storage"
)
var syncBatchTimeout = 30 * time.Second
// Stream defines a unique stream identifier.
type Stream struct {
// Name is used for Client and Server functions identification.
Name string
// Key is the name of specific stream data.
Key string
// Live defines whether the stream delivers only new data
// for the specific stream.
Live bool
}
func NewStream(name string, key string, live bool) Stream {
return Stream{
Name: name,
Key: key,
Live: live,
}
}
// String returns a stream id based on all Stream fields.
func (s Stream) String() string {
t := "h"
if s.Live {
t = "l"
}
return fmt.Sprintf("%s|%s|%s", s.Name, s.Key, t)
}
// SubscribeMsg is the protocol msg for requesting a stream (section)
type SubscribeMsg struct {
Stream Stream
History *Range `rlp:"nil"`
Priority uint8 // delivered on priority channel
}
// RequestSubscriptionMsg is the protocol msg for a node to request subscription to a
// specific stream
type RequestSubscriptionMsg struct {
Stream Stream
History *Range `rlp:"nil"`
Priority uint8 // delivered on priority channel
}
func (p *Peer) handleRequestSubscription(ctx context.Context, req *RequestSubscriptionMsg) (err error) {
log.Debug(fmt.Sprintf("handleRequestSubscription: streamer %s to subscribe to %s with stream %s", p.streamer.addr, p.ID(), req.Stream))
if err = p.streamer.Subscribe(p.ID(), req.Stream, req.History, req.Priority); err != nil {
// The error is sent as a subscribe error message
// and is not returned, as returning it would prevent any further message
// exchange between the peers over p2p. Instead, an error is returned
// only if sending the subscribe error message itself fails.
err = p.Send(ctx, SubscribeErrorMsg{
Error: err.Error(),
})
}
return err
}
func (p *Peer) handleSubscribeMsg(ctx context.Context, req *SubscribeMsg) (err error) {
metrics.GetOrRegisterCounter("peer.handlesubscribemsg", nil).Inc(1)
defer func() {
if err != nil {
// The error is sent as a subscribe error message
// and is not returned, as returning it would prevent any further message
// exchange between the peers over p2p. Instead, an error is returned
// only if sending the subscribe error message itself fails.
err = p.Send(context.TODO(), SubscribeErrorMsg{
Error: err.Error(),
})
}
}()
log.Debug("received subscription", "from", p.streamer.addr, "peer", p.ID(), "stream", req.Stream, "history", req.History)
f, err := p.streamer.GetServerFunc(req.Stream.Name)
if err != nil {
return err
}
s, err := f(p, req.Stream.Key, req.Stream.Live)
if err != nil {
return err
}
os, err := p.setServer(req.Stream, s, req.Priority)
if err != nil {
return err
}
var from uint64
var to uint64
if !req.Stream.Live && req.History != nil {
from = req.History.From
to = req.History.To
}
go func() {
if err := p.SendOfferedHashes(os, from, to); err != nil {
log.Warn("SendOfferedHashes error", "peer", p.ID().TerminalString(), "err", err)
}
}()
if req.Stream.Live && req.History != nil {
// subscribe to the history stream
s, err := f(p, req.Stream.Key, false)
if err != nil {
return err
}
os, err := p.setServer(getHistoryStream(req.Stream), s, getHistoryPriority(req.Priority))
if err != nil {
return err
}
go func() {
if err := p.SendOfferedHashes(os, req.History.From, req.History.To); err != nil {
log.Warn("SendOfferedHashes error", "peer", p.ID().TerminalString(), "err", err)
}
}()
}
return nil
}
type SubscribeErrorMsg struct {
Error string
}
func (p *Peer) handleSubscribeErrorMsg(req *SubscribeErrorMsg) (err error) {
//TODO the error should be channeled to whoever calls the subscribe
return fmt.Errorf("subscribe to peer %s: %v", p.ID(), req.Error)
}
type UnsubscribeMsg struct {
Stream Stream
}
func (p *Peer) handleUnsubscribeMsg(req *UnsubscribeMsg) error {
return p.removeServer(req.Stream)
}
type QuitMsg struct {
Stream Stream
}
func (p *Peer) handleQuitMsg(req *QuitMsg) error {
err := p.removeClient(req.Stream)
if _, ok := err.(*notFoundError); ok {
return nil
}
return err
}
// OfferedHashesMsg is the protocol msg for offering to hand over a
// stream section
type OfferedHashesMsg struct {
Stream Stream // name of Stream
From, To uint64 // peer and db-specific entry count
Hashes []byte // stream of hashes (128)
*HandoverProof // HandoverProof
}
// String pretty prints OfferedHashesMsg
func (m OfferedHashesMsg) String() string {
return fmt.Sprintf("Stream '%v' [%v-%v] (%v)", m.Stream, m.From, m.To, len(m.Hashes)/HashSize)
}
// handleOfferedHashesMsg protocol msg handler calls the incoming streamer interface
// NeedData method for each offered hash to determine which chunks are wanted
func (p *Peer) handleOfferedHashesMsg(ctx context.Context, req *OfferedHashesMsg) error {
metrics.GetOrRegisterCounter("peer.handleofferedhashes", nil).Inc(1)
c, _, err := p.getOrSetClient(req.Stream, req.From, req.To)
if err != nil {
return err
}
hashes := req.Hashes
lenHashes := len(hashes)
if lenHashes%HashSize != 0 {
return fmt.Errorf("error invalid hashes length (len: %v)", lenHashes)
}
want, err := bv.New(lenHashes / HashSize)
if err != nil {
return fmt.Errorf("error initiaising bitvector of length %v: %v", lenHashes/HashSize, err)
}
var wantDelaySet bool
var wantDelay time.Time
ctr := 0
errC := make(chan error)
ctx, cancel := context.WithTimeout(ctx, syncBatchTimeout)
ctx = context.WithValue(ctx, "source", p.ID().String())
for i := 0; i < lenHashes; i += HashSize {
hash := hashes[i : i+HashSize]
if wait := c.NeedData(ctx, hash); wait != nil {
ctr++
want.Set(i/HashSize, true)
// measure how long it takes before we mark chunks for retrieval, and actually send the request
if !wantDelaySet {
wantDelaySet = true
wantDelay = time.Now()
}
// create request and wait until the chunk data arrives and is stored
go func(w func(context.Context) error) {
select {
case errC <- w(ctx):
case <-ctx.Done():
}
}(wait)
}
}
go func() {
defer cancel()
for i := 0; i < ctr; i++ {
select {
case err := <-errC:
if err != nil {
log.Debug("client.handleOfferedHashesMsg() error waiting for chunk, dropping peer", "peer", p.ID(), "err", err)
p.Drop()
return
}
case <-ctx.Done():
log.Debug("client.handleOfferedHashesMsg() context done", "ctx.Err()", ctx.Err())
return
case <-c.quit:
log.Debug("client.handleOfferedHashesMsg() quit")
return
}
}
select {
case c.next <- c.batchDone(p, req, hashes):
case <-c.quit:
log.Debug("client.handleOfferedHashesMsg() quit")
case <-ctx.Done():
log.Debug("client.handleOfferedHashesMsg() context done", "ctx.Err()", ctx.Err())
}
}()
// only send wantedKeysMsg if all missing chunks of the previous batch arrived,
// except for the very first batch, where c.next is pre-filled so that wanted
// keys can be requested before any batch completes
if c.stream.Live {
c.sessionAt = req.From
}
from, to := c.nextBatch(req.To + 1)
log.Trace("set next batch", "peer", p.ID(), "stream", req.Stream, "from", req.From, "to", req.To, "addr", p.streamer.addr)
if from == to {
return nil
}
msg := &WantedHashesMsg{
Stream: req.Stream,
Want: want.Bytes(),
From: from,
To: to,
}
log.Trace("sending want batch", "peer", p.ID(), "stream", msg.Stream, "from", msg.From, "to", msg.To)
select {
case err := <-c.next:
if err != nil {
log.Warn("c.next error dropping peer", "err", err)
p.Drop()
return err
}
case <-c.quit:
log.Debug("client.handleOfferedHashesMsg() quit")
return nil
case <-ctx.Done():
log.Debug("client.handleOfferedHashesMsg() context done", "ctx.Err()", ctx.Err())
return nil
}
log.Trace("sending want batch", "peer", p.ID(), "stream", msg.Stream, "from", msg.From, "to", msg.To)
// record want delay
if wantDelaySet {
metrics.GetOrRegisterResettingTimer("handleoffered.wantdelay", nil).UpdateSince(wantDelay)
}
err = p.SendPriority(ctx, msg, c.priority)
if err != nil {
log.Warn("SendPriority error", "err", err)
}
return nil
}
// WantedHashesMsg is the protocol msg data for signaling which of the hashes
// offered in OfferedHashesMsg the downstream peer actually wants sent over
type WantedHashesMsg struct {
Stream Stream
Want []byte // bitvector indicating which keys of the batch needed
From, To uint64 // next interval offset - empty if not to be continued
}
// String pretty prints WantedHashesMsg
func (m WantedHashesMsg) String() string {
return fmt.Sprintf("Stream '%v', Want: %x, Next: [%v-%v]", m.Stream, m.Want, m.From, m.To)
}
// handleWantedHashesMsg protocol msg handler
// * sends the next batch of unsynced keys
// * sends the actual data chunks as per WantedHashesMsg
func (p *Peer) handleWantedHashesMsg(ctx context.Context, req *WantedHashesMsg) error {
metrics.GetOrRegisterCounter("peer.handlewantedhashesmsg", nil).Inc(1)
log.Trace("received wanted batch", "peer", p.ID(), "stream", req.Stream, "from", req.From, "to", req.To)
s, err := p.getServer(req.Stream)
if err != nil {
return err
}
hashes := s.currentBatch
// launch in go routine since GetBatch blocks until new hashes arrive
go func() {
if err := p.SendOfferedHashes(s, req.From, req.To); err != nil {
log.Warn("SendOfferedHashes error", "peer", p.ID().TerminalString(), "err", err)
}
}()
// go p.SendOfferedHashes(s, req.From, req.To)
l := len(hashes) / HashSize
log.Trace("wanted batch length", "peer", p.ID(), "stream", req.Stream, "from", req.From, "to", req.To, "lenhashes", len(hashes), "l", l)
want, err := bv.NewFromBytes(req.Want, l)
if err != nil {
return fmt.Errorf("error initiaising bitvector of length %v: %v", l, err)
}
for i := 0; i < l; i++ {
if want.Get(i) {
metrics.GetOrRegisterCounter("peer.handlewantedhashesmsg.actualget", nil).Inc(1)
hash := hashes[i*HashSize : (i+1)*HashSize]
data, err := s.GetData(ctx, hash)
if err != nil {
return fmt.Errorf("handleWantedHashesMsg get data %x: %v", hash, err)
}
chunk := storage.NewChunk(hash, data)
syncing := true
if err := p.Deliver(ctx, chunk, s.priority, syncing); err != nil {
return err
}
}
}
return nil
}
// Handover represents a statement that the upstream peer hands over the stream section
type Handover struct {
Stream Stream // name of stream
Start, End uint64 // index of hashes
Root []byte // Root hash for indexed segment inclusion proofs
}
// HandoverProof represents a signed statement that the upstream peer handed over the stream section
type HandoverProof struct {
Sig []byte // Sign(Hash(Serialisation(Handover)))
*Handover
}
// Takeover represents a statement that downstream peer took over (stored all data)
// handed over
type Takeover Handover
// TakeoverProof represents a signed statement that the downstream peer took over
// the stream section
type TakeoverProof struct {
Sig []byte // Sign(Hash(Serialisation(Takeover)))
*Takeover
}
// TakeoverProofMsg is the protocol msg sent by downstream peer
type TakeoverProofMsg TakeoverProof
// String pretty prints TakeoverProofMsg
func (m TakeoverProofMsg) String() string {
return fmt.Sprintf("Stream: '%v' [%v-%v], Root: %x, Sig: %x", m.Stream, m.Start, m.End, m.Root, m.Sig)
}
func (p *Peer) handleTakeoverProofMsg(ctx context.Context, req *TakeoverProofMsg) error {
_, err := p.getServer(req.Stream)
// store the strongest takeoverproof for the stream in streamer
return err
}
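
As a small orientation aid (not part of the commit), the sketch below shows how the stream identifiers produced by NewStream and Stream.String above look; the history variant of a stream differs only in the Live flag, which String renders as "h" instead of "l". The helper name is hypothetical and assumed to live in the stream package.

```go
// streamIDsSketch is a hypothetical helper illustrating Stream identifiers.
func streamIDsSketch() {
	live := NewStream("SYNC", "6", true)
	hist := NewStream("SYNC", "6", false)
	fmt.Println(live.String()) // SYNC|6|l
	fmt.Println(hist.String()) // SYNC|6|h
}
```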

588
network/stream/peer.go Normal file
View File

@ -0,0 +1,588 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"context"
"errors"
"fmt"
"sync"
"time"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/network"
pq "github.com/ethersphere/swarm/network/priorityqueue"
"github.com/ethersphere/swarm/network/stream/intervals"
"github.com/ethersphere/swarm/spancontext"
"github.com/ethersphere/swarm/state"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/tracing"
opentracing "github.com/opentracing/opentracing-go"
)
type notFoundError struct {
t string
s Stream
}
func newNotFoundError(t string, s Stream) *notFoundError {
return &notFoundError{t: t, s: s}
}
func (e *notFoundError) Error() string {
return fmt.Sprintf("%s not found for stream %q", e.t, e.s)
}
// ErrMaxPeerServers will be returned if peer server limit is reached.
// It will be sent in the SubscribeErrorMsg.
var ErrMaxPeerServers = errors.New("max peer servers")
// Peer is the Peer extension for the streaming protocol
type Peer struct {
*network.BzzPeer
streamer *Registry
pq *pq.PriorityQueue
serverMu sync.RWMutex
clientMu sync.RWMutex // protects both clients and clientParams
servers map[Stream]*server
clients map[Stream]*client
// clientParams map keeps required client arguments
// that are set on Registry.Subscribe and used
// on creating a new client in offered hashes handler.
clientParams map[Stream]*clientParams
quit chan struct{}
}
type WrappedPriorityMsg struct {
Context context.Context
Msg interface{}
}
// NewPeer is the constructor for Peer
func NewPeer(peer *network.BzzPeer, streamer *Registry) *Peer {
p := &Peer{
BzzPeer: peer,
pq: pq.New(int(PriorityQueue), PriorityQueueCap),
streamer: streamer,
servers: make(map[Stream]*server),
clients: make(map[Stream]*client),
clientParams: make(map[Stream]*clientParams),
quit: make(chan struct{}),
}
ctx, cancel := context.WithCancel(context.Background())
go p.pq.Run(ctx, func(i interface{}) {
wmsg := i.(WrappedPriorityMsg)
err := p.Send(wmsg.Context, wmsg.Msg)
if err != nil {
log.Error("Message send error, dropping peer", "peer", p.ID(), "err", err)
p.Drop()
}
})
// basic monitoring for pq contention
go func(pq *pq.PriorityQueue) {
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
for {
select {
case <-ticker.C:
var lenMaxi int
var capMaxi int
for k := range pq.Queues {
if lenMaxi < len(pq.Queues[k]) {
lenMaxi = len(pq.Queues[k])
}
if capMaxi < cap(pq.Queues[k]) {
capMaxi = cap(pq.Queues[k])
}
}
metrics.GetOrRegisterGauge(fmt.Sprintf("pq_len_%s", p.ID().TerminalString()), nil).Update(int64(lenMaxi))
metrics.GetOrRegisterGauge(fmt.Sprintf("pq_cap_%s", p.ID().TerminalString()), nil).Update(int64(capMaxi))
case <-p.quit:
return
}
}
}(p.pq)
go func() {
<-p.quit
cancel()
}()
return p
}
// Deliver sends a storeRequestMsg protocol message to the peer
// Depending on the `syncing` parameter we send different message types
func (p *Peer) Deliver(ctx context.Context, chunk storage.Chunk, priority uint8, syncing bool) error {
var msg interface{}
metrics.GetOrRegisterCounter("peer.deliver", nil).Inc(1)
// we send different types of messages depending on whether the delivery is for syncing or retrieval,
// even if handling and content of the message are the same,
// because swap accounting decides which messages need accounting based on the message type
if syncing {
msg = &ChunkDeliveryMsgSyncing{
Addr: chunk.Address(),
SData: chunk.Data(),
}
} else {
msg = &ChunkDeliveryMsgRetrieval{
Addr: chunk.Address(),
SData: chunk.Data(),
}
}
return p.SendPriority(ctx, msg, priority)
}
// SendPriority sends message to the peer using the outgoing priority queue
func (p *Peer) SendPriority(ctx context.Context, msg interface{}, priority uint8) error {
defer metrics.GetOrRegisterResettingTimer(fmt.Sprintf("peer.sendpriority_t.%d", priority), nil).UpdateSince(time.Now())
ctx = tracing.StartSaveSpan(ctx)
metrics.GetOrRegisterCounter(fmt.Sprintf("peer.sendpriority.%d", priority), nil).Inc(1)
wmsg := WrappedPriorityMsg{
Context: ctx,
Msg: msg,
}
err := p.pq.Push(wmsg, int(priority))
if err != nil {
log.Error("err on p.pq.Push", "err", err, "peer", p.ID())
}
return err
}
// SendOfferedHashes sends OfferedHashesMsg protocol msg
func (p *Peer) SendOfferedHashes(s *server, f, t uint64) error {
var sp opentracing.Span
ctx, sp := spancontext.StartSpan(
context.TODO(),
"send.offered.hashes",
)
defer sp.Finish()
defer metrics.GetOrRegisterResettingTimer("send.offered.hashes", nil).UpdateSince(time.Now())
hashes, from, to, proof, err := s.setNextBatch(f, t)
if err != nil {
return err
}
// true only when quitting
if len(hashes) == 0 {
return nil
}
if proof == nil {
proof = &HandoverProof{
Handover: &Handover{},
}
}
s.currentBatch = hashes
msg := &OfferedHashesMsg{
HandoverProof: proof,
Hashes: hashes,
From: from,
To: to,
Stream: s.stream,
}
log.Trace("Swarm syncer offer batch", "peer", p.ID(), "stream", s.stream, "len", len(hashes), "from", from, "to", to)
ctx = context.WithValue(ctx, "stream_send_tag", "send.offered.hashes")
return p.SendPriority(ctx, msg, s.priority)
}
func (p *Peer) getServer(s Stream) (*server, error) {
p.serverMu.RLock()
defer p.serverMu.RUnlock()
server := p.servers[s]
if server == nil {
return nil, newNotFoundError("server", s)
}
return server, nil
}
func (p *Peer) setServer(s Stream, o Server, priority uint8) (*server, error) {
p.serverMu.Lock()
defer p.serverMu.Unlock()
if p.servers[s] != nil {
return nil, fmt.Errorf("server %s already registered", s)
}
if p.streamer.maxPeerServers > 0 && len(p.servers) >= p.streamer.maxPeerServers {
return nil, ErrMaxPeerServers
}
sessionIndex, err := o.SessionIndex()
if err != nil {
return nil, err
}
os := &server{
Server: o,
stream: s,
priority: priority,
sessionIndex: sessionIndex,
}
p.servers[s] = os
return os, nil
}
func (p *Peer) removeServer(s Stream) error {
p.serverMu.Lock()
defer p.serverMu.Unlock()
server, ok := p.servers[s]
if !ok {
return newNotFoundError("server", s)
}
server.Close()
delete(p.servers, s)
return nil
}
func (p *Peer) getClient(ctx context.Context, s Stream) (c *client, err error) {
var params *clientParams
func() {
p.clientMu.RLock()
defer p.clientMu.RUnlock()
c = p.clients[s]
if c != nil {
return
}
params = p.clientParams[s]
}()
if c != nil {
return c, nil
}
if params != nil {
//debug.PrintStack()
if err := params.waitClient(ctx); err != nil {
return nil, err
}
}
p.clientMu.RLock()
defer p.clientMu.RUnlock()
c = p.clients[s]
if c != nil {
return c, nil
}
return nil, newNotFoundError("client", s)
}
func (p *Peer) getOrSetClient(s Stream, from, to uint64) (c *client, created bool, err error) {
p.clientMu.Lock()
defer p.clientMu.Unlock()
c = p.clients[s]
if c != nil {
return c, false, nil
}
f, err := p.streamer.GetClientFunc(s.Name)
if err != nil {
return nil, false, err
}
is, err := f(p, s.Key, s.Live)
if err != nil {
return nil, false, err
}
cp, err := p.getClientParams(s)
if err != nil {
return nil, false, err
}
defer func() {
if err == nil {
if err := p.removeClientParams(s); err != nil {
log.Error("stream set client: remove client params", "stream", s, "peer", p, "err", err)
}
}
}()
intervalsKey := peerStreamIntervalsKey(p, s)
if s.Live {
// try to find previous history and live intervals and merge live into history
historyKey := peerStreamIntervalsKey(p, NewStream(s.Name, s.Key, false))
historyIntervals := &intervals.Intervals{}
err := p.streamer.intervalsStore.Get(historyKey, historyIntervals)
switch err {
case nil:
liveIntervals := &intervals.Intervals{}
err := p.streamer.intervalsStore.Get(intervalsKey, liveIntervals)
switch err {
case nil:
historyIntervals.Merge(liveIntervals)
if err := p.streamer.intervalsStore.Put(historyKey, historyIntervals); err != nil {
log.Error("stream set client: put history intervals", "stream", s, "peer", p, "err", err)
}
case state.ErrNotFound:
default:
log.Error("stream set client: get live intervals", "stream", s, "peer", p, "err", err)
}
case state.ErrNotFound:
default:
log.Error("stream set client: get history intervals", "stream", s, "peer", p, "err", err)
}
}
if err := p.streamer.intervalsStore.Put(intervalsKey, intervals.NewIntervals(from)); err != nil {
return nil, false, err
}
next := make(chan error, 1)
c = &client{
Client: is,
stream: s,
priority: cp.priority,
to: cp.to,
next: next,
quit: make(chan struct{}),
intervalsStore: p.streamer.intervalsStore,
intervalsKey: intervalsKey,
}
p.clients[s] = c
cp.clientCreated() // unblock all possible getClient calls that are waiting
next <- nil // this is to allow wantedKeysMsg before first batch arrives
return c, true, nil
}
func (p *Peer) removeClient(s Stream) error {
p.clientMu.Lock()
defer p.clientMu.Unlock()
client, ok := p.clients[s]
if !ok {
return newNotFoundError("client", s)
}
client.close()
delete(p.clients, s)
return nil
}
func (p *Peer) setClientParams(s Stream, params *clientParams) error {
p.clientMu.Lock()
defer p.clientMu.Unlock()
if p.clients[s] != nil {
return fmt.Errorf("client %s already exists", s)
}
if p.clientParams[s] != nil {
return fmt.Errorf("client params %s already set", s)
}
p.clientParams[s] = params
return nil
}
func (p *Peer) getClientParams(s Stream) (*clientParams, error) {
params := p.clientParams[s]
if params == nil {
return nil, fmt.Errorf("client params '%v' not provided to peer %v", s, p.ID())
}
return params, nil
}
func (p *Peer) removeClientParams(s Stream) error {
_, ok := p.clientParams[s]
if !ok {
return newNotFoundError("client params", s)
}
delete(p.clientParams, s)
return nil
}
func (p *Peer) close() {
p.serverMu.Lock()
defer p.serverMu.Unlock()
for _, s := range p.servers {
s.Close()
}
p.servers = nil
}
// runUpdateSyncing is a long running function that creates the initial
// syncing subscriptions to the peer and waits for neighbourhood depth changes,
// creating new subscriptions or quitting existing ones based on the new depth
// and on whether the peer enters or leaves the nearest neighbourhood, using the
// syncSubscriptionsDiff and updateSyncSubscriptions functions.
func (p *Peer) runUpdateSyncing() {
timer := time.NewTimer(p.streamer.syncUpdateDelay)
defer timer.Stop()
select {
case <-timer.C:
case <-p.streamer.quit:
return
}
kad := p.streamer.delivery.kad
po := chunk.Proximity(p.BzzAddr.Over(), kad.BaseAddr())
depth := kad.NeighbourhoodDepth()
log.Debug("update syncing subscriptions: initial", "peer", p.ID(), "po", po, "depth", depth)
// initial subscriptions
p.updateSyncSubscriptions(syncSubscriptionsDiff(po, -1, depth, kad.MaxProxDisplay))
depthChangeSignal, unsubscribeDepthChangeSignal := kad.SubscribeToNeighbourhoodDepthChange()
defer unsubscribeDepthChangeSignal()
prevDepth := depth
for {
select {
case _, ok := <-depthChangeSignal:
if !ok {
return
}
// update subscriptions for this peer when depth changes
depth := kad.NeighbourhoodDepth()
log.Debug("update syncing subscriptions", "peer", p.ID(), "po", po, "depth", depth)
p.updateSyncSubscriptions(syncSubscriptionsDiff(po, prevDepth, depth, kad.MaxProxDisplay))
prevDepth = depth
case <-p.streamer.quit:
return
}
}
log.Debug("update syncing subscriptions: exiting", "peer", p.ID())
}
// updateSyncSubscriptions accepts two slices of integers, the first one
// representing proximity order bins for required syncing subscriptions
// and the second one representing bins for syncing subscriptions that
// need to be removed. This function sends request for subscription
// messages and quit messages for provided bins.
func (p *Peer) updateSyncSubscriptions(subBins, quitBins []int) {
if p.streamer.getPeer(p.ID()) == nil {
log.Debug("update syncing subscriptions", "peer not found", p.ID())
return
}
log.Debug("update syncing subscriptions", "peer", p.ID(), "subscribe", subBins, "quit", quitBins)
for _, po := range subBins {
p.subscribeSync(po)
}
for _, po := range quitBins {
p.quitSync(po)
}
}
// subscribeSync sends the request for syncing subscriptions to the peer
// using subscriptionFunc. This function is used to request syncing subscriptions
// when a new peer is added to the registry and on neighbourhood depth change.
func (p *Peer) subscribeSync(po int) {
err := subscriptionFunc(p.streamer, p.ID(), uint8(po))
if err != nil {
log.Error("subscription", "err", err)
}
}
// quitSync sends the quit message for live and history syncing streams to the peer.
// This function is used in runUpdateSyncing indirectly over updateSyncSubscriptions
// to remove unneeded syncing subscriptions on neighbourhood depth change.
func (p *Peer) quitSync(po int) {
live := NewStream("SYNC", FormatSyncBinKey(uint8(po)), true)
history := getHistoryStream(live)
err := p.streamer.Quit(p.ID(), live)
if err != nil && err != p2p.ErrShuttingDown {
log.Error("quit", "err", err, "peer", p.ID(), "stream", live)
}
err = p.streamer.Quit(p.ID(), history)
if err != nil && err != p2p.ErrShuttingDown {
log.Error("quit", "err", err, "peer", p.ID(), "stream", history)
}
err = p.removeServer(live)
if err != nil {
log.Error("remove server", "err", err, "peer", p.ID(), "stream", live)
}
err = p.removeServer(history)
if err != nil {
log.Error("remove server", "err", err, "peer", p.ID(), "stream", live)
}
}
// syncSubscriptionsDiff calculates to which proximity order bins a peer
// (with po peerPO) needs to be subscribed after kademlia neighbourhood depth
// change from prevDepth to newDepth. Max argument limits the number of
// proximity order bins. Returned values are slices of integers which represent
// proximity order bins, the first one to which additional subscriptions need to
// be requested and the second one which subscriptions need to be quit. Argument
// prevDepth with value less than 0 represents no previous depth, used for
// initial syncing subscriptions.
func syncSubscriptionsDiff(peerPO, prevDepth, newDepth, max int) (subBins, quitBins []int) {
newStart, newEnd := syncBins(peerPO, newDepth, max)
if prevDepth < 0 {
// no previous depth, return the complete range
// for subscriptions requests and nothing for quitting
return intRange(newStart, newEnd), nil
}
prevStart, prevEnd := syncBins(peerPO, prevDepth, max)
if newStart < prevStart {
subBins = append(subBins, intRange(newStart, prevStart)...)
}
if prevStart < newStart {
quitBins = append(quitBins, intRange(prevStart, newStart)...)
}
if newEnd < prevEnd {
quitBins = append(quitBins, intRange(newEnd, prevEnd)...)
}
if prevEnd < newEnd {
subBins = append(subBins, intRange(prevEnd, newEnd)...)
}
return subBins, quitBins
}
// syncBins returns the range to which proximity order bins syncing
// subscriptions need to be requested, based on peer proximity and
// kademlia neighbourhood depth. Returned range is [start,end), inclusive for
// start and exclusive for end.
func syncBins(peerPO, depth, max int) (start, end int) {
if peerPO < depth {
// subscribe only to peerPO bin if it is not
// in the nearest neighbourhood
return peerPO, peerPO + 1
}
// subscribe from depth to max bin if the peer
// is in the nearest neighbourhood
return depth, max + 1
}
// intRange returns the slice of integers [start,end). The start
// is inclusive and the end is not.
func intRange(start, end int) (r []int) {
for i := start; i < end; i++ {
r = append(r, i)
}
return r
}

309
network/stream/peer_test.go Normal file
View File

@ -0,0 +1,309 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"context"
"fmt"
"reflect"
"sort"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/network"
"github.com/ethersphere/swarm/network/simulation"
"github.com/ethersphere/swarm/state"
)
// TestSyncSubscriptionsDiff validates the output of syncSubscriptionsDiff
// function for various arguments.
func TestSyncSubscriptionsDiff(t *testing.T) {
max := network.NewKadParams().MaxProxDisplay
for _, tc := range []struct {
po, prevDepth, newDepth int
subBins, quitBins []int
}{
{
po: 0, prevDepth: -1, newDepth: 0,
subBins: []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16},
},
{
po: 1, prevDepth: -1, newDepth: 0,
subBins: []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16},
},
{
po: 2, prevDepth: -1, newDepth: 0,
subBins: []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16},
},
{
po: 0, prevDepth: -1, newDepth: 1,
subBins: []int{0},
},
{
po: 1, prevDepth: -1, newDepth: 1,
subBins: []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16},
},
{
po: 2, prevDepth: -1, newDepth: 2,
subBins: []int{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16},
},
{
po: 3, prevDepth: -1, newDepth: 2,
subBins: []int{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16},
},
{
po: 1, prevDepth: -1, newDepth: 2,
subBins: []int{1},
},
{
po: 0, prevDepth: 0, newDepth: 0, // 0-16 -> 0-16
},
{
po: 1, prevDepth: 0, newDepth: 0, // 0-16 -> 0-16
},
{
po: 0, prevDepth: 0, newDepth: 1, // 0-16 -> 0
quitBins: []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16},
},
{
po: 0, prevDepth: 0, newDepth: 2, // 0-16 -> 0
quitBins: []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16},
},
{
po: 1, prevDepth: 0, newDepth: 1, // 0-16 -> 1-16
quitBins: []int{0},
},
{
po: 1, prevDepth: 1, newDepth: 0, // 1-16 -> 0-16
subBins: []int{0},
},
{
po: 4, prevDepth: 0, newDepth: 1, // 0-16 -> 1-16
quitBins: []int{0},
},
{
po: 4, prevDepth: 0, newDepth: 4, // 0-16 -> 4-16
quitBins: []int{0, 1, 2, 3},
},
{
po: 4, prevDepth: 0, newDepth: 5, // 0-16 -> 4
quitBins: []int{0, 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16},
},
{
po: 4, prevDepth: 5, newDepth: 0, // 4 -> 0-16
subBins: []int{0, 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16},
},
{
po: 4, prevDepth: 5, newDepth: 6, // 4 -> 4
},
} {
subBins, quitBins := syncSubscriptionsDiff(tc.po, tc.prevDepth, tc.newDepth, max)
if fmt.Sprint(subBins) != fmt.Sprint(tc.subBins) {
t.Errorf("po: %v, prevDepth: %v, newDepth: %v: got subBins %v, want %v", tc.po, tc.prevDepth, tc.newDepth, subBins, tc.subBins)
}
if fmt.Sprint(quitBins) != fmt.Sprint(tc.quitBins) {
t.Errorf("po: %v, prevDepth: %v, newDepth: %v: got quitBins %v, want %v", tc.po, tc.prevDepth, tc.newDepth, quitBins, tc.quitBins)
}
}
}
// TestUpdateSyncingSubscriptions validates that syncing subscriptions are correctly
// made on initial node connections and that subscriptions are correctly changed
// when kademlia neighbourhood depth is changed by connecting more nodes.
func TestUpdateSyncingSubscriptions(t *testing.T) {
sim := simulation.New(map[string]simulation.ServiceFunc{
"streamer": func(ctx *adapters.ServiceContext, bucket *sync.Map) (s node.Service, cleanup func(), err error) {
addr, netStore, delivery, clean, err := newNetStoreAndDelivery(ctx, bucket)
if err != nil {
return nil, nil, err
}
r := NewRegistry(addr.ID(), delivery, netStore, state.NewInmemoryStore(), &RegistryOptions{
SyncUpdateDelay: 100 * time.Millisecond,
Syncing: SyncingAutoSubscribe,
}, nil)
cleanup = func() {
r.Close()
clean()
}
bucket.Store("bzz-address", addr)
return r, cleanup, nil
},
})
defer sim.Close()
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
defer cancel()
result := sim.Run(ctx, func(ctx context.Context, sim *simulation.Simulation) (err error) {
// initial nodes, the first one as the pivot at the center of the star
ids, err := sim.AddNodesAndConnectStar(10)
if err != nil {
return err
}
// pivot values
pivotRegistryID := ids[0]
pivotRegistry := sim.Service("streamer", pivotRegistryID).(*Registry)
pivotKademlia := pivotRegistry.delivery.kad
// nodes proximities from the pivot node
nodeProximities := make(map[string]int)
for _, id := range ids[1:] {
bzzAddr, ok := sim.NodeItem(id, "bzz-address")
if !ok {
t.Fatal("no bzz address for node")
}
nodeProximities[id.String()] = chunk.Proximity(pivotKademlia.BaseAddr(), bzzAddr.(*network.BzzAddr).Over())
}
// wait until sync subscriptions are done for all nodes
waitForSubscriptions(t, pivotRegistry, ids[1:]...)
// check initial sync streams
err = checkSyncStreamsWithRetry(pivotRegistry, nodeProximities)
if err != nil {
return err
}
// add more nodes until the depth is changed
prevDepth := pivotKademlia.NeighbourhoodDepth()
var noDepthChangeChecked bool // true if there was a check while the depth was unchanged
for {
ids, err := sim.AddNodes(5)
if err != nil {
return err
}
// add new nodes to sync subscriptions check
for _, id := range ids {
bzzAddr, ok := sim.NodeItem(id, "bzz-address")
if !ok {
t.Fatal("no bzz address for node")
}
nodeProximities[id.String()] = chunk.Proximity(pivotKademlia.BaseAddr(), bzzAddr.(*network.BzzAddr).Over())
}
err = sim.Net.ConnectNodesStar(ids, pivotRegistryID)
if err != nil {
return err
}
waitForSubscriptions(t, pivotRegistry, ids...)
newDepth := pivotKademlia.NeighbourhoodDepth()
// depth is not changed, check if streams are still correct
if newDepth == prevDepth {
err = checkSyncStreamsWithRetry(pivotRegistry, nodeProximities)
if err != nil {
return err
}
noDepthChangeChecked = true
}
// do the final check when depth is changed and
// there has been at least one check
// for the case when depth is not changed
if newDepth != prevDepth && noDepthChangeChecked {
// check sync streams for changed depth
return checkSyncStreamsWithRetry(pivotRegistry, nodeProximities)
}
prevDepth = newDepth
}
})
if result.Error != nil {
t.Fatal(result.Error)
}
}
// waitForSubscriptions is a test helper function that blocks until
// stream server subscriptions are established on the provided registry
// to the nodes with provided IDs.
func waitForSubscriptions(t *testing.T, r *Registry, ids ...enode.ID) {
t.Helper()
for retries := 0; retries < 100; retries++ {
subs := r.api.GetPeerServerSubscriptions()
if allSubscribed(subs, ids) {
return
}
time.Sleep(50 * time.Millisecond)
}
t.Fatalf("missing subscriptions")
}
// allSubscribed returns true if nodes with ids have subscriptions
// in provided subs map.
func allSubscribed(subs map[string][]string, ids []enode.ID) bool {
for _, id := range ids {
if s, ok := subs[id.String()]; !ok || len(s) == 0 {
return false
}
}
return true
}
// checkSyncStreamsWithRetry calls checkSyncStreams with retries.
func checkSyncStreamsWithRetry(r *Registry, nodeProximities map[string]int) (err error) {
for retries := 0; retries < 5; retries++ {
err = checkSyncStreams(r, nodeProximities)
if err == nil {
return nil
}
time.Sleep(500 * time.Millisecond)
}
return err
}
// checkSyncStreams validates that registry contains expected sync
// subscriptions to nodes with proximities in a map nodeProximities.
func checkSyncStreams(r *Registry, nodeProximities map[string]int) error {
depth := r.delivery.kad.NeighbourhoodDepth()
maxPO := r.delivery.kad.MaxProxDisplay
for id, po := range nodeProximities {
wantStreams := syncStreams(po, depth, maxPO)
gotStreams := nodeStreams(r, id)
if r.getPeer(enode.HexID(id)) == nil {
// ignore removed peer
continue
}
if !reflect.DeepEqual(gotStreams, wantStreams) {
return fmt.Errorf("node %s got streams %v, want %v", id, gotStreams, wantStreams)
}
}
return nil
}
// syncStreams returns the expected sync streams that need to be
// established between a node with the given kademlia neighbourhood depth
// and a node with proximity order po.
func syncStreams(po, depth, maxPO int) (streams []string) {
start, end := syncBins(po, depth, maxPO)
for bin := start; bin < end; bin++ {
streams = append(streams, NewStream("SYNC", FormatSyncBinKey(uint8(bin)), false).String())
streams = append(streams, NewStream("SYNC", FormatSyncBinKey(uint8(bin)), true).String())
}
return streams
}
// nodeStreams returns stream server subscriptions on a registry
// to the peer with provided id.
func nodeStreams(r *Registry, id string) []string {
streams := r.api.GetPeerServerSubscriptions()[id]
sort.Strings(streams)
return streams
}

View File

@ -0,0 +1,484 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"bytes"
"context"
"fmt"
"io"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/network/simulation"
"github.com/ethersphere/swarm/state"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/testutil"
)
// constants for random file generation
const (
minFileSize = 2
maxFileSize = 40
)
// TestFileRetrieval is a retrieval test for nodes.
// A configurable number of nodes can be provided to the test.
// Files are uploaded to nodes, and other nodes try to retrieve them.
// Number of nodes can be provided via commandline too.
func TestFileRetrieval(t *testing.T) {
var nodeCount []int
if *nodes != 0 {
nodeCount = []int{*nodes}
} else {
nodeCount = []int{16}
if *longrunning {
nodeCount = append(nodeCount, 32, 64)
} else if testutil.RaceEnabled {
nodeCount = []int{4}
}
}
for _, nc := range nodeCount {
runFileRetrievalTest(t, nc)
}
}
// TestPureRetrieval tests pure retrieval without syncing
// A configurable number of nodes and chunks
// can be provided to the test.
// A number of random chunks is generated, then stored directly in
// each node's localstore according to their address.
// Each chunk is supposed to end up at certain nodes
// With retrieval we then make sure that every node can actually retrieve
// the chunks.
func TestPureRetrieval(t *testing.T) {
var nodeCount []int
var chunkCount []int
if *nodes != 0 && *chunks != 0 {
nodeCount = []int{*nodes}
chunkCount = []int{*chunks}
} else {
nodeCount = []int{16}
chunkCount = []int{150}
if *longrunning {
nodeCount = append(nodeCount, 32, 64)
chunkCount = append(chunkCount, 32, 256)
} else if testutil.RaceEnabled {
nodeCount = []int{4}
chunkCount = []int{4}
}
}
for _, nc := range nodeCount {
for _, c := range chunkCount {
runPureRetrievalTest(t, nc, c)
}
}
}
// TestRetrieval tests retrieval of chunks by random nodes.
// One node is randomly selected to be the pivot node.
// A configurable number of chunks and nodes can be
// provided to the test; the given number of chunks is uploaded
// to the pivot node, and other nodes try to retrieve the chunk(s).
// Number of chunks and nodes can be provided via commandline too.
func TestRetrieval(t *testing.T) {
// if nodes/chunks have been provided via commandline,
// run the tests with these values
if *nodes != 0 && *chunks != 0 {
runRetrievalTest(t, *chunks, *nodes)
} else {
nodeCnt := []int{16}
chnkCnt := []int{32}
if *longrunning {
nodeCnt = []int{16, 32, 64}
chnkCnt = []int{4, 32, 256}
} else if testutil.RaceEnabled {
nodeCnt = []int{4}
chnkCnt = []int{4}
}
for _, n := range nodeCnt {
for _, c := range chnkCnt {
t.Run(fmt.Sprintf("TestRetrieval_%d_%d", n, c), func(t *testing.T) {
runRetrievalTest(t, c, n)
})
}
}
}
}
var retrievalSimServiceMap = map[string]simulation.ServiceFunc{
"streamer": func(ctx *adapters.ServiceContext, bucket *sync.Map) (s node.Service, cleanup func(), err error) {
addr, netStore, delivery, clean, err := newNetStoreAndDelivery(ctx, bucket)
if err != nil {
return nil, nil, err
}
syncUpdateDelay := 1 * time.Second
if *longrunning {
syncUpdateDelay = 3 * time.Second
}
r := NewRegistry(addr.ID(), delivery, netStore, state.NewInmemoryStore(), &RegistryOptions{
Syncing: SyncingAutoSubscribe,
SyncUpdateDelay: syncUpdateDelay,
}, nil)
cleanup = func() {
r.Close()
clean()
}
return r, cleanup, nil
},
}
// runPureRetrievalTest uploads a snapshot, starts a simulation,
// distributes chunks to nodes and starts retrieval.
// The snapshot should have 'streamer' in its service list.
func runPureRetrievalTest(t *testing.T, nodeCount int, chunkCount int) {
t.Helper()
// the pure retrieval test needs a different service map, as we want
// syncing disabled and we don't need to set the syncUpdateDelay
sim := simulation.New(map[string]simulation.ServiceFunc{
"streamer": func(ctx *adapters.ServiceContext, bucket *sync.Map) (s node.Service, cleanup func(), err error) {
addr, netStore, delivery, clean, err := newNetStoreAndDelivery(ctx, bucket)
if err != nil {
return nil, nil, err
}
r := NewRegistry(addr.ID(), delivery, netStore, state.NewInmemoryStore(), &RegistryOptions{
Syncing: SyncingDisabled,
}, nil)
cleanup = func() {
r.Close()
clean()
}
return r, cleanup, nil
},
},
)
defer sim.Close()
log.Info("Initializing test config", "node count", nodeCount)
conf := &synctestConfig{}
//map of discover ID to indexes of chunks expected at that ID
conf.idToChunksMap = make(map[enode.ID][]int)
//map of overlay address to discover ID
conf.addrToIDMap = make(map[string]enode.ID)
//array where the generated chunk hashes will be stored
conf.hashes = make([]storage.Address, 0)
ctx, cancelSimRun := context.WithTimeout(context.Background(), 3*time.Minute)
defer cancelSimRun()
filename := fmt.Sprintf("testing/snapshot_%d.json", nodeCount)
err := sim.UploadSnapshot(ctx, filename)
if err != nil {
t.Fatal(err)
}
log.Info("Starting simulation")
result := sim.Run(ctx, func(ctx context.Context, sim *simulation.Simulation) error {
nodeIDs := sim.UpNodeIDs()
// first iteration: create addresses
for _, n := range nodeIDs {
//get the kademlia overlay address from this ID
a := n.Bytes()
//append it to the array of all overlay addresses
conf.addrs = append(conf.addrs, a)
//the proximity calculation is on overlay addr,
//the p2p/simulations check func triggers on enode.ID,
//so we need to know which overlay addr maps to which nodeID
conf.addrToIDMap[string(a)] = n
}
// now create random chunks
chunks := storage.GenerateRandomChunks(int64(chunkSize), chunkCount)
for _, chunk := range chunks {
conf.hashes = append(conf.hashes, chunk.Address())
}
log.Debug("random chunks generated, mapping keys to nodes")
// map addresses to nodes
mapKeysToNodes(conf)
// second iteration: storing chunks at the peer whose
// overlay address is closest to a particular chunk's hash
log.Debug("storing every chunk at correspondent node store")
for _, id := range nodeIDs {
// for every chunk for this node (which are only indexes)...
for _, ch := range conf.idToChunksMap[id] {
item, ok := sim.NodeItem(id, bucketKeyStore)
if !ok {
return fmt.Errorf("Error accessing localstore")
}
lstore := item.(chunk.Store)
// ...get the actual chunk
for _, chnk := range chunks {
if bytes.Equal(chnk.Address(), conf.hashes[ch]) {
// ...and store it in the localstore
if _, err = lstore.Put(ctx, chunk.ModePutUpload, chnk); err != nil {
return err
}
}
}
}
}
// now try to retrieve every chunk from every node
log.Debug("starting retrieval")
cnt := 0
for _, id := range nodeIDs {
item, ok := sim.NodeItem(id, bucketKeyFileStore)
if !ok {
return fmt.Errorf("No filestore")
}
fileStore := item.(*storage.FileStore)
for _, chunk := range chunks {
reader, _ := fileStore.Retrieve(context.TODO(), chunk.Address())
content := make([]byte, chunkSize)
size, err := reader.Read(content)
//check chunk size and content
ok := true
if err != io.EOF {
log.Debug("Retrieve error", "err", err, "hash", chunk.Address(), "nodeId", id)
ok = false
}
if size != chunkSize {
log.Debug("size not equal chunkSize", "size", size, "hash", chunk.Address(), "nodeId", id)
ok = false
}
// skip chunk "metadata" for chunk.Data()
if !bytes.Equal(content, chunk.Data()[8:]) {
log.Debug("content not equal chunk data", "hash", chunk.Address(), "nodeId", id)
ok = false
}
if !ok {
return fmt.Errorf("Expected test to succeed at first run, but failed with chunk not found")
}
log.Debug(fmt.Sprintf("chunk with root hash %x successfully retrieved", chunk.Address()))
cnt++
}
}
log.Info("retrieval terminated, chunks retrieved: ", "count", cnt)
return nil
})
log.Info("Simulation terminated")
if result.Error != nil {
t.Fatal(result.Error)
}
}
// runFileRetrievalTest loads a snapshot file to construct the swarm network.
// The snapshot should have 'streamer' in its service list.
func runFileRetrievalTest(t *testing.T, nodeCount int) {
t.Helper()
sim := simulation.New(retrievalSimServiceMap)
defer sim.Close()
log.Info("Initializing test config", "node count", nodeCount)
conf := &synctestConfig{}
//map of discover ID to indexes of chunks expected at that ID
conf.idToChunksMap = make(map[enode.ID][]int)
//map of overlay address to discover ID
conf.addrToIDMap = make(map[string]enode.ID)
//array where the generated chunk hashes will be stored
conf.hashes = make([]storage.Address, 0)
ctx, cancelSimRun := context.WithTimeout(context.Background(), 3*time.Minute)
defer cancelSimRun()
filename := fmt.Sprintf("testing/snapshot_%d.json", nodeCount)
err := sim.UploadSnapshot(ctx, filename)
if err != nil {
t.Fatal(err)
}
log.Info("Starting simulation")
result := sim.Run(ctx, func(ctx context.Context, sim *simulation.Simulation) error {
nodeIDs := sim.UpNodeIDs()
for _, n := range nodeIDs {
//get the kademlia overlay address from this ID
a := n.Bytes()
//append it to the array of all overlay addresses
conf.addrs = append(conf.addrs, a)
//the proximity calculation is on overlay addr,
//the p2p/simulations check func triggers on enode.ID,
//so we need to know which overlay addr maps to which nodeID
conf.addrToIDMap[string(a)] = n
}
//an array for the random files
var randomFiles []string
conf.hashes, randomFiles, err = uploadFilesToNodes(sim)
if err != nil {
return err
}
log.Info("network healthy, start file checks")
// File retrieval check is repeated until all uploaded files are retrieved from all nodes
// or until the timeout is reached.
REPEAT:
for {
for _, id := range nodeIDs {
//for each expected file, check if it is in the local store
item, ok := sim.NodeItem(id, bucketKeyFileStore)
if !ok {
return fmt.Errorf("No filestore")
}
fileStore := item.(*storage.FileStore)
//check all chunks
for i, hash := range conf.hashes {
reader, _ := fileStore.Retrieve(context.TODO(), hash)
//check that we can read the file size and that it corresponds to the generated file size
if s, err := reader.Size(ctx, nil); err != nil || s != int64(len(randomFiles[i])) {
log.Debug("Retrieve error", "err", err, "hash", hash, "nodeId", id)
time.Sleep(500 * time.Millisecond)
continue REPEAT
}
log.Debug(fmt.Sprintf("File with root hash %x successfully retrieved", hash))
}
}
return nil
}
})
log.Info("Simulation terminated")
if result.Error != nil {
t.Fatal(result.Error)
}
}
// runRetrievalTest generates the given number of chunks.
// The test loads a snapshot file to construct the swarm network.
// The snapshot should have 'streamer' in its service list.
func runRetrievalTest(t *testing.T, chunkCount int, nodeCount int) {
t.Helper()
sim := simulation.New(retrievalSimServiceMap)
defer sim.Close()
conf := &synctestConfig{}
//map of discover ID to indexes of chunks expected at that ID
conf.idToChunksMap = make(map[enode.ID][]int)
//map of overlay address to discover ID
conf.addrToIDMap = make(map[string]enode.ID)
//array where the generated chunk hashes will be stored
conf.hashes = make([]storage.Address, 0)
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
defer cancel()
filename := fmt.Sprintf("testing/snapshot_%d.json", nodeCount)
err := sim.UploadSnapshot(ctx, filename)
if err != nil {
t.Fatal(err)
}
result := sim.Run(ctx, func(ctx context.Context, sim *simulation.Simulation) error {
nodeIDs := sim.UpNodeIDs()
for _, n := range nodeIDs {
//get the kademlia overlay address from this ID
a := n.Bytes()
//append it to the array of all overlay addresses
conf.addrs = append(conf.addrs, a)
//the proximity calculation is on overlay addr,
//the p2p/simulations check func triggers on enode.ID,
//so we need to know which overlay addr maps to which nodeID
conf.addrToIDMap[string(a)] = n
}
//this is the node selected for upload
node := sim.Net.GetRandomUpNode()
item, ok := sim.NodeItem(node.ID(), bucketKeyStore)
if !ok {
return fmt.Errorf("No localstore")
}
lstore := item.(chunk.Store)
conf.hashes, err = uploadFileToSingleNodeStore(node.ID(), chunkCount, lstore)
if err != nil {
return err
}
// Chunk retrieval check is repeated until all uploaded chunks are retrieved from all nodes
// or until the timeout is reached.
REPEAT:
for {
for _, id := range nodeIDs {
//for each expected chunk, check if it is in the local store
//check on the node's FileStore (netstore)
item, ok := sim.NodeItem(id, bucketKeyFileStore)
if !ok {
return fmt.Errorf("No filestore")
}
fileStore := item.(*storage.FileStore)
//check all chunks
for _, hash := range conf.hashes {
reader, _ := fileStore.Retrieve(context.TODO(), hash)
//check that we can read the chunk size and that it corresponds to the generated chunk size
if s, err := reader.Size(ctx, nil); err != nil || s != int64(chunkSize) {
log.Debug("Retrieve error", "err", err, "hash", hash, "nodeId", id, "size", s)
time.Sleep(500 * time.Millisecond)
continue REPEAT
}
log.Debug(fmt.Sprintf("Chunk with root hash %x successfully retrieved", hash))
}
}
// all nodes and files found, exit loop and return without error
return nil
}
})
if result.Error != nil {
t.Fatal(result.Error)
}
}

View File

@ -0,0 +1,317 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"context"
"errors"
"fmt"
"os"
"runtime"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/network"
"github.com/ethersphere/swarm/network/simulation"
"github.com/ethersphere/swarm/pot"
"github.com/ethersphere/swarm/state"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/storage/mock"
mockmem "github.com/ethersphere/swarm/storage/mock/mem"
"github.com/ethersphere/swarm/testutil"
)
type synctestConfig struct {
addrs [][]byte
hashes []storage.Address
idToChunksMap map[enode.ID][]int
//chunksToNodesMap map[string][]int
addrToIDMap map[string]enode.ID
}
const (
// Event types emitted by the simulation for the chunk syncing lifecycle
// (created, offered, wanted, delivered, arrived) and for simulation termination.
EventTypeChunkCreated simulations.EventType = "chunkCreated"
EventTypeChunkOffered simulations.EventType = "chunkOffered"
EventTypeChunkWanted simulations.EventType = "chunkWanted"
EventTypeChunkDelivered simulations.EventType = "chunkDelivered"
EventTypeChunkArrived simulations.EventType = "chunkArrived"
EventTypeSimTerminated simulations.EventType = "simTerminated"
)
// Tests in this file should not request chunks from peers.
// This function panics to indicate a problem if such a request has been made.
func dummyRequestFromPeers(_ context.Context, req *network.Request) (*enode.ID, chan struct{}, error) {
panic(fmt.Sprintf("unexpected request: address %s, source %s", req.Addr.String(), req.Source.String()))
}
//This test is a syncing test for nodes.
//One node is randomly selected to be the pivot node.
//A configurable number of chunks and nodes can be
//provided to the test; the given number of chunks is uploaded
//to the pivot node, and we check that nodes get the chunks
//they are expected to store based on the syncing protocol.
//Number of chunks and nodes can be provided via commandline too.
func TestSyncingViaGlobalSync(t *testing.T) {
if runtime.GOOS == "darwin" && os.Getenv("TRAVIS") == "true" {
t.Skip("Flaky on mac on travis")
}
if testutil.RaceEnabled {
t.Skip("Segfaults on Travis with -race")
}
//if nodes/chunks have been provided via commandline,
//run the tests with these values
if *nodes != 0 && *chunks != 0 {
log.Info(fmt.Sprintf("Running test with %d chunks and %d nodes...", *chunks, *nodes))
testSyncingViaGlobalSync(t, *chunks, *nodes)
} else {
chunkCounts := []int{4, 32}
nodeCounts := []int{32, 16}
//if the `longrunning` flag has been provided
//run more test combinations
if *longrunning {
chunkCounts = []int{64, 128}
nodeCounts = []int{32, 64}
}
for _, chunkCount := range chunkCounts {
for _, n := range nodeCounts {
log.Info(fmt.Sprintf("Long running test with %d chunks and %d nodes...", chunkCount, n))
testSyncingViaGlobalSync(t, chunkCount, n)
}
}
}
}
var simServiceMap = map[string]simulation.ServiceFunc{
"streamer": func(ctx *adapters.ServiceContext, bucket *sync.Map) (s node.Service, cleanup func(), err error) {
addr, netStore, delivery, clean, err := newNetStoreAndDeliveryWithRequestFunc(ctx, bucket, dummyRequestFromPeers)
if err != nil {
return nil, nil, err
}
store := state.NewInmemoryStore()
r := NewRegistry(addr.ID(), delivery, netStore, store, &RegistryOptions{
Syncing: SyncingAutoSubscribe,
SyncUpdateDelay: 3 * time.Second,
}, nil)
bucket.Store(bucketKeyRegistry, r)
cleanup = func() {
r.Close()
clean()
}
return r, cleanup, nil
},
}
func testSyncingViaGlobalSync(t *testing.T, chunkCount int, nodeCount int) {
sim := simulation.New(simServiceMap)
defer sim.Close()
log.Info("Initializing test config")
conf := &synctestConfig{}
//map of discover ID to indexes of chunks expected at that ID
conf.idToChunksMap = make(map[enode.ID][]int)
//map of overlay address to discover ID
conf.addrToIDMap = make(map[string]enode.ID)
//array where the generated chunk hashes will be stored
conf.hashes = make([]storage.Address, 0)
ctx, cancelSimRun := context.WithTimeout(context.Background(), 3*time.Minute)
defer cancelSimRun()
filename := fmt.Sprintf("testing/snapshot_%d.json", nodeCount)
err := sim.UploadSnapshot(ctx, filename)
if err != nil {
t.Fatal(err)
}
result := runSim(conf, ctx, sim, chunkCount)
if result.Error != nil {
t.Fatal(result.Error)
}
log.Info("Simulation ended")
}
func runSim(conf *synctestConfig, ctx context.Context, sim *simulation.Simulation, chunkCount int) simulation.Result {
return sim.Run(ctx, func(ctx context.Context, sim *simulation.Simulation) (err error) {
disconnected := watchDisconnections(ctx, sim)
defer func() {
if err != nil && disconnected.bool() {
err = errors.New("disconnect events received")
}
}()
nodeIDs := sim.UpNodeIDs()
for _, n := range nodeIDs {
//get the kademlia overlay address from this ID
a := n.Bytes()
//append it to the array of all overlay addresses
conf.addrs = append(conf.addrs, a)
//the proximity calculation is on overlay addr,
//the p2p/simulations check func triggers on enode.ID,
//so we need to know which overlay addr maps to which nodeID
conf.addrToIDMap[string(a)] = n
}
//this is the node selected for upload
node := sim.Net.GetRandomUpNode()
item, ok := sim.NodeItem(node.ID(), bucketKeyStore)
if !ok {
return errors.New("no store in simulation bucket")
}
store := item.(chunk.Store)
hashes, err := uploadFileToSingleNodeStore(node.ID(), chunkCount, store)
if err != nil {
return err
}
for _, h := range hashes {
evt := &simulations.Event{
Type: EventTypeChunkCreated,
Node: sim.Net.GetNode(node.ID()),
Data: h.String(),
}
sim.Net.Events().Send(evt)
}
conf.hashes = append(conf.hashes, hashes...)
mapKeysToNodes(conf)
// The check whether each node stores the chunks it is expected to store
// is repeated until success or until the timeout is reached.
var globalStore mock.GlobalStorer
if *useMockStore {
globalStore = mockmem.NewGlobalStore()
}
REPEAT:
for {
for _, id := range nodeIDs {
//for each expected chunk, check if it is in the local store
localChunks := conf.idToChunksMap[id]
for _, ch := range localChunks {
//get the real chunk by the index in the index array
ch := conf.hashes[ch]
log.Trace("node has chunk", "address", ch)
//check if the expected chunk is indeed in the localstore
var err error
if *useMockStore {
//use the globalStore if the mockStore should be used; in that case,
//the complete localStore stack is bypassed for getting the chunk
_, err = globalStore.Get(common.BytesToAddress(id.Bytes()), ch)
} else {
//use the actual localstore
item, ok := sim.NodeItem(id, bucketKeyStore)
if !ok {
return errors.New("no store in simulation bucket")
}
store := item.(chunk.Store)
_, err = store.Get(ctx, chunk.ModeGetLookup, ch)
}
if err != nil {
log.Debug("chunk not found", "address", ch.Hex(), "node", id)
// back off briefly so the retry loop does not flood the logs
time.Sleep(500 * time.Millisecond)
continue REPEAT
}
evt := &simulations.Event{
Type: EventTypeChunkArrived,
Node: sim.Net.GetNode(id),
Data: ch.String(),
}
sim.Net.Events().Send(evt)
log.Trace("chunk found", "address", ch.Hex(), "node", id)
}
}
return nil
}
})
}
//mapKeysToNodes maps chunk keys to the overlay addresses responsible for storing them
func mapKeysToNodes(conf *synctestConfig) {
nodemap := make(map[string][]int)
//build a pot for chunk hashes
np := pot.NewPot(nil, 0)
indexmap := make(map[string]int)
for i, a := range conf.addrs {
indexmap[string(a)] = i
np, _, _ = pot.Add(np, a, pof)
}
ppmap := network.NewPeerPotMap(network.NewKadParams().NeighbourhoodSize, conf.addrs)
//for each address, run EachNeighbour on the chunk hashes pot to identify closest nodes
log.Trace(fmt.Sprintf("Generated hash chunk(s): %v", conf.hashes))
for i := 0; i < len(conf.hashes); i++ {
var a []byte
np.EachNeighbour([]byte(conf.hashes[i]), pof, func(val pot.Val, po int) bool {
// take the first address
a = val.([]byte)
return false
})
nns := ppmap[common.Bytes2Hex(a)].NNSet
nns = append(nns, a)
for _, p := range nns {
nodemap[string(p)] = append(nodemap[string(p)], i)
}
}
for addr, chunks := range nodemap {
//this selects which chunks are expected to be found with the given node
conf.idToChunksMap[conf.addrToIDMap[addr]] = chunks
}
log.Debug(fmt.Sprintf("Map of expected chunks by ID: %v", conf.idToChunksMap))
}
//uploadFileToSingleNodeStore uploads a number of chunks to a single local node store
func uploadFileToSingleNodeStore(id enode.ID, chunkCount int, store chunk.Store) ([]storage.Address, error) {
log.Debug(fmt.Sprintf("Uploading to node id: %s", id))
fileStore := storage.NewFileStore(store, storage.NewFileStoreParams(), chunk.NewTags())
size := chunkSize
var rootAddrs []storage.Address
for i := 0; i < chunkCount; i++ {
rk, wait, err := fileStore.Store(context.TODO(), testutil.RandomReader(i, size), int64(size), false)
if err != nil {
return nil, err
}
err = wait(context.TODO())
if err != nil {
return nil, err
}
rootAddrs = append(rootAddrs, rk)
}
return rootAddrs, nil
}

811
network/stream/stream.go Normal file
View File

@ -0,0 +1,811 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"context"
"fmt"
"math"
"reflect"
"sync"
"time"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/protocols"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/network"
"github.com/ethersphere/swarm/network/stream/intervals"
"github.com/ethersphere/swarm/state"
"github.com/ethersphere/swarm/storage"
)
const (
Low uint8 = iota
Mid
High
Top
PriorityQueue = 4 // number of priority queues - Low, Mid, High, Top
PriorityQueueCap = 4096 // queue capacity
HashSize = 32
)
// SyncingOption is an enumeration of the syncing options
type SyncingOption int
// Syncing options
const (
// Syncing disabled
SyncingDisabled SyncingOption = iota
// Register the client and the server but not subscribe
SyncingRegisterOnly
// Both client and server funcs are registered, subscribe sent automatically
SyncingAutoSubscribe
)
// subscriptionFunc is used to determine what to do in order to perform subscriptions;
// usually we would actually subscribe to nodes, but tests may need other behaviour
// (see TestRequestPeerSubscriptions in streamer_test.go).
var subscriptionFunc = doRequestSubscription
// Registry is the registry for outgoing and incoming streamer constructors
type Registry struct {
addr enode.ID
api *API
skipCheck bool
clientMu sync.RWMutex
serverMu sync.RWMutex
peersMu sync.RWMutex
serverFuncs map[string]func(*Peer, string, bool) (Server, error)
clientFuncs map[string]func(*Peer, string, bool) (Client, error)
peers map[enode.ID]*Peer
delivery *Delivery
intervalsStore state.Store
maxPeerServers int
spec *protocols.Spec //this protocol's spec
balance protocols.Balance //implements protocols.Balance, for accounting
prices protocols.Prices //implements protocols.Prices, provides prices to accounting
quit chan struct{} // terminates registry goroutines
syncMode SyncingOption
syncUpdateDelay time.Duration
}
// RegistryOptions holds optional values for NewRegistry constructor.
type RegistryOptions struct {
SkipCheck bool
Syncing SyncingOption // Defines syncing behavior
SyncUpdateDelay time.Duration
MaxPeerServers int // The limit of servers for each peer in registry
}
// NewRegistry is the Streamer constructor
func NewRegistry(localID enode.ID, delivery *Delivery, netStore *storage.NetStore, intervalsStore state.Store, options *RegistryOptions, balance protocols.Balance) *Registry {
if options == nil {
options = &RegistryOptions{}
}
if options.SyncUpdateDelay <= 0 {
options.SyncUpdateDelay = 15 * time.Second
}
quit := make(chan struct{})
streamer := &Registry{
addr: localID,
skipCheck: options.SkipCheck,
serverFuncs: make(map[string]func(*Peer, string, bool) (Server, error)),
clientFuncs: make(map[string]func(*Peer, string, bool) (Client, error)),
peers: make(map[enode.ID]*Peer),
delivery: delivery,
intervalsStore: intervalsStore,
maxPeerServers: options.MaxPeerServers,
balance: balance,
quit: quit,
syncUpdateDelay: options.SyncUpdateDelay,
syncMode: options.Syncing,
}
streamer.setupSpec()
streamer.api = NewAPI(streamer)
delivery.getPeer = streamer.getPeer
// If syncing is not disabled, the syncing functions are registered (both client and server)
if options.Syncing != SyncingDisabled {
RegisterSwarmSyncerServer(streamer, netStore)
RegisterSwarmSyncerClient(streamer, netStore)
}
return streamer
}
// This is an accounted protocol, therefore we need to provide a pricing Hook to the spec
// For simulations to be able to run multiple nodes and not override the hook's balance,
// we need to construct a spec instance per node instance
func (r *Registry) setupSpec() {
// first create the "bare" spec
r.createSpec()
// now create the pricing object
r.createPriceOracle()
// if balance is nil, this node has been started without swap support (swapEnabled flag is false)
if r.balance != nil && !reflect.ValueOf(r.balance).IsNil() {
// swap is enabled, so setup the hook
r.spec.Hook = protocols.NewAccounting(r.balance, r.prices)
}
}
// RegisterClientFunc registers an incoming streamer constructor
func (r *Registry) RegisterClientFunc(stream string, f func(*Peer, string, bool) (Client, error)) {
r.clientMu.Lock()
defer r.clientMu.Unlock()
r.clientFuncs[stream] = f
}
// RegisterServerFunc registers an outgoing streamer constructor
func (r *Registry) RegisterServerFunc(stream string, f func(*Peer, string, bool) (Server, error)) {
r.serverMu.Lock()
defer r.serverMu.Unlock()
r.serverFuncs[stream] = f
}
// GetClientFunc is an accessor for incoming streamer constructors
func (r *Registry) GetClientFunc(stream string) (func(*Peer, string, bool) (Client, error), error) {
r.clientMu.RLock()
defer r.clientMu.RUnlock()
f := r.clientFuncs[stream]
if f == nil {
return nil, fmt.Errorf("stream %v not registered", stream)
}
return f, nil
}
// GetServerFunc is an accessor for outgoing streamer constructors
func (r *Registry) GetServerFunc(stream string) (func(*Peer, string, bool) (Server, error), error) {
r.serverMu.RLock()
defer r.serverMu.RUnlock()
f := r.serverFuncs[stream]
if f == nil {
return nil, fmt.Errorf("stream %v not registered", stream)
}
return f, nil
}
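// RequestSubscription sends a RequestSubscriptionMsg to the given peer, asking it to
// subscribe to the stream, unless an outgoing server for that stream already exists
// for the peer, in which case it is a no-op.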
func (r *Registry) RequestSubscription(peerId enode.ID, s Stream, h *Range, prio uint8) error {
// check if the stream is registered
if _, err := r.GetServerFunc(s.Name); err != nil {
return err
}
peer := r.getPeer(peerId)
if peer == nil {
return fmt.Errorf("peer not found %v", peerId)
}
if _, err := peer.getServer(s); err != nil {
if e, ok := err.(*notFoundError); ok && e.t == "server" {
// request subscription only if the server for this stream is not created
log.Debug("RequestSubscription ", "peer", peerId, "stream", s, "history", h)
return peer.Send(context.TODO(), &RequestSubscriptionMsg{
Stream: s,
History: h,
Priority: prio,
})
}
return err
}
log.Trace("RequestSubscription: already subscribed", "peer", peerId, "stream", s, "history", h)
return nil
}
// Subscribe initiates the streamer
func (r *Registry) Subscribe(peerId enode.ID, s Stream, h *Range, priority uint8) error {
// check if the stream is registered
if _, err := r.GetClientFunc(s.Name); err != nil {
return err
}
peer := r.getPeer(peerId)
if peer == nil {
return fmt.Errorf("peer not found %v", peerId)
}
var to uint64
if !s.Live && h != nil {
to = h.To
}
err := peer.setClientParams(s, newClientParams(priority, to))
if err != nil {
return err
}
if s.Live && h != nil {
if err := peer.setClientParams(
getHistoryStream(s),
newClientParams(getHistoryPriority(priority), h.To),
); err != nil {
return err
}
}
msg := &SubscribeMsg{
Stream: s,
History: h,
Priority: priority,
}
log.Debug("Subscribe ", "peer", peerId, "stream", s, "history", h)
return peer.Send(context.TODO(), msg)
}
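// Illustrative sketch (not part of the original file): a client-side subscription to
// the live SYNC stream for bin 2 at Top priority, with history requested from the
// beginning. The peerID is assumed to identify an already connected peer known to
// this registry.
func exampleSubscribe(r *Registry, peerID enode.ID) error {
	s := NewStream("SYNC", FormatSyncBinKey(2), true)
	return r.Subscribe(peerID, s, NewRange(0, 0), Top)
}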
func (r *Registry) Unsubscribe(peerId enode.ID, s Stream) error {
peer := r.getPeer(peerId)
if peer == nil {
return fmt.Errorf("peer not found %v", peerId)
}
msg := &UnsubscribeMsg{
Stream: s,
}
log.Debug("Unsubscribe ", "peer", peerId, "stream", s)
if err := peer.Send(context.TODO(), msg); err != nil {
return err
}
return peer.removeClient(s)
}
// Quit sends the QuitMsg to the peer to remove the
// stream peer client and terminate the streaming.
func (r *Registry) Quit(peerId enode.ID, s Stream) error {
peer := r.getPeer(peerId)
if peer == nil {
log.Debug("stream quit: peer not found", "peer", peerId, "stream", s)
// if the peer is not found, abort the request
return nil
}
msg := &QuitMsg{
Stream: s,
}
log.Debug("Quit ", "peer", peerId, "stream", s)
return peer.Send(context.TODO(), msg)
}
func (r *Registry) Close() error {
// Stop sending neighborhood depth change and address count
// change from Kademlia that were initiated in NewRegistry constructor.
r.delivery.Close()
close(r.quit)
return r.intervalsStore.Close()
}
func (r *Registry) getPeer(peerId enode.ID) *Peer {
r.peersMu.RLock()
defer r.peersMu.RUnlock()
return r.peers[peerId]
}
func (r *Registry) setPeer(peer *Peer) {
r.peersMu.Lock()
r.peers[peer.ID()] = peer
metrics.GetOrRegisterCounter("registry.setpeer", nil).Inc(1)
metrics.GetOrRegisterGauge("registry.peers", nil).Update(int64(len(r.peers)))
r.peersMu.Unlock()
}
func (r *Registry) deletePeer(peer *Peer) {
r.peersMu.Lock()
delete(r.peers, peer.ID())
metrics.GetOrRegisterCounter("registry.deletepeer", nil).Inc(1)
metrics.GetOrRegisterGauge("registry.peers", nil).Update(int64(len(r.peers)))
r.peersMu.Unlock()
}
func (r *Registry) peersCount() (c int) {
r.peersMu.Lock()
c = len(r.peers)
r.peersMu.Unlock()
return
}
// Run is the protocol run function
func (r *Registry) Run(p *network.BzzPeer) error {
sp := NewPeer(p, r)
r.setPeer(sp)
if r.syncMode == SyncingAutoSubscribe {
go sp.runUpdateSyncing()
}
defer r.deletePeer(sp)
defer close(sp.quit)
defer sp.close()
return sp.Run(sp.HandleMsg)
}
// doRequestSubscription sends the actual RequestSubscription to the peer
func doRequestSubscription(r *Registry, id enode.ID, bin uint8) error {
log.Debug("Requesting subscription by registry:", "registry", r.addr, "peer", id, "bin", bin)
// bin is always less than 256, so it is safe to convert it to type uint8
stream := NewStream("SYNC", FormatSyncBinKey(bin), true)
err := r.RequestSubscription(id, stream, NewRange(0, 0), High)
if err != nil {
log.Debug("Request subscription", "err", err, "peer", id, "stream", stream)
return err
}
return nil
}
func (r *Registry) runProtocol(p *p2p.Peer, rw p2p.MsgReadWriter) error {
peer := protocols.NewPeer(p, rw, r.spec)
bp := network.NewBzzPeer(peer)
np := network.NewPeer(bp, r.delivery.kad)
r.delivery.kad.On(np)
defer r.delivery.kad.Off(np)
return r.Run(bp)
}
// HandleMsg is the message handler that delegates incoming messages
func (p *Peer) HandleMsg(ctx context.Context, msg interface{}) error {
select {
case <-p.streamer.quit:
log.Trace("message received after the streamer is closed", "peer", p.ID())
// return without an error since streamer is closed and
// no messages should be handled as other subcomponents like
// storage leveldb may be closed
return nil
default:
}
switch msg := msg.(type) {
case *SubscribeMsg:
return p.handleSubscribeMsg(ctx, msg)
case *SubscribeErrorMsg:
return p.handleSubscribeErrorMsg(msg)
case *UnsubscribeMsg:
return p.handleUnsubscribeMsg(msg)
case *OfferedHashesMsg:
go func() {
err := p.handleOfferedHashesMsg(ctx, msg)
if err != nil {
log.Error(err.Error())
p.Drop()
}
}()
return nil
case *TakeoverProofMsg:
go func() {
err := p.handleTakeoverProofMsg(ctx, msg)
if err != nil {
log.Error(err.Error())
p.Drop()
}
}()
return nil
case *WantedHashesMsg:
go func() {
err := p.handleWantedHashesMsg(ctx, msg)
if err != nil {
log.Error(err.Error())
p.Drop()
}
}()
return nil
case *ChunkDeliveryMsgRetrieval:
// handling chunk delivery is the same for retrieval and syncing, so let's cast the msg
go func() {
err := p.streamer.delivery.handleChunkDeliveryMsg(ctx, p, ((*ChunkDeliveryMsg)(msg)))
if err != nil {
log.Error(err.Error())
p.Drop()
}
}()
return nil
case *ChunkDeliveryMsgSyncing:
// handling chunk delivery is the same for retrieval and syncing, so let's cast the msg
go func() {
err := p.streamer.delivery.handleChunkDeliveryMsg(ctx, p, ((*ChunkDeliveryMsg)(msg)))
if err != nil {
log.Error(err.Error())
p.Drop()
}
}()
return nil
case *RetrieveRequestMsg:
go func() {
err := p.streamer.delivery.handleRetrieveRequestMsg(ctx, p, msg)
if err != nil {
log.Error(err.Error())
p.Drop()
}
}()
return nil
case *RequestSubscriptionMsg:
return p.handleRequestSubscription(ctx, msg)
case *QuitMsg:
return p.handleQuitMsg(msg)
default:
return fmt.Errorf("unknown message type: %T", msg)
}
}
type server struct {
Server
stream Stream
priority uint8
currentBatch []byte
sessionIndex uint64
}
// setNextBatch adjusts passed interval based on session index and whether
// stream is live or history. It calls Server SetNextBatch with adjusted
// interval and returns batch hashes and their interval.
func (s *server) setNextBatch(from, to uint64) ([]byte, uint64, uint64, *HandoverProof, error) {
if s.stream.Live {
if from == 0 {
from = s.sessionIndex
}
if to <= from || from >= s.sessionIndex {
to = math.MaxUint64
}
} else {
if (to < from && to != 0) || from > s.sessionIndex {
return nil, 0, 0, nil, nil
}
if to == 0 || to > s.sessionIndex {
to = s.sessionIndex
}
}
return s.SetNextBatch(from, to)
}
// Server interface for outgoing peer Streamer
type Server interface {
// SessionIndex is called when a server is initialized
// to get the current cursor state of the stream data.
// Based on this index, live and history stream intervals
// will be adjusted before calling SetNextBatch.
SessionIndex() (uint64, error)
SetNextBatch(uint64, uint64) (hashes []byte, from uint64, to uint64, proof *HandoverProof, err error)
GetData(context.Context, []byte) ([]byte, error)
Close()
}
type client struct {
Client
stream Stream
priority uint8
sessionAt uint64
to uint64
next chan error
quit chan struct{}
intervalsKey string
intervalsStore state.Store
}
func peerStreamIntervalsKey(p *Peer, s Stream) string {
return p.ID().String() + s.String()
}
func (c *client) AddInterval(start, end uint64) (err error) {
i := &intervals.Intervals{}
if err = c.intervalsStore.Get(c.intervalsKey, i); err != nil {
return err
}
i.Add(start, end)
return c.intervalsStore.Put(c.intervalsKey, i)
}
func (c *client) NextInterval() (start, end uint64, err error) {
i := &intervals.Intervals{}
err = c.intervalsStore.Get(c.intervalsKey, i)
if err != nil {
return 0, 0, err
}
start, end = i.Next()
return start, end, nil
}
// Client interface for incoming peer Streamer
type Client interface {
NeedData(context.Context, []byte) func(context.Context) error
BatchDone(Stream, uint64, []byte, []byte) func() (*TakeoverProof, error)
Close()
}
func (c *client) nextBatch(from uint64) (nextFrom uint64, nextTo uint64) {
if c.to > 0 && from >= c.to {
return 0, 0
}
if c.stream.Live {
return from, 0
} else if from >= c.sessionAt {
if c.to > 0 {
return from, c.to
}
return from, math.MaxUint64
}
nextFrom, nextTo, err := c.NextInterval()
if err != nil {
log.Error("next intervals", "stream", c.stream)
return
}
if nextTo > c.to {
nextTo = c.to
}
if nextTo == 0 {
nextTo = c.sessionAt
}
return
}
func (c *client) batchDone(p *Peer, req *OfferedHashesMsg, hashes []byte) error {
if tf := c.BatchDone(req.Stream, req.From, hashes, req.Root); tf != nil {
tp, err := tf()
if err != nil {
return err
}
if err := p.Send(context.TODO(), tp); err != nil {
return err
}
if c.to > 0 && tp.Takeover.End >= c.to {
return p.streamer.Unsubscribe(p.Peer.ID(), req.Stream)
}
return nil
}
return c.AddInterval(req.From, req.To)
}
func (c *client) close() {
select {
case <-c.quit:
default:
close(c.quit)
}
c.Close()
}
// clientParams stores parameters for the new client
// between a subscription and initial offered hashes request handling.
type clientParams struct {
priority uint8
to uint64
// signal when the client is created
clientCreatedC chan struct{}
}
func newClientParams(priority uint8, to uint64) *clientParams {
return &clientParams{
priority: priority,
to: to,
clientCreatedC: make(chan struct{}),
}
}
func (c *clientParams) waitClient(ctx context.Context) error {
select {
case <-ctx.Done():
return ctx.Err()
case <-c.clientCreatedC:
return nil
}
}
func (c *clientParams) clientCreated() {
close(c.clientCreatedC)
}
// GetSpec returns the streamer spec to callers
// This used to be a global variable but for simulations with
// multiple nodes its fields (notably the Hook) would be overwritten
func (r *Registry) GetSpec() *protocols.Spec {
return r.spec
}
func (r *Registry) createSpec() {
// Spec is the spec of the streamer protocol
var spec = &protocols.Spec{
Name: "stream",
Version: 8,
MaxMsgSize: 10 * 1024 * 1024,
Messages: []interface{}{
UnsubscribeMsg{},
OfferedHashesMsg{},
WantedHashesMsg{},
TakeoverProofMsg{},
SubscribeMsg{},
RetrieveRequestMsg{},
ChunkDeliveryMsgRetrieval{},
SubscribeErrorMsg{},
RequestSubscriptionMsg{},
QuitMsg{},
ChunkDeliveryMsgSyncing{},
},
}
r.spec = spec
}
// An accountable message needs some meta information attached to it
// in order to evaluate the correct price
type StreamerPrices struct {
priceMatrix map[reflect.Type]*protocols.Price
registry *Registry
}
// Price implements the accounting interface and returns the price for a specific message
func (sp *StreamerPrices) Price(msg interface{}) *protocols.Price {
t := reflect.TypeOf(msg).Elem()
return sp.priceMatrix[t]
}
// Instead of hardcoding the price, get it
// through a function - it could be quite complex in the future
func (sp *StreamerPrices) getRetrieveRequestMsgPrice() uint64 {
return uint64(1)
}
// Instead of hardcoding the price, get it
// through a function - it could be quite complex in the future
func (sp *StreamerPrices) getChunkDeliveryMsgRetrievalPrice() uint64 {
return uint64(1)
}
// createPriceOracle sets up a matrix which can be queried to get
// the price for a message via the Price method
func (r *Registry) createPriceOracle() {
sp := &StreamerPrices{
registry: r,
}
sp.priceMatrix = map[reflect.Type]*protocols.Price{
reflect.TypeOf(ChunkDeliveryMsgRetrieval{}): {
Value: sp.getChunkDeliveryMsgRetrievalPrice(), // arbitrary price for now
PerByte: true,
Payer: protocols.Receiver,
},
reflect.TypeOf(RetrieveRequestMsg{}): {
Value: sp.getRetrieveRequestMsgPrice(), // arbitrary price for now
PerByte: false,
Payer: protocols.Sender,
},
}
r.prices = sp
}
func (r *Registry) Protocols() []p2p.Protocol {
return []p2p.Protocol{
{
Name: r.spec.Name,
Version: r.spec.Version,
Length: r.spec.Length(),
Run: r.runProtocol,
},
}
}
func (r *Registry) APIs() []rpc.API {
return []rpc.API{
{
Namespace: "stream",
Version: "3.0",
Service: r.api,
Public: false,
},
}
}
func (r *Registry) Start(server *p2p.Server) error {
log.Info("Streamer started")
return nil
}
func (r *Registry) Stop() error {
return nil
}
type Range struct {
From, To uint64
}
func NewRange(from, to uint64) *Range {
return &Range{
From: from,
To: to,
}
}
func (r *Range) String() string {
return fmt.Sprintf("%v-%v", r.From, r.To)
}
func getHistoryPriority(priority uint8) uint8 {
if priority == 0 {
return 0
}
return priority - 1
}
func getHistoryStream(s Stream) Stream {
return NewStream(s.Name, s.Key, false)
}
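// Illustrative sketch (not part of the original file): deriving the history counterpart
// of a live subscription. A live "SYNC" stream for bin 3 maps to a non-live stream with
// the same name and key, and a High priority maps to the next lower priority, Mid.
func exampleHistoryStream() (Stream, uint8) {
	live := NewStream("SYNC", FormatSyncBinKey(3), true)
	hist := getHistoryStream(live)   // same name and key, Live == false
	prio := getHistoryPriority(High) // High - 1 == Mid
	return hist, prio
}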
type API struct {
streamer *Registry
}
func NewAPI(r *Registry) *API {
return &API{
streamer: r,
}
}
func (api *API) SubscribeStream(peerId enode.ID, s Stream, history *Range, priority uint8) error {
return api.streamer.Subscribe(peerId, s, history, priority)
}
func (api *API) UnsubscribeStream(peerId enode.ID, s Stream) error {
return api.streamer.Unsubscribe(peerId, s)
}
/*
GetPeerServerSubscriptions is an API function that allows querying a peer for the stream subscriptions it has.
It can be called via RPC.
It returns a map of node IDs to arrays of string representations of Stream objects.
*/
func (api *API) GetPeerServerSubscriptions() map[string][]string {
pstreams := make(map[string][]string)
api.streamer.peersMu.RLock()
defer api.streamer.peersMu.RUnlock()
for id, p := range api.streamer.peers {
var streams []string
//every peer has a map of stream servers
//every stream server represents a subscription
p.serverMu.RLock()
for s := range p.servers {
//append the string representation of the stream
//to the list for this peer
streams = append(streams, s.String())
}
p.serverMu.RUnlock()
//set the array of stream servers to the map
pstreams[id.String()] = streams
}
return pstreams
}
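// Illustrative sketch (not part of the original file): querying a node's stream server
// subscriptions over RPC. The websocket endpoint is a hypothetical example, and whether
// the "stream" namespace is exposed on it depends on the node's configuration; the
// method name follows the usual namespace_method convention for the API registered in
// APIs() above.
func examplePeerServerSubscriptions() (map[string][]string, error) {
	client, err := rpc.Dial("ws://127.0.0.1:8546") // hypothetical endpoint
	if err != nil {
		return nil, err
	}
	defer client.Close()
	var subs map[string][]string
	if err := client.Call(&subs, "stream_getPeerServerSubscriptions"); err != nil {
		return nil, err
	}
	return subs, nil
}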

File diff suppressed because it is too large

235
network/stream/syncer.go Normal file
View File

@ -0,0 +1,235 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"context"
"fmt"
"strconv"
"time"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/storage"
)
const (
BatchSize = 128
)
// SwarmSyncerServer implements a Server for history syncing on bins
// offered streams:
// * live request delivery with or without checkback
// * (live/non-live historical) chunk syncing per proximity bin
type SwarmSyncerServer struct {
correlateId string //used for logging
po uint8
netStore *storage.NetStore
quit chan struct{}
}
// NewSwarmSyncerServer is the constructor for SwarmSyncerServer
func NewSwarmSyncerServer(po uint8, netStore *storage.NetStore, correlateId string) (*SwarmSyncerServer, error) {
return &SwarmSyncerServer{
correlateId: correlateId,
po: po,
netStore: netStore,
quit: make(chan struct{}),
}, nil
}
func RegisterSwarmSyncerServer(streamer *Registry, netStore *storage.NetStore) {
streamer.RegisterServerFunc("SYNC", func(p *Peer, t string, _ bool) (Server, error) {
po, err := ParseSyncBinKey(t)
if err != nil {
return nil, err
}
return NewSwarmSyncerServer(po, netStore, fmt.Sprintf("%s|%d", p.ID(), po))
})
// streamer.RegisterServerFunc(stream, func(p *Peer) (Server, error) {
// return NewOutgoingProvableSwarmSyncer(po, db)
// })
}
// Close needs to be called on a stream server
func (s *SwarmSyncerServer) Close() {
close(s.quit)
}
// GetData retrieves the actual chunk from netstore
func (s *SwarmSyncerServer) GetData(ctx context.Context, key []byte) ([]byte, error) {
ch, err := s.netStore.Get(ctx, chunk.ModeGetSync, storage.Address(key))
if err != nil {
return nil, err
}
return ch.Data(), nil
}
// SessionIndex returns current storage bin (po) index.
func (s *SwarmSyncerServer) SessionIndex() (uint64, error) {
return s.netStore.LastPullSubscriptionBinID(s.po)
}
// SetNextBatch retrieves the next batch of hashes from the localstore.
// It expects a range of bin IDs, both ends inclusive in syncing, and returns
// concatenated byte slice of chunk addresses and bin IDs of the first and
// the last one in that slice. The batch may have up to BatchSize number of
// chunk addresses. If at least one chunk is added to the batch and no new chunks
// are added in batchTimeout period, the batch will be returned. This function
// will block until new chunks are received from localstore pull subscription.
func (s *SwarmSyncerServer) SetNextBatch(from, to uint64) ([]byte, uint64, uint64, *HandoverProof, error) {
batchStart := time.Now()
descriptors, stop := s.netStore.SubscribePull(context.Background(), s.po, from, to)
defer stop()
const batchTimeout = 2 * time.Second
var (
batch []byte
batchSize int
batchStartID *uint64
batchEndID uint64
timer *time.Timer
timerC <-chan time.Time
)
defer func(start time.Time) {
metrics.GetOrRegisterResettingTimer("syncer.set-next-batch.total-time", nil).UpdateSince(start)
metrics.GetOrRegisterCounter("syncer.set-next-batch.batch-size", nil).Inc(int64(batchSize))
if timer != nil {
timer.Stop()
}
}(batchStart)
for iterate := true; iterate; {
select {
case d, ok := <-descriptors:
if !ok {
iterate = false
break
}
batch = append(batch, d.Address[:]...)
// This is the most naive approach to label the chunk as synced
// allowing it to be garbage collected. A proper way requires
// validating that the chunk is successfully stored by the peer.
err := s.netStore.Set(context.Background(), chunk.ModeSetSync, d.Address)
if err != nil {
metrics.GetOrRegisterCounter("syncer.set-next-batch.set-sync-err", nil).Inc(1)
log.Debug("syncer pull subscription - err setting chunk as synced", "correlateId", s.correlateId, "err", err)
return nil, 0, 0, nil, err
}
batchSize++
if batchStartID == nil {
// set batch start id only if
// this is the first iteration
batchStartID = &d.BinID
}
batchEndID = d.BinID
if batchSize >= BatchSize {
iterate = false
metrics.GetOrRegisterCounter("syncer.set-next-batch.full-batch", nil).Inc(1)
log.Trace("syncer pull subscription - batch size reached", "correlateId", s.correlateId, "batchSize", batchSize, "batchStartID", batchStartID, "batchEndID", batchEndID)
}
if timer == nil {
timer = time.NewTimer(batchTimeout)
} else {
log.Trace("syncer pull subscription - stopping timer", "correlateId", s.correlateId)
if !timer.Stop() {
<-timer.C
}
log.Trace("syncer pull subscription - channel drained, resetting timer", "correlateId", s.correlateId)
timer.Reset(batchTimeout)
}
timerC = timer.C
case <-timerC:
// return batch if new chunks are not
// received after some time
iterate = false
metrics.GetOrRegisterCounter("syncer.set-next-batch.timer-expire", nil).Inc(1)
log.Trace("syncer pull subscription timer expired", "correlateId", s.correlateId, "batchSize", batchSize, "batchStartID", batchStartID, "batchEndID", batchEndID)
case <-s.quit:
iterate = false
log.Trace("syncer pull subscription - quit received", "correlateId", s.correlateId, "batchSize", batchSize, "batchStartID", batchStartID, "batchEndID", batchEndID)
}
}
if batchStartID == nil {
// if batch start id is not set, return 0
batchStartID = new(uint64)
}
return batch, *batchStartID, batchEndID, nil, nil
}
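// Illustrative sketch (not part of the original file): a batch returned by SetNextBatch
// is a plain concatenation of 32-byte chunk addresses (HashSize each), so it can be
// split back into individual addresses like this.
func exampleSplitBatch(batch []byte) []storage.Address {
	addrs := make([]storage.Address, 0, len(batch)/HashSize)
	for i := 0; i+HashSize <= len(batch); i += HashSize {
		addrs = append(addrs, storage.Address(batch[i:i+HashSize]))
	}
	return addrs
}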
// SwarmSyncerClient implements the Client for SYNC streams
type SwarmSyncerClient struct {
netStore *storage.NetStore
peer *Peer
stream Stream
}
// NewSwarmSyncerClient is a constructor for the provable data exchange syncer
func NewSwarmSyncerClient(p *Peer, netStore *storage.NetStore, stream Stream) (*SwarmSyncerClient, error) {
return &SwarmSyncerClient{
netStore: netStore,
peer: p,
stream: stream,
}, nil
}
// RegisterSwarmSyncerClient registers the client constructor function
// to handle incoming sync streams
func RegisterSwarmSyncerClient(streamer *Registry, netStore *storage.NetStore) {
streamer.RegisterClientFunc("SYNC", func(p *Peer, t string, live bool) (Client, error) {
return NewSwarmSyncerClient(p, netStore, NewStream("SYNC", t, live))
})
}
// NeedData returns the netstore fetch wait function for the chunk with the given key
func (s *SwarmSyncerClient) NeedData(ctx context.Context, key []byte) (wait func(context.Context) error) {
return s.netStore.FetchFunc(ctx, key)
}
// BatchDone is currently a noop for the syncer client; takeover proofs are disabled (see TODO below)
func (s *SwarmSyncerClient) BatchDone(stream Stream, from uint64, hashes []byte, root []byte) func() (*TakeoverProof, error) {
// TODO: reenable this with putter/getter refactored code
// if s.chunker != nil {
// return func() (*TakeoverProof, error) { return s.TakeoverProof(stream, from, hashes, root) }
// }
return nil
}
func (s *SwarmSyncerClient) Close() {}
// base for parsing and formatting sync bin key
// it must be 2 <= base <= 36
const syncBinKeyBase = 36
// FormatSyncBinKey returns a string representation of
// Kademlia bin number to be used as key for SYNC stream.
func FormatSyncBinKey(bin uint8) string {
return strconv.FormatUint(uint64(bin), syncBinKeyBase)
}
// ParseSyncBinKey parses the string representation
// and returns the Kademlia bin number.
func ParseSyncBinKey(s string) (uint8, error) {
bin, err := strconv.ParseUint(s, syncBinKeyBase, 8)
if err != nil {
return 0, err
}
return uint8(bin), nil
}
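// Illustrative sketch (not part of the original file): SYNC stream keys are Kademlia
// bin numbers encoded in base 36, so bin 10 is formatted as "a" and parses back to 10.
func exampleSyncBinKeyRoundTrip() {
	key := FormatSyncBinKey(10) // "a"
	bin, err := ParseSyncBinKey(key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("key=%s bin=%d\n", key, bin)
}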

View File

@ -0,0 +1,347 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package stream
import (
"context"
"errors"
"fmt"
"io/ioutil"
"os"
"sync"
"testing"
"time"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
"github.com/ethersphere/swarm/chunk"
"github.com/ethersphere/swarm/log"
"github.com/ethersphere/swarm/network"
"github.com/ethersphere/swarm/network/simulation"
"github.com/ethersphere/swarm/state"
"github.com/ethersphere/swarm/storage"
"github.com/ethersphere/swarm/testutil"
)
const dataChunkCount = 200
func TestSyncerSimulation(t *testing.T) {
testSyncBetweenNodes(t, 2, dataChunkCount, true, 1)
// This test uses much more memory when running with
// race detector. Allow it to finish successfully by
// reducing its scope, and still check for data races
// with the smallest number of nodes.
if !testutil.RaceEnabled {
testSyncBetweenNodes(t, 4, dataChunkCount, true, 1)
testSyncBetweenNodes(t, 8, dataChunkCount, true, 1)
testSyncBetweenNodes(t, 16, dataChunkCount, true, 1)
}
}
func testSyncBetweenNodes(t *testing.T, nodes, chunkCount int, skipCheck bool, po uint8) {
sim := simulation.New(map[string]simulation.ServiceFunc{
"streamer": func(ctx *adapters.ServiceContext, bucket *sync.Map) (s node.Service, cleanup func(), err error) {
addr := network.NewAddr(ctx.Config.Node())
//hack to put addresses in same space
addr.OAddr[0] = byte(0)
netStore, delivery, clean, err := newNetStoreAndDeliveryWithBzzAddr(ctx, bucket, addr)
if err != nil {
return nil, nil, err
}
var dir string
var store *state.DBStore
if testutil.RaceEnabled {
// Use on-disk DBStore to reduce memory consumption in race tests.
dir, err = ioutil.TempDir("", "swarm-stream-")
if err != nil {
return nil, nil, err
}
store, err = state.NewDBStore(dir)
if err != nil {
return nil, nil, err
}
} else {
store = state.NewInmemoryStore()
}
r := NewRegistry(addr.ID(), delivery, netStore, store, &RegistryOptions{
Syncing: SyncingAutoSubscribe,
SkipCheck: skipCheck,
}, nil)
cleanup = func() {
r.Close()
clean()
if dir != "" {
os.RemoveAll(dir)
}
}
return r, cleanup, nil
},
})
defer sim.Close()
// create context for simulation run
timeout := 30 * time.Second
ctx, cancel := context.WithTimeout(context.Background(), timeout)
// defer cancel should come before defer simulation teardown
defer cancel()
_, err := sim.AddNodesAndConnectChain(nodes)
if err != nil {
t.Fatal(err)
}
result := sim.Run(ctx, func(ctx context.Context, sim *simulation.Simulation) (err error) {
nodeIDs := sim.UpNodeIDs()
nodeIndex := make(map[enode.ID]int)
for i, id := range nodeIDs {
nodeIndex[id] = i
}
disconnected := watchDisconnections(ctx, sim)
defer func() {
if err != nil && disconnected.bool() {
err = errors.New("disconnect events received")
}
}()
// each node subscribes to the SYNC stream of the next node in the chain
for j := 0; j < nodes-1; j++ {
id := nodeIDs[j]
client, err := sim.Net.GetNode(id).Client()
if err != nil {
return fmt.Errorf("node %s client: %v", id, err)
}
sid := nodeIDs[j+1]
err = client.CallContext(ctx, nil, "stream_subscribeStream", sid, NewStream("SYNC", FormatSyncBinKey(1), false), NewRange(0, 0), Top)
if err != nil {
return err
}
if j > 0 || nodes == 2 {
item, ok := sim.NodeItem(nodeIDs[j], bucketKeyFileStore)
if !ok {
return fmt.Errorf("No filestore")
}
fileStore := item.(*storage.FileStore)
size := chunkCount * chunkSize
_, wait, err := fileStore.Store(ctx, testutil.RandomReader(j, size), int64(size), false)
if err != nil {
return fmt.Errorf("fileStore.Store: %v", err)
}
wait(ctx)
}
}
// here we distribute chunks of a random file into stores 1...nodes
// collect hashes in po 1 bin for each node
hashes := make([][]storage.Address, nodes)
totalHashes := 0
hashCounts := make([]int, nodes)
for i := nodes - 1; i >= 0; i-- {
if i < nodes-1 {
hashCounts[i] = hashCounts[i+1]
}
item, ok := sim.NodeItem(nodeIDs[i], bucketKeyStore)
if !ok {
return fmt.Errorf("No DB")
}
store := item.(chunk.Store)
until, err := store.LastPullSubscriptionBinID(po)
if err != nil {
return err
}
if until > 0 {
c, _ := store.SubscribePull(ctx, po, 0, until)
for iterate := true; iterate; {
select {
case cd, ok := <-c:
if !ok {
iterate = false
break
}
hashes[i] = append(hashes[i], cd.Address)
totalHashes++
hashCounts[i]++
case <-ctx.Done():
return ctx.Err()
}
}
}
}
var total, found int
for _, node := range nodeIDs {
i := nodeIndex[node]
for j := i; j < nodes; j++ {
total += len(hashes[j])
for _, key := range hashes[j] {
item, ok := sim.NodeItem(nodeIDs[j], bucketKeyStore)
if !ok {
return fmt.Errorf("No DB")
}
db := item.(chunk.Store)
_, err := db.Get(ctx, chunk.ModeGetRequest, key)
if err == nil {
found++
}
}
}
log.Debug("sync check", "node", node, "index", i, "bin", po, "found", found, "total", total)
}
if total == found && total > 0 {
return nil
}
return fmt.Errorf("Total not equallying found %v: total is %d", found, total)
})
if result.Error != nil {
t.Fatal(result.Error)
}
}

// TestSameVersionID checks that streamer peers with the same protocol
// version see each other at the streamer level.
func TestSameVersionID(t *testing.T) {
// the version ID assigned to every node
v := uint(1)
sim := simulation.New(map[string]simulation.ServiceFunc{
"streamer": func(ctx *adapters.ServiceContext, bucket *sync.Map) (s node.Service, cleanup func(), err error) {
addr, netStore, delivery, clean, err := newNetStoreAndDelivery(ctx, bucket)
if err != nil {
return nil, nil, err
}
r := NewRegistry(addr.ID(), delivery, netStore, state.NewInmemoryStore(), &RegistryOptions{
Syncing: SyncingAutoSubscribe,
}, nil)
bucket.Store(bucketKeyRegistry, r)
//assign to each node the same version ID
r.spec.Version = v
cleanup = func() {
r.Close()
clean()
}
return r, cleanup, nil
},
})
defer sim.Close()
//connect just two nodes
log.Info("Adding nodes to simulation")
_, err := sim.AddNodesAndConnectChain(2)
if err != nil {
t.Fatal(err)
}
log.Info("Starting simulation")
ctx := context.Background()
//make sure they have time to connect
time.Sleep(200 * time.Millisecond)
result := sim.Run(ctx, func(ctx context.Context, sim *simulation.Simulation) error {
// get the pivot node's stream registry
nodes := sim.UpNodeIDs()
item, ok := sim.NodeItem(nodes[0], bucketKeyRegistry)
if !ok {
return fmt.Errorf("no registry")
}
registry := item.(*Registry)
// the peers should connect, so looking up the other peer must not return nil
if registry.getPeer(nodes[1]) == nil {
return errors.New("expected the peer to be found, but it is nil")
}
}
return nil
})
if result.Error != nil {
t.Fatal(result.Error)
}
log.Info("Simulation ended")
}

// TestDifferentVersionID verifies that if the streamer protocol versions do not
// match, the peers are not connected at the streamer level.
func TestDifferentVersionID(t *testing.T) {
// version ID, incremented per node so that no two nodes share the same value
v := uint(0)
sim := simulation.New(map[string]simulation.ServiceFunc{
"streamer": func(ctx *adapters.ServiceContext, bucket *sync.Map) (s node.Service, cleanup func(), err error) {
addr, netStore, delivery, clean, err := newNetStoreAndDelivery(ctx, bucket)
if err != nil {
return nil, nil, err
}
r := NewRegistry(addr.ID(), delivery, netStore, state.NewInmemoryStore(), &RegistryOptions{
Syncing: SyncingAutoSubscribe,
}, nil)
bucket.Store(bucketKeyRegistry, r)
//increase the version ID for each node
v++
r.spec.Version = v
cleanup = func() {
r.Close()
clean()
}
return r, cleanup, nil
},
})
defer sim.Close()
//connect the nodes
log.Info("Adding nodes to simulation")
_, err := sim.AddNodesAndConnectChain(2)
if err != nil {
t.Fatal(err)
}
log.Info("Starting simulation")
ctx := context.Background()
//make sure they have time to connect
time.Sleep(200 * time.Millisecond)
result := sim.Run(ctx, func(ctx context.Context, sim *simulation.Simulation) error {
// get the pivot node's stream registry
nodes := sim.UpNodeIDs()
item, ok := sim.NodeItem(nodes[0], bucketKeyRegistry)
if !ok {
return fmt.Errorf("no registry")
}
registry := item.(*Registry)
// the version IDs differ, so the other peer must not have been registered at the streamer level
if registry.getPeer(nodes[1]) != nil {
return errors.New("expected the peer to be nil, but it is not")
}
return nil
})
if result.Error != nil {
t.Fatal(result.Error)
}
log.Info("Simulation ended")
}
