| repo | sha | path | url | language | split | doc | sign | problem | output |
|---|---|---|---|---|---|---|---|---|---|
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L130-L135 | go | train | // IsValid returns true if the client is registered, false otherwise. | func (cs *clientStore) isValid(ID string, connID []byte) bool | // IsValid returns true if the client is registered, false otherwise.
func (cs *clientStore) isValid(ID string, connID []byte) bool | {
cs.RLock()
valid := cs.lookupByConnIDOrID(ID, connID) != nil
cs.RUnlock()
return valid
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L144-L172 | go | train | // isValidWithTimeout will return true if the client is registered,
// false if not.
// When the client is not yet registered, this call sets up a go channel
// and waits up to `timeout` for the register() call to send the newly
// registered client to the channel.
// On timeout, this call returns false to indicate that the client
// has still not registered. | func (cs *clientStore) isValidWithTimeout(ID string, connID []byte, timeout time.Duration) bool | // isValidWithTimeout will return true if the client is registered,
// false if not.
// When the client is not yet registered, this call sets up a go channel
// and waits up to `timeout` for the register() call to send the newly
// registered client to the channel.
// On timeout, this call returns false to indicate that the client
// has still not registered.
func (cs *clientStore) isValidWithTimeout(ID string, connID []byte, timeout time.Duration) bool | {
cs.Lock()
c := cs.lookupByConnIDOrID(ID, connID)
if c != nil {
cs.Unlock()
return true
}
if cs.knownToBeInvalid(ID, connID) {
cs.Unlock()
return false
}
if cs.waitOnRegister == nil {
cs.waitOnRegister = make(map[string]chan struct{})
}
ch := make(chan struct{}, 1)
cs.waitOnRegister[ID] = ch
cs.Unlock()
select {
case <-ch:
return true
case <-time.After(timeout):
// We timed out, remove the entry in the map
cs.Lock()
delete(cs.waitOnRegister, ID)
cs.addToKnownInvalid(ID, connID)
cs.Unlock()
return false
}
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L197-L205 | go | train | // Lookup client by ConnID if not nil, otherwise by clientID.
// Assume at least clientStore RLock is held on entry. | func (cs *clientStore) lookupByConnIDOrID(ID string, connID []byte) *client | // Lookup client by ConnID if not nil, otherwise by clientID.
// Assume at least clientStore RLock is held on entry.
func (cs *clientStore) lookupByConnIDOrID(ID string, connID []byte) *client | {
var c *client
if len(connID) > 0 {
c = cs.connIDs[string(connID)]
} else {
c = cs.clients[ID]
}
return c
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L208-L213 | go | train | // Lookup a client | func (cs *clientStore) lookup(ID string) *client | // Lookup a client
func (cs *clientStore) lookup(ID string) *client | {
cs.RLock()
c := cs.clients[ID]
cs.RUnlock()
return c
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L216-L221 | go | train | // Lookup a client by connection ID | func (cs *clientStore) lookupByConnID(connID []byte) *client | // Lookup a client by connection ID
func (cs *clientStore) lookupByConnID(connID []byte) *client | {
cs.RLock()
c := cs.connIDs[string(connID)]
cs.RUnlock()
return c
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L225-L236 | go | train | // GetSubs returns the list of subscriptions for the client identified by ID,
// or nil if such client is not found. | func (cs *clientStore) getSubs(ID string) []*subState | // GetSubs returns the list of subscriptions for the client identified by ID,
// or nil if such client is not found.
func (cs *clientStore) getSubs(ID string) []*subState | {
cs.RLock()
defer cs.RUnlock()
c := cs.clients[ID]
if c == nil {
return nil
}
c.RLock()
subs := c.getSubsCopy()
c.RUnlock()
return subs
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L241-L252 | go | train | // AddSub adds the subscription to the client identified by clientID
// and returns true only if the client has not been unregistered,
// otherwise returns false. | func (cs *clientStore) addSub(ID string, sub *subState) bool | // AddSub adds the subscription to the client identified by clientID
// and returns true only if the client has not been unregistered,
// otherwise returns false.
func (cs *clientStore) addSub(ID string, sub *subState) bool | {
cs.RLock()
defer cs.RUnlock()
c := cs.clients[ID]
if c == nil {
return false
}
c.Lock()
c.subs = append(c.subs, sub)
c.Unlock()
return true
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L257-L269 | go | train | // RemoveSub removes the subscription from the client identified by clientID
// and returns true only if the client has not been unregistered and
// the subscription was found, otherwise returns false. | func (cs *clientStore) removeSub(ID string, sub *subState) bool | // RemoveSub removes the subscription from the client identified by clientID
// and returns true only if the client has not been unregistered and that
// the subscription was found, otherwise returns false.
func (cs *clientStore) removeSub(ID string, sub *subState) bool | {
cs.RLock()
defer cs.RUnlock()
c := cs.clients[ID]
if c == nil {
return false
}
c.Lock()
removed := false
c.subs, removed = sub.deleteFromList(c.subs)
c.Unlock()
return removed
} |
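The `addSub`/`removeSub` pattern above takes the store's read lock plus a per-client lock, so mutations on different clients can proceed concurrently while the map itself stays read-only. A minimal sketch of the same two-level scheme (types simplified, not the server's actual definitions):

```go
package main

import (
	"fmt"
	"sync"
)

type sub struct{ id int }

// client carries its own lock guarding its subscription slice.
type client struct {
	sync.Mutex
	subs []*sub
}

// store's RWMutex guards only the map; per-client state has its own lock.
type store struct {
	sync.RWMutex
	clients map[string]*client
}

func (s *store) addSub(id string, sb *sub) bool {
	s.RLock() // read lock: we never mutate the map here, only a client in it
	defer s.RUnlock()
	c := s.clients[id]
	if c == nil {
		return false
	}
	c.Lock() // per-client lock serializes mutation of this client's subs
	c.subs = append(c.subs, sb)
	c.Unlock()
	return true
}

func main() {
	s := &store{clients: map[string]*client{"a": {}, "b": {}}}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			id := "a"
			if i%2 == 0 {
				id = "b"
			}
			s.addSub(id, &sub{id: i})
		}(i)
	}
	wg.Wait()
	fmt.Println(len(s.clients["a"].subs) + len(s.clients["b"].subs)) // 100
}
```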
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L273-L283 | go | train | // recoverClients recreates the content of the client store based on clients
// information recovered from the Store. | func (cs *clientStore) recoverClients(clients []*stores.Client) | // recoverClients recreates the content of the client store based on clients
// information recovered from the Store.
func (cs *clientStore) recoverClients(clients []*stores.Client) | {
cs.Lock()
for _, sc := range clients {
client := &client{info: sc, subs: make([]*subState, 0, 4)}
cs.clients[client.info.ID] = client
if len(client.info.ConnID) > 0 {
cs.connIDs[string(client.info.ConnID)] = client
}
}
cs.Unlock()
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L287-L299 | go | train | // setClientHB will lookup the client `ID` and, if present, set the
// client's timer with the given interval and function. | func (cs *clientStore) setClientHB(ID string, interval time.Duration, f func()) | // setClientHB will lookup the client `ID` and, if present, set the
// client's timer with the given interval and function.
func (cs *clientStore) setClientHB(ID string, interval time.Duration, f func()) | {
cs.RLock()
defer cs.RUnlock()
c := cs.clients[ID]
if c == nil {
return
}
c.Lock()
if c.hbt == nil {
c.hbt = time.AfterFunc(interval, f)
}
c.Unlock()
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L303-L313 | go | train | // removeClientHB will stop and remove the client's heartbeat timer, if
// present. | func (cs *clientStore) removeClientHB(c *client) | // removeClientHB will stop and remove the client's heartbeat timer, if
// present.
func (cs *clientStore) removeClientHB(c *client) | {
if c == nil {
return
}
c.Lock()
if c.hbt != nil {
c.hbt.Stop()
c.hbt = nil
}
c.Unlock()
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L318-L326 | go | train | // getClients returns a snapshot of the registered clients.
// The map itself is a copy (can be iterated safely), but
// the client objects returned are the ones stored in the clientStore. | func (cs *clientStore) getClients() map[string]*client | // getClients returns a snapshot of the registered clients.
// The map itself is a copy (can be iterated safely), but
// the client objects returned are the ones stored in the clientStore.
func (cs *clientStore) getClients() map[string]*client | {
cs.RLock()
defer cs.RUnlock()
clients := make(map[string]*client, len(cs.clients))
for _, c := range cs.clients {
clients[c.info.ID] = c
}
return clients
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/client.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/client.go#L329-L334 | go | train | // count returns the number of registered clients | func (cs *clientStore) count() int | // count returns the number of registered clients
func (cs *clientStore) count() int | {
cs.RLock()
total := len(cs.clients)
cs.RUnlock()
return total
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/signal_windows.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/signal_windows.go#L22-L33 | go | train | // Signal Handling | func (s *StanServer) handleSignals() | // Signal Handling
func (s *StanServer) handleSignals() | {
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt)
go func() {
// We register only 1 signal (os.Interrupt) so we don't
// need to check which one we get, since Notify() relays
// only the ones that are registered.
<-c
s.Shutdown()
os.Exit(0)
}()
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/clustering.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/clustering.go#L96-L130 | go | train | // shutdown attempts to stop the Raft node. | func (r *raftNode) shutdown() error | // shutdown attempts to stop the Raft node.
func (r *raftNode) shutdown() error | {
r.Lock()
if r.closed {
r.Unlock()
return nil
}
r.closed = true
r.Unlock()
if r.Raft != nil {
if err := r.Raft.Shutdown().Error(); err != nil {
return err
}
}
if r.transport != nil {
if err := r.transport.Close(); err != nil {
return err
}
}
if r.store != nil {
if err := r.store.Close(); err != nil {
return err
}
}
if r.joinSub != nil {
if err := r.joinSub.Unsubscribe(); err != nil {
return err
}
}
if r.logInput != nil {
if err := r.logInput.Close(); err != nil {
return err
}
}
return nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/clustering.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/clustering.go#L133-L199 | go | train | // createRaftNode creates and starts a new Raft node. | func (s *StanServer) createServerRaftNode(hasStreamingState bool) error | // createRaftNode creates and starts a new Raft node.
func (s *StanServer) createServerRaftNode(hasStreamingState bool) error | {
var (
name = s.info.ClusterID
addr = s.getClusteringAddr(name)
existingState, err = s.createRaftNode(name)
)
if err != nil {
return err
}
if !existingState && hasStreamingState {
return fmt.Errorf("streaming state was recovered but cluster log path %q is empty", s.opts.Clustering.RaftLogPath)
}
node := s.raft
// Bootstrap if there is no previous state and we are starting this node as
// a seed or a cluster configuration is provided.
bootstrap := !existingState && (s.opts.Clustering.Bootstrap || len(s.opts.Clustering.Peers) > 0)
if bootstrap {
if err := s.bootstrapCluster(name, node.Raft); err != nil {
node.shutdown()
return err
}
} else if !existingState {
// Attempt to join the cluster if we're not bootstrapping.
req, err := (&spb.RaftJoinRequest{NodeID: s.opts.Clustering.NodeID, NodeAddr: addr}).Marshal()
if err != nil {
panic(err)
}
var (
joined = false
resp = &spb.RaftJoinResponse{}
)
s.log.Debugf("Joining Raft group %s", name)
// Attempt to join up to 5 times before giving up.
for i := 0; i < 5; i++ {
r, err := s.ncr.Request(fmt.Sprintf("%s.%s.join", defaultRaftPrefix, name), req, joinRaftGroupTimeout)
if err != nil {
time.Sleep(20 * time.Millisecond)
continue
}
if err := resp.Unmarshal(r.Data); err != nil {
time.Sleep(20 * time.Millisecond)
continue
}
if resp.Error != "" {
time.Sleep(20 * time.Millisecond)
continue
}
joined = true
break
}
if !joined {
node.shutdown()
return fmt.Errorf("failed to join Raft group %s", name)
}
}
if s.opts.Clustering.Bootstrap {
// If node is started with bootstrap, regardless of whether state exists, try to
// detect (and report) other nodes in same cluster started with bootstrap=true.
s.wg.Add(1)
go func() {
s.detectBootstrapMisconfig(name)
s.wg.Done()
}()
}
return nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/clustering.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/clustering.go#L262-L415 | go | train | // createRaftNode creates and starts a new Raft node with the given name and FSM. | func (s *StanServer) createRaftNode(name string) (bool, error) | // createRaftNode creates and starts a new Raft node with the given name and FSM.
func (s *StanServer) createRaftNode(name string) (bool, error) | {
path := filepath.Join(s.opts.Clustering.RaftLogPath, name)
if _, err := os.Stat(path); os.IsNotExist(err) {
if err := os.MkdirAll(path, os.ModeDir+os.ModePerm); err != nil {
return false, err
}
}
// We create s.raft early because once NewRaft() is called, the
// raft code may asynchronously invoke FSM.Apply() and FSM.Restore()
// So we want the object to exist so we can check on leader atomic, etc..
s.raft = &raftNode{}
raftLogFileName := filepath.Join(path, raftLogFile)
store, err := newRaftLog(s.log, raftLogFileName, s.opts.Clustering.Sync, int(s.opts.Clustering.TrailingLogs),
s.opts.Encrypt, s.opts.EncryptionCipher, s.opts.EncryptionKey)
if err != nil {
return false, err
}
cacheStore, err := raft.NewLogCache(s.opts.Clustering.LogCacheSize, store)
if err != nil {
store.Close()
return false, err
}
addr := s.getClusteringAddr(name)
config := raft.DefaultConfig()
// For tests
if runningInTests {
config.ElectionTimeout = 100 * time.Millisecond
config.HeartbeatTimeout = 100 * time.Millisecond
config.LeaderLeaseTimeout = 50 * time.Millisecond
} else {
if s.opts.Clustering.RaftHeartbeatTimeout == 0 {
s.opts.Clustering.RaftHeartbeatTimeout = defaultRaftHBTimeout
}
if s.opts.Clustering.RaftElectionTimeout == 0 {
s.opts.Clustering.RaftElectionTimeout = defaultRaftElectionTimeout
}
if s.opts.Clustering.RaftLeaseTimeout == 0 {
s.opts.Clustering.RaftLeaseTimeout = defaultRaftLeaseTimeout
}
if s.opts.Clustering.RaftCommitTimeout == 0 {
s.opts.Clustering.RaftCommitTimeout = defaultRaftCommitTimeout
}
config.HeartbeatTimeout = s.opts.Clustering.RaftHeartbeatTimeout
config.ElectionTimeout = s.opts.Clustering.RaftElectionTimeout
config.LeaderLeaseTimeout = s.opts.Clustering.RaftLeaseTimeout
config.CommitTimeout = s.opts.Clustering.RaftCommitTimeout
}
config.LocalID = raft.ServerID(s.opts.Clustering.NodeID)
config.TrailingLogs = uint64(s.opts.Clustering.TrailingLogs)
logWriter := &raftLogger{s}
config.LogOutput = logWriter
snapshotStore, err := raft.NewFileSnapshotStore(path, s.opts.Clustering.LogSnapshots, logWriter)
if err != nil {
store.Close()
return false, err
}
sl, err := snapshotStore.List()
if err != nil {
store.Close()
return false, err
}
// TODO: using a single NATS conn for every channel might be a bottleneck. Maybe pool conns?
transport, err := newNATSTransport(addr, s.ncr, 2*time.Second, logWriter)
if err != nil {
store.Close()
return false, err
}
// Make the snapshot process never timeout... check (s *serverSnapshot).Persist() for details
transport.TimeoutScale = 1
// Set up a channel for reliable leader notifications.
raftNotifyCh := make(chan bool, 1)
config.NotifyCh = raftNotifyCh
fsm := &raftFSM{server: s}
fsm.Lock()
fsm.snapshotsOnInit = len(sl)
fsm.Unlock()
s.raft.fsm = fsm
node, err := raft.NewRaft(config, fsm, cacheStore, store, snapshotStore, transport)
if err != nil {
transport.Close()
store.Close()
return false, err
}
if testPauseAfterNewRaftCalled {
time.Sleep(time.Second)
}
existingState, err := raft.HasExistingState(cacheStore, store, snapshotStore)
if err != nil {
node.Shutdown()
transport.Close()
store.Close()
return false, err
}
if existingState {
s.log.Debugf("Loaded existing state for Raft group %s", name)
}
// Handle requests to join the cluster.
sub, err := s.ncr.Subscribe(fmt.Sprintf("%s.%s.join", defaultRaftPrefix, name), func(msg *nats.Msg) {
// Drop the request if we're not the leader. There's no race condition
// after this check because even if we proceed with the cluster add, it
// will fail if the node is not the leader as cluster changes go
// through the Raft log.
if node.State() != raft.Leader {
return
}
req := &spb.RaftJoinRequest{}
if err := req.Unmarshal(msg.Data); err != nil {
s.log.Errorf("Invalid join request for Raft group %s", name)
return
}
// Add the node as a voter. This is idempotent. No-op if the request
// came from ourselves.
resp := &spb.RaftJoinResponse{}
if req.NodeID != s.opts.Clustering.NodeID {
future := node.AddVoter(
raft.ServerID(req.NodeID),
raft.ServerAddress(req.NodeAddr), 0, 0)
if err := future.Error(); err != nil {
resp.Error = err.Error()
}
}
// Send the response.
r, err := resp.Marshal()
if err != nil {
panic(err)
}
s.ncr.Publish(msg.Reply, r)
})
if err != nil {
node.Shutdown()
transport.Close()
store.Close()
return false, err
}
s.raft.Raft = node
s.raft.store = store
s.raft.transport = transport
s.raft.logInput = logWriter
s.raft.notifyCh = raftNotifyCh
s.raft.joinSub = sub
return existingState, nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/clustering.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/clustering.go#L420-L444 | go | train | // bootstrapCluster bootstraps the node for the provided Raft group either as a
// seed node or with the given peer configuration, depending on configuration
// and with the latter taking precedence. | func (s *StanServer) bootstrapCluster(name string, node *raft.Raft) error | // bootstrapCluster bootstraps the node for the provided Raft group either as a
// seed node or with the given peer configuration, depending on configuration
// and with the latter taking precedence.
func (s *StanServer) bootstrapCluster(name string, node *raft.Raft) error | {
var (
addr = s.getClusteringAddr(name)
// Include ourself in the cluster.
servers = []raft.Server{raft.Server{
ID: raft.ServerID(s.opts.Clustering.NodeID),
Address: raft.ServerAddress(addr),
}}
)
if len(s.opts.Clustering.Peers) > 0 {
// Bootstrap using provided cluster configuration.
s.log.Debugf("Bootstrapping Raft group %s using provided configuration", name)
for _, peer := range s.opts.Clustering.Peers {
servers = append(servers, raft.Server{
ID: raft.ServerID(peer),
Address: raft.ServerAddress(s.getClusteringPeerAddr(name, peer)),
})
}
} else {
// Bootstrap as a seed node.
s.log.Debugf("Bootstrapping Raft group %s as seed node", name)
}
config := raft.Configuration{Servers: servers}
return node.BootstrapCluster(config).Error()
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/clustering.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/clustering.go#L458-L525 | go | train | // Apply log is invoked once a log entry is committed.
// It returns a value which will be made available in the
// ApplyFuture returned by Raft.Apply method if that
// method was called on the same Raft node as the FSM. | func (r *raftFSM) Apply(l *raft.Log) interface{} | // Apply log is invoked once a log entry is committed.
// It returns a value which will be made available in the
// ApplyFuture returned by Raft.Apply method if that
// method was called on the same Raft node as the FSM.
func (r *raftFSM) Apply(l *raft.Log) interface{} | {
s := r.server
op := &spb.RaftOperation{}
if err := op.Unmarshal(l.Data); err != nil {
panic(err)
}
switch op.OpType {
case spb.RaftOperation_Publish:
// Message replication.
var (
c *channel
err error
lastSeq uint64
)
for _, msg := range op.PublishBatch.Messages {
// This is a batch for a given channel, so lookup channel once.
if c == nil {
c, err = s.lookupOrCreateChannel(msg.Subject)
// That should not be the case, but if it happens,
// just bail out.
if err == ErrChanDelInProgress {
return nil
}
lastSeq, err = c.store.Msgs.LastSequence()
}
if err == nil && lastSeq < msg.Sequence-1 {
err = s.raft.fsm.restoreMsgsFromSnapshot(c, lastSeq+1, msg.Sequence-1)
}
if err == nil {
_, err = c.store.Msgs.Store(msg)
}
if err != nil {
return fmt.Errorf("failed to store replicated message %d on channel %s: %v",
msg.Sequence, msg.Subject, err)
}
}
return nil
case spb.RaftOperation_Connect:
// Client connection create replication.
return s.processConnect(op.ClientConnect.Request, op.ClientConnect.Refresh)
case spb.RaftOperation_Disconnect:
// Client connection close replication.
return s.closeClient(op.ClientDisconnect.ClientID)
case spb.RaftOperation_Subscribe:
// Subscription replication.
sub, err := s.processSub(nil, op.Sub.Request, op.Sub.AckInbox, op.Sub.ID)
return &replicatedSub{sub: sub, err: err}
case spb.RaftOperation_RemoveSubscription:
fallthrough
case spb.RaftOperation_CloseSubscription:
// Close/Unsub subscription replication.
isSubClose := op.OpType == spb.RaftOperation_CloseSubscription
s.closeMu.Lock()
err := s.unsubscribe(op.Unsub, isSubClose)
s.closeMu.Unlock()
return err
case spb.RaftOperation_SendAndAck:
if !s.isLeader() {
s.processReplicatedSendAndAck(op.SubSentAck)
}
return nil
case spb.RaftOperation_DeleteChannel:
s.processDeleteChannel(op.Channel)
return nil
default:
panic(fmt.Sprintf("unknown op type %s", op.OpType))
}
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/service_windows.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/service_windows.go#L74-L119 | go | train | // Execute will be called by the package code at the start of
// the service, and the service will exit once Execute completes.
// Inside Execute you must read service change requests from r and
// act accordingly. You must keep service control manager up to date
// about state of your service by writing into s as required.
// args contains service name followed by argument strings passed
// to the service.
// You can provide service exit code in exitCode return parameter,
// with 0 being "no error". You can also indicate if exit code,
// if any, is service specific or not by using svcSpecificEC
// parameter. | func (w *winServiceWrapper) Execute(args []string, changes <-chan svc.ChangeRequest, status chan<- svc.Status) (bool, uint32) | // Execute will be called by the package code at the start of
// the service, and the service will exit once Execute completes.
// Inside Execute you must read service change requests from r and
// act accordingly. You must keep service control manager up to date
// about state of your service by writing into s as required.
// args contains service name followed by argument strings passed
// to the service.
// You can provide service exit code in exitCode return parameter,
// with 0 being "no error". You can also indicate if exit code,
// if any, is service specific or not by using svcSpecificEC
// parameter.
func (w *winServiceWrapper) Execute(args []string, changes <-chan svc.ChangeRequest, status chan<- svc.Status) (bool, uint32) | {
status <- svc.Status{State: svc.StartPending}
if sysLog != nil {
sysLog.Info(1, "Starting NATS Streaming Server...")
}
// Override NoSigs since we are doing signal handling HERE
w.sOpts.HandleSignals = false
server, err := RunServerWithOpts(w.sOpts, w.nOpts)
if err != nil && sysLog != nil {
sysLog.Error(2, fmt.Sprintf("Starting server returned: %v", err))
}
if err != nil {
w.errCh <- err
// Failed to start.
return true, 1
}
status <- svc.Status{
State: svc.Running,
Accepts: svc.AcceptStop | svc.AcceptShutdown | svc.AcceptParamChange | acceptReopenLog,
}
w.srvCh <- server
loop:
for change := range changes {
switch change.Cmd {
case svc.Interrogate:
status <- change.CurrentStatus
case svc.Stop, svc.Shutdown:
status <- svc.Status{State: svc.StopPending}
server.Shutdown()
break loop
case reopenLogCmd:
// File log re-open for rotating file logs.
server.log.ReopenLogFile()
case svc.ParamChange:
// Ignore for now
default:
server.log.Debugf("Unexpected control request: %v", change.Cmd)
}
}
status <- svc.Status{State: svc.Stopped}
return false, 0
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | server/service_windows.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/server/service_windows.go#L123-L176 | go | train | // Run starts the NATS Streaming server. This wrapper function allows Windows to add a
// hook for running NATS Streaming as a service. | func Run(sOpts *Options, nOpts *natsd.Options) (*StanServer, error) | // Run starts the NATS Streaming server. This wrapper function allows Windows to add a
// hook for running NATS Streaming as a service.
func Run(sOpts *Options, nOpts *natsd.Options) (*StanServer, error) | {
if dockerized {
return RunServerWithOpts(sOpts, nOpts)
}
run := svc.Run
isInteractive, err := svc.IsAnInteractiveSession()
if err != nil {
return nil, err
}
if isInteractive {
run = debug.Run
} else {
sysLogInitLock.Lock()
// We create a syslog here because we want to capture possible startup
// failure message.
if sysLog == nil {
if sOpts.SyslogName != "" {
sysLogName = sOpts.SyslogName
}
err := eventlog.InstallAsEventCreate(sysLogName, eventlog.Info|eventlog.Error|eventlog.Warning)
if err != nil {
if !strings.Contains(err.Error(), "registry key already exists") {
panic(err)
}
}
sysLog, err = eventlog.Open(sysLogName)
if err != nil {
panic(fmt.Sprintf("could not open event log: %v", err))
}
}
sysLogInitLock.Unlock()
}
wrapper := &winServiceWrapper{
srvCh: make(chan *StanServer, 1),
errCh: make(chan error, 1),
sOpts: sOpts,
nOpts: nOpts,
}
go func() {
// If no error, we exit here, otherwise, we are getting the
// error down below.
if err := run(serviceName, wrapper); err == nil {
os.Exit(0)
}
}()
var srv *StanServer
// Wait for server instance to be created
select {
case err = <-wrapper.errCh:
case srv = <-wrapper.srvCh:
}
return srv, err
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | util/util.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/util/util.go#L56-L67 | go | train | // NewBackoffTimeCheck creates an instance of BackoffTimeCheck.
// The `minFrequency` indicates how frequently BackoffTimeCheck.Ok() can return true.
// When Ok() returns true, the allowed frequency is multiplied by `factor`. The
// resulting frequency is capped by `maxFrequency`. | func NewBackoffTimeCheck(minFrequency time.Duration, factor int, maxFrequency time.Duration) (*BackoffTimeCheck, error) | // NewBackoffTimeCheck creates an instance of BackoffTimeCheck.
// The `minFrequency` indicates how frequently BackoffTimeCheck.Ok() can return true.
// When Ok() returns true, the allowed frequency is multiplied by `factor`. The
// resulting frequency is capped by `maxFrequency`.
func NewBackoffTimeCheck(minFrequency time.Duration, factor int, maxFrequency time.Duration) (*BackoffTimeCheck, error) | {
if minFrequency <= 0 || factor < 1 || maxFrequency < minFrequency {
return nil, fmt.Errorf("minFrequency must be positive, factor at least 1 and maxFrequency at least equal to minFrequency, got %v - %v - %v",
minFrequency, factor, maxFrequency)
}
return &BackoffTimeCheck{
frequency: minFrequency,
minFrequency: minFrequency,
maxFrequency: maxFrequency,
factor: factor,
}, nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | util/util.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/util/util.go#L75-L99 | go | train | // Ok returns true for the first time it is invoked after creation of the object
// or call to Reset(), or after an amount of time (based on the last success
// and the allowed frequency) has elapsed.
// When at the maximum frequency, if this call is made after a delay at least
// equal to 3x the max frequency (or in other words, 2x after what was the target
// for the next print), then the object is auto-reset. | func (bp *BackoffTimeCheck) Ok() bool | // Ok returns true for the first time it is invoked after creation of the object
// or call to Reset(), or after an amount of time (based on the last success
// and the allowed frequency) has elapsed.
// When at the maximum frequency, if this call is made after a delay at least
// equal to 3x the max frequency (or in other words, 2x after what was the target
// for the next print), then the object is auto-reset.
func (bp *BackoffTimeCheck) Ok() bool | {
if bp.nextTime.IsZero() {
bp.nextTime = time.Now().Add(bp.minFrequency)
return true
}
now := time.Now()
if now.Before(bp.nextTime) {
return false
}
// If we are already at the max frequency and this call
// is made after 2x the max frequency, then auto-reset.
if bp.frequency == bp.maxFrequency &&
now.Sub(bp.nextTime) >= 2*bp.maxFrequency {
bp.Reset()
return true
}
if bp.frequency < bp.maxFrequency {
bp.frequency *= time.Duration(bp.factor)
if bp.frequency > bp.maxFrequency {
bp.frequency = bp.maxFrequency
}
}
bp.nextTime = now.Add(bp.frequency)
return true
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | util/util.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/util/util.go#L102-L105 | go | train | // Reset the state so that next call to BackoffPrint.Ok() will return true. | func (bp *BackoffTimeCheck) Reset() | // Reset the state so that next call to BackoffPrint.Ok() will return true.
func (bp *BackoffTimeCheck) Reset() | {
bp.nextTime = time.Time{}
bp.frequency = bp.minFrequency
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | util/util.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/util/util.go#L109-L116 | go | train | // EnsureBufBigEnough checks that given buffer is big enough to hold 'needed'
// bytes, otherwise returns a buffer of a size of at least 'needed' bytes. | func EnsureBufBigEnough(buf []byte, needed int) []byte | // EnsureBufBigEnough checks that given buffer is big enough to hold 'needed'
// bytes, otherwise returns a buffer of a size of at least 'needed' bytes.
func EnsureBufBigEnough(buf []byte, needed int) []byte | {
if buf == nil {
return make([]byte, needed)
} else if needed > len(buf) {
return make([]byte, int(float32(needed)*1.1))
}
return buf
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | util/util.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/util/util.go#L119-L127 | go | train | // WriteInt writes an int (4 bytes) to the given writer using ByteOrder. | func WriteInt(w io.Writer, v int) error | // WriteInt writes an int (4 bytes) to the given writer using ByteOrder.
func WriteInt(w io.Writer, v int) error | {
var b [4]byte
bs := b[:4]
ByteOrder.PutUint32(bs, uint32(v))
_, err := w.Write(bs)
return err
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | util/util.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/util/util.go#L130-L140 | go | train | // ReadInt reads an int (4 bytes) from the reader using ByteOrder. | func ReadInt(r io.Reader) (int, error) | // ReadInt reads an int (4 bytes) from the reader using ByteOrder.
func ReadInt(r io.Reader) (int, error) | {
var b [4]byte
bs := b[:4]
_, err := io.ReadFull(r, bs)
if err != nil {
return 0, err
}
return int(ByteOrder.Uint32(bs)), nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | util/util.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/util/util.go#L144-L149 | go | train | // CloseFile closes the given file and reports the possible error only
// if the given error `err` is not already set. | func CloseFile(err error, f io.Closer) error | // CloseFile closes the given file and reports the possible error only
// if the given error `err` is not already set.
func CloseFile(err error, f io.Closer) error | {
if lerr := f.Close(); lerr != nil && err == nil {
err = lerr
}
return err
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | util/util.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/util/util.go#L162-L191 | go | train | // IsChannelNameValid returns false if any of these conditions for
// the channel name apply:
// - is empty
// - contains the `/` character
// - token separator `.` is first or last
// - there are two consecutive token separators `.`
// if wildcardsAllowed is false:
// - contains wildcards `*` or `>`
// if wildcardsAllowed is true:
// - '*' or '>' are not a token on their own
// - `>` is not the last token | func IsChannelNameValid(channel string, wildcardsAllowed bool) bool | // IsChannelNameValid returns false if any of these conditions for
// the channel name apply:
// - is empty
// - contains the `/` character
// - token separator `.` is first or last
// - there are two consecutive token separators `.`
// if wildcardsAllowed is false:
// - contains wildcards `*` or `>`
// if wildcardsAllowed is true:
// - '*' or '>' are not a token on their own
// - `>` is not the last token
func IsChannelNameValid(channel string, wildcardsAllowed bool) bool | {
if channel == "" || channel[0] == btsep {
return false
}
for i := 0; i < len(channel); i++ {
c := channel[i]
if c == '/' {
return false
}
if (c == btsep) && (i == len(channel)-1 || channel[i+1] == btsep) {
return false
}
if !wildcardsAllowed {
if c == pwc || c == fwc {
return false
}
} else if c == pwc || c == fwc {
if i > 0 && channel[i-1] != btsep {
return false
}
if c == fwc && i != len(channel)-1 {
return false
}
if i < len(channel)-1 && channel[i+1] != btsep {
return false
}
}
}
return true
} |
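The validation rules listed above can be exercised directly. This sketch reuses the function body from the record; the separator and wildcard constants (`btsep`, `pwc`, `fwc`) are not shown in the excerpt, so the usual NATS values `'.'`, `'*'` and `'>'` are assumed:

```go
package main

import "fmt"

const (
	btsep = '.' // token separator (assumed, as in NATS subjects)
	pwc   = '*' // partial wildcard (assumed)
	fwc   = '>' // full wildcard (assumed)
)

// isChannelNameValid mirrors the body of util.IsChannelNameValid above.
func isChannelNameValid(channel string, wildcardsAllowed bool) bool {
	if channel == "" || channel[0] == btsep {
		return false
	}
	for i := 0; i < len(channel); i++ {
		c := channel[i]
		if c == '/' {
			return false
		}
		if (c == btsep) && (i == len(channel)-1 || channel[i+1] == btsep) {
			return false
		}
		if !wildcardsAllowed {
			if c == pwc || c == fwc {
				return false
			}
		} else if c == pwc || c == fwc {
			if i > 0 && channel[i-1] != btsep {
				return false
			}
			if c == fwc && i != len(channel)-1 {
				return false
			}
			if i < len(channel)-1 && channel[i+1] != btsep {
				return false
			}
		}
	}
	return true
}

func main() {
	fmt.Println(isChannelNameValid("foo.bar", false))  // true: plain literal
	fmt.Println(isChannelNameValid("foo..bar", false)) // false: consecutive separators
	fmt.Println(isChannelNameValid("foo.*", true))     // true: '*' is its own token
	fmt.Println(isChannelNameValid("foo.>.x", true))   // false: '>' must be the last token
}
```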
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | util/util.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/util/util.go#L196-L203 | go | train | // IsChannelNameLiteral returns true if the channel name is a literal (that is,
// it does not contain any wildcard).
// The channel name is assumed to be valid. | func IsChannelNameLiteral(channel string) bool | // IsChannelNameLiteral returns true if the channel name is a literal (that is,
// it does not contain any wildcard).
// The channel name is assumed to be valid.
func IsChannelNameLiteral(channel string) bool | {
for i := 0; i < len(channel); i++ {
if channel[i] == pwc || channel[i] == fwc {
return false
}
}
return true
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | util/util.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/util/util.go#L207-L217 | go | train | // FriendlyBytes returns a string with the given bytes int64
// represented as a size, such as 1KB, 10MB, etc... | func FriendlyBytes(bytes int64) string | // FriendlyBytes returns a string with the given bytes int64
// represented as a size, such as 1KB, 10MB, etc...
func FriendlyBytes(bytes int64) string | {
fbytes := float64(bytes)
base := 1024
pre := []string{"K", "M", "G", "T", "P", "E"}
if fbytes < float64(base) {
return fmt.Sprintf("%v B", fbytes)
}
exp := int(math.Log(fbytes) / math.Log(float64(base)))
index := exp - 1
return fmt.Sprintf("%.2f %sB", fbytes/math.Pow(float64(base), float64(exp)), pre[index])
} |
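The size-formatting logic above is easy to sanity-check with a few values. This sketch reuses the body from the record as a standalone function:

```go
package main

import (
	"fmt"
	"math"
)

// friendlyBytes mirrors util.FriendlyBytes above: render a byte count
// with a power-of-1024 unit prefix.
func friendlyBytes(bytes int64) string {
	fbytes := float64(bytes)
	base := 1024
	pre := []string{"K", "M", "G", "T", "P", "E"}
	if fbytes < float64(base) {
		return fmt.Sprintf("%v B", fbytes)
	}
	exp := int(math.Log(fbytes) / math.Log(float64(base)))
	return fmt.Sprintf("%.2f %sB", fbytes/math.Pow(float64(base), float64(exp)), pre[exp-1])
}

func main() {
	fmt.Println(friendlyBytes(512))      // 512 B
	fmt.Println(friendlyBytes(1536))     // 1.50 KB
	fmt.Println(friendlyBytes(10 << 20)) // 10.00 MB
}
```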
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/cryptostore.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/cryptostore.go#L93-L107 | go | train | // NewEDStore returns an instance of EDStore that adds Encrypt/Decrypt
// capabilities. | func NewEDStore(encryptionCipher string, encryptionKey []byte, idx uint64) (*EDStore, error) | // NewEDStore returns an instance of EDStore that adds Encrypt/Decrypt
// capabilities.
func NewEDStore(encryptionCipher string, encryptionKey []byte, idx uint64) (*EDStore, error) | {
code, keyHash, err := createMasterKeyHash(encryptionCipher, encryptionKey)
if err != nil {
return nil, err
}
s, err := newEDStore(code, keyHash, idx)
if err != nil {
return nil, err
}
// On success, erase the key
for i := 0; i < len(encryptionKey); i++ {
encryptionKey[i] = 'x'
}
return s, nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/cryptostore.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/cryptostore.go#L201-L225 | go | train | // Encrypt returns the encrypted data or an error | func (s *EDStore) Encrypt(pbuf *[]byte, data []byte) ([]byte, error) | // Encrypt returns the encrypted data or an error
func (s *EDStore) Encrypt(pbuf *[]byte, data []byte) ([]byte, error) | {
var buf []byte
// If given a buffer, use that one
if pbuf != nil {
buf = *pbuf
}
// Make sure size is ok, expand if necessary
buf = util.EnsureBufBigEnough(buf, 1+s.nonceSize+s.cryptoOverhead+len(data))
// If buffer was passed, update the reference
if pbuf != nil {
*pbuf = buf
}
buf[0] = s.code
copy(buf[1:], s.nonce)
copy(buf[1+s.nonceSize:], data)
dst := buf[1+s.nonceSize : 1+s.nonceSize+len(data)]
ed := s.gcm.Seal(dst[:0], s.nonce, dst, nil)
for i := s.nonceSize - 1; i >= 0; i-- {
s.nonce[i]++
if s.nonce[i] != 0 {
break
}
}
return buf[:1+s.nonceSize+len(ed)], nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/cryptostore.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/cryptostore.go#L228-L249 | go | train | // Decrypt returns the decrypted data or an error | func (s *EDStore) Decrypt(dst []byte, cipherText []byte) ([]byte, error) | // Decrypt returns the decrypted data or an error
func (s *EDStore) Decrypt(dst []byte, cipherText []byte) ([]byte, error) | {
var gcm cipher.AEAD
if len(cipherText) > 0 {
switch cipherText[0] {
case CryptoCodeAES:
gcm = s.aesgcm
case CryptoCodeChaCha:
gcm = s.chachagcm
default:
// Anything else, assume no algo or something we don't know how to decrypt.
return cipherText, nil
}
}
if len(cipherText) <= 1+s.nonceSize {
return nil, fmt.Errorf("trying to decrypt data that is too small (len=%v)", len(cipherText))
}
dd, err := gcm.Open(dst, cipherText[1:1+s.nonceSize], cipherText[1+s.nonceSize:], nil)
if err != nil {
return nil, err
}
return dd, nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/cryptostore.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/cryptostore.go#L253-L268 | go | train | // NewCryptoStore returns a CryptoStore instance with
// given underlying store. | func NewCryptoStore(s Store, encryptionCipher string, encryptionKey []byte) (*CryptoStore, error) | // NewCryptoStore returns a CryptoStore instance with
// given underlying store.
func NewCryptoStore(s Store, encryptionCipher string, encryptionKey []byte) (*CryptoStore, error) | {
code, mkh, err := createMasterKeyHash(encryptionCipher, encryptionKey)
if err != nil {
return nil, err
}
cs := &CryptoStore{
Store: s,
code: code,
mkh: mkh,
}
// On success, erase the key
for i := 0; i < len(encryptionKey); i++ {
encryptionKey[i] = 'x'
}
return cs, nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/cryptostore.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/cryptostore.go#L295-L310 | go | train | // Recover implements the Store interface | func (cs *CryptoStore) Recover() (*RecoveredState, error) | // Recover implements the Store interface
func (cs *CryptoStore) Recover() (*RecoveredState, error) | {
cs.Lock()
defer cs.Unlock()
rs, err := cs.Store.Recover()
if rs == nil || err != nil {
return nil, err
}
for cn, rc := range rs.Channels {
cms, err := cs.newCryptoMsgStore(cn, rc.Channel.Msgs)
if err != nil {
return nil, err
}
rc.Channel.Msgs = cms
}
return rs, nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/cryptostore.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/cryptostore.go#L313-L327 | go | train | // CreateChannel implements the Store interface | func (cs *CryptoStore) CreateChannel(channel string) (*Channel, error) | // CreateChannel implements the Store interface
func (cs *CryptoStore) CreateChannel(channel string) (*Channel, error) | {
cs.Lock()
defer cs.Unlock()
c, err := cs.Store.CreateChannel(channel)
if err != nil {
return nil, err
}
cms, err := cs.newCryptoMsgStore(channel, c.Msgs)
if err != nil {
return nil, err
}
c.Msgs = cms
return c, nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/cryptostore.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/cryptostore.go#L330-L340 | go | train | // Store implements the MsgStore interface | func (cms *CryptoMsgStore) Store(msg *pb.MsgProto) (uint64, error) | // Store implements the MsgStore interface
func (cms *CryptoMsgStore) Store(msg *pb.MsgProto) (uint64, error) | {
if len(msg.Data) == 0 {
return cms.MsgStore.Store(msg)
}
ed, err := cms.encrypt(msg.Data)
if err != nil {
return 0, err
}
msg.Data = ed
return cms.MsgStore.Store(msg)
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/cryptostore.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/cryptostore.go#L369-L375 | go | train | // Lookup implements the MsgStore interface | func (cms *CryptoMsgStore) Lookup(seq uint64) (*pb.MsgProto, error) | // Lookup implements the MsgStore interface
func (cms *CryptoMsgStore) Lookup(seq uint64) (*pb.MsgProto, error) | {
m, err := cms.MsgStore.Lookup(seq)
if m == nil || m.Data == nil || err != nil {
return m, err
}
return cms.decryptedMsg(m)
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/cryptostore.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/cryptostore.go#L378-L384 | go | train | // FirstMsg implements the MsgStore interface | func (cms *CryptoMsgStore) FirstMsg() (*pb.MsgProto, error) | // FirstMsg implements the MsgStore interface
func (cms *CryptoMsgStore) FirstMsg() (*pb.MsgProto, error) | {
m, err := cms.MsgStore.FirstMsg()
if m == nil || m.Data == nil || err != nil {
return m, err
}
return cms.decryptedMsg(m)
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/limits.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/limits.go#L31-L35 | go | train | // Clone returns a copy of the store limits | func (sl *StoreLimits) Clone() *StoreLimits | // Clone returns a copy of the store limits
func (sl *StoreLimits) Clone() *StoreLimits | {
cloned := *sl
cloned.PerChannel = sl.ClonePerChannelMap()
return &cloned
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/limits.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/limits.go#L38-L48 | go | train | // ClonePerChannelMap returns a deep copy of the StoreLimits's PerChannel map | func (sl *StoreLimits) ClonePerChannelMap() map[string]*ChannelLimits | // ClonePerChannelMap returns a deep copy of the StoreLimits's PerChannel map
func (sl *StoreLimits) ClonePerChannelMap() map[string]*ChannelLimits | {
if sl.PerChannel == nil {
return nil
}
clone := make(map[string]*ChannelLimits, len(sl.PerChannel))
for k, v := range sl.PerChannel {
copyVal := *v
clone[k] = &copyVal
}
return clone
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/limits.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/limits.go#L54-L59 | go | train | // AddPerChannel stores limits for the given channel `name` in the StoreLimits.
// Inheritance (that is, specifying 0 for a limit means that the global limit
// should be used) is not applied in this call. This is done in StoreLimits.Build
// along with some validation. | func (sl *StoreLimits) AddPerChannel(name string, cl *ChannelLimits) | // AddPerChannel stores limits for the given channel `name` in the StoreLimits.
// Inheritance (that is, specifying 0 for a limit means that the global limit
// should be used) is not applied in this call. This is done in StoreLimits.Build
// along with some validation.
func (sl *StoreLimits) AddPerChannel(name string, cl *ChannelLimits) | {
if sl.PerChannel == nil {
sl.PerChannel = make(map[string]*ChannelLimits)
}
sl.PerChannel[name] = cl
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/limits.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/limits.go#L73-L107 | go | train | // Build sets the global limits into per-channel limits that are set
// to zero. This call also validates the limits. An error is returned if:
// * any global limit is set to a negative value.
// * the number of per-channel is higher than StoreLimits.MaxChannels.
// * a per-channel name is invalid | func (sl *StoreLimits) Build() error | // Build sets the global limits into per-channel limits that are set
// to zero. This call also validates the limits. An error is returned if:
// * any global limit is set to a negative value.
// * the number of per-channel is higher than StoreLimits.MaxChannels.
// * a per-channel name is invalid
func (sl *StoreLimits) Build() error | {
// Check that there is no negative value
if err := sl.checkGlobalLimits(); err != nil {
return err
}
// If there is no per-channel, we are done.
if len(sl.PerChannel) == 0 {
return nil
}
literals := 0
sublist := util.NewSublist()
for cn, cl := range sl.PerChannel {
if !util.IsChannelNameValid(cn, true) {
return fmt.Errorf("invalid channel name %q", cn)
}
isLiteral := util.IsChannelNameLiteral(cn)
if isLiteral {
literals++
if sl.MaxChannels > 0 && literals > sl.MaxChannels {
return fmt.Errorf("too many channels defined (%v). The max channels limit is set to %v",
literals, sl.MaxChannels)
}
}
cli := &channelLimitInfo{
name: cn,
limits: cl,
isLiteral: isLiteral,
}
sublist.Insert(cn, cli)
}
// If we are here, it means that there was no error,
// so we now apply inheritance.
sl.applyInheritance(sublist)
return nil
} |
nats-io/nats-streaming-server | 57c6c84265c0012a1efef365703c221329804d4c | stores/limits.go | https://github.com/nats-io/nats-streaming-server/blob/57c6c84265c0012a1efef365703c221329804d4c/stores/limits.go#L188-L242 | go | train | // Print returns an array of strings suitable for printing the store limits. | func (sl *StoreLimits) Print() []string | // Print returns an array of strings suitable for printing the store limits.
func (sl *StoreLimits) Print() []string | {
sublist := util.NewSublist()
for cn, cl := range sl.PerChannel {
sublist.Insert(cn, &channelLimitInfo{
name: cn,
limits: cl,
isLiteral: util.IsChannelNameLiteral(cn),
})
}
maxLevels := sublist.NumLevels()
txt := []string{}
title := "---------- Store Limits ----------"
txt = append(txt, title)
txt = append(txt, fmt.Sprintf("Channels: %s",
getLimitStr(true, int64(sl.MaxChannels),
int64(DefaultStoreLimits.MaxChannels),
limitCount)))
maxLen := len(title)
txt = append(txt, "--------- Channels Limits --------")
txt = append(txt, getGlobalLimitsPrintLines(&sl.ChannelLimits)...)
if len(sl.PerChannel) > 0 {
channels := sublist.Subjects()
channelLines := []string{}
for _, cn := range channels {
r := sublist.Match(cn)
var prev *channelLimitInfo
for i := 0; i < len(r); i++ {
channel := r[i].(*channelLimitInfo)
if channel.name == cn {
var parentLimits *ChannelLimits
if prev == nil {
parentLimits = &sl.ChannelLimits
} else {
parentLimits = prev.limits
}
channelLines = append(channelLines,
getChannelLimitsPrintLines(i, maxLevels, &maxLen, channel.name, channel.limits, parentLimits)...)
break
}
prev = channel
}
}
title := " List of Channels "
numberDashesLeft := (maxLen - len(title)) / 2
numberDashesRight := maxLen - len(title) - numberDashesLeft
title = fmt.Sprintf("%s%s%s",
repeatChar("-", numberDashesLeft),
title,
repeatChar("-", numberDashesRight))
txt = append(txt, title)
txt = append(txt, channelLines...)
}
txt = append(txt, repeatChar("-", maxLen))
return txt
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L72-L82 | go | train | // Create a new job with the time interval. | func NewJob(interval uint64) *Job | // Create a new job with the time interval.
func NewJob(interval uint64) *Job | {
return &Job{
interval,
"", "", "",
time.Unix(0, 0),
time.Unix(0, 0), 0,
time.Sunday,
make(map[string]interface{}),
make(map[string]([]interface{})),
}
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L90-L105 | go | train | //Run the job and immediately reschedule it | func (j *Job) run() (result []reflect.Value, err error) | //Run the job and immediately reschedule it
func (j *Job) run() (result []reflect.Value, err error) | {
f := reflect.ValueOf(j.funcs[j.jobFunc])
params := j.fparams[j.jobFunc]
if len(params) != f.Type().NumIn() {
err = errors.New("the number of params does not match")
return
}
in := make([]reflect.Value, len(params))
for k, param := range params {
in[k] = reflect.ValueOf(param)
}
result = f.Call(in)
j.lastRun = time.Now()
j.scheduleNextRun()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L108-L110 | go | train | // for given function fn, get the name of function. | func getFunctionName(fn interface{}) string | // for given function fn, get the name of function.
func getFunctionName(fn interface{}) string | {
return runtime.FuncForPC(reflect.ValueOf((fn)).Pointer()).Name()
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L114-L126 | go | train | // Specifies the jobFunc that should be called every time the job runs
// | func (j *Job) Do(jobFun interface{}, params ...interface{}) | // Specifies the jobFunc that should be called every time the job runs
//
func (j *Job) Do(jobFun interface{}, params ...interface{}) | {
typ := reflect.TypeOf(jobFun)
if typ.Kind() != reflect.Func {
panic("only functions can be scheduled into the job queue.")
}
fname := getFunctionName(jobFun)
j.funcs[fname] = jobFun
j.fparams[fname] = params
j.jobFunc = fname
//schedule the next run
j.scheduleNextRun()
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L154-L181 | go | train | // s.Every(1).Day().At("10:30").Do(task)
// s.Every(1).Monday().At("10:30").Do(task) | func (j *Job) At(t string) *Job | // s.Every(1).Day().At("10:30").Do(task)
// s.Every(1).Monday().At("10:30").Do(task)
func (j *Job) At(t string) *Job | {
hour, min, err := formatTime(t)
if err != nil {
panic(err)
}
// time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC)
mock := time.Date(time.Now().Year(), time.Now().Month(), time.Now().Day(), int(hour), int(min), 0, 0, loc)
if j.unit == "days" {
if time.Now().After(mock) {
j.lastRun = mock
} else {
j.lastRun = time.Date(time.Now().Year(), time.Now().Month(), time.Now().Day()-1, hour, min, 0, 0, loc)
}
} else if j.unit == "weeks" {
if j.startDay != time.Now().Weekday() || (time.Now().After(mock) && j.startDay == time.Now().Weekday()) {
i := mock.Weekday() - j.startDay
if i < 0 {
i = 7 + i
}
j.lastRun = time.Date(time.Now().Year(), time.Now().Month(), time.Now().Day()-int(i), hour, min, 0, 0, loc)
} else {
j.lastRun = time.Date(time.Now().Year(), time.Now().Month(), time.Now().Day()-7, hour, min, 0, 0, loc)
}
}
return j
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L184-L220 | go | train | //Compute the instant when this job should run next | func (j *Job) scheduleNextRun() | //Compute the instant when this job should run next
func (j *Job) scheduleNextRun() | {
if j.lastRun == time.Unix(0, 0) {
if j.unit == "weeks" {
i := time.Now().Weekday() - j.startDay
if i < 0 {
i = 7 + i
}
j.lastRun = time.Date(time.Now().Year(), time.Now().Month(), time.Now().Day()-int(i), 0, 0, 0, 0, loc)
} else {
j.lastRun = time.Now()
}
}
if j.period != 0 {
// translate all the units to the Seconds
j.nextRun = j.lastRun.Add(j.period * time.Second)
} else {
switch j.unit {
case "minutes":
j.period = time.Duration(j.interval * 60)
break
case "hours":
j.period = time.Duration(j.interval * 60 * 60)
break
case "days":
j.period = time.Duration(j.interval * 60 * 60 * 24)
break
case "weeks":
j.period = time.Duration(j.interval * 60 * 60 * 24 * 7)
break
case "seconds":
j.period = time.Duration(j.interval)
}
j.nextRun = j.lastRun.Add(j.period * time.Second)
}
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L230-L236 | go | train | // the following functions set the job's unit to seconds, minutes, hours...
// Set the unit with second | func (j *Job) Second() (job *Job) | // the following functions set the job's unit to seconds, minutes, hours...
// Set the unit with second
func (j *Job) Second() (job *Job) | {
if j.interval != 1 {
panic("")
}
job = j.Seconds()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L245-L251 | go | train | // Set the unit with minute, which interval is 1 | func (j *Job) Minute() (job *Job) | // Set the unit with minute, which interval is 1
func (j *Job) Minute() (job *Job) | {
if j.interval != 1 {
panic("")
}
job = j.Minutes()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L260-L266 | go | train | //set the unit with hour, which interval is 1 | func (j *Job) Hour() (job *Job) | //set the unit with hour, which interval is 1
func (j *Job) Hour() (job *Job) | {
if j.interval != 1 {
panic("")
}
job = j.Hours()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L275-L281 | go | train | // Set the job's unit with day, which interval is 1 | func (j *Job) Day() (job *Job) | // Set the job's unit with day, which interval is 1
func (j *Job) Day() (job *Job) | {
if j.interval != 1 {
panic("")
}
job = j.Days()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L291-L298 | go | train | // s.Every(1).Monday().Do(task)
// Set the start day with Monday | func (j *Job) Monday() (job *Job) | // s.Every(1).Monday().Do(task)
// Set the start day with Monday
func (j *Job) Monday() (job *Job) | {
if j.interval != 1 {
panic("")
}
j.startDay = 1
job = j.Weeks()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L301-L308 | go | train | // Set the start day with Tuesday | func (j *Job) Tuesday() (job *Job) | // Set the start day with Tuesday
func (j *Job) Tuesday() (job *Job) | {
if j.interval != 1 {
panic("")
}
j.startDay = 2
job = j.Weeks()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L311-L318 | go | train | // Set the start day with Wednesday | func (j *Job) Wednesday() (job *Job) | // Set the start day with Wednesday
func (j *Job) Wednesday() (job *Job) | {
if j.interval != 1 {
panic("")
}
j.startDay = 3
job = j.Weeks()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L321-L328 | go | train | // Set the start day with Thursday | func (j *Job) Thursday() (job *Job) | // Set the start day with Thursday
func (j *Job) Thursday() (job *Job) | {
if j.interval != 1 {
panic("")
}
j.startDay = 4
job = j.Weeks()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L331-L338 | go | train | // Set the start day with Friday | func (j *Job) Friday() (job *Job) | // Set the start day with Friday
func (j *Job) Friday() (job *Job) | {
if j.interval != 1 {
panic("")
}
j.startDay = 5
job = j.Weeks()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L341-L348 | go | train | // Set the start day with Saturday | func (j *Job) Saturday() (job *Job) | // Set the start day with Saturday
func (j *Job) Saturday() (job *Job) | {
if j.interval != 1 {
panic("")
}
j.startDay = 6
job = j.Weeks()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L351-L358 | go | train | // Set the start day with Sunday | func (j *Job) Sunday() (job *Job) | // Set the start day with Sunday
func (j *Job) Sunday() (job *Job) | {
if j.interval != 1 {
panic("")
}
j.startDay = 0
job = j.Weeks()
return
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L395-L410 | go | train | // Get the current runnable jobs, whose shouldRun is true | func (s *Scheduler) getRunnableJobs() (running_jobs [MAXJOBNUM]*Job, n int) | // Get the current runnable jobs, whose shouldRun is true
func (s *Scheduler) getRunnableJobs() (running_jobs [MAXJOBNUM]*Job, n int) | {
runnableJobs := [MAXJOBNUM]*Job{}
n = 0
sort.Sort(s)
for i := 0; i < s.size; i++ {
if s.jobs[i].shouldRun() {
runnableJobs[n] = s.jobs[i]
//fmt.Println(runnableJobs)
n++
} else {
break
}
}
return runnableJobs, n
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L413-L419 | go | train | // Datetime when the next job should run. | func (s *Scheduler) NextRun() (*Job, time.Time) | // Datetime when the next job should run.
func (s *Scheduler) NextRun() (*Job, time.Time) | {
if s.size <= 0 {
return nil, time.Now()
}
sort.Sort(s)
return s.jobs[0], s.jobs[0].nextRun
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L422-L427 | go | train | // Schedule a new periodic job | func (s *Scheduler) Every(interval uint64) *Job | // Schedule a new periodic job
func (s *Scheduler) Every(interval uint64) *Job | {
job := NewJob(interval)
s.jobs[s.size] = job
s.size++
return job
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L430-L438 | go | train | // Run all the jobs that are scheduled to run. | func (s *Scheduler) RunPending() | // Run all the jobs that are scheduled to run.
func (s *Scheduler) RunPending() | {
runnableJobs, n := s.getRunnableJobs()
if n != 0 {
for i := 0; i < n; i++ {
runnableJobs[i].run()
}
}
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L441-L445 | go | train | // Run all jobs regardless if they are scheduled to run or not | func (s *Scheduler) RunAll() | // Run all jobs regardless if they are scheduled to run or not
func (s *Scheduler) RunAll() | {
for i := 0; i < s.size; i++ {
s.jobs[i].run()
}
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L448-L453 | go | train | // Run all jobs with delay seconds | func (s *Scheduler) RunAllwithDelay(d int) | // Run all jobs with delay seconds
func (s *Scheduler) RunAllwithDelay(d int) | {
for i := 0; i < s.size; i++ {
s.jobs[i].run()
time.Sleep(time.Duration(d))
}
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L456-L476 | go | train | // Remove specific job j | func (s *Scheduler) Remove(j interface{}) | // Remove specific job j
func (s *Scheduler) Remove(j interface{}) | {
i := 0
found := false
for ; i < s.size; i++ {
if s.jobs[i].jobFunc == getFunctionName(j) {
found = true
break
}
}
if !found {
return
}
for j := (i + 1); j < s.size; j++ {
s.jobs[i] = s.jobs[j]
i++
}
s.size = s.size - 1
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L479-L484 | go | train | // Delete all scheduled jobs | func (s *Scheduler) Clear() | // Delete all scheduled jobs
func (s *Scheduler) Clear() | {
for i := 0; i < s.size; i++ {
s.jobs[i] = nil
}
s.size = 0
} |
jasonlvhit/gocron | 5bcdd9fcfa9bf10574722841a11ccb2a57866dc8 | gocron.go | https://github.com/jasonlvhit/gocron/blob/5bcdd9fcfa9bf10574722841a11ccb2a57866dc8/gocron.go#L488-L504 | go | train | // Start all the pending jobs
// Add seconds ticker | func (s *Scheduler) Start() chan bool | // Start all the pending jobs
// Add seconds ticker
func (s *Scheduler) Start() chan bool | {
stopped := make(chan bool, 1)
ticker := time.NewTicker(1 * time.Second)
go func() {
for {
select {
case <-ticker.C:
s.RunPending()
case <-stopped:
return
}
}
}()
return stopped
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | acm/acmstate/state.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/acm/acmstate/state.go#L108-L115 | go | train | // Get global permissions from the account at GlobalPermissionsAddress | func GlobalAccountPermissions(getter AccountGetter) permission.AccountPermissions | // Get global permissions from the account at GlobalPermissionsAddress
func GlobalAccountPermissions(getter AccountGetter) permission.AccountPermissions | {
if getter == nil {
return permission.AccountPermissions{
Roles: []string{},
}
}
return GlobalPermissionsAccount(getter).Permissions
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | execution/state/accounts.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/execution/state/accounts.go#L13-L23 | go | train | // Returns nil if account does not exist with given address. | func (s *ReadState) GetAccount(address crypto.Address) (*acm.Account, error) | // Returns nil if account does not exist with given address.
func (s *ReadState) GetAccount(address crypto.Address) (*acm.Account, error) | {
tree, err := s.Forest.Reader(keys.Account.Prefix())
if err != nil {
return nil, err
}
accBytes := tree.Get(keys.Account.KeyNoPrefix(address))
if accBytes == nil {
return nil, nil
}
return acm.Decode(accBytes)
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | execution/state/accounts.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/execution/state/accounts.go#L105-L112 | go | train | // Storage | func (s *ReadState) GetStorage(address crypto.Address, key binary.Word256) (binary.Word256, error) | // Storage
func (s *ReadState) GetStorage(address crypto.Address, key binary.Word256) (binary.Word256, error) | {
keyFormat := keys.Storage.Fix(address)
tree, err := s.Forest.Reader(keyFormat.Prefix())
if err != nil {
return binary.Zero256, err
}
return binary.LeftPadWord256(tree.Get(keyFormat.KeyNoPrefix(key))), nil
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | execution/state/validators.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/execution/state/validators.go#L15-L77 | go | train | // Initialises the validator Ring from the validator storage in forest | func LoadValidatorRing(version int64, ringSize int,
getImmutable func(version int64) (*storage.ImmutableForest, error)) (*validator.Ring, error) | // Initialises the validator Ring from the validator storage in forest
func LoadValidatorRing(version int64, ringSize int,
getImmutable func(version int64) (*storage.ImmutableForest, error)) (*validator.Ring, error) | {
// In this method we have to page through previous version of the tree in order to reconstruct the in-memory
// ring structure. The corner cases are a little subtle but printing the buckets helps
// The basic idea is to load each version of the tree ringSize back, work out the difference that must have occurred
// between each bucket in the ring, and apply each diff to the ring. Once the ring is full it is symmetrical (up to
// a reindexing). If we are loading a chain whose height is less than the ring size we need to get the initial state
// correct
startVersion := version - int64(ringSize)
if startVersion < 1 {
// The ring will not be fully populated
startVersion = 1
}
var err error
// Read state to pull immutable forests from
rs := &ReadState{}
// Start with an empty ring - we want the initial bucket to have no cumulative power
ring := validator.NewRing(nil, ringSize)
// Load the IAVL state
rs.Forest, err = getImmutable(startVersion)
if err != nil {
return nil, err
}
// Write the validator state at startVersion from IAVL tree into the ring's current bucket delta
err = validator.Write(ring, rs)
if err != nil {
return nil, err
}
// Rotate, now we have [ {bucket 0: cum: {}, delta: {start version changes} }, {bucket 1: cum: {start version changes}, delta {}, ... ]
// which is what we need (in particular we need this initial state if we are loading into an incompletely populated ring)
_, _, err = ring.Rotate()
if err != nil {
return nil, err
}
// Rebuild validator Ring
for v := startVersion + 1; v <= version; v++ {
// Update IAVL read state to version of interest
rs.Forest, err = getImmutable(v)
if err != nil {
return nil, err
}
// Calculate the difference between the rings current cum and what is in state at this version
diff, err := validator.Diff(ring.CurrentSet(), rs)
if err != nil {
return nil, err
}
// Write that diff into the ring (just like it was when it was originally written to setPower)
err = validator.Write(ring, diff)
if err != nil {
return nil, err
}
// Rotate just like it was on the original commit
_, _, err = ring.Rotate()
if err != nil {
return nil, err
}
}
// Our ring should be the same up to symmetry in its index so we reindex to regain equality with the version we are loading
// This is the head index we would have had if we had started from version 1 like the chain did
ring.ReIndex(int(version % int64(ringSize)))
return ring, err
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | storage/key_format.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/storage/key_format.go#L51-L61 | go | train | // Create a []byte key format based on a single byte prefix and fixed width key segments each of whose length is
// specified by the corresponding element of layout. A final segment length of 0 can be used to indicate a variadic
// final element that may be of arbitrary length.
//
// For example, to store keys that could index some objects by a version number and their SHA256 hash using the form:
// 'c<version uint64><hash [32]byte>' then you would define the KeyFormat with:
//
// var keyFormat = NewKeyFormat('c', 8, 32)
//
// Then you can create a key with:
//
// func ObjectKey(version uint64, objectBytes []byte) []byte {
// hasher := sha256.New()
// hasher.Sum(nil)
// return keyFormat.Key(version, hasher.Sum(nil))
// } | func NewKeyFormat(prefix string, layout ...int) (*KeyFormat, error) | // Create a []byte key format based on a single byte prefix and fixed width key segments each of whose length is
// specified by the corresponding element of layout. A final segment length of 0 can be used to indicate a variadic
// final element that may be of arbitrary length.
//
// For example, to store keys that could index some objects by a version number and their SHA256 hash using the form:
// 'c<version uint64><hash [32]byte>' then you would define the KeyFormat with:
//
// var keyFormat = NewKeyFormat('c', 8, 32)
//
// Then you can create a key with:
//
// func ObjectKey(version uint64, objectBytes []byte) []byte {
// hasher := sha256.New()
// hasher.Sum(nil)
// return keyFormat.Key(version, hasher.Sum(nil))
// }
func NewKeyFormat(prefix string, layout ...int) (*KeyFormat, error) | {
kf := &KeyFormat{
prefix: Prefix(prefix),
layout: layout,
}
err := kf.init()
if err != nil {
return nil, err
}
return kf, nil
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | storage/key_format.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/storage/key_format.go#L64-L93 | go | train | // Format the byte segments into the key format - will return an error if the segment lengths do not match the layout. | func (kf *KeyFormat) KeyBytes(segments ...[]byte) ([]byte, error) | // Format the byte segments into the key format - will return an error if the segment lengths do not match the layout.
func (kf *KeyFormat) KeyBytes(segments ...[]byte) ([]byte, error) | {
key := make([]byte, kf.length)
n := copy(key, kf.prefix)
var offset int
for i, l := range kf.layout {
si := i + offset
if len(segments) <= si {
break
}
s := segments[si]
switch l {
case VariadicSegmentLength:
// Must be a final variadic element
key = append(key, s...)
n += len(s)
case DelimiterSegmentLength:
// ignore
offset--
default:
if len(s) != l {
return nil, fmt.Errorf("the segment '0x%X' provided to KeyFormat.KeyBytes() does not have required "+
"%d bytes required by layout for segment %d", s, l, i)
}
n += l
// Big endian so pad on left if not given the full width for this segment
copy(key[n-len(s):n], s)
}
}
return key[:n], nil
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | storage/key_format.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/storage/key_format.go#L98-L108 | go | train | // Format the args passed into the key format - will return an error if the arguments passed do not match the length
// of the segment to which they correspond. When called with no arguments returns the raw prefix (useful as a start
// element of the entire keys space when sorted lexicographically). | func (kf *KeyFormat) Key(args ...interface{}) ([]byte, error) | // Format the args passed into the key format - will return an error if the arguments passed do not match the length
// of the segment to which they correspond. When called with no arguments returns the raw prefix (useful as a start
// element of the entire keys space when sorted lexicographically).
func (kf *KeyFormat) Key(args ...interface{}) ([]byte, error) | {
if len(args) > len(kf.layout) {
return nil, fmt.Errorf("KeyFormat.Key() is provided with %d args but format only has %d segments",
len(args), len(kf.layout))
}
segments := make([][]byte, len(args))
for i, a := range args {
segments[i] = format(a)
}
return kf.KeyBytes(segments...)
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | storage/key_format.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/storage/key_format.go#L111-L127 | go | train | // Reads out the bytes associated with each segment of the key format from key. | func (kf *KeyFormat) ScanBytes(key []byte) [][]byte | // Reads out the bytes associated with each segment of the key format from key.
func (kf *KeyFormat) ScanBytes(key []byte) [][]byte | {
segments := make([][]byte, len(kf.layout))
n := kf.prefix.Length()
for i, l := range kf.layout {
if l == 0 {
// Must be final variadic segment
segments[i] = key[n:]
return segments
}
n += l
if n > len(key) {
return segments[:i]
}
segments[i] = key[n-l : n]
}
return segments
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | storage/key_format.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/storage/key_format.go#L131-L141 | go | train | // Extracts the segments into the values pointed to by each of args. Each arg must be a pointer to int64, uint64, or
// []byte, and the width of the args must match layout. | func (kf *KeyFormat) Scan(key []byte, args ...interface{}) error | // Extracts the segments into the values pointed to by each of args. Each arg must be a pointer to int64, uint64, or
// []byte, and the width of the args must match layout.
func (kf *KeyFormat) Scan(key []byte, args ...interface{}) error | {
segments := kf.ScanBytes(key)
if len(args) > len(segments) {
return fmt.Errorf("KeyFormat.Scan() is provided with %d args but format only has %d segments in key %X",
len(args), len(segments), key)
}
for i, a := range args {
scan(a, segments[i])
}
return nil
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | storage/key_format.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/storage/key_format.go#L149-L154 | go | train | // Like Scan but expects a key with the KeyFormat's prefix already trimmed | func (kf *KeyFormat) ScanNoPrefix(key []byte, args ...interface{}) error | // Like Scan but expects a key with the KeyFormat's prefix already trimmed
func (kf *KeyFormat) ScanNoPrefix(key []byte, args ...interface{}) error | {
// Just pad by the length of the prefix
paddedKey := make([]byte, len(kf.prefix)+len(key))
copy(paddedKey[len(kf.prefix):], key)
return kf.Scan(paddedKey, args...)
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | storage/key_format.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/storage/key_format.go#L157-L163 | go | train | // Like Key but removes the prefix string | func (kf *KeyFormat) KeyNoPrefix(args ...interface{}) (Prefix, error) | // Like Key but removes the prefix string
func (kf *KeyFormat) KeyNoPrefix(args ...interface{}) (Prefix, error) | {
key, err := kf.Key(args...)
if err != nil {
return nil, err
}
return key[len(kf.prefix):], nil
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | storage/key_format.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/storage/key_format.go#L167-L173 | go | train | // Fixes the first args many segments as the prefix of a new KeyFormat by using the args to generate a key that becomes
// that prefix. Any remaining unassigned segments become the layout of the new KeyFormat. | func (kf *KeyFormat) Fix(args ...interface{}) (*KeyFormat, error) | // Fixes the first args many segments as the prefix of a new KeyFormat by using the args to generate a key that becomes
// that prefix. Any remaining unassigned segments become the layout of the new KeyFormat.
func (kf *KeyFormat) Fix(args ...interface{}) (*KeyFormat, error) | {
key, err := kf.Key(args...)
if err != nil {
return nil, err
}
return NewKeyFormat(string(key), kf.layout[len(args):]...)
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | storage/key_format.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/storage/key_format.go#L178-L183 | go | train | // Returns an iterator over the underlying iterable using this KeyFormat's prefix. This is to support proper iteration over the
// prefix in the presence of nil start or end which requests iteration to the inclusive edges of the domain. An optional
// argument for reverse can be passed to get reverse iteration. | func (kf *KeyFormat) Iterator(iterable KVIterable, start, end []byte, reverse ...bool) KVIterator | // Returns an iterator over the underlying iterable using this KeyFormat's prefix. This is to support proper iteration over the
// prefix in the presence of nil start or end which requests iteration to the inclusive edges of the domain. An optional
// argument for reverse can be passed to get reverse iteration.
func (kf *KeyFormat) Iterator(iterable KVIterable, start, end []byte, reverse ...bool) KVIterator | {
if len(reverse) > 0 && reverse[0] {
return kf.prefix.Iterator(iterable.ReverseIterator, start, end)
}
return kf.prefix.Iterator(iterable.Iterator, start, end)
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | vent/service/server.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/vent/service/server.go#L21-L34 | go | train | // NewServer returns a new HTTP server | func NewServer(cfg *config.VentConfig, log *logger.Logger, consumer *Consumer) *Server | // NewServer returns a new HTTP server
func NewServer(cfg *config.VentConfig, log *logger.Logger, consumer *Consumer) *Server | {
// setup handlers
mux := http.NewServeMux()
mux.HandleFunc("/health", healthHandler(consumer))
return &Server{
Config: cfg,
Log: log,
Consumer: consumer,
mux: mux,
stopCh: make(chan bool, 1),
}
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | vent/service/server.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/vent/service/server.go#L37-L54 | go | train | // Run starts the HTTP server | func (s *Server) Run() | // Run starts the HTTP server
func (s *Server) Run() | {
s.Log.Info("msg", "Starting HTTP Server")
// start http server
httpServer := &http.Server{Addr: s.Config.HTTPAddr, Handler: s}
go func() {
s.Log.Info("msg", "HTTP Server listening", "address", s.Config.HTTPAddr)
httpServer.ListenAndServe()
}()
// wait for stop signal
<-s.stopCh
s.Log.Info("msg", "Shutting down HTTP Server...")
httpServer.Shutdown(context.Background())
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | vent/service/server.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/vent/service/server.go#L57-L59 | go | train | // ServeHTTP dispatches the HTTP requests using the Server Mux | func (s *Server) ServeHTTP(resp http.ResponseWriter, req *http.Request) | // ServeHTTP dispatches the HTTP requests using the Server Mux
func (s *Server) ServeHTTP(resp http.ResponseWriter, req *http.Request) | {
s.mux.ServeHTTP(resp, req)
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | vent/service/rowbuilder.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/vent/service/rowbuilder.go#L19-L67 | go | train | // buildEventData builds event data from transactions | func buildEventData(projection *sqlsol.Projection, eventClass *types.EventClass, event *exec.Event, abiSpec *abi.AbiSpec,
l *logger.Logger) (types.EventDataRow, error) | // buildEventData builds event data from transactions
func buildEventData(projection *sqlsol.Projection, eventClass *types.EventClass, event *exec.Event, abiSpec *abi.AbiSpec,
l *logger.Logger) (types.EventDataRow, error) | {
// a fresh new row to store column/value data
row := make(map[string]interface{})
// get header & log data for the given event
eventHeader := event.GetHeader()
eventLog := event.GetLog()
// decode event data using the provided abi specification
decodedData, err := decodeEvent(eventHeader, eventLog, abiSpec)
if err != nil {
return types.EventDataRow{}, errors.Wrapf(err, "Error decoding event (filter: %s)", eventClass.Filter)
}
l.Info("msg", fmt.Sprintf("Unpacked data: %v", decodedData), "eventName", decodedData[types.EventNameLabel])
rowAction := types.ActionUpsert
// for each data element, maps to SQL columnName and gets its value
// if there is no matching column for the item, it doesn't need to be stored in db
for fieldName, value := range decodedData {
// Can't think of a case where we will get a key that is empty, but if we ever did we should not treat
// it as a delete marker when the delete marker field is unset
if eventClass.DeleteMarkerField != "" && eventClass.DeleteMarkerField == fieldName {
rowAction = types.ActionDelete
}
fieldMapping := eventClass.GetFieldMapping(fieldName)
if fieldMapping == nil {
continue
}
column, err := projection.GetColumn(eventClass.TableName, fieldMapping.ColumnName)
if err == nil {
if fieldMapping.BytesToString {
if bs, ok := value.(*[]byte); ok {
str := sanitiseBytesForString(*bs, l)
row[column.Name] = interface{}(str)
continue
}
}
row[column.Name] = value
} else {
l.Debug("msg", "could not get column", "err", err)
}
}
return types.EventDataRow{Action: rowAction, RowData: row, EventClass: eventClass}, nil
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | vent/service/rowbuilder.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/vent/service/rowbuilder.go#L70-L88 | go | train | // buildBlkData builds block data from block stream | func buildBlkData(tbls types.EventTables, block *exec.BlockExecution) (types.EventDataRow, error) | // buildBlkData builds block data from block stream
func buildBlkData(tbls types.EventTables, block *exec.BlockExecution) (types.EventDataRow, error) | {
// a fresh new row to store column/value data
row := make(map[string]interface{})
// block raw data
if _, ok := tbls[types.SQLBlockTableName]; ok {
blockHeader, err := json.Marshal(block.Header)
if err != nil {
return types.EventDataRow{}, fmt.Errorf("Couldn't marshal BlockHeader in block %v", block)
}
row[types.SQLColumnLabelHeight] = fmt.Sprintf("%v", block.Height)
row[types.SQLColumnLabelBlockHeader] = string(blockHeader)
} else {
return types.EventDataRow{}, fmt.Errorf("table: %s not found in table structure %v", types.SQLBlockTableName, tbls)
}
return types.EventDataRow{Action: types.ActionUpsert, RowData: row}, nil
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | vent/service/rowbuilder.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/vent/service/rowbuilder.go#L91-L131 | go | train | // buildTxData builds transaction data from tx stream | func buildTxData(txe *exec.TxExecution) (types.EventDataRow, error) | // buildTxData builds transaction data from tx stream
func buildTxData(txe *exec.TxExecution) (types.EventDataRow, error) | {
// transaction raw data
envelope, err := json.Marshal(txe.Envelope)
if err != nil {
return types.EventDataRow{}, fmt.Errorf("couldn't marshal envelope in tx %v", txe)
}
events, err := json.Marshal(txe.Events)
if err != nil {
return types.EventDataRow{}, fmt.Errorf("couldn't marshal events in tx %v", txe)
}
result, err := json.Marshal(txe.Result)
if err != nil {
return types.EventDataRow{}, fmt.Errorf("couldn't marshal result in tx %v", txe)
}
receipt, err := json.Marshal(txe.Receipt)
if err != nil {
return types.EventDataRow{}, fmt.Errorf("couldn't marshal receipt in tx %v", txe)
}
exception, err := json.Marshal(txe.Exception)
if err != nil {
return types.EventDataRow{}, fmt.Errorf("couldn't marshal exception in tx %v", txe)
}
return types.EventDataRow{
Action: types.ActionUpsert,
RowData: map[string]interface{}{
types.SQLColumnLabelHeight: txe.Height,
types.SQLColumnLabelTxHash: txe.TxHash.String(),
types.SQLColumnLabelIndex: txe.Index,
types.SQLColumnLabelTxType: txe.TxType.String(),
types.SQLColumnLabelEnvelope: string(envelope),
types.SQLColumnLabelEvents: string(events),
types.SQLColumnLabelResult: string(result),
types.SQLColumnLabelReceipt: string(receipt),
types.SQLColumnLabelException: string(exception),
},
}, nil
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | vent/service/rowbuilder.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/vent/service/rowbuilder.go#L146-L177 | go | train | // Checks whether the bytes passed are valid utf8 string bytes. If they are not, returns a sanitised string version of the
// bytes with offending sequences replaced by the utf8 replacement/error rune and an error indicating the offending
// byte sequences and their position. Note: always returns a valid string regardless of error. | func UTF8StringFromBytes(bs []byte) (string, error) | // Checks whether the bytes passed are valid utf8 string bytes. If they are not, returns a sanitised string version of the
// bytes with offending sequences replaced by the utf8 replacement/error rune and an error indicating the offending
// byte sequences and their position. Note: always returns a valid string regardless of error.
func UTF8StringFromBytes(bs []byte) (string, error) | {
// Provide fast path for good strings
if utf8.Valid(bs) {
return string(bs), nil
}
buf := new(bytes.Buffer)
var runeErrs []string
// This loops over runes (code points) and, unlike ranging over a string, gives us the index of the code point
// (i.e. utf8 char) not bytes, which we want for the error message
var offset int
// Iterate over character indices (not byte indices)
for i := 0; i < len(bs); i++ {
r, n := utf8.DecodeRune(bs[offset:])
buf.WriteRune(r)
if r == utf8.RuneError {
runeErrs = append(runeErrs, fmt.Sprintf("0x% X (at index %d)", bs[offset:offset+n], i))
}
offset += n
}
str := buf.String()
errHeader := fmt.Sprintf("bytes purported to represent the string '%s'", str)
switch len(runeErrs) {
case 0:
// should not happen
return str, fmt.Errorf("bytes appear to be invalid utf8 but do not contain invalid code points")
case 1:
return str, fmt.Errorf("%s contain invalid utf8 byte sequence: %s", errHeader, runeErrs[0])
default:
return str, fmt.Errorf("%s contain invalid utf8 byte sequences: %s", errHeader,
strings.Join(runeErrs, ", "))
}
} |
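`UTF8StringFromBytes` above has two pieces: a fast path for valid input and a rune-by-rune loop that swaps bad sequences for U+FFFD. A compact stdlib version of just the replacement logic (`sanitizeUTF8` is an illustrative name, not Burrow's API):

```go
package main

import (
	"fmt"
	"strings"
	"unicode/utf8"
)

// sanitizeUTF8 is a compact version of the loop above: invalid byte
// sequences are replaced with the U+FFFD replacement rune.
func sanitizeUTF8(bs []byte) string {
	if utf8.Valid(bs) {
		return string(bs) // fast path for well-formed input
	}
	var b strings.Builder
	for len(bs) > 0 {
		r, n := utf8.DecodeRune(bs)
		b.WriteRune(r) // r is utf8.RuneError for bad sequences
		bs = bs[n:]
	}
	return b.String()
}

func main() {
	fmt.Println(sanitizeUTF8([]byte{0xff, 'o', 'k'})) // �ok
}
```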
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | event/pubsub/pubsub.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/event/pubsub/pubsub.go#L71-L85 | go | train | // NewServer returns a new server. See the commentary on the Option functions
// for a detailed description of how to configure buffering. If no options are
// provided, the resulting server's queue is unbuffered. | func NewServer(options ...Option) *Server | // NewServer returns a new server. See the commentary on the Option functions
// for a detailed description of how to configure buffering. If no options are
// provided, the resulting server's queue is unbuffered.
func NewServer(options ...Option) *Server | {
s := &Server{
subscriptions: make(map[string]map[string]query.Query),
}
s.BaseService = *common.NewBaseService(nil, "PubSub", s)
for _, option := range options {
option(s)
}
// if BufferCapacity option was not set, the channel is unbuffered
s.cmds = make(chan cmd, s.cmdsCap)
return s
} |
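`NewServer` above uses the functional-options pattern: each `Option` is a function that mutates the server during construction, and `BufferCapacity` presumably sets `cmdsCap`. A pared-down, self-contained sketch of that pattern (names `pubServer`/`withCapacity` are illustrative, not Burrow's API):

```go
package main

import "fmt"

// pubServer stands in for the Server above; cmdsCap mirrors the command
// channel buffer size set by the BufferCapacity option.
type pubServer struct {
	cmdsCap int
}

type option func(*pubServer)

// withCapacity is the shape of a functional option: it returns a closure
// that mutates the server under construction.
func withCapacity(n int) option {
	return func(s *pubServer) { s.cmdsCap = n }
}

func newPubServer(options ...option) *pubServer {
	s := &pubServer{}
	for _, o := range options {
		o(s)
	}
	return s
}

func main() {
	fmt.Println(newPubServer(withCapacity(10)).cmdsCap) // 10
	fmt.Println(newPubServer().cmdsCap)                 // 0 (unbuffered default)
}
```

The payoff of this pattern is that new configuration knobs can be added without changing the constructor's signature.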
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | event/pubsub/pubsub.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/event/pubsub/pubsub.go#L108-L134 | go | train | // Subscribe creates a subscription for the given client. It accepts a channel
// on which messages matching the given query can be received. An error will be
// returned to the caller if the context is canceled or if a subscription already
// exists for the pair clientID and query. | func (s *Server) Subscribe(ctx context.Context, clientID string, qry query.Query, outBuffer int) (<-chan interface{}, error) | // Subscribe creates a subscription for the given client. It accepts a channel
// on which messages matching the given query can be received. An error will be
// returned to the caller if the context is canceled or if a subscription already
// exists for the pair clientID and query.
func (s *Server) Subscribe(ctx context.Context, clientID string, qry query.Query, outBuffer int) (<-chan interface{}, error) | {
s.mtx.RLock()
clientSubscriptions, ok := s.subscriptions[clientID]
if ok {
_, ok = clientSubscriptions[qry.String()]
}
s.mtx.RUnlock()
if ok {
return nil, ErrAlreadySubscribed
}
// We are responsible for closing this channel so we create it
out := make(chan interface{}, outBuffer)
select {
case s.cmds <- cmd{op: sub, clientID: clientID, query: qry, ch: out}:
s.mtx.Lock()
if _, ok = s.subscriptions[clientID]; !ok {
s.subscriptions[clientID] = make(map[string]query.Query)
}
// preserve original query
// see Unsubscribe
s.subscriptions[clientID][qry.String()] = qry
s.mtx.Unlock()
return out, nil
case <-ctx.Done():
return nil, ctx.Err()
}
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | event/pubsub/pubsub.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/event/pubsub/pubsub.go#L186-L188 | go | train | // Publish publishes the given message. An error will be returned to the caller
// if the context is canceled. | func (s *Server) Publish(ctx context.Context, msg interface{}) error | // Publish publishes the given message. An error will be returned to the caller
// if the context is canceled.
func (s *Server) Publish(ctx context.Context, msg interface{}) error | {
return s.PublishWithTags(ctx, msg, query.TagMap(make(map[string]interface{})))
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | event/pubsub/pubsub.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/event/pubsub/pubsub.go#L193-L200 | go | train | // PublishWithTags publishes the given message with the set of tags. The set is
// matched with clients queries. If there is a match, the message is sent to
// the client. | func (s *Server) PublishWithTags(ctx context.Context, msg interface{}, tags query.Tagged) error | // PublishWithTags publishes the given message with the set of tags. The set is
// matched with clients queries. If there is a match, the message is sent to
// the client.
func (s *Server) PublishWithTags(ctx context.Context, msg interface{}, tags query.Tagged) error | {
select {
case s.cmds <- cmd{op: pub, msg: msg, tags: tags}:
return nil
case <-ctx.Done():
return ctx.Err()
}
} |
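`Subscribe` and `PublishWithTags` above both funnel work through the single `cmds` channel, so one goroutine owns all subscription state. A toy, stdlib-only broker showing that single-owner-goroutine design (no tags or queries; all names are illustrative, not Burrow's API):

```go
package main

import "fmt"

// broker is a toy version of the Server above: one goroutine owns all
// subscription state and is driven through a command channel, so no
// locks are needed around the subscription map.
type broker struct {
	cmds chan func(subs map[string]chan string)
}

func newBroker() *broker {
	b := &broker{cmds: make(chan func(map[string]chan string))}
	go func() {
		subs := map[string]chan string{} // owned exclusively by this goroutine
		for cmd := range b.cmds {
			cmd(subs)
		}
	}()
	return b
}

func (b *broker) subscribe(clientID string) <-chan string {
	out := make(chan string, 1) // buffered so publish never blocks here
	b.cmds <- func(subs map[string]chan string) { subs[clientID] = out }
	return out
}

func (b *broker) publish(msg string) {
	b.cmds <- func(subs map[string]chan string) {
		for _, ch := range subs {
			ch <- msg
		}
	}
}

func main() {
	b := newBroker()
	ch := b.subscribe("client-1")
	b.publish("hello")
	fmt.Println(<-ch) // hello
}
```

Because both commands travel over the same unbuffered channel, the subscribe is guaranteed to be applied before the publish, mirroring the ordering guarantee of the real server's loop.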
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | event/pubsub/pubsub.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/event/pubsub/pubsub.go#L216-L222 | go | train | // OnStart implements Service.OnStart by starting the server. | func (s *Server) OnStart() error | // OnStart implements Service.OnStart by starting the server.
func (s *Server) OnStart() error | {
go s.loop(state{
queries: make(map[query.Query]map[string]chan interface{}),
clients: make(map[string]map[query.Query]struct{}),
})
return nil
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | bcm/block_store.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/bcm/block_store.go#L55-L79 | go | train | // Iterate over blocks between start (inclusive) and end (inclusive) | func (bs *BlockStore) Blocks(start, end int64, iter func(*Block) (stop bool)) (stopped bool, err error) | // Iterate over blocks between start (inclusive) and end (inclusive)
func (bs *BlockStore) Blocks(start, end int64, iter func(*Block) (stop bool)) (stopped bool, err error) | {
if end > 0 && start >= end {
return false, fmt.Errorf("end height must be strictly greater than start height")
}
if start <= 0 {
// From first block
start = 1
}
if end < 0 {
// -1 means include the very last block so + 1 for offset
end = bs.Height() + end + 1
}
for height := start; height <= end; height++ {
block, err := bs.Block(height)
if err != nil {
return false, err
}
if iter(block) {
return true, nil
}
}
return false, nil
} |
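The `Blocks` method above follows the stop-callback iteration idiom: visit each height in order and let the callback cut the walk short by returning `true`. The core of that idiom without the block store (hypothetical `forHeights`, not Burrow's API):

```go
package main

import "fmt"

// forHeights mirrors the Blocks pattern above: visit each height in order
// and let the callback stop the walk early by returning true.
func forHeights(start, end int64, iter func(h int64) (stop bool)) (stopped bool) {
	for h := start; h <= end; h++ {
		if iter(h) {
			return true // callback requested early stop
		}
	}
	return false // walked the whole range
}

func main() {
	var seen []int64
	stopped := forHeights(1, 10, func(h int64) bool {
		seen = append(seen, h)
		return h == 3 // stop once height 3 is reached
	})
	fmt.Println(stopped, seen) // true [1 2 3]
}
```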
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | logging/logconfig/sinks.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/logging/logconfig/sinks.go#L205-L214 | go | train | // Transforms | func CaptureTransform(name string, bufferCap int, passthrough bool) *TransformConfig | // Transforms
func CaptureTransform(name string, bufferCap int, passthrough bool) *TransformConfig | {
return &TransformConfig{
TransformType: Capture,
CaptureConfig: &CaptureConfig{
Name: name,
BufferCap: bufferCap,
Passthrough: passthrough,
},
}
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | logging/logconfig/sinks.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/logging/logconfig/sinks.go#L301-L303 | go | train | // Logger formation | func (sinkConfig *SinkConfig) BuildLogger() (log.Logger, map[string]*loggers.CaptureLogger, error) | // Logger formation
func (sinkConfig *SinkConfig) BuildLogger() (log.Logger, map[string]*loggers.CaptureLogger, error) | {
return BuildLoggerFromSinkConfig(sinkConfig, make(map[string]*loggers.CaptureLogger))
} |
hyperledger/burrow | 59993f5aad71a8e16ab6ed4e57e138e2398eae4e | logging/loggers/sort_logger.go | https://github.com/hyperledger/burrow/blob/59993f5aad71a8e16ab6ed4e57e138e2398eae4e/logging/loggers/sort_logger.go#L33-L35 | go | train | // Less reports whether the element with
// index i should sort before the element with index j. | func (skv *sortableKeyvals) Less(i, j int) bool | // Less reports whether the element with
// index i should sort before the element with index j.
func (skv *sortableKeyvals) Less(i, j int) bool | {
return skv.keyRank(i) < skv.keyRank(j)
} |
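The `Less` method above orders key/value pairs by a per-key rank via `sort.Interface`. The same idea fits in a few lines with `sort.Slice` (the `sortByRank` helper and the rank values are illustrative, not Burrow's logging internals):

```go
package main

import (
	"fmt"
	"sort"
)

// sortByRank orders keys by a per-key rank, the same comparison the
// sortableKeyvals Less method above performs through sort.Interface.
func sortByRank(keys []string, rank map[string]int) {
	sort.Slice(keys, func(i, j int) bool { return rank[keys[i]] < rank[keys[j]] })
}

func main() {
	keys := []string{"msg", "time", "level"}
	sortByRank(keys, map[string]int{"time": 0, "level": 1, "msg": 2})
	fmt.Println(keys) // [time level msg]
}
```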