Return a dictionary form of the message :param msg: the message to be sent :raises: ValueError if msg cannot be converted to an appropriate format for transmission
def toDict(self, msg: Dict) -> Dict:
    """
    Return a dictionary form of the message

    :param msg: the message to be sent
    :raises: ValueError if msg cannot be converted to an appropriate
        format for transmission
    """
    if isinstance(msg, Request):
        tmsg = m...
Get all ledger IDs for which
A) not updated for more than Freshness Timeout
B) hasn't been attempted to update (returned from this method) for more than Freshness Timeout
Should be called whenever we need to decide if ledgers need to be updated.
:param ts: the current time check the...
def check_freshness(self, ts):
    '''
    Get all ledger IDs for which
    A) not updated for more than Freshness Timeout
    B) hasn't been attempted to update (returned from this method)
       for more than Freshness Timeout

    Should be called whenever we need to decide if ledgers need to be upda...
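The freshness-tracking idea above can be sketched as a small standalone class. This is an illustration, not the Plenum implementation: the `FreshnessChecker` name, the timeout value, and the sort order (by ledger id only, ignoring the last-update tie-break) are simplifications.

```python
class FreshnessChecker:
    """Toy model of the freshness tracking described above (an illustration)."""

    def __init__(self, timeout=300):  # 300s is a hypothetical Freshness Timeout
        self.timeout = timeout
        self.last_updated = {}  # ledger_id -> last update (or update attempt) ts

    def update_freshness(self, ledger_id, ts):
        # called whenever a txn for the ledger is ordered
        self.last_updated[ledger_id] = ts

    def check_freshness(self, ts):
        # return ledger ids stale for longer than the timeout, and mark them
        # as attempted so the same ids are not returned again immediately
        outdated = sorted(
            lid for lid, last in self.last_updated.items()
            if ts - last > self.timeout
        )
        for lid in outdated:
            self.last_updated[lid] = ts
        return outdated
```

Marking returned ledgers as "attempted" is what implements clause B: a ledger reported once will not be reported again until another full timeout passes.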
Updates the time at which the ledger was updated. Should be called whenever a txn for the ledger is ordered. :param ledger_id: the ID of the ledger a txn was ordered for :param ts: the current time :return: None
def update_freshness(self, ledger_id, ts):
    '''
    Updates the time at which the ledger was updated.
    Should be called whenever a txn for the ledger is ordered.

    :param ledger_id: the ID of the ledger a txn was ordered for
    :param ts: the current time
    :return: None
    '''...
Gets the time at which each ledger was updated. Can be called at any time to get this information. :return: an ordered dict of outdated ledgers sorted by last update time (from old to new) and then by ledger ID (in case of equal update time)
def get_last_update_time(self): ''' Gets the time at which each ledger was updated. Can be called at any time to get this information. :return: an ordered dict of outdated ledgers sorted by last update time (from old to new) and then by ledger ID (in case of equal update time) ...
Create a string representation of the given object. Examples: >>> serialize("str") 'str' >>> serialize([1,2,3,4,5]) '1,2,3,4,5' >>> signing.serialize({1:'a', 2:'b'}) '1:a|2:b' >>> signing.serialize({1:'a', 2:'b', 3:[1,{2:'k'}]}) '1:a|2:b|3:...
def serialize(self, obj, level=0, objname=None, topLevelKeysToIgnore=None,
              toBytes=True):
    """
    Create a string representation of the given object.

    Examples::

        >>> serialize("str")
        'str'
        >>> serialize([1,2,3,4,5])
        '1,2,3,4,5'
        >>> ...
:param count: the number of txns from the beginning of `uncommittedTxns` to commit :return: a tuple of 2 seqNos indicating the start and end of sequence numbers of the committed txns
def commitTxns(self, count: int) -> Tuple[Tuple[int, int], List]:
    """
    :param count: the number of txns from the beginning of
        `uncommittedTxns` to commit
    :return: a tuple of 2 seqNos indicating the start and end of
        sequence numbers of the committed txns
    """
    committ...
:param count: the number of txns in `uncommittedTxns` which have to be discarded
def discardTxns(self, count: int):
    """
    :param count: the number of txns in `uncommittedTxns` which have to
        be discarded
    """
    # TODO: This can be optimised if multiple discards are combined
    # together since merkle root computation will be done only ...
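The commit/discard bookkeeping described in these two docstrings can be modeled with a toy tracker. This is a sketch under stated assumptions, not the Ledger class: seqNos are taken to be 1-based, and which end of the uncommitted list `discard` drops is a simplification here.

```python
class UncommittedTracker:
    """Toy model of committed vs uncommitted txns (an illustration)."""

    def __init__(self):
        self.committed = []
        self.uncommitted = []

    def append(self, txn):
        self.uncommitted.append(txn)

    def commit_txns(self, count):
        # move the first `count` uncommitted txns into the committed log and
        # return the (start, end) seqNo range they occupy (1-based, inclusive)
        start = len(self.committed) + 1
        batch, self.uncommitted = self.uncommitted[:count], self.uncommitted[count:]
        self.committed.extend(batch)
        end = len(self.committed)
        return (start, end), batch

    def discard_txns(self, count):
        # drop the most recent `count` uncommitted txns; the real ledger's
        # choice of which end to drop may differ
        if count:
            del self.uncommitted[-count:]
```

Usage: after appending t1..t3, `commit_txns(2)` yields `((1, 2), [t1, t2])` and leaves t3 uncommitted.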
Return a copy of merkle tree after applying the txns :param txns: :return:
def treeWithAppliedTxns(self, txns: List, currentTree=None): """ Return a copy of merkle tree after applying the txns :param txns: :return: """ currentTree = currentTree or self.tree # Copying the tree is not a problem since its a Compact Merkle Tree # so ...
Clear the values of all attributes of the transaction store.
def reset(self): """ Clear the values of all attributes of the transaction store. """ self.getsCounter = 0 # dictionary of processed requests for each client. Value for each # client is a dictionary with request id as key and transaction id as # value sel...
Try to stop the transaction store in the given timeout or raise an exception.
def stop(self, timeout: int = 5) -> None: """ Try to stop the transaction store in the given timeout or raise an exception. """ self.running = False start = time.perf_counter() while True: if self.getsCounter == 0: return True ...
Add a client request to the transaction store's list of processed requests.
def addToProcessedTxns(self, identifier: str, txnId: str, reply: Reply) -> None: """ Add a client request to the transaction store's list of processed requests. """ self.transactions[txnId] = reply ...
Add the given Reply to this transaction store's list of responses. Also add to processedRequests if not added previously.
async def append(self, reply: Reply) \ -> None: """ Add the given Reply to this transaction store's list of responses. Also add to processedRequests if not added previously. """ result = reply.result identifier = result.get(f.IDENTIFIER.nm) txnId = res...
If the client is not in `processedRequests`, or the requestId is not in processed requests and txnId is present, then it's a new reply
def _isNewTxn(self, identifier, reply, txnId) -> bool:
    """
    If the client is not in `processedRequests`, or the requestId is not
    in processed requests and txnId is present, then it's a new reply
    """
    return (identifier not in self.processedRequests or
            reply.reqId not in ...
Add the specified request to this request store.
def add(self, req: Request):
    """
    Add the specified request to this request store.
    """
    key = req.key
    if key not in self:
        self[key] = ReqState(req)
    return self[key]
Should be called by each replica when request is ordered or replica is removed.
def ordered_by_replica(self, request_key):
    """
    Should be called by each replica when request is ordered or replica
    is removed.
    """
    state = self.get(request_key)
    if not state:
        return
    state.unordered_by_replicas_num -= 1
Works together with the 'mark_as_executed' and 'free' methods. Marks the request as forwarded to 'to' replicas. For the request to be removed, it must be marked as executed and each of the 'to' replicas must call 'free'.
def mark_as_forwarded(self, req: Request, to: int):
    """
    Works together with the 'mark_as_executed' and 'free' methods.
    Marks the request as forwarded to 'to' replicas. For the request to
    be removed, it must be marked as executed and each of the 'to'
    replicas must call 'free'.
    """...
Add the specified request to the list of received PROPAGATEs. :param req: the REQUEST to add :param sender: the name of the node sending the msg
def add_propagate(self, req: Request, sender: str):
    """
    Add the specified request to the list of received PROPAGATEs.

    :param req: the REQUEST to add
    :param sender: the name of the node sending the msg
    """
    data = self.add(req)
    data.propagates[sender] = req
Get the number of propagates for a given reqId and identifier.
def votes(self, req) -> int:
    """
    Get the number of propagates for a given reqId and identifier.
    """
    try:
        votes = len(self[req.key].propagates)
    except KeyError:
        votes = 0
    return votes
Works together with the 'mark_as_forwarded' and 'free' methods. Allows the request to be removed once every replica it was forwarded to has called 'free'.
def mark_as_executed(self, req: Request):
    """
    Works together with the 'mark_as_forwarded' and 'free' methods.
    Allows the request to be removed once every replica it was forwarded
    to has called 'free'.
    """
    state = self[req.key]
    state.executed = True
    self._clean(state)
Works together with the 'mark_as_forwarded' and 'mark_as_executed' methods. Allows the request to be removed once every replica it was forwarded to has called 'free' and the request executor has marked it as executed.
def free(self, request_key):
    """
    Works together with the 'mark_as_forwarded' and 'mark_as_executed'
    methods. Allows the request to be removed once every replica it was
    forwarded to has called 'free' and the request executor has marked it
    as executed.
    """
    state = self.get(request_k...
Check whether the request specified has already been propagated.
def has_propagated(self, req: Request, sender: str) -> bool:
    """
    Check whether the request specified has already been propagated.
    """
    return req.key in self and sender in self[req.key].propagates
Broadcast a PROPAGATE to all other nodes :param request: the REQUEST to propagate
def propagate(self, request: Request, clientName):
    """
    Broadcast a PROPAGATE to all other nodes

    :param request: the REQUEST to propagate
    """
    if self.requests.has_propagated(request, self.name):
        logger.trace("{} already propagated {}".format(self, request))
    el...
Create a new PROPAGATE for the given REQUEST. :param request: the client REQUEST :return: a new PROPAGATE msg
def createPropagate( request: Union[Request, dict], client_name) -> Propagate: """ Create a new PROPAGATE for the given REQUEST. :param request: the client REQUEST :return: a new PROPAGATE msg """ if not isinstance(request, (Request, dict)): logge...
Determine whether to forward client REQUESTs to replicas, based on the following logic: - If exactly f+1 PROPAGATE requests are received, then forward. - If less than f+1 of requests then probably there's no consensus on the REQUEST, don't forward. - If more than f+1 then al...
def canForward(self, request: Request): """ Determine whether to forward client REQUESTs to replicas, based on the following logic: - If exactly f+1 PROPAGATE requests are received, then forward. - If less than f+1 of requests then probably there's no consensus on the ...
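The f+1 rule described above can be sketched in isolation. The function below is a simplified illustration: `num_propagates`, `f`, and `already_forwarded` are plain-value stand-ins for the node's actual state, and returning a reason string (None meaning "may forward") mirrors the `cannot_reason_msg` convention used by `tryForwarding`.

```python
def can_forward(num_propagates: int, f: int, already_forwarded: bool):
    """Return None if forwarding is allowed, else the reason it is not."""
    if already_forwarded:
        # more than f+1 PROPAGATEs means forwarding already happened
        return "already forwarded"
    if num_propagates < f + 1:
        # fewer than f+1 PROPAGATEs: no quorum on this request yet
        return "not enough propagates"
    # exactly f+1 PROPAGATEs received: forward now
    return None
```

The key property is that forwarding fires exactly once, at the f+1 threshold; later PROPAGATEs are absorbed by the already-forwarded check.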
Forward the specified client REQUEST to the other replicas on this node :param request: the REQUEST to propagate
def forward(self, request: Request): """ Forward the specified client REQUEST to the other replicas on this node :param request: the REQUEST to propagate """ key = request.key num_replicas = self.replicas.num_replicas logger.debug('{} forwarding request {} to {} ...
Record the request in the list of requests and propagate. :param request: :param clientName:
def recordAndPropagate(self, request: Request, clientName):
    """
    Record the request in the list of requests and propagate.

    :param request:
    :param clientName:
    """
    self.requests.add(request)
    self.propagate(request, clientName)
    self.tryForwarding(request)
Try to forward the request if the required conditions are met. See the method `canForward` for the conditions to check before forwarding a request.
def tryForwarding(self, request: Request): """ Try to forward the request if the required conditions are met. See the method `canForward` for the conditions to check before forwarding a request. """ cannot_reason_msg = self.canForward(request) if cannot_reason_msg...
Request PROPAGATEs for the given request keys. Since replicas can request PROPAGATEs independently of each other, check if it has been requested recently :param req_keys: :return:
def request_propagates(self, req_keys): """ Request PROPAGATEs for the given request keys. Since replicas can request PROPAGATEs independently of each other, check if it has been requested recently :param req_keys: :return: """ i = 0 for digest in ...
Currently not using clear
def removeRemote(self, remote: Remote, clear=True): """ Currently not using clear """ name = remote.name pkey = remote.publicKey vkey = remote.verKey if name in self.remotes: self.remotes.pop(name) self.remotesByKeys.pop(pkey, None) ...
Service `limit` number of received messages in this stack. :param limit: the maximum number of messages to be processed. If None, processes all of the messages in rxMsgs. :return: the number of messages processed.
async def service(self, limit=None, quota: Optional[Quota] = None) -> int: """ Service `limit` number of received messages in this stack. :param limit: the maximum number of messages to be processed. If None, processes all of the messages in rxMsgs. :return: the number of messag...
Receives messages from listener :param quota: number of messages to receive :return: number of received messages
def _receiveFromListener(self, quota: Quota) -> int: """ Receives messages from listener :param quota: number of messages to receive :return: number of received messages """ i = 0 incoming_size = 0 while i < quota.count and incoming_size < quota.size: ...
Receives messages from remotes :param quotaPerRemote: number of messages to receive from one remote :return: number of received messages
def _receiveFromRemotes(self, quotaPerRemote) -> int: """ Receives messages from remotes :param quotaPerRemote: number of messages to receive from one remote :return: number of received messages """ assert quotaPerRemote totalReceived = 0 for ident, remot...
Connect to the node specified by name.
def connect(self, name=None, remoteId=None, ha=None, verKeyRaw=None, publicKeyRaw=None): """ Connect to the node specified by name. """ if not name: raise ValueError('Remote name should be specifi...
Disconnect remote and connect to it again :param remote: instance of Remote from self.remotes :param remoteName: name of remote :return:
def reconnectRemote(self, remote): """ Disconnect remote and connect to it again :param remote: instance of Remote from self.remotes :param remoteName: name of remote :return: """ if not isinstance(remote, Remote): raise PlenumTypeError('remote', remo...
Returns a list of hashes with serial numbers between start and end, both inclusive.
def _readMultiple(self, start, end, db): """ Returns a list of hashes with serial numbers between start and end, both inclusive. """ self._validatePos(start, end) # Converting any bytearray to bytes return [bytes(db.get(str(pos))) for pos in range(start, end + 1...
pack nibbles to binary :param nibbles: a nibbles sequence. may have a terminator
def pack_nibbles(nibbles):
    """pack nibbles to binary

    :param nibbles: a nibbles sequence. may have a terminator
    """
    if nibbles[-1] == NIBBLE_TERMINATOR:
        flags = 2
        nibbles = nibbles[:-1]
    else:
        flags = 0
    oddlen = len(nibbles) % 2
    flags |= oddlen  # set lowest bit if ...
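The truncated snippet above follows the hex-prefix encoding used by Merkle Patricia tries, so it can be completed into a runnable sketch. Assumptions: `NIBBLE_TERMINATOR = 16` and the flag layout (bit 1 = terminator present, bit 0 = odd length) follow the common Ethereum convention; the library's actual constants may differ.

```python
NIBBLE_TERMINATOR = 16  # assumed sentinel value, per the Ethereum convention

def pack_nibbles(nibbles):
    """Pack a nibble sequence (optionally terminator-ended) into bytes."""
    if nibbles and nibbles[-1] == NIBBLE_TERMINATOR:
        flags = 2                 # bit 1: the sequence had a terminator
        nibbles = nibbles[:-1]
    else:
        flags = 0
    oddlen = len(nibbles) % 2
    flags |= oddlen               # bit 0: the sequence has odd length
    if oddlen:
        # odd: the flag nibble alone pads the sequence to even length
        nibbles = [flags] + list(nibbles)
    else:
        # even: flag nibble plus a zero nibble keep the length even
        nibbles = [flags, 0] + list(nibbles)
    out = bytearray()
    for i in range(0, len(nibbles), 2):
        out.append(16 * nibbles[i] + nibbles[i + 1])
    return bytes(out)
```

For example, `[1, 2, 3]` (odd, no terminator) packs to `0x11 0x23`, while `[1, 2, 16]` (even after stripping the terminator) packs to `0x20 0x12`.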
get last node for the given prefix, also update `seen_prfx` to track the path already traversed :param node: node in form of list, or BLANK_NODE :param key_prfx: prefix to look for :param seen_prfx: prefix already seen, updates with each call :return: BLANK_NODE if does not ...
def _get_last_node_for_prfx(self, node, key_prfx, seen_prfx): """ get last node for the given prefix, also update `seen_prfx` to track the path already traversed :param node: node in form of list, or BLANK_NODE :param key_prfx: prefix to look for :param seen_prfx: prefix already seen, u...
yield (key, value) stored in this and the descendant nodes :param node: node in form of list, or BLANK_NODE .. note:: Here key is in full form, rather than key of the individual node
def _iter_branch(self, node):
    """yield (key, value) stored in this and the descendant nodes

    :param node: node in form of list, or BLANK_NODE

    .. note:: Here key is in full form, rather than key of the
        individual node
    """
    if node == BLANK_NODE:
        # was `raise StopIteration`: raising it inside a generator is a
        # RuntimeError since Python 3.7 (PEP 479); a bare return is the
        # correct way to end the iteration
        return
    ...
Get value of a key when the root node was `root_node` :param root_node: :param key: :return:
def get_at(self, root_node, key):
    """
    Get value of a key when the root node was `root_node`

    :param root_node:
    :param key:
    :return:
    """
    return self._get(root_node, bin_to_nibbles(to_string(key)))
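`get_at` relies on a `bin_to_nibbles` helper to turn the key into the trie's path alphabet. A sketch of the conventional implementation, where each byte expands into its high and low 4-bit nibbles (an assumption based on the usual Merkle-Patricia-trie convention, not the library's exact code):

```python
def bin_to_nibbles(data: bytes):
    """Expand each byte of `data` into two 4-bit nibbles, high nibble first."""
    nibbles = []
    for b in data:
        nibbles.append(b // 16)  # high nibble
        nibbles.append(b % 16)   # low nibble
    return nibbles
```

So a two-byte key `b"\x12\xab"` becomes the nibble path `[1, 2, 10, 11]`, which the trie walk consumes one nibble at a time.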
Calculate and return the metrics.
def metrics(self): """ Calculate and return the metrics. """ masterThrp, backupThrp = self.getThroughputs(self.instances.masterId) r = self.instance_throughput_ratio(self.instances.masterId) m = [ ("{} Monitor metrics:".format(self), None), ("Delta...
Pretty printing for metrics
def prettymetrics(self) -> str:
    """
    Pretty printing for metrics
    """
    rendered = ["{}: {}".format(*m) for m in self.metrics()]
    return "\n ".join(rendered)
Reset the monitor. Sets all monitored values to defaults.
def reset(self): """ Reset the monitor. Sets all monitored values to defaults. """ logger.debug("{}'s Monitor being reset".format(self)) instances_ids = self.instances.started.keys() self.numOrderedRequests = {inst_id: (0, 0) for inst_id in instances_ids} self.req...
Add one protocol instance for monitoring.
def addInstance(self, inst_id): """ Add one protocol instance for monitoring. """ self.instances.add(inst_id) self.requestTracker.add_instance(inst_id) self.numOrderedRequests[inst_id] = (0, 0) rm = self.create_throughput_measurement(self.config) self.thr...
Measure the time taken for ordering of a request and return it. Monitor might have been reset due to view change due to which this method returns None
def requestOrdered(self, reqIdrs: List[str], instId: int, requests, byMaster: bool = False) -> Dict: """ Measure the time taken for ordering of a request and return it. Monitor might have been reset due to view change due to which this method returns None "...
Record the time at which request ordering started.
def requestUnOrdered(self, key: str): """ Record the time at which request ordering started. """ now = time.perf_counter() if self.acc_monitor: self.acc_monitor.update_time(now) self.acc_monitor.request_received(key) self.requestTracker.start(key, ...
Return whether the master instance is slow.
def isMasterDegraded(self): """ Return whether the master instance is slow. """ if self.acc_monitor: self.acc_monitor.update_time(time.perf_counter()) return self.acc_monitor.is_master_degraded() else: return (self.instances.masterId is not Non...
Return the list of slow backup instances.
def areBackupsDegraded(self):
    """
    Return the list of slow backup instances.
    """
    slow_instances = []
    if self.acc_monitor:
        for instance in self.instances.backupIds:
            if self.acc_monitor.is_instance_degraded(instance):
                slow_instances.append(instance)
    ...
The relative throughput of an instance compared to the backup instances.
def instance_throughput_ratio(self, inst_id):
    """
    The relative throughput of an instance compared to the backup instances.
    """
    inst_thrp, otherThrp = self.getThroughputs(inst_id)
    # Backup throughput may be 0 so moving ahead only if it is not 0
    r = inst_thrp / oth...
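The ratio computation, with the zero-division guard the comment alludes to, can be sketched as a standalone function. This is a simplified illustration with the two throughputs passed in directly rather than fetched via `getThroughputs`:

```python
def instance_throughput_ratio(inst_thrp, other_thrp):
    """Ratio of an instance's throughput to the others' average throughput.

    Returns None when the comparison throughput is 0 or unknown, so callers
    can distinguish "cannot be measured" from a genuinely low ratio.
    """
    if not other_thrp:
        return None
    return inst_thrp / other_thrp
```

Downstream checks such as `is_instance_throughput_too_low` then treat a None ratio as "not measurable yet" rather than as degradation.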
Return whether the throughput of an instance is below the acceptable threshold
def is_instance_throughput_too_low(self, inst_id):
    """
    Return whether the throughput of an instance is below the acceptable
    threshold
    """
    r = self.instance_throughput_ratio(inst_id)
    if r is None:
        logger.debug("{} instance {} throughput is not "
                     ...
Return whether the request latency of the master instance is greater than the acceptable threshold
def isMasterReqLatencyTooHigh(self): """ Return whether the request latency of the master instance is greater than the acceptable threshold """ # TODO for now, view_change procedure can take more that 15 minutes # (5 minutes for catchup and 10 minutes for primary's answer...
Return whether the average request latency of an instance is greater than the acceptable threshold
def is_instance_avg_req_latency_too_high(self, inst_id): """ Return whether the average request latency of an instance is greater than the acceptable threshold """ avg_lat, avg_lat_others = self.getLatencies() if not avg_lat or not avg_lat_others: return False...
Return a tuple of the throughput of the given instance and the average throughput of the remaining instances. :param instId: the id of the protocol instance
def getThroughputs(self, desired_inst_id: int): """ Return a tuple of the throughput of the given instance and the average throughput of the remaining instances. :param instId: the id of the protocol instance """ instance_thrp = self.getThroughput(desired_inst_id) ...
Return the throughput of the specified instance. :param instId: the id of the protocol instance
def getThroughput(self, instId: int) -> float: """ Return the throughput of the specified instance. :param instId: the id of the protocol instance """ # We are using the instanceStarted time in the denominator instead of # a time interval. This is alright for now as all ...
Calculate and return the average throughput of all the instances except the one specified as `forAllExcept`.
def getInstanceMetrics( self, forAllExcept: int) -> Tuple[Optional[int], Optional[float]]: """ Calculate and return the average throughput of all the instances except the one specified as `forAllExcept`. """ m = [(reqs, tm) for i, (reqs, tm) in self.numOr...
Return the average request latency for the given instance.
def getLatency(self, instId: int) -> float:
    """
    Return the average request latency for the given instance.
    """
    if len(self.clientAvgReqLatencies) == 0:
        return 0.0
    return self.clientAvgReqLatencies[instId].get_avg_latency()
Enqueue the message into the remote's queue. :param msg: the message to enqueue :param rid: the id of the remote node
def _enqueue(self, msg: Any, rid: int, signer: Signer) -> None: """ Enqueue the message into the remote's queue. :param msg: the message to enqueue :param rid: the id of the remote node """ if rid not in self.outBoxes: self.outBoxes[rid] = deque() sel...
Enqueue the specified message into all the remotes in the nodestack. :param msg: the message to enqueue
def _enqueueIntoAllRemotes(self, msg: Any, signer: Signer) -> None: """ Enqueue the specified message into all the remotes in the nodestack. :param msg: the message to enqueue """ for rid in self.remotes.keys(): self._enqueue(msg, rid, signer)
Enqueue the given message into the outBoxes of the specified remotes or into the outBoxes of all the remotes if rids is None :param msg: the message to enqueue :param rids: ids of the remotes to whose outBoxes this message must be enqueued :param message_splitter: callable t...
def send(self, msg: Any, * rids: Iterable[int], signer: Signer = None, message_splitter=None) -> None: """ Enqueue the given message into the outBoxes of the specified remotes or into the outBoxes of all the remotes if rids is None :pa...
Clear the outBoxes and transmit batched messages to remotes.
def flushOutBoxes(self) -> None: """ Clear the outBoxes and transmit batched messages to remotes. """ removedRemotes = [] for rid, msgs in self.outBoxes.items(): try: dest = self.remotes[rid].name except KeyError: removedRem...
Call `prod` once for each Prodable in this Looper :return: the sum of the number of events executed successfully
async def prodAllOnce(self): """ Call `prod` once for each Prodable in this Looper :return: the sum of the number of events executed successfully """ # TODO: looks like limit is always None??? limit = None s = 0 for n in self.prodables: s += a...
Add one Prodable object to this Looper's list of Prodables :param prodable: the Prodable object to add
def add(self, prodable: Prodable) -> None: """ Add one Prodable object to this Looper's list of Prodables :param prodable: the Prodable object to add """ if prodable.name in [p.name for p in self.prodables]: raise ProdableAlreadyAdded("Prodable {} already added.". ...
Remove the specified Prodable object from this Looper's list of Prodables :param prodable: the Prodable to remove
def removeProdable(self, prodable: Prodable=None, name: str=None) -> Optional[Prodable]: """ Remove the specified Prodable object from this Looper's list of Prodables :param prodable: the Prodable to remove """ if prodable: self.prodables.remove(prodable) ...
Execute `runOnce` with a small tolerance of 0.01 seconds so that the Prodables can complete their other asynchronous tasks not running on the event-loop.
async def runOnceNicely(self): """ Execute `runOnce` with a small tolerance of 0.01 seconds so that the Prodables can complete their other asynchronous tasks not running on the event-loop. """ start = time.perf_counter() msgsProcessed = await self.prodAllOnce() if...
Runs an arbitrary list of coroutines in order and then quits the loop, if not running as a context manager.
def run(self, *coros: CoroWrapper): """ Runs an arbitrary list of coroutines in order and then quits the loop, if not running as a context manager. """ if not self.running: raise RuntimeError("not running!") async def wrapper(): results = [] ...
Shut down this Looper.
async def shutdown(self): """ Shut down this Looper. """ logger.display("Looper shutting down now...", extra={"cli": False}) self.running = False start = time.perf_counter() if not self.runFut.done(): await self.runFut self.stopall() lo...
Creates a default BLS factory to instantiate BLS BFT classes. :param node: Node instance :return: BLS factory instance
def create_default_bls_bft_factory(node): ''' Creates a default BLS factory to instantiate BLS BFT classes. :param node: Node instance :return: BLS factory instance ''' bls_keys_dir = os.path.join(node.keys_dir, node.name) bls_crypto_factory = create_default_bls_crypto_factory(bls_keys_dir)...
Verifies the signature of a signed message, returning the message if it has not been tampered with, else raising :class:`~ValueError`. :param smessage: [:class:`bytes`] Either the original message or a signature and message concatenated together. :param signature: [:class:`bytes...
def verify(self, smessage, signature=None, encoder=encoding.RawEncoder):
    """
    Verifies the signature of a signed message, returning the message if
    it has not been tampered with, else raising :class:`~ValueError`.

    :param smessage: [:class:`bytes`] Either the original message or a ...
Generates a random :class:`~SigningKey` object. :rtype: :class:`~SigningKey`
def generate(cls):
    """
    Generates a random :class:`~SigningKey` object.

    :rtype: :class:`~SigningKey`
    """
    return cls(
        libnacl.randombytes(libnacl.crypto_sign_SEEDBYTES),
        encoder=encoding.RawEncoder,
    )
Sign a message using this key. :param message: [:class:`bytes`] The data to be signed. :param encoder: A class that is used to encode the signed message. :rtype: :class:`~SignedMessage`
def sign(self, message, encoder=encoding.RawEncoder): """ Sign a message using this key. :param message: [:class:`bytes`] The data to be signed. :param encoder: A class that is used to encode the signed message. :rtype: :class:`~SignedMessage` """ raw_signed = li...
Verify the message
def verify(self, signature, msg):
    '''
    Verify the message
    '''
    if not self.key:
        return False
    try:
        self.key.verify(signature + msg)
    except ValueError:
        return False
    return True
Generates a random :class:`~PrivateKey` object :rtype: :class:`~PrivateKey`
def generate(cls):
    """
    Generates a random :class:`~PrivateKey` object

    :rtype: :class:`~PrivateKey`
    """
    return cls(libnacl.randombytes(PrivateKey.SIZE),
               encoder=encoding.RawEncoder)
Encrypts the plaintext message using the given `nonce` and returns the ciphertext encoded with the encoder. .. warning:: It is **VITALLY** important that the nonce is a nonce, i.e. it is a number used only once for any given key. If you fail to do this, you compromise the privac...
def encrypt(self, plaintext, nonce, encoder=encoding.RawEncoder): """ Encrypts the plaintext message using the given `nonce` and returns the ciphertext encoded with the encoder. .. warning:: It is **VITALLY** important that the nonce is a nonce, i.e. it is a number used only...
Decrypts the ciphertext using the given nonce and returns the plaintext message. :param ciphertext: [:class:`bytes`] The encrypted message to decrypt :param nonce: [:class:`bytes`] The nonce used when encrypting the ciphertext :param encoder: The encoder used to decode the c...
def decrypt(self, ciphertext, nonce=None, encoder=encoding.RawEncoder): """ Decrypts the ciphertext using the given nonce and returns the plaintext message. :param ciphertext: [:class:`bytes`] The encrypted message to decrypt :param nonce: [:class:`bytes`] The nonce used when en...
Return a tuple of (ciphertext, nonce) resulting from encrypting the message using a shared key generated from the .key and the pubkey. If pubkey is hex encoded it is converted first. If enhex is True then use HexEncoder, otherwise use RawEncoder. Intended for the owner of the passed-in public k...
def encrypt(self, msg, pubkey, enhex=False): ''' Return duple of (cyphertext, nonce) resulting from encrypting the message using shared key generated from the .key and the pubkey If pubkey is hex encoded it is converted first If enhex is True then use HexEncoder otherwise use Raw...
Return the decrypted msg contained in cipher, using nonce and the shared key generated from .key and pubkey. If pubkey is hex encoded it is converted first. If dehex is True then use HexEncoder, otherwise use RawEncoder. Intended for the owner of .key. cipher is string, nonce is stri...
def decrypt(self, cipher, nonce, pubkey, dehex=False): ''' Return decrypted msg contained in cypher using nonce and shared key generated from .key and pubkey. If pubkey is hex encoded it is converted first If dehex is True then use HexEncoder otherwise use RawEncoder Int...
Add the leaf (transaction) to the log and the merkle tree. Note: Currently data is serialised the same way for inserting it in the log as well as the merkle tree; the only difference is that the tree needs binary data, so the textual (utf-8) representation is converted to bytes.
def add(self, leaf):
    """
    Add the leaf (transaction) to the log and the merkle tree.
    Note: Currently data is serialised the same way for inserting it in
    the log as well as the merkle tree; the only difference is that the
    tree needs binary data, so the textual (utf-8) representation is
    converte...
:param source: some iterable source (list, file, etc) :param lineSep: string of separators (chars) that must be removed :return: list of non empty lines with removed separators
def cleanLines(source, lineSep=os.linesep):
    """
    :param source: some iterable source (list, file, etc)
    :param lineSep: string of separators (chars) that must be removed
    :return: list of non empty lines with removed separators
    """
    stripped = (line.strip(lineSep) for line in source)
    return (lin...
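A self-contained sketch of the same helper makes its behaviour easy to check: strip separator characters from each line, then drop the lines that become empty. The snake_case name and the generator return are choices of this sketch, not the library's exact signature.

```python
import os

def clean_lines(source, line_sep=os.linesep):
    """Yield non-empty lines from `source` with separator chars stripped.

    Note that str.strip treats `line_sep` as a set of characters to remove
    from both ends, not as a literal substring.
    """
    stripped = (line.strip(line_sep) for line in source)
    return (line for line in stripped if line)
```

For example, feeding it `["a\n", "\n", "b\n"]` with `"\n"` as the separator yields just `"a"` and `"b"`.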
Reads config from the installation directory of Plenum. :param installDir: installation directory of Plenum :param configFile: name of the configuration file :raises: FileNotFoundError :return: the configuration as a python object
def getInstalledConfig(installDir, configFile): """ Reads config from the installation directory of Plenum. :param installDir: installation directory of Plenum :param configFile: name of the configuration file :raises: FileNotFoundError :return: the configuration as a python object """ ...
Reads a file called config.py in the project directory :raises: FileNotFoundError :return: the configuration as a python object
def _getConfig(general_config_dir: str = None): """ Reads a file called config.py in the project directory :raises: FileNotFoundError :return: the configuration as a python object """ stp_config = STPConfig() plenum_config = import_module("plenum.config") config = stp_config config....
Get the next function from the list of routes that is capable of processing o's type. :param o: the object to process :return: the next function
def getFunc(self, o: Any) -> Callable: """ Get the next function from the list of routes that is capable of processing o's type. :param o: the object to process :return: the next function """ for cls, func in self.routes.items(): if isinstance(o, cls)...
Pass the message as an argument to the function defined in `routes`. If the msg is a tuple, pass the values as multiple arguments to the function. :param msg: tuple of object and callable
def handleSync(self, msg: Any) -> Any: """ Pass the message as an argument to the function defined in `routes`. If the msg is a tuple, pass the values as multiple arguments to the function. :param msg: tuple of object and callable """ # If a plain python tuple and not a ...
Handle both sync and async functions. :param msg: a message :return: the result of execution of the function corresponding to this message's type
async def handle(self, msg: Any) -> Any: """ Handle both sync and async functions. :param msg: a message :return: the result of execution of the function corresponding to this message's type """ res = self.handleSync(msg) if isawaitable(res): return a...
Handle all items in a deque. Can call asynchronous handlers. :param deq: a deque of items to be handled by this router :param limit: the number of items in the deque to be handled :return: the number of items handled successfully
async def handleAll(self, deq: deque, limit=None) -> int:
    """
    Handle all items in a deque. Can call asynchronous handlers.

    :param deq: a deque of items to be handled by this router
    :param limit: the number of items in the deque to be handled
    :return: the number of items handled...
Synchronously handle all items in a deque. :param deq: a deque of items to be handled by this router :param limit: the number of items in the deque to the handled :return: the number of items handled successfully
def handleAllSync(self, deq: deque, limit=None) -> int: """ Synchronously handle all items in a deque. :param deq: a deque of items to be handled by this router :param limit: the number of items in the deque to the handled :return: the number of items handled successfully ...
Load this tree from a dumb data object for serialisation. The object must have attributes tree_size:int and hashes:list.
def load(self, other: merkle_tree.MerkleTree): """Load this tree from a dumb data object for serialisation. The object must have attributes tree_size:int and hashes:list. """ self._update(other.tree_size, other.hashes)
Save this tree into a dumb data object for serialisation. The object must have attributes tree_size:int and hashes:list.
def save(self, other: merkle_tree.MerkleTree): """Save this tree into a dumb data object for serialisation. The object must have attributes tree_size:int and hashes:list. """ other.__tree_size = self.__tree_size other.__hashes = self.__hashes
Returns the root hash of this tree. (Only re-computed on change.)
def root_hash(self): """Returns the root hash of this tree. (Only re-computed on change.)""" if self.__root_hash is None: self.__root_hash = ( self.__hasher._hash_fold(self.__hashes) if self.__hashes else self.__hasher.hash_empty()) return self.__root_...
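The "only re-computed on change" note above is an invalidate-on-write cache. A minimal sketch of the pattern; the fold used here is a stand-in, not the tree's real `_hash_fold`:

```python
import hashlib

class CachedRootTree:
    """Sketch of the root-hash caching pattern: mutations clear the cache,
    reads recompute only when the cache is stale."""

    def __init__(self):
        self._hashes = []
        self._root = None  # None means "stale, recompute on next read"

    def append(self, leaf: bytes) -> None:
        self._hashes.append(hashlib.sha256(leaf).digest())
        self._root = None  # any mutation invalidates the cached root

    def root_hash(self) -> bytes:
        if self._root is None:  # only re-computed on change
            acc = hashlib.sha256(b"").digest()  # stand-in for hash_empty()
            for h in self._hashes:
                acc = hashlib.sha256(acc + h).digest()  # stand-in fold
            self._root = acc
        return self._root
```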
Append a new leaf onto the end of this tree and return the audit path
def append(self, new_leaf: bytes) -> List[bytes]: """Append a new leaf onto the end of this tree and return the audit path""" auditPath = list(reversed(self.__hashes)) self._push_subtree([new_leaf]) return auditPath
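The audit path returned by `append` is what lets a verifier recompute the root from a single leaf. A generic sketch of that check; the hash prefixes and path ordering here are illustrative (RFC 6962-style), not this class's exact layout:

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # 0x00 prefix distinguishes leaf hashes from interior-node hashes
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_audit_path(leaf: bytes, path, root: bytes) -> bool:
    """path: (sibling_hash, sibling_is_left) pairs, ordered leaf to root."""
    acc = leaf
    for sibling, sibling_is_left in path:
        acc = node_hash(sibling, acc) if sibling_is_left else node_hash(acc, sibling)
    return acc == root

# Two-leaf tree: root = node_hash(h0, h1)
h0, h1 = leaf_hash(b"a"), leaf_hash(b"b")
root = node_hash(h0, h1)
verify_audit_path(h1, [(h0, True)], root)   # True
```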
Extend this tree with new_leaves on the end. The algorithm works by using _push_subtree() as a primitive, calling it with the maximum number of allowed leaves until we can add the remaining leaves as a valid entire (non-full) subtree in one go.
def extend(self, new_leaves: List[bytes]): """Extend this tree with new_leaves on the end. The algorithm works by using _push_subtree() as a primitive, calling it with the maximum number of allowed leaves until we can add the remaining leaves as a valid entire (non-full) subtree in one ...
Returns a new tree equal to this tree extended with new_leaves.
def extended(self, new_leaves: List[bytes]): """Returns a new tree equal to this tree extended with new_leaves.""" new_tree = self.__copy__() new_tree.extend(new_leaves) return new_tree
Check that the tree has the same leaf count as expected and that the number of nodes is also as expected
def verify_consistency(self, expected_leaf_count) -> bool: """ Check that the tree has the same leaf count as expected and that the number of nodes is also as expected """ if expected_leaf_count != self.leafCount: raise ConsistencyVerificationFailed() if self.get_expe...
Return whether the given str represents a hex value or not :param val: the string to check :return: whether the given str represents a hex value
def isHex(val: str) -> bool: """ Return whether the given str represents a hex value or not :param val: the string to check :return: whether the given str represents a hex value """ if isinstance(val, bytes): # only decodes utf-8 string try: val = val.decode() ...
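A minimal re-implementation of that check, assuming "hex value" means any string that parses in base 16: bytes are first decoded as UTF-8 (the comment in the snippet notes only UTF-8 is handled), then a base-16 parse is attempted.

```python
def is_hex(val) -> bool:
    """Sketch: decode bytes as UTF-8, then try a base-16 parse."""
    if isinstance(val, bytes):
        try:
            val = val.decode()  # only UTF-8-decodable bytes can be hex text
        except UnicodeDecodeError:
            return False
    try:
        int(val, 16)
        return True
    except ValueError:
        return False

is_hex("deadBEEF")   # True
is_hex(b"ff")        # True
is_hex("0xgg")       # False
```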
Generate client and server CURVE certificate files
def generate_certificates(base_dir, *peer_names, pubKeyDir=None, secKeyDir=None, sigKeyDir=None, verkeyDir=None, clean=True): ''' Generate client and server CURVE certificate files''' pubKeyDir = pubKeyDir or 'public_keys' secKeyDir = secKeyDir or 'private...
Queries state for data on specified path :param path: path to data :param is_committed: queries the committed state root if True else the uncommitted root :param with_proof: creates proof if True :return: data
def lookup(self, path, is_committed=True, with_proof=False) -> (str, int): """ Queries state for data on specified path :param path: path to data :param is_committed: queries the committed state root if True else the uncommitted root :param with_proof: creates proof if True ...
Implement exponential moving average
def _accumulate(self, old_accum, next_val): """ Implement exponential moving average """ return old_accum * (1 - self.alpha) + next_val * self.alpha
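The recurrence above is the standard EMA update, new = old * (1 - alpha) + sample * alpha. A small self-contained sketch (the alpha and starting value are arbitrary):

```python
class Ema:
    """Exponential moving average; alpha in (0, 1] weights recent samples."""

    def __init__(self, alpha: float, start: float = 0.0):
        self.alpha = alpha
        self.value = start

    def update(self, sample: float) -> float:
        # History decays by a factor of (1 - alpha) at each step
        self.value = self.value * (1 - self.alpha) + sample * self.alpha
        return self.value

ema = Ema(alpha=0.5)
ema.update(10)   # 5.0
ema.update(10)   # 7.5
ema.update(10)   # 8.75
```

With a constant input the average converges geometrically toward that input, which is why larger alpha values track changes faster but smooth less.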
Calculates node position based on start and height :param start: The sequence number of the first leaf under this tree. :param height: Height of this node in the merkle tree :return: the node's position
def getNodePosition(cls, start, height=None) -> int: """ Calculates node position based on start and height :param start: The sequence number of the first leaf under this tree. :param height: Height of this node in the merkle tree :return: the node's position """ ...
Get the audit path of the leaf at the position specified by seqNo. :param seqNo: sequence number of the leaf to calculate the path for :param offset: the sequence number of the node from where the path should begin. :return: tuple of leaves and nodes
def getPath(cls, seqNo, offset=0): """ Get the audit path of the leaf at the position specified by seqNo. :param seqNo: sequence number of the leaf to calculate the path for :param offset: the sequence number of the node from where the path should begin. :return: tuple ...
Fetches nodeHash based on start leaf and height of the node in the tree. :return: the nodeHash
def readNodeByTree(self, start, height=None): """ Fetches nodeHash based on start leaf and height of the node in the tree. :return: the nodeHash """ pos = self.getNodePosition(start, height) return self.readNode(pos)
Returns True if the number of nodes is consistent with the number of leaves
def is_consistent(self) -> bool: """ Returns True if the number of nodes is consistent with the number of leaves """ from ledger.compact_merkle_tree import CompactMerkleTree return self.nodeCount == CompactMerkleTree.get_expected_node_count( self.leafCount)
Close current and start next chunk
def _startNextChunk(self) -> None: """ Close current and start next chunk """ if self.currentChunk is None: self._useLatestChunk() else: self._useChunk(self.currentChunkIndex + self.chunkSize)
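With fixed-size chunks, the index of the chunk holding any item follows from integer division. A sketch of that arithmetic, assuming zero-based item indices and chunks aligned at multiples of `chunk_size` (the real store's numbering may differ):

```python
def chunk_start(item_index: int, chunk_size: int) -> int:
    """First item index of the chunk containing item_index."""
    return (item_index // chunk_size) * chunk_size

def next_chunk_start(current_chunk_start: int, chunk_size: int) -> int:
    """Mirrors the currentChunkIndex + chunkSize step in _startNextChunk."""
    return current_chunk_start + chunk_size

chunk_start(999, 1000)        # 0
chunk_start(1000, 1000)       # 1000
next_chunk_start(1000, 1000)  # 2000
```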