| body (string, 26–98.2k chars) | body_hash (int64) | docstring (string, 1–16.8k chars) | path (string, 5–230 chars) | name (string, 1–96 chars) | repository_name (string, 7–89 chars) | lang (string, 1 class) | body_without_docstring (string, 20–98.2k chars) |
|---|---|---|---|---|---|---|---|
@property
def eigvals(self):
'Return the eigenvalues of the specified tensor product observable.\n\n This method uses pre-stored eigenvalues for standard observables where\n possible.\n\n Returns:\n array[float]: array containing the eigenvalues of the tensor product\n obs... | -838,111,445,676,742,700 | Return the eigenvalues of the specified tensor product observable.
This method uses pre-stored eigenvalues for standard observables where
possible.
Returns:
array[float]: array containing the eigenvalues of the tensor product
observable | pennylane/operation.py | eigvals | DanielPolatajko/pennylane | python | @property
def eigvals(self):
'Return the eigenvalues of the specified tensor product observable.\n\n This method uses pre-stored eigenvalues for standard observables where\n possible.\n\n Returns:\n array[float]: array containing the eigenvalues of the tensor product\n obs... |
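The `eigvals` docstring above mentions pre-stored eigenvalues for standard observables. As an illustrative sketch (not PennyLane's actual implementation), the eigenvalues of a tensor product observable are all pairwise products of the factors' eigenvalues, in Kronecker (row-major) order:

```python
from itertools import product

def tensor_eigvals(*factor_eigvals):
    """Eigenvalues of a tensor product observable: every product of one
    eigenvalue per factor, in Kronecker (row-major) order."""
    result = []
    for combo in product(*factor_eigvals):
        val = 1
        for v in combo:
            val *= v
        result.append(val)
    return result

# PauliZ has eigenvalues [1, -1]; PauliZ @ PauliZ therefore has [1, -1, -1, 1].
print(tensor_eigvals([1, -1], [1, -1]))  # -> [1, -1, -1, 1]
```

This mirrors why pre-stored eigenvalues suffice for standard observables: the tensor product's spectrum follows directly from the factors' spectra without diagonalizing the full matrix.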
def diagonalizing_gates(self):
'Return the gate set that diagonalizes a circuit according to the\n specified tensor observable.\n\n This method uses pre-stored eigenvalues for standard observables where\n possible and stores the corresponding eigenvectors from the eigendecomposition.\n\n ... | 6,156,173,566,321,034,000 | Return the gate set that diagonalizes a circuit according to the
specified tensor observable.
This method uses pre-stored eigenvalues for standard observables where
possible and stores the corresponding eigenvectors from the eigendecomposition.
Returns:
list: list containing the gates diagonalizing the tensor obs... | pennylane/operation.py | diagonalizing_gates | DanielPolatajko/pennylane | python | def diagonalizing_gates(self):
'Return the gate set that diagonalizes a circuit according to the\n specified tensor observable.\n\n This method uses pre-stored eigenvalues for standard observables where\n possible and stores the corresponding eigenvectors from the eigendecomposition.\n\n ... |
@property
def matrix(self):
'Matrix representation of the tensor operator\n in the computational basis.\n\n **Example:**\n\n Note that the returned matrix *only includes explicitly\n declared observables* making up the tensor product;\n that is, it only returns the matrix for the ... | -51,734,776,827,415,040 | Matrix representation of the tensor operator
in the computational basis.
**Example:**
Note that the returned matrix *only includes explicitly
declared observables* making up the tensor product;
that is, it only returns the matrix for the specified
subsystem it is defined for.
>>> O = qml.PauliZ(0) @ qml.PauliZ(2)
>>... | pennylane/operation.py | matrix | DanielPolatajko/pennylane | python | @property
def matrix(self):
'Matrix representation of the tensor operator\n in the computational basis.\n\n **Example:**\n\n Note that the returned matrix *only includes explicitly\n declared observables* making up the tensor product;\n that is, it only returns the matrix for the ... |
def prune(self):
"Returns a pruned tensor product of observables by removing :class:`~.Identity` instances from\n the observables building up the :class:`~.Tensor`.\n\n The ``return_type`` attribute is preserved while pruning.\n\n If the tensor product only contains one observable, then this ob... | -3,808,663,140,798,350,300 | Returns a pruned tensor product of observables by removing :class:`~.Identity` instances from
the observables building up the :class:`~.Tensor`.
The ``return_type`` attribute is preserved while pruning.
If the tensor product only contains one observable, then this observable instance is
returned.
Note that, as a res... | pennylane/operation.py | prune | DanielPolatajko/pennylane | python | def prune(self):
"Returns a pruned tensor product of observables by removing :class:`~.Identity` instances from\n the observables building up the :class:`~.Tensor`.\n\n The ``return_type`` attribute is preserved while pruning.\n\n If the tensor product only contains one observable, then this ob... |
def heisenberg_expand(self, U, wires):
'Expand the given local Heisenberg-picture array into a full-system one.\n\n Args:\n U (array[float]): array to expand (expected to be of the dimension ``1+2*self.num_wires``)\n wires (Wires): wires on the device the array ``U`` should be expanded\... | -4,894,675,324,531,311,000 | Expand the given local Heisenberg-picture array into a full-system one.
Args:
U (array[float]): array to expand (expected to be of the dimension ``1+2*self.num_wires``)
wires (Wires): wires on the device the array ``U`` should be expanded
to apply to
Raises:
ValueError: if the size of the input ma... | pennylane/operation.py | heisenberg_expand | DanielPolatajko/pennylane | python | def heisenberg_expand(self, U, wires):
'Expand the given local Heisenberg-picture array into a full-system one.\n\n Args:\n U (array[float]): array to expand (expected to be of the dimension ``1+2*self.num_wires``)\n wires (Wires): wires on the device the array ``U`` should be expanded\... |
@staticmethod
def _heisenberg_rep(p):
"Heisenberg picture representation of the operation.\n\n * For Gaussian CV gates, this method returns the matrix of the linear\n transformation carried out by the gate for the given parameter values.\n The method is not defined for non-Gaussian gates.\n... | 7,129,213,776,502,749,000 | Heisenberg picture representation of the operation.
* For Gaussian CV gates, this method returns the matrix of the linear
transformation carried out by the gate for the given parameter values.
The method is not defined for non-Gaussian gates.
**The existence of this method is equivalent to setting** ``grad_meth... | pennylane/operation.py | _heisenberg_rep | DanielPolatajko/pennylane | python | @staticmethod
def _heisenberg_rep(p):
"Heisenberg picture representation of the operation.\n\n * For Gaussian CV gates, this method returns the matrix of the linear\n transformation carried out by the gate for the given parameter values.\n The method is not defined for non-Gaussian gates.\n... |
@classproperty
def supports_heisenberg(self):
'Returns True iff the CV Operation has overridden the :meth:`~.CV._heisenberg_rep`\n static method, thereby indicating that it is Gaussian and does not block the use\n of the parameter-shift differentiation method if found between the differentiated gate\n... | -8,325,412,333,071,810,000 | Returns True iff the CV Operation has overridden the :meth:`~.CV._heisenberg_rep`
static method, thereby indicating that it is Gaussian and does not block the use
of the parameter-shift differentiation method if found between the differentiated gate
and an observable. | pennylane/operation.py | supports_heisenberg | DanielPolatajko/pennylane | python | @classproperty
def supports_heisenberg(self):
'Returns True iff the CV Operation has overridden the :meth:`~.CV._heisenberg_rep`\n static method, thereby indicating that it is Gaussian and does not block the use\n of the parameter-shift differentiation method if found between the differentiated gate\n... |
@classproperty
def supports_parameter_shift(self):
"Returns True iff the CV Operation supports the parameter-shift differentiation method.\n This means that it has ``grad_method='A'`` and\n has overridden the :meth:`~.CV._heisenberg_rep` static method.\n "
return ((self.grad_method == 'A') ... | 879,775,891,246,995,800 | Returns True iff the CV Operation supports the parameter-shift differentiation method.
This means that it has ``grad_method='A'`` and
has overridden the :meth:`~.CV._heisenberg_rep` static method. | pennylane/operation.py | supports_parameter_shift | DanielPolatajko/pennylane | python | @classproperty
def supports_parameter_shift(self):
"Returns True iff the CV Operation supports the parameter-shift differentiation method.\n This means that it has ``grad_method='A'`` and\n has overridden the :meth:`~.CV._heisenberg_rep` static method.\n "
return ((self.grad_method == 'A') ... |
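The `supports_parameter_shift` record above checks two things: `grad_method == 'A'` and an overridden `_heisenberg_rep`. A minimal re-creation of that "is this static method overridden?" check, using hypothetical class names rather than PennyLane's real hierarchy:

```python
class CV:
    grad_method = None

    @staticmethod
    def _heisenberg_rep(p):
        return None  # base class: no Heisenberg representation defined

def supports_heisenberg(cls):
    """True iff cls overrides CV._heisenberg_rep: the attribute resolved on
    the subclass differs from the one defined on the base class."""
    return cls._heisenberg_rep != CV._heisenberg_rep

class Rotation(CV):
    grad_method = 'A'

    @staticmethod
    def _heisenberg_rep(p):
        return [[1, 0], [0, 1]]  # placeholder linear transformation

def supports_parameter_shift(cls):
    return cls.grad_method == 'A' and supports_heisenberg(cls)

print(supports_parameter_shift(Rotation))  # -> True
print(supports_parameter_shift(CV))        # -> False
```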
def heisenberg_pd(self, idx):
'Partial derivative of the Heisenberg picture transform matrix.\n\n Computed using grad_recipe.\n\n Args:\n idx (int): index of the parameter with respect to which the\n partial derivative is computed.\n Returns:\n array[float]:... | -5,387,619,469,353,053,000 | Partial derivative of the Heisenberg picture transform matrix.
Computed using grad_recipe.
Args:
idx (int): index of the parameter with respect to which the
partial derivative is computed.
Returns:
array[float]: partial derivative | pennylane/operation.py | heisenberg_pd | DanielPolatajko/pennylane | python | def heisenberg_pd(self, idx):
'Partial derivative of the Heisenberg picture transform matrix.\n\n Computed using grad_recipe.\n\n Args:\n idx (int): index of the parameter with respect to which the\n partial derivative is computed.\n Returns:\n array[float]:... |
def heisenberg_tr(self, wires, inverse=False):
'Heisenberg picture representation of the linear transformation carried\n out by the gate at current parameter values.\n\n Given a unitary quantum gate :math:`U`, we may consider its linear\n transformation in the Heisenberg picture, :math:`U^\\dag... | -4,233,527,887,951,043,000 | Heisenberg picture representation of the linear transformation carried
out by the gate at current parameter values.
Given a unitary quantum gate :math:`U`, we may consider its linear
transformation in the Heisenberg picture, :math:`U^\dagger(\cdot) U`.
If the gate is Gaussian, this linear transformation preserves the... | pennylane/operation.py | heisenberg_tr | DanielPolatajko/pennylane | python | def heisenberg_tr(self, wires, inverse=False):
'Heisenberg picture representation of the linear transformation carried\n out by the gate at current parameter values.\n\n Given a unitary quantum gate :math:`U`, we may consider its linear\n transformation in the Heisenberg picture, :math:`U^\\dag... |
def heisenberg_obs(self, wires):
'Representation of the observable in the position/momentum operator basis.\n\n Returns the expansion :math:`q` of the observable, :math:`Q`, in the\n basis :math:`\\mathbf{r} = (\\I, \\x_0, \\p_0, \\x_1, \\p_1, \\ldots)`.\n\n * For first-order observables return... | -4,023,639,562,349,603,300 | Representation of the observable in the position/momentum operator basis.
Returns the expansion :math:`q` of the observable, :math:`Q`, in the
basis :math:`\mathbf{r} = (\I, \x_0, \p_0, \x_1, \p_1, \ldots)`.
* For first-order observables returns a real vector such
that :math:`Q = \sum_i q_i \mathbf{r}_i`.
* For se... | pennylane/operation.py | heisenberg_obs | DanielPolatajko/pennylane | python | def heisenberg_obs(self, wires):
'Representation of the observable in the position/momentum operator basis.\n\n Returns the expansion :math:`q` of the observable, :math:`Q`, in the\n basis :math:`\\mathbf{r} = (\\I, \\x_0, \\p_0, \\x_1, \\p_1, \\ldots)`.\n\n * For first-order observables return... |
def evaluate(p):
'Evaluate a single parameter.'
if isinstance(p, np.ndarray):
if (p.dtype == object):
temp = np.array([(x.val if isinstance(x, Variable) else x) for x in p.flat])
return temp.reshape(p.shape)
return p
if isinstance(p, list):
evaled_list = []
... | 5,925,395,568,560,989,000 | Evaluate a single parameter. | pennylane/operation.py | evaluate | DanielPolatajko/pennylane | python | def evaluate(p):
if isinstance(p, np.ndarray):
if (p.dtype == object):
temp = np.array([(x.val if isinstance(x, Variable) else x) for x in p.flat])
return temp.reshape(p.shape)
return p
if isinstance(p, list):
evaled_list = []
for arr in p:
... |
def loc(w):
'Returns the slice denoting the location of (x_w, p_w) in the basis.'
ind = ((2 * w) + 1)
return slice(ind, (ind + 2)) | 8,122,650,268,305,981,000 | Returns the slice denoting the location of (x_w, p_w) in the basis. | pennylane/operation.py | loc | DanielPolatajko/pennylane | python | def loc(w):
ind = ((2 * w) + 1)
return slice(ind, (ind + 2)) |
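The `loc` helper above is shown in full; a quick usage example makes the basis layout `(I, x_0, p_0, x_1, p_1, ...)` it assumes concrete:

```python
def loc(w):
    """Slice selecting (x_w, p_w) in the basis (I, x_0, p_0, x_1, p_1, ...)."""
    ind = (2 * w) + 1
    return slice(ind, ind + 2)

# Identity occupies index 0; each wire w then contributes two entries.
basis = ['I', 'x_0', 'p_0', 'x_1', 'p_1', 'x_2', 'p_2']
print(basis[loc(1)])  # -> ['x_1', 'p_1']
```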
def __init__(self, faces, vertexes=None):
'\n See :class:`MeshData <pyqtgraph.opengl.MeshData>` for initialization arguments.\n '
if isinstance(faces, MeshData):
self.data = faces
else:
self.data = MeshData()
self.data.setFaces(faces, vertexes)
GLGraphicsItem.__init... | 6,456,963,119,954,934,000 | See :class:`MeshData <pyqtgraph.opengl.MeshData>` for initialization arguments. | pyqtgraph/opengl/items/GLMeshItem.py | __init__ | robertsj/poropy | python | def __init__(self, faces, vertexes=None):
'\n \n '
if isinstance(faces, MeshData):
self.data = faces
else:
self.data = MeshData()
self.data.setFaces(faces, vertexes)
GLGraphicsItem.__init__(self) |
def Video_AutoInit():
"This is a function that's called from the c extension code\n just before the display module is initialized"
if (MacOS and (not MacOS.WMAvailable())):
if (not sdlmain_osx.WMEnable()):
raise ImportError('Can not access the window manager. Use py2app or execute wit... | -1,161,491,649,498,932,500 | This is a function that's called from the c extension code
just before the display module is initialized | venv/Lib/site-packages/pygame/macosx.py | Video_AutoInit | AdamaTraore75020/PYBomber | python | def Video_AutoInit():
"This is a function that's called from the c extension code\n just before the display module is initialized"
if (MacOS and (not MacOS.WMAvailable())):
if (not sdlmain_osx.WMEnable()):
raise ImportError('Can not access the window manager. Use py2app or execute wit... |
def define_parameters(self):
'\n Define the CLI arguments accepted by this plugin app.\n Use self.add_argument to specify a new app argument.\n '
self.add_argument('--executable', dest='executable', type=str, optional=True, help='the conversion program to use', default='/usr/bin/mri_convert... | -7,354,833,314,273,596,000 | Define the CLI arguments accepted by this plugin app.
Use self.add_argument to specify a new app argument. | mri_convert_ppc64/mri_convert_ppc64.py | define_parameters | quinnyyy/pl-mri_convert_ppc64 | python | def define_parameters(self):
'\n Define the CLI arguments accepted by this plugin app.\n Use self.add_argument to specify a new app argument.\n '
self.add_argument('--executable', dest='executable', type=str, optional=True, help='the conversion program to use', default='/usr/bin/mri_convert... |
def run(self, options):
'\n Define the code to be run by this plugin app.\n '
if (not len(options.inputFile)):
print('ERROR: No input file has been specified!')
print('You must specify an input file relative to the input directory.')
sys.exit(1)
if (not len(options.outp... | 5,497,002,344,144,382,000 | Define the code to be run by this plugin app. | mri_convert_ppc64/mri_convert_ppc64.py | run | quinnyyy/pl-mri_convert_ppc64 | python | def run(self, options):
'\n \n '
if (not len(options.inputFile)):
print('ERROR: No input file has been specified!')
print('You must specify an input file relative to the input directory.')
sys.exit(1)
if (not len(options.outputFile)):
print('ERROR: No output fil... |
def show_man_page(self):
"\n Print the app's man page.\n "
print(Gstr_title)
print(Gstr_synopsis) | -1,878,531,900,290,933,500 | Print the app's man page. | mri_convert_ppc64/mri_convert_ppc64.py | show_man_page | quinnyyy/pl-mri_convert_ppc64 | python | def show_man_page(self):
"\n \n "
print(Gstr_title)
print(Gstr_synopsis) |
def prepare_data(seqs_x, seqs_y=None, cuda=False, batch_first=True):
"\n Args:\n eval ('bool'): indicator for eval/infer.\n\n Returns:\n\n "
def _np_pad_batch_2D(samples, pad, batch_first=True, cuda=True):
batch_size = len(samples)
sizes = [len(s) for s in samples]
max_s... | 6,239,063,456,486,674,000 | Args:
eval ('bool'): indicator for eval/infer.
Returns: | src/tasks/lm.py | prepare_data | skysky77/MGNMT | python | def prepare_data(seqs_x, seqs_y=None, cuda=False, batch_first=True):
"\n Args:\n eval ('bool'): indicator for eval/infer.\n\n Returns:\n\n "
def _np_pad_batch_2D(samples, pad, batch_first=True, cuda=True):
batch_size = len(samples)
sizes = [len(s) for s in samples]
max_s... |
def compute_forward(model, critic, seqs_x, eval=False, normalization=1.0, norm_by_words=False):
'\n :type model: nn.Module\n\n :type critic: NMTCriterion\n '
x_inp = seqs_x[:, :(- 1)].contiguous()
x_label = seqs_x[:, 1:].contiguous()
words_norm = x_label.ne(PAD).float().sum(1)
if (not eval)... | -5,583,026,677,254,529,000 | :type model: nn.Module
:type critic: NMTCriterion | src/tasks/lm.py | compute_forward | skysky77/MGNMT | python | def compute_forward(model, critic, seqs_x, eval=False, normalization=1.0, norm_by_words=False):
'\n :type model: nn.Module\n\n :type critic: NMTCriterion\n '
x_inp = seqs_x[:, :(- 1)].contiguous()
x_label = seqs_x[:, 1:].contiguous()
words_norm = x_label.ne(PAD).float().sum(1)
if (not eval)... |
def loss_validation(model, critic, valid_iterator):
'\n :type model: Transformer\n\n :type critic: NMTCriterion\n\n :type valid_iterator: DataIterator\n '
n_sents = 0
n_tokens = 0.0
sum_loss = 0.0
valid_iter = valid_iterator.build_generator()
for batch in valid_iter:
(_, seqs... | -1,263,063,538,261,875,700 | :type model: Transformer
:type critic: NMTCriterion
:type valid_iterator: DataIterator | src/tasks/lm.py | loss_validation | skysky77/MGNMT | python | def loss_validation(model, critic, valid_iterator):
'\n :type model: Transformer\n\n :type critic: NMTCriterion\n\n :type valid_iterator: DataIterator\n '
n_sents = 0
n_tokens = 0.0
sum_loss = 0.0
valid_iter = valid_iterator.build_generator()
for batch in valid_iter:
(_, seqs... |
def load_pretrained_model(nmt_model, pretrain_path, device, exclude_prefix=None):
"\n Args:\n nmt_model: model.\n pretrain_path ('str'): path to pretrained model.\n map_dict ('dict'): mapping specific parameter names to those names\n in current model.\n exclude_prefix ('dic... | 2,162,646,821,228,752,000 | Args:
nmt_model: model.
pretrain_path ('str'): path to pretrained model.
map_dict ('dict'): mapping specific parameter names to those names
in current model.
exclude_prefix ('dict'): excluding parameters with specific names
for pretraining.
Raises:
ValueError: Size not match, parame... | src/tasks/lm.py | load_pretrained_model | skysky77/MGNMT | python | def load_pretrained_model(nmt_model, pretrain_path, device, exclude_prefix=None):
"\n Args:\n nmt_model: model.\n pretrain_path ('str'): path to pretrained model.\n map_dict ('dict'): mapping specific parameter names to those names\n in current model.\n exclude_prefix ('dic... |
def train(FLAGS):
'\n FLAGS:\n saveto: str\n reload: store_true\n config_path: str\n pretrain_path: str, default=""\n model_name: str\n log_path: str\n '
write_log_to_file(os.path.join(FLAGS.log_path, ('%s.log' % time.strftime('%Y%m%d-%H%M%S'))))
GlobalNames.U... | 350,397,659,040,292,100 | FLAGS:
saveto: str
reload: store_true
config_path: str
pretrain_path: str, default=""
model_name: str
log_path: str | src/tasks/lm.py | train | skysky77/MGNMT | python | def train(FLAGS):
'\n FLAGS:\n saveto: str\n reload: store_true\n config_path: str\n pretrain_path: str, default=\n model_name: str\n log_path: str\n '
write_log_to_file(os.path.join(FLAGS.log_path, ('%s.log' % time.strftime('%Y%m%d-%H%M%S'))))
GlobalNames.USE... |
def __init__(self, *, section_proxy: Callable[([], List[SectionMsg])], lane_proxy: Callable[([int], LaneMsg)], obstacle_proxy: Callable[([int], List[LabeledPolygonMsg])], surface_marking_proxy: Callable[([int], List[LabeledPolygonMsg])], parking_proxy: Callable[([int], Any)], intersection_proxy: Callable[([int], Any)],... | 3,897,407,619,816,136,000 | Initialize zone speaker.
Args:
section_proxy: Returns all sections when called.
lane_proxy: Returns a LaneMsg for each section.
obstacle_proxy: function which returns obstacles in a section.
surface_marking_proxy: function which returns surface_markings in a section.
parking_proxy: function which r... | simulation/src/simulation_evaluation/src/speaker/speakers/zone.py | __init__ | KITcar-Team/kitcar-gazebo-simulation | python | def __init__(self, *, section_proxy: Callable[([], List[SectionMsg])], lane_proxy: Callable[([int], LaneMsg)], obstacle_proxy: Callable[([int], List[LabeledPolygonMsg])], surface_marking_proxy: Callable[([int], List[LabeledPolygonMsg])], parking_proxy: Callable[([int], Any)], intersection_proxy: Callable[([int], Any)],... |
@functools.cached_property
def overtaking_zones(self) -> List[Tuple[(float, float)]]:
'Intervals in which the car is allowed to overtake along the\n :py:attr:`Speaker.middle_line`.'
obstacles = list((lp.frame for sec in self.sections if (sec.type != road_section_type.PARKING_AREA) for lp in self.get_obst... | -8,128,807,900,204,974,000 | Intervals in which the car is allowed to overtake along the
:py:attr:`Speaker.middle_line`. | simulation/src/simulation_evaluation/src/speaker/speakers/zone.py | overtaking_zones | KITcar-Team/kitcar-gazebo-simulation | python | @functools.cached_property
def overtaking_zones(self) -> List[Tuple[(float, float)]]:
'Intervals in which the car is allowed to overtake along the\n :py:attr:`Speaker.middle_line`.'
obstacles = list((lp.frame for sec in self.sections if (sec.type != road_section_type.PARKING_AREA) for lp in self.get_obst... |
def _intersection_yield_zones(self, rule: int) -> List[Tuple[(float, float)]]:
'Intervals in which the car is supposed to halt/stop (in front of intersections).\n\n Args:\n rule: only intersections with this rule are considered\n '
intervals = []
for sec in self.sections:
if... | -4,008,736,530,455,372,000 | Intervals in which the car is supposed to halt/stop (in front of intersections).
Args:
rule: only intersections with this rule are considered | simulation/src/simulation_evaluation/src/speaker/speakers/zone.py | _intersection_yield_zones | KITcar-Team/kitcar-gazebo-simulation | python | def _intersection_yield_zones(self, rule: int) -> List[Tuple[(float, float)]]:
'Intervals in which the car is supposed to halt/stop (in front of intersections).\n\n Args:\n rule: only intersections with this rule are considered\n '
intervals = []
for sec in self.sections:
if... |
@functools.cached_property
def stop_zones(self) -> List[Tuple[(float, float)]]:
'Intervals in which the car is supposed to stop (in front of intersections).'
return self._intersection_yield_zones(groundtruth_srv.IntersectionSrvResponse.STOP) | -5,583,183,039,907,304,000 | Intervals in which the car is supposed to stop (in front of intersections). | simulation/src/simulation_evaluation/src/speaker/speakers/zone.py | stop_zones | KITcar-Team/kitcar-gazebo-simulation | python | @functools.cached_property
def stop_zones(self) -> List[Tuple[(float, float)]]:
return self._intersection_yield_zones(groundtruth_srv.IntersectionSrvResponse.STOP) |
@functools.cached_property
def halt_zones(self) -> List[Tuple[(float, float)]]:
'Intervals in which the car is supposed to halt (in front of intersections).'
return self._intersection_yield_zones(groundtruth_srv.IntersectionSrvResponse.YIELD) | -2,817,556,553,382,975,500 | Intervals in which the car is supposed to halt (in front of intersections). | simulation/src/simulation_evaluation/src/speaker/speakers/zone.py | halt_zones | KITcar-Team/kitcar-gazebo-simulation | python | @functools.cached_property
def halt_zones(self) -> List[Tuple[(float, float)]]:
return self._intersection_yield_zones(groundtruth_srv.IntersectionSrvResponse.YIELD) |
def _inside_any_interval(self, intervals: List[Tuple[(float, float)]]) -> bool:
'Determine if the car is currently in any of the given intervals.'
beginnings = list((interval[0] for interval in intervals))
endings = list((interval[1] for interval in intervals))
b_idx = (bisect.bisect_left(beginnings, se... | 2,748,530,073,120,588,000 | Determine if the car is currently in any of the given intervals. | simulation/src/simulation_evaluation/src/speaker/speakers/zone.py | _inside_any_interval | KITcar-Team/kitcar-gazebo-simulation | python | def _inside_any_interval(self, intervals: List[Tuple[(float, float)]]) -> bool:
beginnings = list((interval[0] for interval in intervals))
endings = list((interval[1] for interval in intervals))
b_idx = (bisect.bisect_left(beginnings, self.arc_length) - 1)
e_idx = (bisect.bisect_left(endings, self.... |
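The `_inside_any_interval` record above uses two `bisect_left` lookups over sorted interval start and end points. A self-contained sketch of that logic, assuming sorted, non-overlapping `(start, end)` intervals (a free function here rather than the original method):

```python
import bisect

def inside_any_interval(intervals, position):
    """True iff position lies inside one of the sorted, non-overlapping
    (start, end) intervals."""
    beginnings = [start for start, _ in intervals]
    endings = [end for _, end in intervals]
    # Index of the last interval starting before `position`, and of the
    # first interval ending at or after it.
    b_idx = bisect.bisect_left(beginnings, position) - 1
    e_idx = bisect.bisect_left(endings, position)
    # We are inside an interval exactly when both indices name the same one.
    return b_idx == e_idx

zones = [(0.0, 2.0), (5.0, 7.5)]
print(inside_any_interval(zones, 1.0))  # -> True
print(inside_any_interval(zones, 3.0))  # -> False
```

Two binary searches replace a linear scan over the zones, which matters when the check runs once per simulation tick.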
def speak(self) -> List[SpeakerMsg]:
'List of speaker msgs.\n\n Contents:\n * beginning of road -> :ref:`Speaker <speaker_msg>`.START_ZONE,\n end of road -> :ref:`Speaker <speaker_msg>`.END_ZONE,\n and in between -> :ref:`Speaker <speaker_msg>`.DRIVING_ZONE,\n ... | -1,370,087,488,015,486,500 | List of speaker msgs.
Contents:
* beginning of road -> :ref:`Speaker <speaker_msg>`.START_ZONE,
end of road -> :ref:`Speaker <speaker_msg>`.END_ZONE,
and in between -> :ref:`Speaker <speaker_msg>`.DRIVING_ZONE,
* close to an obstacle -> :ref:`Speaker <speaker_msg>`.OVERTAKING_ZONE
* before yiel... | simulation/src/simulation_evaluation/src/speaker/speakers/zone.py | speak | KITcar-Team/kitcar-gazebo-simulation | python | def speak(self) -> List[SpeakerMsg]:
'List of speaker msgs.\n\n Contents:\n * beginning of road -> :ref:`Speaker <speaker_msg>`.START_ZONE,\n end of road -> :ref:`Speaker <speaker_msg>`.END_ZONE,\n and in between -> :ref:`Speaker <speaker_msg>`.DRIVING_ZONE,\n ... |
def processMedlineFolder(medlineFolder, outFolder):
'Basic function that iterates through abstracts in a medline file, do a basic word count and save to a file\n\n\tArgs:\n\t\tmedlineFolder (folder): Medline XML folder containing abstracts\n\t\toutFolder (folder): Folder to save output data to\n\tReturns:\n\t\tNoth... | -3,890,652,773,604,347,000 | Basic function that iterates through abstracts in a medline file, do a basic word count and save to a file
Args:
medlineFolder (folder): Medline XML folder containing abstracts
outFolder (folder): Folder to save output data to
Returns:
Nothing | server/tools/CountWordsError/0.1/CountWordsError.py | processMedlineFolder | NCBI-Hackathons/Autoupdating_PubMed_Corpus_for_NLP | python | def processMedlineFolder(medlineFolder, outFolder):
'Basic function that iterates through abstracts in a medline file, do a basic word count and save to a file\n\n\tArgs:\n\t\tmedlineFolder (folder): Medline XML folder containing abstracts\n\t\toutFolder (folder): Folder to save output data to\n\tReturns:\n\t\tNoth... |
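The `processMedlineFolder` docstring describes iterating over files, counting words, and saving the result. A generic sketch of that shape over plain-text files (the original parses Medline XML abstracts; the `.txt` glob and tab-separated output here are assumptions for illustration):

```python
import collections
import pathlib
import tempfile

def count_words(in_folder, out_file):
    """Count word occurrences across all .txt files in in_folder and write
    'word<TAB>count' lines to out_file, most frequent first."""
    counts = collections.Counter()
    for path in pathlib.Path(in_folder).glob('*.txt'):
        counts.update(path.read_text().lower().split())
    with open(out_file, 'w') as f:
        for word, n in counts.most_common():
            f.write(f'{word}\t{n}\n')
    return counts

with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, 'a.txt').write_text('gene protein gene')
    counts = count_words(d, pathlib.Path(d, 'counts.tsv'))
    print(counts['gene'])  # -> 2
```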
def mock_match(A, B):
'\n Checked for params on a mocked function is as expected\n\n It is necesary as sometimes we get a tuple and at the mock data we have\n lists.\n\n Examples:\n ```\n >>> mock_match("A", "A")\n True\n >>> mock_match("A", "B")\n False\n >>> mock_match(["A", "B", "C"... | 3,523,939,949,677,141,000 | Checked for params on a mocked function is as expected
It is necesary as sometimes we get a tuple and at the mock data we have
lists.
Examples:
```
>>> mock_match("A", "A")
True
>>> mock_match("A", "B")
False
>>> mock_match(["A", "B", "C"], ["A", "B", "C"])
True
>>> mock_match(["A", "B", "C"], "*")
True
``` | smock.py | mock_match | serverboards/serverboards-plugin-google-drive | python | def mock_match(A, B):
'\n Checked for params on a mocked function is as expected\n\n It is necesary as sometimes we get a tuple and at the mock data we have\n lists.\n\n Examples:\n ```\n >>> mock_match("A", "A")\n True\n >>> mock_match("A", "B")\n False\n >>> mock_match(["A", "B", "C"... |
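The `mock_match` docstring above includes doctests but the body is truncated in this dump. A sketch that satisfies those doctests (wildcard `"*"` matches anything; tuples and lists compare element-wise), which may differ from the library's exact code:

```python
def mock_match(A, B):
    """True iff call argument A matches expected B; '*' matches anything,
    and sequences (tuple or list) are compared element by element."""
    if B == "*":
        return True
    if isinstance(A, (list, tuple)) and isinstance(B, (list, tuple)):
        return len(A) == len(B) and all(mock_match(a, b) for a, b in zip(A, B))
    return A == B

print(mock_match("A", "A"))                          # -> True
print(mock_match(("A", "B", "C"), ["A", "B", "C"]))  # -> True
print(mock_match(["A", "B"], "*"))                   # -> True
```

Treating tuples and lists interchangeably is the point the docstring calls out: real calls arrive as tuples while the mock data is typically written as YAML lists.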
def mock_res(name, data, args=[], kwargs={}):
'\n Given a name, data and call parameters, returns the mocked response\n\n If there is no matching response, raises an exception that can be used to\n prepare the mock data.\n\n This can be used for situations where you mock some function like data;\n fo... | -572,512,122,099,329,150 | Given a name, data and call parameters, returns the mocked response
If there is no matching response, raises an exception that can be used to
prepare the mock data.
This can be used for situations where you mock some function like data;
for example at [Serverboards](https://serverboards.io), we use it to
mock RPC cal... | smock.py | mock_res | serverboards/serverboards-plugin-google-drive | python | def mock_res(name, data, args=[], kwargs={}):
'\n Given a name, data and call parameters, returns the mocked response\n\n If there is no matching response, raises an exception that can be used to\n prepare the mock data.\n\n This can be used for situations where you mock some function like data;\n fo... |
def mock_method(name, data):
'\n Returns a function that mocks an original function.\n '
def mockf(*args, **kwargs):
return mock_res(name, data, args, kwargs)
return mockf | -4,421,090,414,963,443,700 | Returns a function that mocks an original function. | smock.py | mock_method | serverboards/serverboards-plugin-google-drive | python | def mock_method(name, data):
'\n \n '
def mockf(*args, **kwargs):
return mock_res(name, data, args, kwargs)
return mockf |
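The `mock_method` record above returns a closure that resolves canned responses by call arguments. A simplified, self-contained sketch of that pattern; the dict layout (`args`/`response` entries, `"*"` fallback) is assumed for illustration, not the library's actual schema:

```python
def mock_method(name, data):
    """Return a function that looks up a canned response for `name` in `data`,
    matched by positional args, with "*" as a catch-all fallback."""
    def mockf(*args, **kwargs):  # kwargs ignored in this simplified sketch
        for entry in data.get(name, []):
            if entry.get("args", []) == list(args) or entry.get("args") == "*":
                return entry["response"]
        raise KeyError(f"no mock data for {name}{args}")
    return mockf

data = {"requests.get": [
    {"args": ["https://mocked.url"], "response": {"status_code": 200}},
    {"args": "*", "response": {"status_code": 404}},
]}
get = mock_method("requests.get", data)
print(get("https://mocked.url")["status_code"])  # -> 200
print(get("https://other.url")["status_code"])   # -> 404
```

The async variant in the next record follows the same lookup, just wrapped in an `async def` so it can stand in for coroutine APIs.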
def mock_method_async(name, data):
'\n Returns an async function that mocks an original async function\n '
async def mockf(*args, **kwargs):
return mock_res(name, data, args, kwargs)
return mockf | -1,206,476,654,475,070,200 | Returns an async function that mocks an original async function | smock.py | mock_method_async | serverboards/serverboards-plugin-google-drive | python | def mock_method_async(name, data):
'\n \n '
async def mockf(*args, **kwargs):
return mock_res(name, data, args, kwargs)
return mockf |
def mock_res(self, name, args=[], kwargs={}):
'\n Calls `mock_res`\n\n Mock by args:\n ```\n >>> smock = SMock("tests/data.yaml")\n >>> res = smock.mock_res("requests.get", ["https://mocked.url"])\n >>> res.status_code\n 200\n\n ```\n\n Using "*" as arg... | -3,832,548,049,595,161,600 | Calls `mock_res`
Mock by args:
```
>>> smock = SMock("tests/data.yaml")
>>> res = smock.mock_res("requests.get", ["https://mocked.url"])
>>> res.status_code
200
```
Using "*" as args, as fallback. As there is no kwargs, use default:
```
>>> res = smock.mock_res("requests.get", ["https://error.mocked.url"])
>>> res.s... | smock.py | mock_res | serverboards/serverboards-plugin-google-drive | python | def mock_res(self, name, args=[], kwargs={}):
'\n Calls `mock_res`\n\n Mock by args:\n ```\n >>> smock = SMock("tests/data.yaml")\n >>> res = smock.mock_res("requests.get", ["https://mocked.url"])\n >>> res.status_code\n 200\n\n ```\n\n Using "*" as arg... |
def mock_method(self, name):
'\n Calls `mock_method`\n '
return mock_method(name, self._data) | -1,218,371,837,164,396,000 | Calls `mock_method` | smock.py | mock_method | serverboards/serverboards-plugin-google-drive | python | def mock_method(self, name):
'\n \n '
return mock_method(name, self._data) |
async def mock_method_async(self, name):
'\n Calls `mock_method_async`\n '
return (await mock_method_async(name, self._data)) | -6,066,488,485,754,488,000 | Calls `mock_method_async` | smock.py | mock_method_async | serverboards/serverboards-plugin-google-drive | python | async def mock_method_async(self, name):
'\n \n '
return (await mock_method_async(name, self._data)) |
def canon(smiles):
'Canonicalize SMILES for safety. If canonicalization ever changes this should remain consistent'
return Chem.MolToSmiles(Chem.MolFromSmiles(smiles)) | 7,068,212,702,851,012,000 | Canonicalize SMILES for safety. If canonicalization ever changes this should remain consistent | tests/core/test_fragment.py | canon | trumanw/ScaffoldGraph | python | def canon(smiles):
return Chem.MolToSmiles(Chem.MolFromSmiles(smiles)) |
def __init__(self, **kwargs):
'Initialise the behaviour.'
services_interval = kwargs.pop('services_interval', DEFAULT_SERVICES_INTERVAL)
super().__init__(tick_interval=services_interval, **kwargs) | -4,745,902,468,775,918,000 | Initialise the behaviour. | packages/fetchai/skills/generic_seller/behaviours.py | __init__ | ejfitzgerald/agents-aea | python | def __init__(self, **kwargs):
services_interval = kwargs.pop('services_interval', DEFAULT_SERVICES_INTERVAL)
super().__init__(tick_interval=services_interval, **kwargs) |
def setup(self) -> None:
'\n Implement the setup.\n\n :return: None\n '
strategy = cast(GenericStrategy, self.context.strategy)
if strategy.is_ledger_tx:
ledger_api_dialogues = cast(LedgerApiDialogues, self.context.ledger_api_dialogues)
ledger_api_msg = LedgerApiMessage(... | 3,916,772,024,572,210,000 | Implement the setup.
:return: None | packages/fetchai/skills/generic_seller/behaviours.py | setup | ejfitzgerald/agents-aea | python | def setup(self) -> None:
'\n Implement the setup.\n\n :return: None\n '
strategy = cast(GenericStrategy, self.context.strategy)
if strategy.is_ledger_tx:
ledger_api_dialogues = cast(LedgerApiDialogues, self.context.ledger_api_dialogues)
ledger_api_msg = LedgerApiMessage(... |
def act(self) -> None:
'\n Implement the act.\n\n :return: None\n ' | 2,904,657,344,585,305,000 | Implement the act.
:return: None | packages/fetchai/skills/generic_seller/behaviours.py | act | ejfitzgerald/agents-aea | python | def act(self) -> None:
'\n Implement the act.\n\n :return: None\n ' |
def teardown(self) -> None:
'\n Implement the task teardown.\n\n :return: None\n '
self._unregister_service()
self._unregister_agent() | -6,772,910,277,003,518,000 | Implement the task teardown.
:return: None | packages/fetchai/skills/generic_seller/behaviours.py | teardown | ejfitzgerald/agents-aea | python | def teardown(self) -> None:
'\n Implement the task teardown.\n\n :return: None\n '
self._unregister_service()
self._unregister_agent() |
def _register_agent(self) -> None:
"\n Register the agent's location.\n\n :return: None\n "
strategy = cast(GenericStrategy, self.context.strategy)
description = strategy.get_location_description()
oef_search_dialogues = cast(OefSearchDialogues, self.context.oef_search_dialogues)
... | -4,562,887,594,799,922,000 | Register the agent's location.
:return: None | packages/fetchai/skills/generic_seller/behaviours.py | _register_agent | ejfitzgerald/agents-aea | python | def _register_agent(self) -> None:
"\n Register the agent's location.\n\n :return: None\n "
strategy = cast(GenericStrategy, self.context.strategy)
description = strategy.get_location_description()
oef_search_dialogues = cast(OefSearchDialogues, self.context.oef_search_dialogues)
... |
def _register_service(self) -> None:
"\n Register the agent's service.\n\n :return: None\n "
strategy = cast(GenericStrategy, self.context.strategy)
description = strategy.get_register_service_description()
oef_search_dialogues = cast(OefSearchDialogues, self.context.oef_search_dial... | 7,120,971,471,438,055,000 | Register the agent's service.
:return: None | packages/fetchai/skills/generic_seller/behaviours.py | _register_service | ejfitzgerald/agents-aea | python | def _register_service(self) -> None:
"\n Register the agent's service.\n\n :return: None\n "
strategy = cast(GenericStrategy, self.context.strategy)
description = strategy.get_register_service_description()
oef_search_dialogues = cast(OefSearchDialogues, self.context.oef_search_dial... |
def _unregister_service(self) -> None:
'\n Unregister service from the SOEF.\n\n :return: None\n '
strategy = cast(GenericStrategy, self.context.strategy)
description = strategy.get_unregister_service_description()
oef_search_dialogues = cast(OefSearchDialogues, self.context.oef_sea... | 6,959,750,327,624,035,000 | Unregister service from the SOEF.
:return: None | packages/fetchai/skills/generic_seller/behaviours.py | _unregister_service | ejfitzgerald/agents-aea | python | def _unregister_service(self) -> None:
'\n Unregister service from the SOEF.\n\n :return: None\n '
strategy = cast(GenericStrategy, self.context.strategy)
description = strategy.get_unregister_service_description()
oef_search_dialogues = cast(OefSearchDialogues, self.context.oef_sea... |
def _unregister_agent(self) -> None:
'\n Unregister agent from the SOEF.\n\n :return: None\n '
strategy = cast(GenericStrategy, self.context.strategy)
description = strategy.get_location_description()
oef_search_dialogues = cast(OefSearchDialogues, self.context.oef_search_dialogues)... | 6,479,599,807,067,536,000 | Unregister agent from the SOEF.
:return: None | packages/fetchai/skills/generic_seller/behaviours.py | _unregister_agent | ejfitzgerald/agents-aea | python | def _unregister_agent(self) -> None:
'\n Unregister agent from the SOEF.\n\n :return: None\n '
strategy = cast(GenericStrategy, self.context.strategy)
description = strategy.get_location_description()
oef_search_dialogues = cast(OefSearchDialogues, self.context.oef_search_dialogues)... |
def make_data(T=20):
'\n Sample data from a HMM model and compute associated CRF potentials.\n '
random_state = np.random.RandomState(0)
d = 0.2
e = 0.1
transition_matrix = np.array([[(1 - (2 * d)), d, d], [(1 - e), e, 0], [(1 - e), 0, e]])
means = np.array([[0, 0], [10, 0], [5, (- 5)]])
... | 2,518,043,038,236,105,000 | Sample data from a HMM model and compute associated CRF potentials. | deepblast/utils.py | make_data | VGligorijevic/deepblast | python | def make_data(T=20):
'\n \n '
random_state = np.random.RandomState(0)
d = 0.2
e = 0.1
transition_matrix = np.array([[(1 - (2 * d)), d, d], [(1 - e), e, 0], [(1 - e), 0, e]])
means = np.array([[0, 0], [10, 0], [5, (- 5)]])
covs = np.array([[[1, 0], [0, 1]], [[0.2, 0], [0, 0.3]], [[2, 0]... |
def get_data_path(fn, subfolder='data'):
"Return path to filename ``fn`` in the data folder.\n During testing it is often necessary to load data files. This\n function returns the full path to files in the ``data`` subfolder\n by default.\n Parameters\n ----------\n fn : str\n File name.\n ... | -3,221,232,984,871,560,700 | Return path to filename ``fn`` in the data folder.
During testing it is often necessary to load data files. This
function returns the full path to files in the ``data`` subfolder
by default.
Parameters
----------
fn : str
File name.
subfolder : str, defaults to ``data``
Name of the subfolder that contains the d... | deepblast/utils.py | get_data_path | VGligorijevic/deepblast | python | def get_data_path(fn, subfolder='data'):
"Return path to filename ``fn`` in the data folder.\n During testing it is often necessary to load data files. This\n function returns the full path to files in the ``data`` subfolder\n by default.\n Parameters\n ----------\n fn : str\n File name.\n ... |
def _get_best_axes(first_pos, axes):
"\n Determine the best pair of inertial axes so that we don't get large-scale breakdowns from the choice of embedding\n\n :param first_pos:\n :type first_pos:\n :param axes:\n :type axes:\n :return:\n :rtype:\n "
if (axes.ndim > 2):
axes = axe... | 8,478,339,699,627,261,000 | Determine the best pair of inertial axes so that we don't get large-scale breakdowns from the choice of embedding
:param first_pos:
:type first_pos:
:param axes:
:type axes:
:return:
:rtype: | Psience/Molecools/CoordinateSystems.py | _get_best_axes | McCoyGroup/Coordinerds | python | def _get_best_axes(first_pos, axes):
"\n Determine the best pair of inertial axes so that we don't get large-scale breakdowns from the choice of embedding\n\n :param first_pos:\n :type first_pos:\n :param axes:\n :type axes:\n :return:\n :rtype:\n "
if (axes.ndim > 2):
axes = axe... |
def __init__(self, molecule, converter_options=None, **opts):
'\n\n :param molecule:\n :type molecule: AbstractMolecule\n :param converter_options:\n :type converter_options:\n :param opts:\n :type opts:\n '
self.molecule = molecule
if (converter_options is N... | 5,347,917,261,416,498,000 | :param molecule:
:type molecule: AbstractMolecule
:param converter_options:
:type converter_options:
:param opts:
:type opts: | Psience/Molecools/CoordinateSystems.py | __init__ | McCoyGroup/Coordinerds | python | def __init__(self, molecule, converter_options=None, **opts):
'\n\n :param molecule:\n :type molecule: AbstractMolecule\n :param converter_options:\n :type converter_options:\n :param opts:\n :type opts:\n '
self.molecule = molecule
if (converter_options is N... |
def __init__(self, molecule, converter_options=None, **opts):
'\n\n :param molecule:\n :type molecule: AbstractMolecule\n :param converter_options:\n :type converter_options:\n :param opts:\n :type opts:\n '
self.molecule = molecule
nats = len(self.molecule.a... | -4,174,696,519,157,836,000 | :param molecule:
:type molecule: AbstractMolecule
:param converter_options:
:type converter_options:
:param opts:
:type opts: | Psience/Molecools/CoordinateSystems.py | __init__ | McCoyGroup/Coordinerds | python | def __init__(self, molecule, converter_options=None, **opts):
'\n\n :param molecule:\n :type molecule: AbstractMolecule\n :param converter_options:\n :type converter_options:\n :param opts:\n :type opts:\n '
self.molecule = molecule
nats = len(self.molecule.a... |
def set_embedding(self):
'\n Sets up the embedding options...\n :return:\n :rtype:\n '
molecule = self.molecule
com = molecule.center_of_mass
axes = molecule.inertial_axes
converter_options = self.converter_options
if ('ordering' in converter_options):
orderin... | 2,874,453,837,014,049,000 | Sets up the embedding options...
:return:
:rtype: | Psience/Molecools/CoordinateSystems.py | set_embedding | McCoyGroup/Coordinerds | python | def set_embedding(self):
'\n Sets up the embedding options...\n :return:\n :rtype:\n '
molecule = self.molecule
com = molecule.center_of_mass
axes = molecule.inertial_axes
converter_options = self.converter_options
if ('ordering' in converter_options):
orderin... |
def convert(self, coords, molecule=None, origins=None, axes=None, ordering=None, **kwargs):
'\n Converts from Cartesian to ZMatrix coords, preserving the embedding\n :param coords:\n :type coords: CoordinateSet\n :param molecule:\n :type molecule:\n :param origins:\n ... | -603,779,603,356,333,300 | Converts from Cartesian to ZMatrix coords, preserving the embedding
:param coords:
:type coords: CoordinateSet
:param molecule:
:type molecule:
:param origins:
:type origins:
:param axes:
:type axes:
:param ordering:
:type ordering:
:param kwargs:
:type kwargs:
:return:
:rtype: | Psience/Molecools/CoordinateSystems.py | convert | McCoyGroup/Coordinerds | python | def convert(self, coords, molecule=None, origins=None, axes=None, ordering=None, **kwargs):
'\n Converts from Cartesian to ZMatrix coords, preserving the embedding\n :param coords:\n :type coords: CoordinateSet\n :param molecule:\n :type molecule:\n :param origins:\n ... |
def convert_many(self, coords, molecule=None, origins=None, axes=None, ordering=None, strip_embedding=True, strip_dummies=False, **kwargs):
'\n Converts from Cartesian to ZMatrix coords, preserving the embedding\n\n :param coords: coordinates in Cartesians to convert\n :type coords: np.ndarray\... | 7,629,362,426,075,218,000 | Converts from Cartesian to ZMatrix coords, preserving the embedding
:param coords: coordinates in Cartesians to convert
:type coords: np.ndarray
:param molecule:
:type molecule: AbstractMolecule
:param origins: the origin for each individual structure
:type origins: np.ndarray
:param axes: the axes for each structure
... | Psience/Molecools/CoordinateSystems.py | convert_many | McCoyGroup/Coordinerds | python | def convert_many(self, coords, molecule=None, origins=None, axes=None, ordering=None, strip_embedding=True, strip_dummies=False, **kwargs):
'\n Converts from Cartesian to ZMatrix coords, preserving the embedding\n\n :param coords: coordinates in Cartesians to convert\n :type coords: np.ndarray\... |
def convert_many(self, coords, **kwargs):
'\n Converts from Cartesian to ZMatrix coords, preserving the embedding\n '
return (coords, kwargs) | -2,000,805,347,107,456,500 | Converts from Cartesian to ZMatrix coords, preserving the embedding | Psience/Molecools/CoordinateSystems.py | convert_many | McCoyGroup/Coordinerds | python | def convert_many(self, coords, **kwargs):
'\n \n '
return (coords, kwargs) |
def convert_many(self, coords, molecule=None, origins=None, axes=None, ordering=None, reembed=False, axes_choice=None, return_derivs=None, strip_dummies=False, strip_embedding=True, planar_ref_tolerance=None, **kwargs):
'\n Converts from Cartesian to ZMatrix coords, attempting to preserve the embedding\n ... | 6,613,672,720,733,352,000 | Converts from Cartesian to ZMatrix coords, attempting to preserve the embedding | Psience/Molecools/CoordinateSystems.py | convert_many | McCoyGroup/Coordinerds | python | def convert_many(self, coords, molecule=None, origins=None, axes=None, ordering=None, reembed=False, axes_choice=None, return_derivs=None, strip_dummies=False, strip_embedding=True, planar_ref_tolerance=None, **kwargs):
'\n \n '
from .Molecule import Molecule
n_sys = coords.shape[0]
n_coor... |
def format_cfg(cfg):
'Format experiment config for friendly display'
def list2str(cfg):
for (key, value) in cfg.items():
if isinstance(value, dict):
cfg[key] = list2str(value)
elif isinstance(value, list):
if ((len(value) == 0) or isinstance(value... | 9,010,408,615,262,421,000 | Format experiment config for friendly display | up/utils/general/cfg_helper.py | format_cfg | ModelTC/EOD | python | def format_cfg(cfg):
def list2str(cfg):
for (key, value) in cfg.items():
if isinstance(value, dict):
cfg[key] = list2str(value)
elif isinstance(value, list):
if ((len(value) == 0) or isinstance(value[0], (int, float))):
cfg[ke... |
def try_decode(val):
'bool, int, float, or str'
if (val.upper() == 'FALSE'):
return False
elif (val.upper() == 'TRUE'):
return True
if val.isdigit():
return int(val)
if is_number(val):
return float(val)
return val | 9,010,911,974,271,447,000 | bool, int, float, or str | up/utils/general/cfg_helper.py | try_decode | ModelTC/EOD | python | def try_decode(val):
if (val.upper() == 'FALSE'):
return False
elif (val.upper() == 'TRUE'):
return True
if val.isdigit():
return int(val)
if is_number(val):
return float(val)
return val |
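The `try_decode` row above documents a small string-coercion helper: decode to bool, int, or float where possible, else keep the str. A minimal self-contained sketch follows; the `is_number` helper referenced in the row is not shown in the source, so it is reimplemented here as an assumption via float parsing:

```python
def is_number(val):
    # Hypothetical helper matching the name used in the row above:
    # True when the string parses as a float.
    try:
        float(val)
        return True
    except ValueError:
        return False

def try_decode(val):
    """bool, int, float, or str"""
    if val.upper() == 'FALSE':
        return False
    if val.upper() == 'TRUE':
        return True
    if val.isdigit():
        return int(val)      # pure digits -> int
    if is_number(val):
        return float(val)    # otherwise numeric -> float
    return val               # fall back to the raw string
```

Note the order matters: `isdigit` is checked before `is_number`, so `'42'` becomes an int rather than a float.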
@sdc_min_version('3.15.0')
def test_runner_metrics_for_init_and_destroy(sdc_builder, sdc_executor):
'Ensure that we properly update metrics when the runner is in starting phase.'
builder = sdc_builder.get_pipeline_builder()
SLEEP_SCRIPT = 'sleep(5*1000)'
source = builder.add_stage('Dev Data Generator')
... | 5,452,576,526,804,625,000 | Ensure that we properly update metrics when the runner is in starting phase. | pipeline/test_metrics.py | test_runner_metrics_for_init_and_destroy | anubandhan/datacollector-tests | python | @sdc_min_version('3.15.0')
def test_runner_metrics_for_init_and_destroy(sdc_builder, sdc_executor):
builder = sdc_builder.get_pipeline_builder()
SLEEP_SCRIPT = 'sleep(5*1000)'
source = builder.add_stage('Dev Data Generator')
groovy = builder.add_stage('Groovy Evaluator', type='processor')
groov... |
def get_task_id(self):
'Property to get the task id of this component'
return self.task_id | -5,503,473,864,786,678,000 | Property to get the task id of this component | heron/instance/src/python/utils/topology/topology_context_impl.py | get_task_id | kalimfaria/heron | python | def get_task_id(self):
return self.task_id |
def get_component_id(self):
'Property to get the component id of this component'
return self.task_to_component_map.get(self.get_task_id()) | 7,983,561,852,347,411,000 | Property to get the component id of this component | heron/instance/src/python/utils/topology/topology_context_impl.py | get_component_id | kalimfaria/heron | python | def get_component_id(self):
return self.task_to_component_map.get(self.get_task_id()) |
def get_cluster_config(self):
'Returns the cluster config for this component\n\n Note that the returned config is an auto-typed map: <str -> any Python object>.\n '
return self.config | 2,560,026,072,691,256,000 | Returns the cluster config for this component
Note that the returned config is an auto-typed map: <str -> any Python object>. | heron/instance/src/python/utils/topology/topology_context_impl.py | get_cluster_config | kalimfaria/heron | python | def get_cluster_config(self):
'Returns the cluster config for this component\n\n Note that the returned config is an auto-typed map: <str -> any Python object>.\n '
return self.config |
def get_topology_name(self):
'Returns the name of the topology\n '
return str(self.topology.name) | -6,082,850,419,871,236,000 | Returns the name of the topology | heron/instance/src/python/utils/topology/topology_context_impl.py | get_topology_name | kalimfaria/heron | python | def get_topology_name(self):
'\n '
return str(self.topology.name) |
def register_metric(self, name, metric, time_bucket_in_sec):
'Registers a new metric to this context'
collector = self.get_metrics_collector()
collector.register_metric(name, metric, time_bucket_in_sec) | -3,845,584,536,058,857,500 | Registers a new metric to this context | heron/instance/src/python/utils/topology/topology_context_impl.py | register_metric | kalimfaria/heron | python | def register_metric(self, name, metric, time_bucket_in_sec):
collector = self.get_metrics_collector()
collector.register_metric(name, metric, time_bucket_in_sec) |
def get_sources(self, component_id):
'Returns the declared inputs to specified component\n\n :return: map <streamId namedtuple (same structure as protobuf msg) -> gtype>, or\n None if not found\n '
StreamId = namedtuple('StreamId', 'id, component_name')
if (component_id in self.inputs):
... | 9,033,404,051,955,407,000 | Returns the declared inputs to specified component
:return: map <streamId namedtuple (same structure as protobuf msg) -> gtype>, or
None if not found | heron/instance/src/python/utils/topology/topology_context_impl.py | get_sources | kalimfaria/heron | python | def get_sources(self, component_id):
'Returns the declared inputs to specified component\n\n :return: map <streamId namedtuple (same structure as protobuf msg) -> gtype>, or\n None if not found\n '
StreamId = namedtuple('StreamId', 'id, component_name')
if (component_id in self.inputs):
... |
def get_component_tasks(self, component_id):
'Returns the task ids allocated for the given component id'
ret = []
for (task_id, comp_id) in self.task_to_component_map.items():
if (comp_id == component_id):
ret.append(task_id)
return ret | 6,872,420,084,559,937,000 | Returns the task ids allocated for the given component id | heron/instance/src/python/utils/topology/topology_context_impl.py | get_component_tasks | kalimfaria/heron | python | def get_component_tasks(self, component_id):
ret = []
for (task_id, comp_id) in self.task_to_component_map.items():
if (comp_id == component_id):
ret.append(task_id)
return ret |
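The `get_component_tasks` row reduces to filtering a task-to-component map. A standalone sketch, with the map passed in explicitly instead of read from `self` (the example mapping below is hypothetical):

```python
def get_component_tasks(task_to_component_map, component_id):
    """Return the task ids allocated for the given component id."""
    # Collect every task id whose component name matches.
    return [task_id
            for task_id, comp_id in task_to_component_map.items()
            if comp_id == component_id]
```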
def add_task_hook(self, task_hook):
'Registers a specified task hook to this context\n\n :type task_hook: heron.instance.src.python.utils.topology.ITaskHook\n :param task_hook: Implementation of ITaskHook\n '
if (not isinstance(task_hook, ITaskHook)):
raise TypeError(('In add_task_hook(): attem... | -3,723,520,555,085,917,000 | Registers a specified task hook to this context
:type task_hook: heron.instance.src.python.utils.topology.ITaskHook
:param task_hook: Implementation of ITaskHook | heron/instance/src/python/utils/topology/topology_context_impl.py | add_task_hook | kalimfaria/heron | python | def add_task_hook(self, task_hook):
'Registers a specified task hook to this context\n\n :type task_hook: heron.instance.src.python.utils.topology.ITaskHook\n :param task_hook: Implementation of ITaskHook\n '
if (not isinstance(task_hook, ITaskHook)):
raise TypeError(('In add_task_hook(): attem... |
def get_topology_pex_path(self):
"Returns the topology's pex file path"
return self.topology_pex_path | 8,942,431,209,857,057,000 | Returns the topology's pex file path | heron/instance/src/python/utils/topology/topology_context_impl.py | get_topology_pex_path | kalimfaria/heron | python | def get_topology_pex_path(self):
return self.topology_pex_path |
def get_metrics_collector(self):
"Returns this context's metrics collector"
if ((self.metrics_collector is None) or (not isinstance(self.metrics_collector, MetricsCollector))):
raise RuntimeError('Metrics collector is not registered in this context')
return self.metrics_collector | -5,930,916,964,854,351,000 | Returns this context's metrics collector | heron/instance/src/python/utils/topology/topology_context_impl.py | get_metrics_collector | kalimfaria/heron | python | def get_metrics_collector(self):
if ((self.metrics_collector is None) or (not isinstance(self.metrics_collector, MetricsCollector))):
raise RuntimeError('Metrics collector is not registered in this context')
return self.metrics_collector |
def invoke_hook_prepare(self):
"invoke task hooks for after the spout/bolt's initialize() method"
for task_hook in self.task_hooks:
task_hook.prepare(self.get_cluster_config(), self) | -8,962,826,239,330,806,000 | invoke task hooks for after the spout/bolt's initialize() method | heron/instance/src/python/utils/topology/topology_context_impl.py | invoke_hook_prepare | kalimfaria/heron | python | def invoke_hook_prepare(self):
for task_hook in self.task_hooks:
task_hook.prepare(self.get_cluster_config(), self) |
def invoke_hook_cleanup(self):
"invoke task hooks for just before the spout/bolt's cleanup method"
for task_hook in self.task_hooks:
task_hook.clean_up() | -149,705,493,558,228,670 | invoke task hooks for just before the spout/bolt's cleanup method | heron/instance/src/python/utils/topology/topology_context_impl.py | invoke_hook_cleanup | kalimfaria/heron | python | def invoke_hook_cleanup(self):
for task_hook in self.task_hooks:
task_hook.clean_up() |
def invoke_hook_emit(self, values, stream_id, out_tasks):
'invoke task hooks for every time a tuple is emitted in spout/bolt\n\n :type values: list\n :param values: values emitted\n :type stream_id: str\n :param stream_id: stream id into which tuple is emitted\n :type out_tasks: list\n :param out_... | -8,364,756,215,371,977,000 | invoke task hooks for every time a tuple is emitted in spout/bolt
:type values: list
:param values: values emitted
:type stream_id: str
:param stream_id: stream id into which tuple is emitted
:type out_tasks: list
:param out_tasks: list of custom grouping target task id | heron/instance/src/python/utils/topology/topology_context_impl.py | invoke_hook_emit | kalimfaria/heron | python | def invoke_hook_emit(self, values, stream_id, out_tasks):
'invoke task hooks for every time a tuple is emitted in spout/bolt\n\n :type values: list\n :param values: values emitted\n :type stream_id: str\n :param stream_id: stream id into which tuple is emitted\n :type out_tasks: list\n :param out_... |
def invoke_hook_spout_ack(self, message_id, complete_latency_ns):
'invoke task hooks for every time spout acks a tuple\n\n :type message_id: str\n :param message_id: message id to which an acked tuple was anchored\n :type complete_latency_ns: float\n :param complete_latency_ns: complete latency in nano ... | 9,094,690,015,681,524,000 | invoke task hooks for every time spout acks a tuple
:type message_id: str
:param message_id: message id to which an acked tuple was anchored
:type complete_latency_ns: float
:param complete_latency_ns: complete latency in nano seconds | heron/instance/src/python/utils/topology/topology_context_impl.py | invoke_hook_spout_ack | kalimfaria/heron | python | def invoke_hook_spout_ack(self, message_id, complete_latency_ns):
'invoke task hooks for every time spout acks a tuple\n\n :type message_id: str\n :param message_id: message id to which an acked tuple was anchored\n :type complete_latency_ns: float\n :param complete_latency_ns: complete latency in nano ... |
def invoke_hook_spout_fail(self, message_id, fail_latency_ns):
'invoke task hooks for every time spout fails a tuple\n\n :type message_id: str\n :param message_id: message id to which a failed tuple was anchored\n :type fail_latency_ns: float\n :param fail_latency_ns: fail latency in nano seconds\n '... | 2,162,557,886,614,590,500 | invoke task hooks for every time spout fails a tuple
:type message_id: str
:param message_id: message id to which a failed tuple was anchored
:type fail_latency_ns: float
:param fail_latency_ns: fail latency in nano seconds | heron/instance/src/python/utils/topology/topology_context_impl.py | invoke_hook_spout_fail | kalimfaria/heron | python | def invoke_hook_spout_fail(self, message_id, fail_latency_ns):
'invoke task hooks for every time spout fails a tuple\n\n :type message_id: str\n :param message_id: message id to which a failed tuple was anchored\n :type fail_latency_ns: float\n :param fail_latency_ns: fail latency in nano seconds\n '... |
def invoke_hook_bolt_execute(self, heron_tuple, execute_latency_ns):
'invoke task hooks for every time bolt processes a tuple\n\n :type heron_tuple: HeronTuple\n :param heron_tuple: tuple that is executed\n :type execute_latency_ns: float\n :param execute_latency_ns: execute latency in nano seconds\n ... | 5,612,779,335,163,985,000 | invoke task hooks for every time bolt processes a tuple
:type heron_tuple: HeronTuple
:param heron_tuple: tuple that is executed
:type execute_latency_ns: float
:param execute_latency_ns: execute latency in nano seconds | heron/instance/src/python/utils/topology/topology_context_impl.py | invoke_hook_bolt_execute | kalimfaria/heron | python | def invoke_hook_bolt_execute(self, heron_tuple, execute_latency_ns):
'invoke task hooks for every time bolt processes a tuple\n\n :type heron_tuple: HeronTuple\n :param heron_tuple: tuple that is executed\n :type execute_latency_ns: float\n :param execute_latency_ns: execute latency in nano seconds\n ... |
def invoke_hook_bolt_ack(self, heron_tuple, process_latency_ns):
'invoke task hooks for every time bolt acks a tuple\n\n :type heron_tuple: HeronTuple\n :param heron_tuple: tuple that is acked\n :type process_latency_ns: float\n :param process_latency_ns: process latency in nano seconds\n '
if (l... | -8,833,921,388,376,193,000 | invoke task hooks for every time bolt acks a tuple
:type heron_tuple: HeronTuple
:param heron_tuple: tuple that is acked
:type process_latency_ns: float
:param process_latency_ns: process latency in nano seconds | heron/instance/src/python/utils/topology/topology_context_impl.py | invoke_hook_bolt_ack | kalimfaria/heron | python | def invoke_hook_bolt_ack(self, heron_tuple, process_latency_ns):
'invoke task hooks for every time bolt acks a tuple\n\n :type heron_tuple: HeronTuple\n :param heron_tuple: tuple that is acked\n :type process_latency_ns: float\n :param process_latency_ns: process latency in nano seconds\n '
if (l... |
def invoke_hook_bolt_fail(self, heron_tuple, fail_latency_ns):
'invoke task hooks for every time bolt fails a tuple\n\n :type heron_tuple: HeronTuple\n :param heron_tuple: tuple that is failed\n :type fail_latency_ns: float\n :param fail_latency_ns: fail latency in nano seconds\n '
if (len(self.t... | -1,143,828,988,706,259,700 | invoke task hooks for every time bolt fails a tuple
:type heron_tuple: HeronTuple
:param heron_tuple: tuple that is failed
:type fail_latency_ns: float
:param fail_latency_ns: fail latency in nano seconds | heron/instance/src/python/utils/topology/topology_context_impl.py | invoke_hook_bolt_fail | kalimfaria/heron | python | def invoke_hook_bolt_fail(self, heron_tuple, fail_latency_ns):
'invoke task hooks for every time bolt fails a tuple\n\n :type heron_tuple: HeronTuple\n :param heron_tuple: tuple that is failed\n :type fail_latency_ns: float\n :param fail_latency_ns: fail latency in nano seconds\n '
if (len(self.t... |
def powerset(lst):
'returns the power set of the list - the set of all subsets of the list'
if (lst == []):
return [[]]
lose_it = powerset(lst[1:])
use_it = map((lambda subset: ([lst[0]] + subset)), lose_it)
return (lose_it + use_it) | 5,827,662,631,286,967,000 | returns the power set of the list - the set of all subsets of the list | use_it_or_lose_it.py | powerset | jschmidtnj/CS115 | python | def powerset(lst):
if (lst == []):
return [[]]
lose_it = powerset(lst[1:])
use_it = map((lambda subset: ([lst[0]] + subset)), lose_it)
return (lose_it + use_it) |
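The `powerset` row above is a complete use-it-or-lose-it recursion. One caveat when running it under Python 3: `map` returns an iterator there, so the original `lose_it + use_it` would raise a `TypeError`; the sketch below uses a list comprehension instead, keeping the same logic:

```python
def powerset(lst):
    """Return the power set of the list: all subsets, as a list of lists."""
    if lst == []:
        return [[]]                                      # only the empty subset
    lose_it = powerset(lst[1:])                          # subsets without lst[0]
    use_it = [[lst[0]] + subset for subset in lose_it]   # subsets with lst[0]
    return lose_it + use_it
```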
def subset(target, lst):
'determines whether or not it is possible to create the target sum using the\n values in the list. Values in the list can be positive, negative, or zero.'
if (target == 0):
return True
if (lst == []):
return False
'and and or are short-cut operators in python. The... | -8,148,369,691,226,853,000 | determines whether or not it is possible to create the target sum using the
values in the list. Values in the list can be positive, negative, or zero. | use_it_or_lose_it.py | subset | jschmidtnj/CS115 | python | def subset(target, lst):
'determines whether or not it is possible to create the target sum using the\n values in the list. Values in the list can be positive, negative, or zero.'
if (target == 0):
return True
if (lst == []):
return False
'and and or are short-cut operators in python. The...
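The `subset` row applies the same use-it-or-lose-it pattern to subset-sum; its truncated comment notes that `and`/`or` short-circuit in Python, which is what makes the "lose it" branch run only when the "use it" branch fails. A runnable sketch:

```python
def subset(target, lst):
    """True iff some subset of lst (values may be positive, negative,
    or zero) sums to target."""
    if target == 0:
        return True          # the empty subset reaches a zero target
    if lst == []:
        return False         # nothing left to use
    # `or` short-circuits: try using lst[0] first, else lose it.
    return subset(target - lst[0], lst[1:]) or subset(target, lst[1:])
```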
def subset_with_values(target, lst):
'Determines whether or not it is possible to create the target sum using\n values in the list. Values in the list can be positive, negative, or zero.\n The function returns a tuple of exactly two items. The first is a boolean,\n that indicates true if the sum is possibl... | 8,734,992,016,607,454,000 | Determines whether or not it is possible to create the target sum using
values in the list. Values in the list can be positive, negative, or zero.
The function returns a tuple of exactly two items. The first is a boolean,
that indicates true if the sum is possible and false if it is not. The second
element in the tuple... | use_it_or_lose_it.py | subset_with_values | jschmidtnj/CS115 | python | def subset_with_values(target, lst):
'Determines whether or not it is possible to create the target sum using\n values in the list. Values in the list can be positive, negative, or zero.\n The function returns a tuple of exactly two items. The first is a boolean,\n that indicates true if the sum is possibl... |
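The `subset_with_values` row is truncated, but its docstring says the function returns a two-item tuple whose first element is the feasibility boolean. The second element is not fully visible in the source; the sketch below assumes it is one list of values achieving the sum, which is the usual shape for this exercise:

```python
def subset_with_values(target, lst):
    """Return (possible, values): whether target is reachable, and if so
    one list of values from lst that sums to it (assumed second element)."""
    if target == 0:
        return (True, [])          # empty subset reaches a zero target
    if lst == []:
        return (False, [])
    use_it = subset_with_values(target - lst[0], lst[1:])
    if use_it[0]:
        return (True, [lst[0]] + use_it[1])   # record the value we used
    return subset_with_values(target, lst[1:])
```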
def LCSWithValues(S1, S2):
'returns the longest common subsequence'
if ((S1 == '') or (S2 == '')):
return (0, '')
if (S1[0] == S2[0]):
result = LCSWithValues(S1[1:], S2[1:])
return ((1 + result[0]), (S1[0] + result[1]))
useS1 = LCSWithValues(S1, S2[1:])
useS2 = LCSWithValues(S1[1:... | -1,862,823,565,770,770,700 | returns the longest common subsequence | use_it_or_lose_it.py | LCSWithValues | jschmidtnj/CS115 | python | def LCSWithValues(S1, S2):
if ((S1 == ) or (S2 == )):
return (0, )
if (S1[0] == S2[0]):
result = LCSWithValues(S1[1:], S2[1:])
return ((1 + result[0]), (S1[0] + result[1]))
useS1 = LCSWithValues(S1, S2[1:])
useS2 = LCSWithValues(S1[1:], S2)
if (useS1[0] > useS2[0]):
... |
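The `LCSWithValues` row returns a `(length, subsequence)` pair; the tail of the body is truncated, so the final `else` branch below is a reasonable reconstruction from the visible `if (useS1[0] > useS2[0]):` line. Like the source, this is the plain exponential recursion, fine for short strings:

```python
def lcs_with_values(s1, s2):
    """Return (length, subsequence) of a longest common subsequence."""
    if s1 == '' or s2 == '':
        return (0, '')
    if s1[0] == s2[0]:
        # Matching heads: count this character and recurse on both tails.
        length, rest = lcs_with_values(s1[1:], s2[1:])
        return (1 + length, s1[0] + rest)
    # Otherwise drop one head on each side and keep the longer result.
    use_s1 = lcs_with_values(s1, s2[1:])
    use_s2 = lcs_with_values(s1[1:], s2)
    return use_s1 if use_s1[0] > use_s2[0] else use_s2
```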
def _shuffle_inputs(input_tensors, capacity, min_after_dequeue, num_threads):
'Shuffles tensors in `input_tensors`, maintaining grouping.'
shuffle_queue = tf.RandomShuffleQueue(capacity, min_after_dequeue, dtypes=[t.dtype for t in input_tensors])
enqueue_op = shuffle_queue.enqueue(input_tensors)
runner ... | 2,964,247,498,888,477,700 | Shuffles tensors in `input_tensors`, maintaining grouping. | magenta/common/sequence_example_lib.py | _shuffle_inputs | KenniVelez/magenta | python | def _shuffle_inputs(input_tensors, capacity, min_after_dequeue, num_threads):
shuffle_queue = tf.RandomShuffleQueue(capacity, min_after_dequeue, dtypes=[t.dtype for t in input_tensors])
enqueue_op = shuffle_queue.enqueue(input_tensors)
runner = tf.train.QueueRunner(shuffle_queue, ([enqueue_op] * num_th... |
def get_padded_batch(file_list, batch_size, input_size, label_shape=None, num_enqueuing_threads=4, shuffle=False):
'Reads batches of SequenceExamples from TFRecords and pads them.\n\n Can deal with variable length SequenceExamples by padding each batch to the\n length of the longest sequence with zeros.\n\n Args... | 4,064,438,629,566,444,000 | Reads batches of SequenceExamples from TFRecords and pads them.
Can deal with variable length SequenceExamples by padding each batch to the
length of the longest sequence with zeros.
Args:
file_list: A list of paths to TFRecord files containing SequenceExamples.
batch_size: The number of SequenceExamples to inclu... | magenta/common/sequence_example_lib.py | get_padded_batch | KenniVelez/magenta | python | def get_padded_batch(file_list, batch_size, input_size, label_shape=None, num_enqueuing_threads=4, shuffle=False):
'Reads batches of SequenceExamples from TFRecords and pads them.\n\n Can deal with variable length SequenceExamples by padding each batch to the\n length of the longest sequence with zeros.\n\n Args... |
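The padding behavior this docstring describes can be sketched without TFRecord I/O as zero-padding each batch to the longest sequence (a hypothetical `pad_batch` helper, not the function in the row):

```python
import numpy as np

def pad_batch(sequences, input_size):
    # zero-pad variable-length [len, input_size] sequences to the longest length
    max_len = max(len(s) for s in sequences)
    batch = np.zeros((len(sequences), max_len, input_size), dtype=np.float32)
    for i, s in enumerate(sequences):
        batch[i, :len(s)] = s
    lengths = np.array([len(s) for s in sequences], dtype=np.int32)
    return batch, lengths
```

Returning the lengths alongside the padded batch lets downstream code mask out the padding steps.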
def count_records(file_list, stop_at=None):
'Counts number of records in files from `file_list` up to `stop_at`.\n\n Args:\n file_list: List of TFRecord files to count records in.\n stop_at: Optional number of records to stop counting at.\n\n Returns:\n Integer number of records in files from `file_list`... | 5,925,921,993,372,783,000 | Counts number of records in files from `file_list` up to `stop_at`.
Args:
file_list: List of TFRecord files to count records in.
stop_at: Optional number of records to stop counting at.
Returns:
Integer number of records in files from `file_list` up to `stop_at`. | magenta/common/sequence_example_lib.py | count_records | KenniVelez/magenta | python | def count_records(file_list, stop_at=None):
'Counts number of records in files from `file_list` up to `stop_at`.\n\n Args:\n file_list: List of TFRecord files to count records in.\n stop_at: Optional number of records to stop counting at.\n\n Returns:\n Integer number of records in files from `file_list`... |
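Stripped of TFRecord iteration, the counting logic with an optional `stop_at` cutoff looks roughly like this (generic iterables stand in for record files):

```python
def count_records(iterables, stop_at=None):
    # count items across all iterables, optionally stopping early at `stop_at`
    count = 0
    for it in iterables:
        for _ in it:
            count += 1
            if stop_at is not None and count >= stop_at:
                return count
    return count
```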
def flatten_maybe_padded_sequences(maybe_padded_sequences, lengths=None):
'Flattens the batch of sequences, removing padding (if applicable).\n\n Args:\n maybe_padded_sequences: A tensor of possibly padded sequences to flatten,\n sized `[N, M, ...]` where M = max(lengths).\n lengths: Optional length o... | -4,121,728,141,681,414,700 | Flattens the batch of sequences, removing padding (if applicable).
Args:
maybe_padded_sequences: A tensor of possibly padded sequences to flatten,
sized `[N, M, ...]` where M = max(lengths).
lengths: Optional length of each sequence, sized `[N]`. If None, assumes no
padding.
Returns:
flatten_maybe_... | magenta/common/sequence_example_lib.py | flatten_maybe_padded_sequences | KenniVelez/magenta | python | def flatten_maybe_padded_sequences(maybe_padded_sequences, lengths=None):
'Flattens the batch of sequences, removing padding (if applicable).\n\n Args:\n maybe_padded_sequences: A tensor of possibly padded sequences to flatten,\n sized `[N, M, ...]` where M = max(lengths).\n lengths: Optional length o... |
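The flattening described here can be mimicked in NumPy with a boolean length mask (a sketch of the semantics, not the TF implementation in the row):

```python
import numpy as np

def flatten_maybe_padded(batch, lengths=None):
    # batch: [N, M, ...]; keep only the first lengths[i] steps of row i
    if lengths is None:
        # no padding: just merge the batch and time dimensions
        return batch.reshape(-1, *batch.shape[2:])
    # mask[i, j] is True where step j is within sequence i's real length
    mask = np.arange(batch.shape[1])[None, :] < np.asarray(lengths)[:, None]
    return batch[mask]
```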
def _decode(loc: torch.Tensor, priors: torch.Tensor, variances: List[float]) -> torch.Tensor:
'Decode locations from predictions using priors to undo the encoding we did for offset regression at train\n time.\n\n Args:\n loc:location predictions for loc layers. Shape: [num_priors,4].\n priors: P... | 9,088,783,656,471,098,000 | Decode locations from predictions using priors to undo the encoding we did for offset regression at train
time.
Args:
loc:location predictions for loc layers. Shape: [num_priors,4].
priors: Prior boxes in center-offset form. Shape: [num_priors,4].
variances: (list[float]) Variances of priorboxes.
Return:
... | kornia/contrib/face_detection.py | _decode | Abdelrhman-Hosny/kornia | python | def _decode(loc: torch.Tensor, priors: torch.Tensor, variances: List[float]) -> torch.Tensor:
'Decode locations from predictions using priors to undo the encoding we did for offset regression at train\n time.\n\n Args:\n loc:location predictions for loc layers. Shape: [num_priors,4].\n priors: P... |
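The SSD-style decode this docstring refers to can be sketched in NumPy, assuming priors in center-offset form `(cx, cy, w, h)` and the usual two-variance scheme (this mirrors the common formula; the exact kornia code is truncated in the row):

```python
import numpy as np

def decode(loc, priors, variances):
    # loc, priors: [num_priors, 4]; variances: [center_var, size_var]
    centers = priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:]
    sizes = priors[:, 2:] * np.exp(loc[:, 2:] * variances[1])
    # convert center form (cx, cy, w, h) to corner form (xmin, ymin, xmax, ymax)
    return np.concatenate([centers - sizes / 2, centers + sizes / 2], axis=1)
```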
def to(self, device: Optional[torch.device]=None, dtype: Optional[torch.dtype]=None) -> 'FaceDetectorResult':
'Like :func:`torch.nn.Module.to()` method.'
self._data = self._data.to(device=device, dtype=dtype)
return self | -3,692,803,267,318,518,300 | Like :func:`torch.nn.Module.to()` method. | kornia/contrib/face_detection.py | to | Abdelrhman-Hosny/kornia | python | def to(self, device: Optional[torch.device]=None, dtype: Optional[torch.dtype]=None) -> 'FaceDetectorResult':
self._data = self._data.to(device=device, dtype=dtype)
return self |
@property
def xmin(self) -> torch.Tensor:
'The bounding box top-left x-coordinate.'
return self._data[(..., 0)] | -3,581,150,055,712,332,000 | The bounding box top-left x-coordinate. | kornia/contrib/face_detection.py | xmin | Abdelrhman-Hosny/kornia | python | @property
def xmin(self) -> torch.Tensor:
return self._data[(..., 0)] |
@property
def ymin(self) -> torch.Tensor:
'The bounding box top-left y-coordinate.'
return self._data[(..., 1)] | 8,193,021,596,356,398,000 | The bounding box top-left y-coordinate. | kornia/contrib/face_detection.py | ymin | Abdelrhman-Hosny/kornia | python | @property
def ymin(self) -> torch.Tensor:
return self._data[(..., 1)] |
@property
def xmax(self) -> torch.Tensor:
'The bounding box bottom-right x-coordinate.'
return self._data[(..., 2)] | 3,209,420,580,495,309,000 | The bounding box bottom-right x-coordinate. | kornia/contrib/face_detection.py | xmax | Abdelrhman-Hosny/kornia | python | @property
def xmax(self) -> torch.Tensor:
return self._data[(..., 2)] |
@property
def ymax(self) -> torch.Tensor:
'The bounding box bottom-right y-coordinate.'
return self._data[(..., 3)] | 9,078,629,932,612,555,000 | The bounding box bottom-right y-coordinate. | kornia/contrib/face_detection.py | ymax | Abdelrhman-Hosny/kornia | python | @property
def ymax(self) -> torch.Tensor:
return self._data[(..., 3)] |
def get_keypoint(self, keypoint: FaceKeypoint) -> torch.Tensor:
'The [x y] position of a given facial keypoint.\n\n Args:\n keypoint: the keypoint type to return the position.\n '
if (keypoint == FaceKeypoint.EYE_LEFT):
out = self._data[(..., (4, 5))]
elif (keypoint == FaceK... | 5,815,797,914,079,903,000 | The [x y] position of a given facial keypoint.
Args:
keypoint: the keypoint type to return the position. | kornia/contrib/face_detection.py | get_keypoint | Abdelrhman-Hosny/kornia | python | def get_keypoint(self, keypoint: FaceKeypoint) -> torch.Tensor:
'The [x y] position of a given facial keypoint.\n\n Args:\n keypoint: the keypoint type to return the position.\n '
if (keypoint == FaceKeypoint.EYE_LEFT):
out = self._data[(..., (4, 5))]
elif (keypoint == FaceK... |
@property
def score(self) -> torch.Tensor:
'The detection score.'
return self._data[(..., 14)] | -1,282,487,293,321,369,600 | The detection score. | kornia/contrib/face_detection.py | score | Abdelrhman-Hosny/kornia | python | @property
def score(self) -> torch.Tensor:
return self._data[(..., 14)] |
@property
def width(self) -> torch.Tensor:
'The bounding box width.'
return (self.xmax - self.xmin) | -3,775,788,693,311,651,300 | The bounding box width. | kornia/contrib/face_detection.py | width | Abdelrhman-Hosny/kornia | python | @property
def width(self) -> torch.Tensor:
return (self.xmax - self.xmin) |
@property
def height(self) -> torch.Tensor:
'The bounding box height.'
return (self.ymax - self.ymin) | 1,337,078,370,723,638,500 | The bounding box height. | kornia/contrib/face_detection.py | height | Abdelrhman-Hosny/kornia | python | @property
def height(self) -> torch.Tensor:
return (self.ymax - self.ymin) |
@property
def top_left(self) -> torch.Tensor:
'The [x y] position of the top-left coordinate of the bounding box.'
return self._data[(..., (0, 1))] | 8,133,284,690,489,061,000 | The [x y] position of the top-left coordinate of the bounding box. | kornia/contrib/face_detection.py | top_left | Abdelrhman-Hosny/kornia | python | @property
def top_left(self) -> torch.Tensor:
return self._data[(..., (0, 1))] |
@property
def top_right(self) -> torch.Tensor:
'The [x y] position of the top-right coordinate of the bounding box.'
out = self.top_left.clone()  # clone so the in-place add does not mutate self._data
out[..., 0] += self.width
return out | -266,048,192,071,190,720 | The [x y] position of the top-right coordinate of the bounding box. | kornia/contrib/face_detection.py | top_right | Abdelrhman-Hosny/kornia | python | @property
def top_right(self) -> torch.Tensor:
out = self.top_left.clone()  # clone so the in-place add does not mutate self._data
out[..., 0] += self.width
return out |
@property
def bottom_right(self) -> torch.Tensor:
'The [x y] position of the bottom-right coordinate of the bounding box.'
return self._data[(..., (2, 3))] | 1,580,686,018,896,368,400 | The [x y] position of the bottom-right coordinate of the bounding box. | kornia/contrib/face_detection.py | bottom_right | Abdelrhman-Hosny/kornia | python | @property
def bottom_right(self) -> torch.Tensor:
return self._data[(..., (2, 3))] |
@property
def bottom_left(self) -> torch.Tensor:
'The [x y] position of the bottom-left coordinate of the bounding box.'
out = self.top_left.clone()  # clone so the in-place add does not mutate self._data
out[..., 1] += self.height
return out | -7,967,264,993,067,659,000 | The [x y] position of the bottom-left coordinate of the bounding box. | kornia/contrib/face_detection.py | bottom_left | Abdelrhman-Hosny/kornia | python | @property
def bottom_left(self) -> torch.Tensor:
out = self.top_left.clone()  # clone so the in-place add does not mutate self._data
out[..., 1] += self.height
return out |
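Taken together, the corner properties in these rows derive all four corners from `(xmin, ymin, xmax, ymax)`. A NumPy sketch of the same arithmetic (the `[..., 15]` data layout is taken from the rows above; the sample values are invented):

```python
import numpy as np

# data rows: (xmin, ymin, xmax, ymax, 10 keypoint coords, score)
data = np.array([[10.0, 20.0, 50.0, 80.0] + [0.0] * 10 + [0.9]])

top_left = data[..., (0, 1)]
width = data[..., 2] - data[..., 0]
height = data[..., 3] - data[..., 1]

top_right = top_left.copy()      # copy to avoid mutating the source array
top_right[..., 0] += width
bottom_left = top_left.copy()
bottom_left[..., 1] += height
```

The explicit `copy()` plays the same role as cloning a tensor before an in-place add: without it, the shift would write through to `data`.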