def get_sender(self, message_timeout=0, session=None, **kwargs):
    """Get a Sender for the Service Bus endpoint.

    A Sender represents a single open connection within which multiple send
    operations can be made.

    :param message_timeout: The period in seconds during which messages sent
     with this Sender must be sent. If the send is not completed in this time
     it will fail.
    :type message_timeout: int
    :param session: An optional session ID. If supplied this session ID will be
     applied to every outgoing message sent with this Sender. If an individual
     message already has a session ID, that will be used instead. If no session
     ID is supplied here, nor set on an outgoing message, a ValueError will be
     raised if the entity is sessionful.
    :type session: str or ~uuid.Guid
    :returns: A Sender instance with an unopened connection.
    :rtype: ~azure.servicebus.aio.async_send_handler.Sender

    Example:
        .. literalinclude:: ../examples/async_examples/test_examples_async.py
            :start-after: [START open_close_sender_context]
            :end-before: [END open_close_sender_context]
            :language: python
            :dedent: 4
            :caption: Send multiple messages with a Sender.

    """
    handler_id = str(uuid.uuid4())
    if self.entity and self.requires_session:
        return SessionSender(
            handler_id, self.entity_uri, self.auth_config, session=session,
            loop=self.loop, debug=self.debug, msg_timeout=message_timeout,
            **kwargs)
    return Sender(
        handler_id, self.entity_uri, self.auth_config, session=session,
        loop=self.loop, debug=self.debug, msg_timeout=message_timeout,
        **kwargs)
Get a Sender for the Service Bus endpoint. A Sender represents a single open connection within which multiple send operations can be made. :param message_timeout: The period in seconds during which messages sent with this Sender must be sent. If the send is not completed in this time it will fail. :type message_timeout: int :param session: An optional session ID. If supplied this session ID will be applied to every outgoing message sent with this Sender. If an individual message already has a session ID, that will be used instead. If no session ID is supplied here, nor set on an outgoing message, a ValueError will be raised if the entity is sessionful. :type session: str or ~uuid.Guid :returns: A Sender instance with an unopened connection. :rtype: ~azure.servicebus.aio.async_send_handler.Sender Example: .. literalinclude:: ../examples/async_examples/test_examples_async.py :start-after: [START open_close_sender_context] :end-before: [END open_close_sender_context] :language: python :dedent: 4 :caption: Send multiple messages with a Sender.
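The sessionful-versus-plain dispatch in `get_sender` can be sketched without the Azure SDK. The `StubSender`/`StubSessionSender` classes and the `requires_session` flag below are illustrative stand-ins, not the real `azure.servicebus` types:

```python
import uuid


class StubSender:
    """Stand-in for the real Sender: holds an unopened-connection handle."""
    def __init__(self, handler_id, session=None):
        self.handler_id = handler_id
        self.session = session


class StubSessionSender(StubSender):
    """Stand-in for SessionSender, used when the entity is sessionful."""


def get_sender(requires_session, session=None):
    # Mirror the factory logic: sessionful entities get a SessionSender,
    # everything else a plain Sender; no connection is opened here.
    handler_id = str(uuid.uuid4())
    cls = StubSessionSender if requires_session else StubSender
    return cls(handler_id, session=session)


sender = get_sender(True, session="my-session")
```

As in the original, the session ID is attached at construction time and applied to outgoing messages later, once the connection is actually opened.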
def main(net):
    '''calculate pvalue of category closeness'''
    # calculate the distance between the data points within the same category
    # and compare to null distribution
    for inst_rc in ['row', 'col']:
        inst_nodes = deepcopy(net.dat['nodes'][inst_rc])
        inst_index = deepcopy(net.dat['node_info'][inst_rc]['clust'])
        # reorder based on clustered order
        inst_nodes = [inst_nodes[i] for i in inst_index]
        # make distance matrix dataframe
        dm = dist_matrix_lattice(inst_nodes)
        node_infos = list(net.dat['node_info'][inst_rc].keys())
        all_cats = []
        for inst_info in node_infos:
            if 'dict_cat_' in inst_info:
                all_cats.append(inst_info)
        for cat_dict in all_cats:
            tmp_dict = net.dat['node_info'][inst_rc][cat_dict]
            pval_name = cat_dict.replace('dict_', 'pval_')
            net.dat['node_info'][inst_rc][pval_name] = {}
            for cat_name in tmp_dict:
                subset = tmp_dict[cat_name]
                inst_median = calc_median_dist_subset(dm, subset)
                hist = calc_hist_distances(dm, subset, inst_nodes)
                pval = 0
                for i in range(len(hist['prob'])):
                    if i == 0:
                        pval = hist['prob'][i]
                    if i >= 1:
                        if inst_median >= hist['bins'][i]:
                            pval = pval + hist['prob'][i]
                net.dat['node_info'][inst_rc][pval_name][cat_name] = pval
calculate pvalue of category closeness
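The tail-probability accumulation in the innermost loop can be isolated as a small helper. The histogram values in the usage line are made up for illustration; `prob[0]` is always counted, and each later bin contributes only when the observed median reaches that bin edge:

```python
def closeness_pval(median_dist, prob, bins):
    """Sum the null-distribution mass at or beyond the observed median.

    prob[0] is always counted (the first bin); for i >= 1 the bin's
    probability is added only when median_dist >= bins[i], matching the
    accumulation in the loop above.
    """
    pval = prob[0]
    for i in range(1, len(prob)):
        if median_dist >= bins[i]:
            pval += prob[i]
    return pval


# Toy null distribution: four bins with equal mass.
p = closeness_pval(2.5, [0.25, 0.25, 0.25, 0.25], [0.0, 1.0, 2.0, 3.0])
# 0.25 (always) + 0.25 (2.5 >= 1.0) + 0.25 (2.5 >= 2.0) = 0.75
```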
def reloader_thread(softexit=False):
    """If ``softexit`` is True, we use sys.exit(); otherwise ``os._exit`` will be
    used to end the process.
    """
    while RUN_RELOADER:
        if code_changed():
            # force reload
            if softexit:
                sys.exit(3)
            else:
                os._exit(3)
        time.sleep(1)
If ``softexit`` is True, we use sys.exit(); otherwise ``os._exit`` will be used to end the process.
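`code_changed()` is not shown above; a common implementation compares file modification times against a snapshot taken on the first call. A minimal sketch of that idea, tracking explicit paths rather than `sys.modules` (the factory name is illustrative):

```python
import os


def make_change_detector(paths):
    """Return a closure that reports True once any tracked file's mtime changes.

    The first call snapshots each file's mtime and returns False; later calls
    compare against that snapshot.
    """
    mtimes = {}

    def changed():
        for path in paths:
            try:
                mtime = os.stat(path).st_mtime
            except OSError:
                continue  # file vanished; skip rather than crash the poll loop
            if path not in mtimes:
                mtimes[path] = mtime  # first sighting: record, not a change
            elif mtime != mtimes[path]:
                return True
        return False

    return changed
```

In the reloader loop above, such a detector would be polled once per second; returning exit code 3 is the conventional signal for a supervising process to restart the child.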
def banner():
    """
    Display a product banner

    :return: a delightful Mesosphere logo rendered in unicode
    :rtype: str
    """
    banner_dict = {
        'a0': click.style(chr(9601), fg='magenta'),
        'a1': click.style(chr(9601), fg='magenta', bold=True),
        'b0': click.style(chr(9616), fg='magenta'),
        'c0': click.style(chr(9626), fg='magenta'),
        'c1': click.style(chr(9626), fg='magenta', bold=True),
        'd0': click.style(chr(9622), fg='magenta'),
        'd1': click.style(chr(9622), fg='magenta', bold=True),
        'e0': click.style(chr(9623), fg='magenta'),
        'e1': click.style(chr(9623), fg='magenta', bold=True),
        'f0': click.style(chr(9630), fg='magenta'),
        'f1': click.style(chr(9630), fg='magenta', bold=True),
        'g1': click.style(chr(9612), fg='magenta', bold=True),
        'h0': click.style(chr(9624), fg='magenta'),
        'h1': click.style(chr(9624), fg='magenta', bold=True),
        'i0': click.style(chr(9629), fg='magenta'),
        'i1': click.style(chr(9629), fg='magenta', bold=True),
        'j0': click.style(fchr('>>'), fg='magenta'),
        'k0': click.style(chr(9473), fg='magenta'),
        'l0': click.style('_', fg='magenta'),
        'l1': click.style('_', fg='magenta', bold=True),
        'v0': click.style('mesosphere', fg='magenta'),
        'x1': click.style('shakedown', fg='magenta', bold=True),
        'y0': click.style('v' + shakedown.VERSION, fg='magenta'),
        'z0': chr(32)
    }
    banner_map = [
        " %(z0)s%(z0)s%(l0)s%(l0)s%(l1)s%(l0)s%(l1)s%(l1)s%(l1)s%(l1)s%(l1)s%(l1)s%(l1)s%(l1)s",
        " %(z0)s%(b0)s%(z0)s%(c0)s%(z0)s%(d0)s%(z0)s%(z0)s%(z0)s%(z0)s%(e1)s%(z0)s%(f1)s%(z0)s%(g1)s",
        " %(z0)s%(b0)s%(z0)s%(z0)s%(c0)s%(z0)s%(h0)s%(e0)s%(d1)s%(i1)s%(z0)s%(f1)s%(z0)s%(z0)s%(g1)s%(z0)s%(j0)s%(v0)s %(x1)s %(y0)s",
        " %(z0)s%(b0)s%(z0)s%(z0)s%(f0)s%(c0)s%(i0)s%(z0)s%(z0)s%(h1)s%(f1)s%(c1)s%(z0)s%(z0)s%(g1)s%(z0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(k0)s%(z0)s%(k0)s%(k0)s%(z0)s%(z0)s%(k0)s",
        " %(z0)s%(i0)s%(f0)s%(h0)s%(z0)s%(z0)s%(c0)s%(z0)s%(z0)s%(f0)s%(z0)s%(z0)s%(i1)s%(c1)s%(h1)s",
        " %(z0)s%(z0)s%(z0)s%(z0)s%(z0)s%(z0)s%(z0)s%(c0)s%(f0)s",
    ]
    if 'TERM' in os.environ and os.environ['TERM'] in ('velocity', 'xterm', 'xterm-256color', 'xterm-color'):
        return echo("\n".join(banner_map) % banner_dict)
    else:
        return echo(fchr('>>') + 'mesosphere shakedown v' + shakedown.VERSION, b=True)
Display a product banner :return: a delightful Mesosphere logo rendered in unicode :rtype: str
def main():
    """Print the RedBaron syntax tree for a Python module."""
    parser = argparse.ArgumentParser(
        description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter
    )
    parser.add_argument("path", help="Python module path")
    args = parser.parse_args()
    r = d1_dev.util.redbaron_module_path_to_tree(args.path)
    print(r.help(True))
Print the RedBaron syntax tree for a Python module.
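The same `argparse` pattern can be exercised without touching `sys.argv` by handing `parse_args` an explicit argument list, which is also how such CLIs are usually unit-tested. The module path in the usage line is a made-up example:

```python
import argparse


def build_parser():
    # Same shape as the parser above: one required positional "path" argument.
    parser = argparse.ArgumentParser(
        description="Print the RedBaron syntax tree for a Python module.",
        formatter_class=argparse.RawDescriptionHelpFormatter,
    )
    parser.add_argument("path", help="Python module path")
    return parser


# Passing a list bypasses sys.argv entirely.
args = build_parser().parse_args(["some_module.py"])
```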
def diag_ksl(A, y0, tau, verb=1, scheme='symm', space=8, rmax=2000):
    """Dynamical tensor-train approximation based on projector splitting

    This function performs one step of dynamical tensor-train approximation
    with diagonal matrix, i.e. it solves the equation

    .. math::
        \\frac{dy}{dt} = V y, \\quad y(0) = y_0

    and outputs approximation for :math:`y(\\tau)`

    :References:

        1. Christian Lubich, Ivan Oseledets, and Bart Vandereycken.
           Time integration of tensor trains. arXiv preprint 1407.2042, 2014.
           http://arxiv.org/abs/1407.2042

        2. Christian Lubich and Ivan V. Oseledets. A projector-splitting
           integrator for dynamical low-rank approximation. BIT,
           54(1):171-188, 2014. http://dx.doi.org/10.1007/s10543-013-0454-0

    :param A: Matrix in the TT-format
    :type A: matrix
    :param y0: Initial condition in the TT-format
    :type y0: tensor
    :param tau: Timestep
    :type tau: float
    :param scheme: The integration scheme, possible values: 'symm' -- second
     order, 'first' -- first order
    :type scheme: str
    :param space: Maximal dimension of the Krylov space for the local EXPOKIT
     solver.
    :type space: int
    :rtype: tensor

    :Example:

        >>> import tt
        >>> import tt.ksl
        >>> import numpy as np
        >>> d = 8
        >>> a = tt.qlaplace_dd([d, d, d])
        >>> y0, ev = tt.eigb.eigb(a, tt.rand(2, 24, 2), 1e-6, verb=0)
        Solving a block eigenvalue problem
        Looking for 1 eigenvalues with accuracy 1E-06
        swp: 1 er = 1.1408 rmax:2
        swp: 2 er = 190.01 rmax:2
        swp: 3 er = 2.72582E-08 rmax:2
        Total number of matvecs: 0
        >>> y1 = tt.ksl.ksl(a, y0, 1e-2)
        Solving a real-valued dynamical problem with tau=1E-02
        >>> print tt.dot(y1, y0) / (y1.norm() * y0.norm()) - 1  # Eigenvectors should not change
        0.0
    """
    y0 = y0.round(1e-14)  # This will fix ranks to be no more than maximal
    # reasonable; the Fortran part doesn't handle excessive ranks.
    ry = y0.r.copy()
    if scheme == 'symm':  # string comparison must use ==, not 'is'
        tp = 2
    else:
        tp = 1
    # Check for dtype
    y = tt.vector()
    if np.iscomplex(A.core).any() or np.iscomplex(y0.core).any():
        dyn_tt.dyn_diag_tt.ztt_diag_ksl(
            y0.d, A.n, A.r, A.core + 0j, y0.core + 0j, ry, tau, rmax, 0, 10,
            verb, tp, space)
        y.core = dyn_tt.dyn_diag_tt.zresult_core.copy()
    else:
        A.core = np.real(A.core)
        y0.core = np.real(y0.core)
        dyn_tt.dyn_diag_tt.dtt_diag_ksl(
            y0.d, A.n, A.r, A.core, y0.core, ry, tau, rmax, 0, 10,
            verb, tp, space)
        y.core = dyn_tt.dyn_diag_tt.dresult_core.copy()
    dyn_tt.dyn_diag_tt.deallocate_result()
    y.d = y0.d
    y.n = A.n.copy()
    y.r = ry
    y.get_ps()
    return y
Dynamical tensor-train approximation based on projector splitting This function performs one step of dynamical tensor-train approximation with diagonal matrix, i.e. it solves the equation for the equation .. math :: \\frac{dy}{dt} = V y, \\quad y(0) = y_0 and outputs approximation for :math:`y(\\tau)` :References: 1. Christian Lubich, Ivan Oseledets, and Bart Vandereycken. Time integration of tensor trains. arXiv preprint 1407.2042, 2014. http://arxiv.org/abs/1407.2042 2. Christian Lubich and Ivan V. Oseledets. A projector-splitting integrator for dynamical low-rank approximation. BIT, 54(1):171-188, 2014. http://dx.doi.org/10.1007/s10543-013-0454-0 :param A: Matrix in the TT-format :type A: matrix :param y0: Initial condition in the TT-format, :type y0: tensor :param tau: Timestep :type tau: float :param scheme: The integration scheme, possible values: 'symm' -- second order, 'first' -- first order :type scheme: str :param space: Maximal dimension of the Krylov space for the local EXPOKIT solver. :type space: int :rtype: tensor :Example: >>> import tt >>> import tt.ksl >>> import numpy as np >>> d = 8 >>> a = tt.qlaplace_dd([d, d, d]) >>> y0, ev = tt.eigb.eigb(a, tt.rand(2 , 24, 2), 1e-6, verb=0) Solving a block eigenvalue problem Looking for 1 eigenvalues with accuracy 1E-06 swp: 1 er = 1.1408 rmax:2 swp: 2 er = 190.01 rmax:2 swp: 3 er = 2.72582E-08 rmax:2 Total number of matvecs: 0 >>> y1 = tt.ksl.ksl(a, y0, 1e-2) Solving a real-valued dynamical problem with tau=1E-02 >>> print tt.dot(y1, y0) / (y1.norm() * y0.norm()) - 1 #Eigenvectors should not change 0.0
def find_file(self, path, tgt_env):
    '''
    Find the specified file in the specified environment
    '''
    tree = self.get_tree(tgt_env)
    if not tree:
        # Branch/tag/SHA not found in repo
        return None, None, None
    blob = None
    depth = 0
    while True:
        depth += 1
        if depth > SYMLINK_RECURSE_DEPTH:
            blob = None
            break
        try:
            file_blob = tree / path
            if stat.S_ISLNK(file_blob.mode):
                # Path is a symlink. The blob data corresponding to this
                # path's object ID will be the target of the symlink. Follow
                # the symlink and set path to the location indicated in the
                # blob data.
                stream = six.StringIO()
                file_blob.stream_data(stream)
                stream.seek(0)
                link_tgt = stream.read()
                stream.close()
                path = salt.utils.path.join(
                    os.path.dirname(path), link_tgt, use_posixpath=True)
            else:
                blob = file_blob
                if isinstance(blob, git.Tree):
                    # Path is a directory, not a file.
                    blob = None
                break
        except KeyError:
            # File not found or repo_path points to a directory
            blob = None
            break
    if isinstance(blob, git.Blob):
        return blob, blob.hexsha, blob.mode
    return None, None, None
Find the specified file in the specified environment
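The depth-limited symlink loop in `find_file` can be sketched against a plain dict standing in for the git tree. The dict layout, the `('link', target)` / `('blob', data)` entries, and the `resolve` helper are all illustrative, not GitPython's API:

```python
import posixpath

SYMLINK_RECURSE_DEPTH = 64


def resolve(tree, path):
    """Follow ('link', target) entries until a ('blob', data) entry is found.

    Gives up and returns None after SYMLINK_RECURSE_DEPTH hops, which is how
    the loop above escapes symlink cycles, or when the path is missing.
    """
    depth = 0
    while True:
        depth += 1
        if depth > SYMLINK_RECURSE_DEPTH:
            return None
        entry = tree.get(path)
        if entry is None:
            return None
        kind, value = entry
        if kind == 'link':
            # As above: the link target is relative to the link's directory.
            path = posixpath.normpath(
                posixpath.join(posixpath.dirname(path), value))
        else:
            return value


tree = {
    'docs/readme': ('link', 'real_readme'),
    'docs/real_readme': ('blob', b'hello'),
    'loop': ('link', 'loop'),  # self-referential symlink: must not hang
}
```

The depth counter is the important part: without it, a cyclic symlink would spin the `while True` loop forever.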
def initialize_model(self, root_node):
    """
    Initializes the Model using given root node.

    :param root_node: Graph root node.
    :type root_node: DefaultNode
    :return: Method success
    :rtype: bool
    """
    LOGGER.debug("> Initializing model with '{0}' root node.".format(root_node))
    self.beginResetModel()
    self.root_node = root_node
    self.enable_model_triggers(True)
    self.endResetModel()
    return True
Initializes the Model using given root node. :param root_node: Graph root node. :type root_node: DefaultNode :return: Method success :rtype: bool
def get_service_password(service, username, oracle=None, interactive=False):
    """
    Retrieve the sensitive password for a service by:

      * retrieving password from a secure store (@oracle:use_keyring, default)
      * asking the password from the user (@oracle:ask_password, interactive)
      * executing a command and use the output as password
        (@oracle:eval:<command>)

    Note that the keyring may or may not be locked
    which requires that the user provides a password (interactive mode).

    :param service:  Service name, may be key into secure store (as string).
    :param username: Username for the service (as string).
    :param oracle:   Hint which password oracle strategy to use.
    :return: Retrieved password (as string)

    .. seealso:: https://bitbucket.org/kang/python-keyring-lib
    """
    import getpass
    password = None
    if not oracle or oracle == "@oracle:use_keyring":
        keyring = get_keyring()
        password = keyring.get_password(service, username)
        if interactive and password is None:
            # -- LEARNING MODE: Password is not stored in keyring yet.
            oracle = "@oracle:ask_password"
            password = get_service_password(service, username,
                                            oracle, interactive=True)
            if password:
                keyring.set_password(service, username, password)
    elif interactive and oracle == "@oracle:ask_password":
        prompt = "%s password: " % service
        password = getpass.getpass(prompt)
    elif oracle.startswith('@oracle:eval:'):
        command = oracle[13:]
        return oracle_eval(command)

    if password is None:
        die("MISSING PASSWORD: oracle='%s', interactive=%s for service=%s"
            % (oracle, interactive, service))
    return password
Retrieve the sensitive password for a service by: * retrieving password from a secure store (@oracle:use_keyring, default) * asking the password from the user (@oracle:ask_password, interactive) * executing a command and use the output as password (@oracle:eval:<command>) Note that the keyring may or may not be locked which requires that the user provides a password (interactive mode). :param service: Service name, may be key into secure store (as string). :param username: Username for the service (as string). :param oracle: Hint which password oracle strategy to use. :return: Retrieved password (as string) .. seealso:: https://bitbucket.org/kang/python-keyring-lib
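The oracle dispatch can be reduced to a small, keyring-free sketch. The `store` dict and `ask` callback below stand in for the real keyring backend and `getpass.getpass`; names and behavior are illustrative:

```python
def lookup_password(service, username, store, ask,
                    oracle=None, interactive=False):
    """Resolve a password using the same strategy order as above:
    keyring-style store first, then an interactive prompt as the learning
    fallback or as an explicitly requested oracle."""
    if not oracle or oracle == "@oracle:use_keyring":
        password = store.get((service, username))
        if interactive and password is None:
            # Learning mode: prompt once, then remember for next time.
            password = ask("%s password: " % service)
            if password:
                store[(service, username)] = password
        return password
    if interactive and oracle == "@oracle:ask_password":
        return ask("%s password: " % service)
    return None


store = {("svc", "alice"): "s3cret"}
# Known user: resolved from the store, the prompt is never invoked.
found = lookup_password("svc", "alice", store, ask=lambda prompt: "typed")
# Unknown user in interactive mode: prompted, and the answer is cached.
learned = lookup_password("svc", "bob", store, ask=lambda prompt: "typed",
                          interactive=True)
```

The recursion in the original (re-entering with the `ask_password` oracle) is flattened here into a direct prompt, which keeps the learning behavior visible in one place.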
def debug_validator(validator: ValidatorType) -> ValidatorType:
    """
    Use as a wrapper around a validator, e.g.

    .. code-block:: python

        self.validator = debug_validator(OneOf(["some", "values"]))

    If you do this, the log will show the thinking of the validator (what it's
    trying to validate, and whether it accepted or rejected the value).
    """
    def _validate(node: SchemaNode, value: Any) -> None:
        log.debug("Validating: {!r}", value)
        try:
            validator(node, value)
            log.debug("... accepted")
        except Invalid:
            log.debug("... rejected")
            raise
    return _validate
Use as a wrapper around a validator, e.g. .. code-block:: python self.validator = debug_validator(OneOf(["some", "values"])) If you do this, the log will show the thinking of the validator (what it's trying to validate, and whether it accepted or rejected the value).
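The wrapper pattern works with any callable validator. A dependency-free toy version: `log` here is just a list that records decisions, `ValueError` stands in for colander's `Invalid`, and `one_of` is a hypothetical validator factory, not the real `OneOf`:

```python
log = []


def debug_validator(validator):
    """Wrap a validator so every accept/reject decision is recorded."""
    def _validate(node, value):
        log.append("validating %r" % (value,))
        try:
            validator(node, value)
            log.append("accepted")
        except ValueError:
            log.append("rejected")
            raise  # re-raise so callers still see the failure
    return _validate


def one_of(allowed):
    def _check(node, value):
        if value not in allowed:
            raise ValueError("%r not in %r" % (value, allowed))
    return _check


checked = debug_validator(one_of(["some", "values"]))
checked(None, "some")           # accepted, recorded in log
try:
    checked(None, "other")      # rejected, recorded, then re-raised
except ValueError:
    pass
```

Because the wrapper re-raises after logging, it is transparent to the schema machinery: validation failures propagate exactly as they would without the wrapper.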
def launch_experiment(args, experiment_config, mode, config_file_name, experiment_id=None):
    '''follow steps to start rest server and start experiment'''
    nni_config = Config(config_file_name)
    # check packages for tuner
    if experiment_config.get('tuner') and experiment_config['tuner'].get('builtinTunerName'):
        tuner_name = experiment_config['tuner']['builtinTunerName']
        module_name = ModuleName[tuner_name]
        try:
            check_call([sys.executable, '-c', 'import %s' % (module_name)])
        except ModuleNotFoundError as e:
            print_error('The tuner %s should be installed through nnictl' % (tuner_name))
            exit(1)
    log_dir = experiment_config['logDir'] if experiment_config.get('logDir') else None
    log_level = experiment_config['logLevel'] if experiment_config.get('logLevel') else None
    if log_level not in ['trace', 'debug'] and args.debug:
        log_level = 'debug'
    # start rest server
    rest_process, start_time = start_rest_server(args.port, experiment_config['trainingServicePlatform'],
                                                 mode, config_file_name, experiment_id, log_dir, log_level)
    nni_config.set_config('restServerPid', rest_process.pid)
    # Deal with annotation
    if experiment_config.get('useAnnotation'):
        path = os.path.join(tempfile.gettempdir(), get_user(), 'nni', 'annotation')
        if not os.path.isdir(path):
            os.makedirs(path)
        path = tempfile.mkdtemp(dir=path)
        code_dir = expand_annotations(experiment_config['trial']['codeDir'], path)
        experiment_config['trial']['codeDir'] = code_dir
        search_space = generate_search_space(code_dir)
        experiment_config['searchSpace'] = json.dumps(search_space)
        assert search_space, ERROR_INFO % 'Generated search space is empty'
    elif experiment_config.get('searchSpacePath'):
        search_space = get_json_content(experiment_config.get('searchSpacePath'))
        experiment_config['searchSpace'] = json.dumps(search_space)
    else:
        experiment_config['searchSpace'] = json.dumps('')
    # check rest server
    running, _ = check_rest_server(args.port)
    if running:
        print_normal('Successfully started Restful server!')
    else:
        print_error('Restful server start failed!')
        print_log_content(config_file_name)
        try:
            kill_command(rest_process.pid)
        except Exception:
            raise Exception(ERROR_INFO % 'Rest server stopped!')
        exit(1)
    # set remote config
    if experiment_config['trainingServicePlatform'] == 'remote':
        print_normal('Setting remote config...')
        config_result, err_msg = set_remote_config(experiment_config, args.port, config_file_name)
        if config_result:
            print_normal('Successfully set remote config!')
        else:
            print_error('Failed! Error is: {}'.format(err_msg))
            try:
                kill_command(rest_process.pid)
            except Exception:
                raise Exception(ERROR_INFO % 'Rest server stopped!')
            exit(1)
    # set local config
    if experiment_config['trainingServicePlatform'] == 'local':
        print_normal('Setting local config...')
        if set_local_config(experiment_config, args.port, config_file_name):
            print_normal('Successfully set local config!')
        else:
            print_error('Set local config failed!')
            try:
                kill_command(rest_process.pid)
            except Exception:
                raise Exception(ERROR_INFO % 'Rest server stopped!')
            exit(1)
    # set pai config
    if experiment_config['trainingServicePlatform'] == 'pai':
        print_normal('Setting pai config...')
        config_result, err_msg = set_pai_config(experiment_config, args.port, config_file_name)
        if config_result:
            print_normal('Successfully set pai config!')
        else:
            if err_msg:
                print_error('Failed! Error is: {}'.format(err_msg))
            try:
                kill_command(rest_process.pid)
            except Exception:
                raise Exception(ERROR_INFO % 'Restful server stopped!')
            exit(1)
    # set kubeflow config
    if experiment_config['trainingServicePlatform'] == 'kubeflow':
        print_normal('Setting kubeflow config...')
        config_result, err_msg = set_kubeflow_config(experiment_config, args.port, config_file_name)
        if config_result:
            print_normal('Successfully set kubeflow config!')
        else:
            if err_msg:
                print_error('Failed! Error is: {}'.format(err_msg))
            try:
                kill_command(rest_process.pid)
            except Exception:
                raise Exception(ERROR_INFO % 'Restful server stopped!')
            exit(1)
    # set frameworkcontroller config
    if experiment_config['trainingServicePlatform'] == 'frameworkcontroller':
        print_normal('Setting frameworkcontroller config...')
        config_result, err_msg = set_frameworkcontroller_config(experiment_config, args.port, config_file_name)
        if config_result:
            print_normal('Successfully set frameworkcontroller config!')
        else:
            if err_msg:
                print_error('Failed! Error is: {}'.format(err_msg))
            try:
                kill_command(rest_process.pid)
            except Exception:
                raise Exception(ERROR_INFO % 'Restful server stopped!')
            exit(1)
    # start a new experiment
    print_normal('Starting experiment...')
    # set debug configuration
    if experiment_config.get('debug') is None:
        experiment_config['debug'] = args.debug
    response = set_experiment(experiment_config, mode, args.port, config_file_name)
    if response:
        if experiment_id is None:
            experiment_id = json.loads(response.text).get('experiment_id')
        nni_config.set_config('experimentId', experiment_id)
    else:
        print_error('Start experiment failed!')
        print_log_content(config_file_name)
        try:
            kill_command(rest_process.pid)
        except Exception:
            raise Exception(ERROR_INFO % 'Restful server stopped!')
        exit(1)
    if experiment_config.get('nniManagerIp'):
        web_ui_url_list = ['{0}:{1}'.format(experiment_config['nniManagerIp'], str(args.port))]
    else:
        web_ui_url_list = get_local_urls(args.port)
    nni_config.set_config('webuiUrl', web_ui_url_list)
    # save experiment information
    nnictl_experiment_config = Experiments()
    nnictl_experiment_config.add_experiment(experiment_id, args.port, start_time,
                                            config_file_name, experiment_config['trainingServicePlatform'])
    print_normal(EXPERIMENT_SUCCESS_INFO % (experiment_id, ' '.join(web_ui_url_list)))
def categories(self):
    '''
    List of categories of a page.
    '''
    if not getattr(self, '_categories', False):
        self._categories = [
            re.sub(r'^Category:', '', x) for x in
            [link['title'] for link in
             self.__continued_query({'prop': 'categories', 'cllimit': 'max'})]
        ]
    return self._categories
def _fix_weights(weight_fun, *args):
    """Ensure random weight matrix is valid.

    TODO: The diagonally dominant tuning currently doesn't make sense. Our
    weight matrix has zeros along the diagonal, so multiplying by a diagonal
    matrix results in a zero-matrix.
    """
    weights = weight_fun(*args)
    # TODO: fix this
    # disable checks for now
    return weights
    # if positive semidefinite, then we're good as is
    if _check_psd(weights):
        return weights
    # make diagonally dominant
    off_diag_sums = np.sum(weights, axis=1)  # NOTE: assumes diag is zero
    mod_mat = np.linalg.inv(np.sqrt(np.diag(off_diag_sums)))
    # chain the products explicitly: np.dot's third positional argument is an
    # output buffer, so np.dot(a, b, c) would not multiply three matrices
    return mod_mat.dot(weights).dot(mod_mat)
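The disabled branch is easier to reason about in isolation. A minimal sketch with NumPy, using a plausible eigenvalue-based stand-in for `_check_psd` (which is not shown in this snippet), also illustrates the docstring's TODO: the scaling step rescales a zero-diagonal matrix but does not make it positive semidefinite.

```python
import numpy as np


def check_psd(matrix, tol=1e-9):
    """Plausible stand-in for _check_psd: a symmetric matrix is PSD iff
    all of its eigenvalues are (numerically) non-negative."""
    return bool(np.all(np.linalg.eigvalsh(matrix) >= -tol))


# A symmetric weight matrix with zero diagonal, as described above.
weights = np.array([[0.0, 2.0],
                    [2.0, 0.0]])
print(check_psd(weights))  # -> False (eigenvalues are -2 and 2)

# The scaling step from the disabled branch, with the dot products chained
# correctly (np.dot only multiplies two matrices at a time).
off_diag_sums = weights.sum(axis=1)
mod_mat = np.linalg.inv(np.sqrt(np.diag(off_diag_sums)))
scaled = mod_mat.dot(weights).dot(mod_mat)
print(scaled)              # [[0. 1.], [1. 0.]] -- still not PSD
```

The result confirms the TODO in the docstring: symmetric rescaling preserves the zero diagonal, so the off-diagonal mass still produces a negative eigenvalue.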
def safe_indicator(self, indicator, errors='strict'):
    """Indicator encode value for safe HTTP request.

    Args:
        indicator (string): Indicator to URL Encode
        errors (string): The error handler type.

    Returns:
        (string): The urlencoded string
    """
    if indicator is not None:
        try:
            indicator = quote(self.s(str(indicator), errors=errors), safe='~')
        except KeyError:
            indicator = quote(bytes(indicator), safe='~')
    return indicator
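The behavior this method leans on comes from the standard library: `urllib.parse.quote` never encodes ASCII letters, digits, or `_.-~`, and the `safe='~'` argument replaces the default `safe='/'`, so slashes and every other reserved character get percent-encoded. A quick sketch:

```python
from urllib.parse import quote

# Dots and digits are unreserved, so an IP indicator passes through unchanged.
print(quote("1.1.1.1", safe='~'))             # -> 1.1.1.1

# Spaces, slashes, '?' and '=' are all percent-encoded because safe='~'
# drops the default exemption for '/'.
print(quote("bad guy/payload?x=1", safe='~'))  # -> bad%20guy%2Fpayload%3Fx%3D1
```

Encoding `/` matters here because indicators are placed into URL path segments, where a literal slash would change the route.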
def _effective_view_filter(self):
    """Returns the mongodb relationship filter for effective views"""
    if self._effective_view == EFFECTIVE:
        now = datetime.datetime.utcnow()
        # MongoDB comparison operators are spelled $lte / $gte
        return {'startDate': {'$lte': now}, 'endDate': {'$gte': now}}
    return {}
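The filter shape is a standard MongoDB date-range test: a record is "effective" when its validity window contains the current moment. A self-contained sketch of the same idea (the `EFFECTIVE` sentinel here is illustrative; the real constant is defined elsewhere):

```python
import datetime

EFFECTIVE = 0  # illustrative sentinel standing in for the real constant


def effective_view_filter(effective_view):
    """Build the same filter shape as the method above, using the
    standard MongoDB $lte / $gte operator spelling."""
    if effective_view == EFFECTIVE:
        now = datetime.datetime.utcnow()
        return {'startDate': {'$lte': now}, 'endDate': {'$gte': now}}
    return {}


flt = effective_view_filter(EFFECTIVE)
print(sorted(flt))               # -> ['endDate', 'startDate']
print(effective_view_filter(1))  # -> {} (an "any status" view applies no filter)
```

Merged into a `find()` query, the two clauses select only documents whose `startDate` is in the past and whose `endDate` is in the future.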
def run(self, host: str = "0.0.0.0", port: int = 5000) -> None:
    """
    debug run

    :param host: the hostname to listen on, default is ``'0.0.0.0'``
    :param port: the port of the server, default is ``5000``
    """
    loop = cast(asyncio.AbstractEventLoop, self._loop)
    listen = self.listen(host=host, port=port)
    server = loop.run_until_complete(listen)

    def close() -> None:
        """shutdown callback"""
        server.close()
        loop.stop()

    # print(type(server))
    loop.add_signal_handler(SIGTERM, close)
    loop.add_signal_handler(SIGINT, close)
    loop.run_forever()
def could_scope_out(self):
    """
    could bubble up from current scope

    :return:
    """
    return not self.waiting_for or \
        isinstance(self.waiting_for, callable.EndOfStory) or \
        self.is_breaking_a_loop()
def upath(path):
    """
    Always return a unicode path.
    """
    if six.PY2 and not isinstance(path, six.text_type):
        return path.decode(fs_encoding)
    return path
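On Python 3 the `six.PY2` branch is never taken and `upath` is an identity function. The modern standard-library equivalent of the same idea is `os.fsdecode`, which accepts either `bytes` or `str` and always returns `str` decoded with the filesystem encoding:

```python
import os

# os.fsdecode mirrors upath(): bytes paths are decoded with the filesystem
# encoding (with surrogateescape, so it never raises on odd bytes), while
# text paths pass through unchanged.
print(os.fsdecode(b"/tmp/data"))         # -> /tmp/data
print(os.fsdecode("/tmp/already-text"))  # -> /tmp/already-text
print(os.fsdecode(b"/tmp/caf\xc3\xa9"))  # '/tmp/café' under a UTF-8 filesystem encoding
```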
def _prepare_facet_field_spies(self, facets):
    """
    Returns a list of spies based on the facets
    used to count frequencies.
    """
    spies = []
    for facet in facets:
        slot = self.column[facet]
        spy = xapian.ValueCountMatchSpy(slot)
        # add attribute "slot" to know which column this spy is targeting.
        spy.slot = slot
        spies.append(spy)
    return spies
def get_me(self, *args, **kwargs):
    """See :func:`get_me`"""
    return get_me(*args, **self._merge_overrides(**kwargs)).run()
def create_or_update(ctx, model, xmlid, values):
    """ Create or update a record matching xmlid with values """
    if isinstance(model, basestring):
        model = ctx.env[model]
    record = ctx.env.ref(xmlid, raise_if_not_found=False)
    if record:
        record.update(values)
    else:
        record = model.create(values)
        add_xmlid(ctx, record, xmlid)
    return record
def show_context_menu(self, item, mouse_pos=None):
    "Open a popup menu with options regarding the selected object"
    if item:
        d = self.tree.GetItemData(item)
        if d:
            obj = d.GetData()
            if obj:
                # highlight and store the selected object:
                self.highlight(obj.wx_obj)
                self.obj = obj
                # make the context menu
                menu = wx.Menu()
                id_del, id_dup, id_raise, id_lower = [wx.NewId() for i in range(4)]
                menu.Append(id_del, "Delete")
                menu.Append(id_dup, "Duplicate")
                menu.Append(id_raise, "Bring to Front")
                menu.Append(id_lower, "Send to Back")
                # make submenu!
                sm = wx.Menu()
                for ctrl in sorted(obj._meta.valid_children,
                                   key=lambda c: registry.ALL.index(c._meta.name)):
                    new_id = wx.NewId()
                    sm.Append(new_id, ctrl._meta.name)
                    self.Bind(wx.EVT_MENU,
                              lambda evt, ctrl=ctrl: self.add_child(ctrl, mouse_pos),
                              id=new_id)
                menu.AppendMenu(wx.NewId(), "Add child", sm)
                self.Bind(wx.EVT_MENU, self.delete, id=id_del)
                self.Bind(wx.EVT_MENU, self.duplicate, id=id_dup)
                self.Bind(wx.EVT_MENU, self.bring_to_front, id=id_raise)
                self.Bind(wx.EVT_MENU, self.send_to_back, id=id_lower)
                self.PopupMenu(menu)
                menu.Destroy()
    self.load_object(self.root_obj)
def _get_dvportgroup_out_shaping(pg_name, pg_default_port_config):
    '''
    Returns the out shaping policy of a distributed virtual portgroup

    pg_name
        The name of the portgroup

    pg_default_port_config
        The default port config of the portgroup
    '''
    log.trace('Retrieving portgroup\'s \'%s\' out shaping config', pg_name)
    out_shaping_policy = pg_default_port_config.outShapingPolicy
    if not out_shaping_policy:
        return {}
    return {'average_bandwidth': out_shaping_policy.averageBandwidth.value,
            'burst_size': out_shaping_policy.burstSize.value,
            'enabled': out_shaping_policy.enabled.value,
            'peak_bandwidth': out_shaping_policy.peakBandwidth.value}
def set_params(self, arg_params, aux_params, allow_missing=False, force_init=True,
               allow_extra=False):
    """Assigns parameter and aux state values.

    Parameters
    ----------
    arg_params : dict
        Dictionary of name to value (`NDArray`) mapping.
    aux_params : dict
        Dictionary of name to value (`NDArray`) mapping.
    allow_missing : bool
        If ``True``, params could contain missing values, and the initializer will be
        called to fill those missing params.
    force_init : bool
        If ``True``, will force re-initialize even if already initialized.
    allow_extra : boolean, optional
        Whether allow extra parameters that are not needed by symbol.
        If this is True, no error will be thrown when arg_params or aux_params
        contain extra parameters that is not needed by the executor.

    Examples
    --------
    >>> # An example of setting module parameters.
    >>> sym, arg_params, aux_params = mx.model.load_checkpoint(model_prefix, n_epoch_load)
    >>> mod.set_params(arg_params=arg_params, aux_params=aux_params)
    """
    self.init_params(initializer=None, arg_params=arg_params, aux_params=aux_params,
                     allow_missing=allow_missing, force_init=force_init,
                     allow_extra=allow_extra)
def folder_get(self, token, folder_id):
    """
    Get the attributes of the specified folder.

    :param token: A valid token for the user in question.
    :type token: string
    :param folder_id: The id of the requested folder.
    :type folder_id: int | long
    :returns: Dictionary of the folder attributes.
    :rtype: dict
    """
    parameters = dict()
    parameters['token'] = token
    parameters['id'] = folder_id
    response = self.request('midas.folder.get', parameters)
    return response
def logL(self, kwargs_lens, kwargs_ps, kwargs_cosmo):
    """
    routine to compute the log likelihood of the time delay distance

    :param kwargs_lens: lens model kwargs list
    :param kwargs_ps: point source kwargs list
    :param kwargs_cosmo: cosmology and other kwargs
    :return: log likelihood of the model given the time delay data
    """
    x_pos, y_pos = self._pointSource.image_position(kwargs_ps=kwargs_ps, kwargs_lens=kwargs_lens)
    x_pos, y_pos = self._param.real_image_positions(x_pos[0], y_pos[0], kwargs_cosmo)
    x_source, y_source = self._lensModel.ray_shooting(x_pos, y_pos, kwargs_lens)
    delay_arcsec = self._lensModel.fermat_potential(x_pos, y_pos, x_source, y_source, kwargs_lens)
    D_dt_model = kwargs_cosmo['D_dt']
    delay_days = const.delay_arcsec2days(delay_arcsec, D_dt_model)
    logL = self._logL_delays(delay_days, self._delays_measured, self._delays_errors)
    return logL
def shutdown(self, join=True, timeout=None):
    """
    Clean shutdown of the node.

    :param join: optionally wait for the process to end (default : True)
    :return: last exitcode from update method
    """
    if self.interface is not None:
        self.interface.stop()
    return super(PyrosBase, self).shutdown(join, timeout=timeout)
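The pattern above (stop owned resources first, then delegate to the parent's shutdown for the exit code) can be sketched in isolation; the class names here are hypothetical stand-ins, not the actual Pyros classes:

```python
class Base(object):
    def shutdown(self, join=True, timeout=None):
        # parent cleanup; returns a last exit code (0 = clean)
        return 0

class Node(Base):
    def __init__(self, interface=None):
        self.interface = interface

    def shutdown(self, join=True, timeout=None):
        # stop the owned interface before tearing down the node itself
        if self.interface is not None:
            self.interface.stop()
        return super(Node, self).shutdown(join, timeout=timeout)

class FakeInterface(object):
    def __init__(self):
        self.stopped = False
    def stop(self):
        self.stopped = True

iface = FakeInterface()
node = Node(iface)
print(node.shutdown(), iface.stopped)  # 0 True
```

Guarding on `interface is not None` keeps shutdown safe even if the node was never fully started.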
def install(name=None,
            refresh=False,
            skip_verify=False,
            pkgs=None,
            sources=None,
            downloadonly=False,
            reinstall=False,
            normalize=True,
            update_holds=False,
            saltenv='base',
            ignore_epoch=False,
            **kwargs):
    '''
    .. versionchanged:: 2015.8.12,2016.3.3,2016.11.0
        On minions running systemd>=205, `systemd-run(1)`_ is now used to
        isolate commands which modify installed packages from the
        ``salt-minion`` daemon's control group. This is done to keep systemd
        from killing any yum/dnf commands spawned by Salt when the
        ``salt-minion`` service is restarted. (see ``KillMode`` in the
        `systemd.kill(5)`_ manpage for more information). If desired, usage of
        `systemd-run(1)`_ can be suppressed by setting a :mod:`config option
        <salt.modules.config.get>` called ``systemd.scope``, with a value of
        ``False`` (no quotes).

    .. _`systemd-run(1)`: https://www.freedesktop.org/software/systemd/man/systemd-run.html
    .. _`systemd.kill(5)`: https://www.freedesktop.org/software/systemd/man/systemd.kill.html

    Install the passed package(s), add refresh=True to clean the yum database
    before package is installed.

    name
        The name of the package to be installed. Note that this parameter is
        ignored if either "pkgs" or "sources" is passed. Additionally, please
        note that this option can only be used to install packages from a
        software repository. To install a package file manually, use the
        "sources" option.

        32-bit packages can be installed on 64-bit systems by appending the
        architecture designation (``.i686``, ``.i586``, etc.) to the end of
        the package name.

        CLI Example:

        .. code-block:: bash

            salt '*' pkg.install <package name>

    refresh
        Whether or not to update the yum database before executing.

    reinstall
        Specifying reinstall=True will use ``yum reinstall`` rather than
        ``yum install`` for requested packages that are already installed.

        If a version is specified with the requested package, then
        ``yum reinstall`` will only be used if the installed version matches
        the requested version.

        Works with ``sources`` when the package header of the source can be
        matched to the name and version of an installed package.

        .. versionadded:: 2014.7.0

    skip_verify
        Skip the GPG verification check (e.g., ``--nogpgcheck``)

    downloadonly
        Only download the packages, do not install.

    version
        Install a specific version of the package, e.g. 1.2.3-4.el5. Ignored
        if "pkgs" or "sources" is passed.

        .. versionchanged:: 2018.3.0
            version can now contain comparison operators (e.g. ``>1.2.3``,
            ``<=2.0``, etc.)

    update_holds : False
        If ``True``, and this function would update the package version, any
        packages held using the yum/dnf "versionlock" plugin will be unheld so
        that they can be updated. Otherwise, if this function attempts to
        update a held package, the held package(s) will be skipped and an
        error will be raised.

        .. versionadded:: 2016.11.0

    setopt
        A comma-separated or Python list of key=value options. This list will
        be expanded and ``--setopt`` prepended to each in the yum/dnf command
        that is run.

        CLI Example:

        .. code-block:: bash

            salt '*' pkg.install foo setopt='obsoletes=0,plugins=0'

        .. versionadded:: 2019.2.0

    Repository Options:

    fromrepo
        Specify a package repository (or repositories) from which to install.
        (e.g., ``yum --disablerepo='*' --enablerepo='somerepo'``)

    enablerepo (ignored if ``fromrepo`` is specified)
        Specify a disabled package repository (or repositories) to enable.
        (e.g., ``yum --enablerepo='somerepo'``)

    disablerepo (ignored if ``fromrepo`` is specified)
        Specify an enabled package repository (or repositories) to disable.
        (e.g., ``yum --disablerepo='somerepo'``)

    disableexcludes
        Disable exclude from main, for a repo or for everything.
        (e.g., ``yum --disableexcludes='main'``)

        .. versionadded:: 2014.7.0

    ignore_epoch : False
        Only used when the version of a package is specified using a
        comparison operator (e.g. ``>4.1``). If set to ``True``, then the
        epoch will be ignored when comparing the currently-installed version
        to the desired version.

        .. versionadded:: 2018.3.0

    Multiple Package Installation Options:

    pkgs
        A list of packages to install from a software repository. Must be
        passed as a python list. A specific version number can be specified
        by using a single-element dict representing the package and its
        version.

        CLI Examples:

        .. code-block:: bash

            salt '*' pkg.install pkgs='["foo", "bar"]'
            salt '*' pkg.install pkgs='["foo", {"bar": "1.2.3-4.el5"}]'

    sources
        A list of RPM packages to install. Must be passed as a list of dicts,
        with the keys being package names, and the values being the source URI
        or local path to the package.

        CLI Example:

        .. code-block:: bash

            salt '*' pkg.install sources='[{"foo": "salt://foo.rpm"}, {"bar": "salt://bar.rpm"}]'

    normalize : True
        Normalize the package name by removing the architecture. This is
        useful for poorly created packages which might include the
        architecture as an actual part of the name such as kernel modules
        which match a specific kernel version.

        .. code-block:: bash

            salt -G role:nsd pkg.install gpfs.gplbin-2.6.32-279.31.1.el6.x86_64 normalize=False

        .. versionadded:: 2014.7.0

    diff_attr:
        If a list of package attributes is specified, returned value will
        contain them, eg.::

            {'<package>': {
                'old': {
                    'version': '<old-version>',
                    'arch': '<old-arch>'},
                'new': {
                    'version': '<new-version>',
                    'arch': '<new-arch>'}}}

        Valid attributes are: ``epoch``, ``version``, ``release``, ``arch``,
        ``install_date``, ``install_date_time_t``.

        If ``all`` is specified, all valid attributes will be returned.

        .. versionadded:: 2018.3.0

    Returns a dict containing the new package names and versions::

        {'<package>': {'old': '<old-version>',
                       'new': '<new-version>'}}

    If an attribute list in diff_attr is specified, the dict will also contain
    any specified attribute, eg.::

        {'<package>': {
            'old': {
                'version': '<old-version>',
                'arch': '<old-arch>'},
            'new': {
                'version': '<new-version>',
                'arch': '<new-arch>'}}}
    '''
    options = _get_options(**kwargs)

    if salt.utils.data.is_true(refresh):
        refresh_db(**kwargs)
    reinstall = salt.utils.data.is_true(reinstall)

    try:
        pkg_params, pkg_type = __salt__['pkg_resource.parse_targets'](
            name, pkgs, sources, saltenv=saltenv, normalize=normalize, **kwargs
        )
    except MinionError as exc:
        raise CommandExecutionError(exc)

    if not pkg_params:
        return {}

    version_num = kwargs.get('version')

    diff_attr = kwargs.get('diff_attr')
    old = list_pkgs(versions_as_list=False, attr=diff_attr) if not downloadonly else list_downloaded()
    # Use of __context__ means no duplicate work here, just accessing
    # information already in __context__ from the previous call to list_pkgs()
    old_as_list = list_pkgs(versions_as_list=True) if not downloadonly else list_downloaded()

    to_install = []
    to_downgrade = []
    to_reinstall = []
    _available = {}
    # The above three lists will be populated with tuples containing the
    # package name and the string being used for this particular package
    # modification. The reason for this method is that the string we use for
    # installation, downgrading, or reinstallation will be different than the
    # package name in a couple cases:
    #
    # 1) A specific version is being targeted. In this case the string being
    #    passed to install/downgrade/reinstall will contain the version
    #    information after the package name.
    # 2) A binary package is being installed via the "sources" param. In this
    #    case the string being passed will be the path to the local copy of
    #    the package in the minion cachedir.
    #
    # The reason that we need both items is to be able to modify the installed
    # version of held packages.
    if pkg_type == 'repository':
        has_wildcards = []
        has_comparison = []
        for pkgname, pkgver in six.iteritems(pkg_params):
            try:
                if '*' in pkgver:
                    has_wildcards.append(pkgname)
                elif pkgver.startswith('<') or pkgver.startswith('>'):
                    has_comparison.append(pkgname)
            except (TypeError, ValueError):
                continue
        _available = AvailablePackages(
            *has_wildcards + has_comparison,
            byrepo=False,
            **kwargs)
        pkg_params_items = six.iteritems(pkg_params)
    elif pkg_type == 'advisory':
        pkg_params_items = []
        cur_patches = list_patches()
        for advisory_id in pkg_params:
            if advisory_id not in cur_patches:
                raise CommandExecutionError(
                    'Advisory id "{0}" not found'.format(advisory_id)
                )
            else:
                pkg_params_items.append(advisory_id)
    else:
        pkg_params_items = []
        for pkg_source in pkg_params:
            if 'lowpkg.bin_pkg_info' in __salt__:
                rpm_info = __salt__['lowpkg.bin_pkg_info'](pkg_source)
            else:
                rpm_info = None
            if rpm_info is None:
                log.error(
                    'pkg.install: Unable to get rpm information for %s. '
                    'Version comparisons will be unavailable, and return '
                    'data may be inaccurate if reinstall=True.', pkg_source
                )
                pkg_params_items.append([pkg_source])
            else:
                pkg_params_items.append(
                    [rpm_info['name'], pkg_source, rpm_info['version']]
                )

    errors = []
    for pkg_item_list in pkg_params_items:
        if pkg_type == 'repository':
            pkgname, version_num = pkg_item_list
        elif pkg_type == 'advisory':
            pkgname = pkg_item_list
            version_num = None
        else:
            try:
                pkgname, pkgpath, version_num = pkg_item_list
            except ValueError:
                pkgname = None
                pkgpath = pkg_item_list[0]
                version_num = None

        if version_num is None:
            if pkg_type == 'repository':
                if reinstall and pkgname in old:
                    to_reinstall.append((pkgname, pkgname))
                else:
                    to_install.append((pkgname, pkgname))
            elif pkg_type == 'advisory':
                to_install.append((pkgname, pkgname))
            else:
                to_install.append((pkgname, pkgpath))
        else:
            # If we are installing a package file and not one from the repo,
            # and version_num is not None, then we can assume that pkgname is
            # not None, since the only way version_num is not None is if RPM
            # metadata parsing was successful.
            if pkg_type == 'repository':
                # yum/dnf does not support comparison operators. If the
                # version starts with an equals sign, ignore it.
                version_num = version_num.lstrip('=')
                if pkgname in has_comparison:
                    candidates = _available.get(pkgname, [])
                    target = salt.utils.pkg.match_version(
                        version_num,
                        candidates,
                        cmp_func=version_cmp,
                        ignore_epoch=ignore_epoch,
                    )
                    if target is None:
                        errors.append(
                            'No version matching \'{0}{1}\' could be found '
                            '(available: {2})'.format(
                                pkgname,
                                version_num,
                                ', '.join(candidates) if candidates else None
                            )
                        )
                        continue
                    else:
                        version_num = target
                if _yum() == 'yum':
                    # yum install does not support epoch without the arch, and
                    # we won't know what the arch will be when it's not
                    # provided. It could either be the OS architecture, or
                    # 'noarch', and we don't make that distinction in the
                    # pkg.list_pkgs return data.
                    if ignore_epoch is True:
                        version_num = version_num.split(':', 1)[-1]
                arch = ''
                try:
                    namepart, archpart = pkgname.rsplit('.', 1)
                except ValueError:
                    pass
                else:
                    if archpart in salt.utils.pkg.rpm.ARCHES:
                        arch = '.' + archpart
                        pkgname = namepart

                if '*' in version_num:
                    # Resolve wildcard matches
                    candidates = _available.get(pkgname, [])
                    match = salt.utils.itertools.fnmatch_multiple(candidates, version_num)
                    if match is not None:
                        version_num = match
                    else:
                        errors.append(
                            'No version matching \'{0}\' found for package '
                            '\'{1}\' (available: {2})'.format(
                                version_num,
                                pkgname,
                                ', '.join(candidates) if candidates else 'none'
                            )
                        )
                        continue

                if ignore_epoch is True:
                    pkgstr = '{0}-{1}{2}'.format(pkgname, version_num, arch)
                else:
                    pkgstr = '{0}-{1}{2}'.format(pkgname, version_num.split(':', 1)[-1], arch)
            else:
                pkgstr = pkgpath

            # Lambda to trim the epoch from the currently-installed version if
            # no epoch is specified in the specified version
            cver = old_as_list.get(pkgname, [])
            if reinstall and cver:
                for ver in cver:
                    if salt.utils.versions.compare(ver1=version_num,
                                                   oper='==',
                                                   ver2=ver,
                                                   cmp_func=version_cmp,
                                                   ignore_epoch=ignore_epoch):
                        # This version is already installed, so we need to
                        # reinstall.
                        to_reinstall.append((pkgname, pkgstr))
                        break
            else:
                if not cver:
                    to_install.append((pkgname, pkgstr))
                else:
                    for ver in cver:
                        if salt.utils.versions.compare(ver1=version_num,
                                                       oper='>=',
                                                       ver2=ver,
                                                       cmp_func=version_cmp,
                                                       ignore_epoch=ignore_epoch):
                            to_install.append((pkgname, pkgstr))
                            break
                    else:
                        if pkgname is not None:
                            if re.match('^kernel(|-devel)$', pkgname):
                                # kernel and kernel-devel support multiple
                                # installs as their paths do not conflict.
                                # Performing a yum/dnf downgrade will be a
                                # no-op so just do an install instead. It will
                                # fail if there are other interdependencies
                                # that have conflicts, and that's OK. We don't
                                # want to force anything, we just want to
                                # properly handle it if someone tries to
                                # install a kernel/kernel-devel of a lower
                                # version than the currently-installed one.
                                # TODO: find a better way to determine if a
                                # package supports multiple installs.
                                to_install.append((pkgname, pkgstr))
                            else:
                                # None of the currently-installed versions are
                                # greater than the specified version, so this
                                # is a downgrade.
                                to_downgrade.append((pkgname, pkgstr))

    def _add_common_args(cmd):
        '''
        DRY function to add args common to all yum/dnf commands
        '''
        cmd.extend(options)
        if skip_verify:
            cmd.append('--nogpgcheck')
        if downloadonly:
            cmd.append('--downloadonly')

    try:
        holds = list_holds(full=False)
    except SaltInvocationError:
        holds = []
        log.debug(
            'Failed to get holds, versionlock plugin is probably not '
            'installed'
        )
    unhold_prevented = []

    @contextlib.contextmanager
    def _temporarily_unhold(pkgs, targets):
        '''
        Temporarily unhold packages that need to be updated. Add any
        successfully-removed ones (and any packages not in the list of current
        holds) to the list of targets.
        '''
        to_unhold = {}
        for pkgname, pkgstr in pkgs:
            if pkgname in holds:
                if update_holds:
                    to_unhold[pkgname] = pkgstr
                else:
                    unhold_prevented.append(pkgname)
            else:
                targets.append(pkgstr)

        if not to_unhold:
            yield
        else:
            log.debug('Unholding packages: %s', ', '.join(to_unhold))
            try:
                # Using list() here for python3 compatibility, dict.keys() no
                # longer returns a list in python3.
                unhold_names = list(to_unhold.keys())
                for unheld_pkg, outcome in \
                        six.iteritems(unhold(pkgs=unhold_names)):
                    if outcome['result']:
                        # Package was successfully unheld, add to targets
                        targets.append(to_unhold[unheld_pkg])
                    else:
                        # Failed to unhold package
                        errors.append(unheld_pkg)
                yield
            except Exception as exc:
                errors.append(
                    'Error encountered unholding packages {0}: {1}'
                    .format(', '.join(to_unhold), exc)
                )
            finally:
                hold(pkgs=unhold_names)

    targets = []
    with _temporarily_unhold(to_install, targets):
        if targets:
            if pkg_type == 'advisory':
                targets = ["--advisory={0}".format(t) for t in targets]
            cmd = ['-y']
            if _yum() == 'dnf':
                cmd.extend(['--best', '--allowerasing'])
            _add_common_args(cmd)
            cmd.append('install' if pkg_type != 'advisory' else 'update')
            cmd.extend(targets)
            out = _call_yum(cmd, ignore_retcode=False, redirect_stderr=True)
            if out['retcode'] != 0:
                errors.append(out['stdout'])

    targets = []
    with _temporarily_unhold(to_downgrade, targets):
        if targets:
            cmd = ['-y']
            _add_common_args(cmd)
            cmd.append('downgrade')
            cmd.extend(targets)
            out = _call_yum(cmd)
            if out['retcode'] != 0:
                errors.append(out['stdout'])

    targets = []
    with _temporarily_unhold(to_reinstall, targets):
        if targets:
            cmd = ['-y']
            _add_common_args(cmd)
            cmd.append('reinstall')
            cmd.extend(targets)
            out = _call_yum(cmd)
            if out['retcode'] != 0:
                errors.append(out['stdout'])

    __context__.pop('pkg.list_pkgs', None)
    new = list_pkgs(versions_as_list=False, attr=diff_attr) if not downloadonly else list_downloaded()

    ret = salt.utils.data.compare_dicts(old, new)

    for pkgname, _ in to_reinstall:
        if pkgname not in ret or pkgname in old:
            ret.update({pkgname: {'old': old.get(pkgname, ''),
                                  'new': new.get(pkgname, '')}})

    if unhold_prevented:
        errors.append(
            'The following package(s) could not be updated because they are '
            'being held: {0}. Set \'update_holds\' to True to temporarily '
            'unhold these packages so that they can be updated.'.format(
                ', '.join(unhold_prevented)
            )
        )

    if errors:
        raise CommandExecutionError(
            'Error occurred installing{0} package(s)'.format(
                '/reinstalling' if to_reinstall else ''
            ),
            info={'errors': errors, 'changes': ret}
        )

    return ret
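One of the trickier branches above resolves a version spec containing a comparison operator (e.g. ``>=1.2.3``) against the available candidate versions via ``salt.utils.pkg.match_version``. A simplified, self-contained sketch of that kind of resolution; the naive dotted-version parsing here is a stand-in for real RPM version comparison, and the function names are illustrative, not Salt's:

```python
import operator
import re

_OPS = {'<': operator.lt, '<=': operator.le,
        '>': operator.gt, '>=': operator.ge, '=': operator.eq}

def _ver_tuple(ver):
    # naive dotted-version parsing; real RPM comparison also handles
    # epochs, release tags, and alphanumeric segments
    return tuple(int(p) for p in re.findall(r'\d+', ver))

def match_version(spec, candidates):
    # split the spec into operator and target version; bare versions
    # are treated as exact matches
    m = re.match(r'(<=|>=|<|>|=)?\s*(.+)', spec)
    op = _OPS[m.group(1) or '=']
    target = _ver_tuple(m.group(2))
    # return the highest candidate satisfying the spec, or None
    matches = [c for c in candidates if op(_ver_tuple(c), target)]
    return max(matches, key=_ver_tuple) if matches else None

print(match_version('>=1.2.3', ['1.2.1', '1.2.5', '2.0.0']))  # 2.0.0
```

When no candidate satisfies the spec the function returns ``None``, which corresponds to the "No version matching" error path in the install code.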
# # The reason that we need both items is to be able to modify the installed # version of held packages. if pkg_type == 'repository': has_wildcards = [] has_comparison = [] for pkgname, pkgver in six.iteritems(pkg_params): try: if '*' in pkgver: has_wildcards.append(pkgname) elif pkgver.startswith('<') or pkgver.startswith('>'): has_comparison.append(pkgname) except (TypeError, ValueError): continue _available = AvailablePackages( *has_wildcards + has_comparison, byrepo=False, **kwargs) pkg_params_items = six.iteritems(pkg_params) elif pkg_type == 'advisory': pkg_params_items = [] cur_patches = list_patches() for advisory_id in pkg_params: if advisory_id not in cur_patches: raise CommandExecutionError( 'Advisory id "{0}" not found'.format(advisory_id) ) else: pkg_params_items.append(advisory_id) else: pkg_params_items = [] for pkg_source in pkg_params: if 'lowpkg.bin_pkg_info' in __salt__: rpm_info = __salt__['lowpkg.bin_pkg_info'](pkg_source) else: rpm_info = None if rpm_info is None: log.error( 'pkg.install: Unable to get rpm information for %s. 
' 'Version comparisons will be unavailable, and return ' 'data may be inaccurate if reinstall=True.', pkg_source ) pkg_params_items.append([pkg_source]) else: pkg_params_items.append( [rpm_info['name'], pkg_source, rpm_info['version']] ) errors = [] for pkg_item_list in pkg_params_items: if pkg_type == 'repository': pkgname, version_num = pkg_item_list elif pkg_type == 'advisory': pkgname = pkg_item_list version_num = None else: try: pkgname, pkgpath, version_num = pkg_item_list except ValueError: pkgname = None pkgpath = pkg_item_list[0] version_num = None if version_num is None: if pkg_type == 'repository': if reinstall and pkgname in old: to_reinstall.append((pkgname, pkgname)) else: to_install.append((pkgname, pkgname)) elif pkg_type == 'advisory': to_install.append((pkgname, pkgname)) else: to_install.append((pkgname, pkgpath)) else: # If we are installing a package file and not one from the repo, # and version_num is not None, then we can assume that pkgname is # not None, since the only way version_num is not None is if RPM # metadata parsing was successful. if pkg_type == 'repository': # yum/dnf does not support comparison operators. If the version # starts with an equals sign, ignore it. version_num = version_num.lstrip('=') if pkgname in has_comparison: candidates = _available.get(pkgname, []) target = salt.utils.pkg.match_version( version_num, candidates, cmp_func=version_cmp, ignore_epoch=ignore_epoch, ) if target is None: errors.append( 'No version matching \'{0}{1}\' could be found ' '(available: {2})'.format( pkgname, version_num, ', '.join(candidates) if candidates else None ) ) continue else: version_num = target if _yum() == 'yum': # yum install does not support epoch without the arch, and # we won't know what the arch will be when it's not # provided. It could either be the OS architecture, or # 'noarch', and we don't make that distinction in the # pkg.list_pkgs return data. 
if ignore_epoch is True: version_num = version_num.split(':', 1)[-1] arch = '' try: namepart, archpart = pkgname.rsplit('.', 1) except ValueError: pass else: if archpart in salt.utils.pkg.rpm.ARCHES: arch = '.' + archpart pkgname = namepart if '*' in version_num: # Resolve wildcard matches candidates = _available.get(pkgname, []) match = salt.utils.itertools.fnmatch_multiple(candidates, version_num) if match is not None: version_num = match else: errors.append( 'No version matching \'{0}\' found for package ' '\'{1}\' (available: {2})'.format( version_num, pkgname, ', '.join(candidates) if candidates else 'none' ) ) continue if ignore_epoch is True: pkgstr = '{0}-{1}{2}'.format(pkgname, version_num, arch) else: pkgstr = '{0}-{1}{2}'.format(pkgname, version_num.split(':', 1)[-1], arch) else: pkgstr = pkgpath # Lambda to trim the epoch from the currently-installed version if # no epoch is specified in the specified version cver = old_as_list.get(pkgname, []) if reinstall and cver: for ver in cver: if salt.utils.versions.compare(ver1=version_num, oper='==', ver2=ver, cmp_func=version_cmp, ignore_epoch=ignore_epoch): # This version is already installed, so we need to # reinstall. to_reinstall.append((pkgname, pkgstr)) break else: if not cver: to_install.append((pkgname, pkgstr)) else: for ver in cver: if salt.utils.versions.compare(ver1=version_num, oper='>=', ver2=ver, cmp_func=version_cmp, ignore_epoch=ignore_epoch): to_install.append((pkgname, pkgstr)) break else: if pkgname is not None: if re.match('^kernel(|-devel)$', pkgname): # kernel and kernel-devel support multiple # installs as their paths do not conflict. # Performing a yum/dnf downgrade will be a # no-op so just do an install instead. It will # fail if there are other interdependencies # that have conflicts, and that's OK. We don't # want to force anything, we just want to # properly handle it if someone tries to # install a kernel/kernel-devel of a lower # version than the currently-installed one. 
# TODO: find a better way to determine if a # package supports multiple installs. to_install.append((pkgname, pkgstr)) else: # None of the currently-installed versions are # greater than the specified version, so this # is a downgrade. to_downgrade.append((pkgname, pkgstr)) def _add_common_args(cmd): ''' DRY function to add args common to all yum/dnf commands ''' cmd.extend(options) if skip_verify: cmd.append('--nogpgcheck') if downloadonly: cmd.append('--downloadonly') try: holds = list_holds(full=False) except SaltInvocationError: holds = [] log.debug( 'Failed to get holds, versionlock plugin is probably not ' 'installed' ) unhold_prevented = [] @contextlib.contextmanager def _temporarily_unhold(pkgs, targets): ''' Temporarily unhold packages that need to be updated. Add any successfully-removed ones (and any packages not in the list of current holds) to the list of targets. ''' to_unhold = {} for pkgname, pkgstr in pkgs: if pkgname in holds: if update_holds: to_unhold[pkgname] = pkgstr else: unhold_prevented.append(pkgname) else: targets.append(pkgstr) if not to_unhold: yield else: log.debug('Unholding packages: %s', ', '.join(to_unhold)) try: # Using list() here for python3 compatibility, dict.keys() no # longer returns a list in python3. 
unhold_names = list(to_unhold.keys()) for unheld_pkg, outcome in \ six.iteritems(unhold(pkgs=unhold_names)): if outcome['result']: # Package was successfully unheld, add to targets targets.append(to_unhold[unheld_pkg]) else: # Failed to unhold package errors.append(unheld_pkg) yield except Exception as exc: errors.append( 'Error encountered unholding packages {0}: {1}' .format(', '.join(to_unhold), exc) ) finally: hold(pkgs=unhold_names) targets = [] with _temporarily_unhold(to_install, targets): if targets: if pkg_type == 'advisory': targets = ["--advisory={0}".format(t) for t in targets] cmd = ['-y'] if _yum() == 'dnf': cmd.extend(['--best', '--allowerasing']) _add_common_args(cmd) cmd.append('install' if pkg_type != 'advisory' else 'update') cmd.extend(targets) out = _call_yum(cmd, ignore_retcode=False, redirect_stderr=True) if out['retcode'] != 0: errors.append(out['stdout']) targets = [] with _temporarily_unhold(to_downgrade, targets): if targets: cmd = ['-y'] _add_common_args(cmd) cmd.append('downgrade') cmd.extend(targets) out = _call_yum(cmd) if out['retcode'] != 0: errors.append(out['stdout']) targets = [] with _temporarily_unhold(to_reinstall, targets): if targets: cmd = ['-y'] _add_common_args(cmd) cmd.append('reinstall') cmd.extend(targets) out = _call_yum(cmd) if out['retcode'] != 0: errors.append(out['stdout']) __context__.pop('pkg.list_pkgs', None) new = list_pkgs(versions_as_list=False, attr=diff_attr) if not downloadonly else list_downloaded() ret = salt.utils.data.compare_dicts(old, new) for pkgname, _ in to_reinstall: if pkgname not in ret or pkgname in old: ret.update({pkgname: {'old': old.get(pkgname, ''), 'new': new.get(pkgname, '')}}) if unhold_prevented: errors.append( 'The following package(s) could not be updated because they are ' 'being held: {0}. 
Set \'update_holds\' to True to temporarily ' 'unhold these packages so that they can be updated.'.format( ', '.join(unhold_prevented) ) ) if errors: raise CommandExecutionError( 'Error occurred installing{0} package(s)'.format( '/reinstalling' if to_reinstall else '' ), info={'errors': errors, 'changes': ret} ) return ret
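The `_temporarily_unhold` context manager above brackets each yum/dnf invocation so that versionlock holds are released only for the duration of the transaction. A minimal sketch of that bracket, with the hold/unhold operations passed in as hypothetical callables (not Salt's actual `pkg.hold`/`pkg.unhold` signatures):

```python
import contextlib

@contextlib.contextmanager
def temporarily_unhold(held, pkgs, unhold, hold):
    # Release any versionlock holds covering `pkgs`, yield to the caller,
    # then re-apply the holds even if the transaction raises.
    to_unhold = [p for p in pkgs if p in held]
    if not to_unhold:
        yield
        return
    unhold(to_unhold)
    try:
        yield
    finally:
        hold(to_unhold)

calls = []
with temporarily_unhold({'kernel'}, ['kernel', 'vim'],
                        unhold=lambda p: calls.append(('unhold', tuple(p))),
                        hold=lambda p: calls.append(('hold', tuple(p)))):
    calls.append(('work',))
```

The `finally` clause is the important part: the hold is restored even when the install/downgrade/reinstall inside the `with` block fails.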
def compose(*funcs): """ chained function composition wrapper creates function f, where f(x) = arg0(arg1(arg2(...argN(x)))) if *funcs is empty, an identity function is returned. Args: *funcs: list of functions to chain Returns: a new function composed of chained calls to *args """ if not funcs: return lambda *args: args[0] if args else None if len(funcs) == 1: return funcs[0] last = funcs[-1] rest = funcs[0:-1] return lambda *args: reduce(lambda ax, func: func(ax), reversed(rest), last(*args))
chained function composition wrapper creates function f, where f(x) = arg0(arg1(arg2(...argN(x)))) if *funcs is empty, an identity function is returned. Args: *funcs: list of functions to chain Returns: a new function composed of chained calls to *args
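The composition order the docstring describes can be checked with a short sketch. Note that on Python 3 the `reduce` used in the body must be imported from `functools` (it is a builtin only on Python 2):

```python
from functools import reduce  # required on Python 3

def compose(*funcs):
    # f = compose(a, b, c) behaves as f(x) == a(b(c(x)))
    if not funcs:
        return lambda *args: args[0] if args else None
    if len(funcs) == 1:
        return funcs[0]
    last, rest = funcs[-1], funcs[:-1]
    return lambda *args: reduce(lambda ax, func: func(ax), reversed(rest), last(*args))

double = lambda x: x * 2
inc = lambda x: x + 1
f = compose(double, inc)
print(f(3))  # double(inc(3)) -> 8
```

The innermost function (`funcs[-1]`) is the only one that receives the original `*args`; every function to its left receives a single value.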
def encrypt_file(self, path, output_path=None, overwrite=False, enable_verbose=True): """ Encrypt a file using rsa. RSA for big file encryption is very slow. For big file, I recommend to use symmetric encryption and use RSA to encrypt the password. """ path, output_path = files.process_dst_overwrite_args( src=path, dst=output_path, overwrite=overwrite, src_to_dst_func=files.get_encrpyted_path, ) with open(path, "rb") as infile, open(output_path, "wb") as outfile: encrypt_bigfile(infile, outfile, self.his_pubkey)
Encrypt a file using rsa. RSA for big file encryption is very slow. For big file, I recommend to use symmetric encryption and use RSA to encrypt the password.
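The hybrid scheme the note recommends (a symmetric cipher for the payload, RSA only for the key) can be sketched abstractly. The `rsa_encrypt` and `sym_encrypt` callables here are hypothetical stand-ins for real cipher primitives; the XOR "cipher" below is a toy to show the data flow only, not a real cryptosystem:

```python
import os

def hybrid_encrypt(data, rsa_encrypt, sym_encrypt, key_size=32):
    # Envelope encryption: a fresh random symmetric key encrypts the
    # (possibly huge) payload, and RSA encrypts only the small
    # fixed-size key.
    key = os.urandom(key_size)
    return rsa_encrypt(key), sym_encrypt(key, data)

# Toy stand-ins (NOT real ciphers), just to exercise the shape:
xor = lambda k, d: bytes(b ^ k[i % len(k)] for i, b in enumerate(d))
wrapped_key, blob = hybrid_encrypt(b'big payload',
                                   rsa_encrypt=lambda k: b'RSA:' + k,
                                   sym_encrypt=xor)
```

Because only the fixed-size key passes through RSA, the cost of the asymmetric operation is constant regardless of how large the file is.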
def find_duplicates_in_dirs(directories, exclude_dirs=None, exclude_files=None, follow_dirlinks=False): """Recursively scan a list of directories, looking for duplicate files. `exclude_dirs`, if provided, should be a list of glob patterns. Subdirectories whose names match these patterns are excluded from the scan. `exclude_files`, if provided, should be a list of glob patterns. Files whose names match these patterns are excluded from the scan. ``follow_dirlinks`` controls whether to follow symbolic links to subdirectories while crawling. Returns a 2-tuple of two values: ``(duplicate_groups, errors)``. `duplicate_groups` is a (possibly empty) list of lists: the names of files that have at least two copies, grouped together. `errors` is a list of error messages that occurred. If empty, there were no errors. For example, assuming ``./a1`` and ``/dir1/a2`` are identical, ``/dir1/c1`` and ``/dir2/c2`` are identical, ``/dir2/b`` is different from all others, that any subdirectories called ``tmp`` should not be scanned, and that files ending in ``.bak`` should be ignored: >>> dups, errs = find_duplicates_in_dirs(['.', '/dir1', '/dir2'], ['tmp'], ['*.bak']) >>> dups [['./a1', '/dir1/a2'], ['/dir1/c1', '/dir2/c2']] >>> errs [] """ if exclude_dirs is None: exclude_dirs = [] if exclude_files is None: exclude_files = [] errors_in_total = [] files_by_size = {} # First, group all files by size for directory in directories: sub_errors = index_files_by_size(directory, files_by_size, exclude_dirs, exclude_files, follow_dirlinks) errors_in_total += sub_errors all_duplicates = [] # Now, within each file size, check for duplicates. # # We use an iterator over the dict (which gives us the keys), instead # of explicitly accessing dict.keys(). On Python 2, dict.keys() returns # a list copy of the keys, which may be very large. 
for size in iter(files_by_size): # for large file sizes, divide them further into groups by matching # initial portion; how much of the file is used to match depends on # the file size if size >= PARTIAL_MD5_THRESHOLD: partial_size = min(round_up_to_mult(size // PARTIAL_MD5_READ_RATIO, PARTIAL_MD5_READ_MULT), PARTIAL_MD5_MAX_READ) possible_duplicates_list, sub_errors = find_duplicates(files_by_size[size], partial_size) errors_in_total += sub_errors else: # small file size, group them all together and do full MD5s possible_duplicates_list = [files_by_size[size]] # Do full MD5 scan on suspected duplicates. calculate_md5 (and # therefore find_duplicates) needs to know how many bytes to scan. # We're using the file's size, as per stat(); this is a problem if # the file is growing. We'll only scan up to the size the file had # when we indexed. Would be better to somehow tell calculate_md5 to # scan until EOF (e.g. give it a negative size). for possible_duplicates in possible_duplicates_list: duplicates, sub_errors = find_duplicates(possible_duplicates, size) all_duplicates += duplicates errors_in_total += sub_errors return all_duplicates, errors_in_total
Recursively scan a list of directories, looking for duplicate files. `exclude_dirs`, if provided, should be a list of glob patterns. Subdirectories whose names match these patterns are excluded from the scan. `exclude_files`, if provided, should be a list of glob patterns. Files whose names match these patterns are excluded from the scan. ``follow_dirlinks`` controls whether to follow symbolic links to subdirectories while crawling. Returns a 2-tuple of two values: ``(duplicate_groups, errors)``. `duplicate_groups` is a (possibly empty) list of lists: the names of files that have at least two copies, grouped together. `errors` is a list of error messages that occurred. If empty, there were no errors. For example, assuming ``./a1`` and ``/dir1/a2`` are identical, ``/dir1/c1`` and ``/dir2/c2`` are identical, ``/dir2/b`` is different from all others, that any subdirectories called ``tmp`` should not be scanned, and that files ending in ``.bak`` should be ignored: >>> dups, errs = find_duplicates_in_dirs(['.', '/dir1', '/dir2'], ['tmp'], ['*.bak']) >>> dups [['./a1', '/dir1/a2'], ['/dir1/c1', '/dir2/c2']] >>> errs []
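The first pass of the scan described above groups files by size, since only files that share a byte size can possibly be duplicates. A self-contained sketch of that bucketing step (an illustrative helper, not the module's actual `index_files_by_size`):

```python
import os
from collections import defaultdict

def group_by_size(paths):
    # Bucket candidate files by st_size and keep only buckets with at
    # least two members; everything else is discarded before any
    # hashing happens.
    buckets = defaultdict(list)
    for p in paths:
        try:
            buckets[os.stat(p).st_size].append(p)
        except OSError:
            pass  # unreadable/vanished files are skipped here; the real scan collects them as errors
    return {size: files for size, files in buckets.items() if len(files) > 1}
```

The later partial-MD5 and full-MD5 passes then only ever run within one of these same-size buckets.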
def GetServices(): """ Obtains all the connected eDNA services. :return: A pandas DataFrame of connected eDNA services in the form [Name, Description, Type, Status] """ # Define all required variables in the correct ctypes format pulKey = c_ulong(0) szType = c_char_p("".encode('utf-8')) szStartSvcName = c_char_p("".encode('utf-8')) szSvcName, szSvcDesc = create_string_buffer(30), create_string_buffer(90) szSvcType, szSvcStat = create_string_buffer(30), create_string_buffer(30) szSvcName2, szSvcDesc2 = create_string_buffer(30), create_string_buffer(90) szSvcType2, szSvcStat2 = create_string_buffer(30), create_string_buffer(30) nSvcName, nSvcDesc = c_ushort(30), c_ushort(90) nSvcType, nSvcStat = c_ushort(30), c_ushort(30) # Call the eDNA function. nRet is zero if the function is successful. services = [] nRet = dna_dll.DnaGetServiceEntry(szType, szStartSvcName, byref(pulKey), byref(szSvcName), nSvcName, byref(szSvcDesc), nSvcDesc, byref(szSvcType), nSvcType, byref(szSvcStat), nSvcStat) serv = _FormatServices(szSvcName, szSvcDesc, szSvcType, szSvcStat) if serv: services.append(serv) # Iterate across all the returned services while nRet == 0: nRet = dna_dll.DnaGetNextServiceEntry(pulKey, byref(szSvcName2), nSvcName, byref(szSvcDesc2), nSvcDesc, byref(szSvcType2), nSvcType, byref(szSvcStat2), nSvcStat) # We want to ensure only UTF-8 characters are returned. Ignoring # characters is slightly unsafe, but they should only occur in the # units or description, so it's not a huge issue. serv = _FormatServices(szSvcName2, szSvcDesc2, szSvcType2, szSvcStat2) if serv: services.append(serv) # If no results were returned, raise a warning df = pd.DataFrame() if services: df = pd.DataFrame(services, columns=["Name", "Description", "Type", "Status"]) else: warnings.warn("WARNING- No connected eDNA services detected. Check " + "your DNASys.ini file and your network connection.") return df
Obtains all the connected eDNA services. :return: A pandas DataFrame of connected eDNA services in the form [Name, Description, Type, Status]
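The `DnaGetServiceEntry` / `DnaGetNextServiceEntry` pair above follows a classic get-first/get-next C API shape. A pure-Python stand-in (hypothetical callables, no ctypes) that shows the control flow — a zero return code means "call again", and empty entries are dropped:

```python
def collect_entries(get_first, get_next):
    # get_first/get_next each return (retcode, entry); retcode 0 means
    # the iteration may continue. Falsy entries are not collected.
    results = []
    ret, entry = get_first()
    if entry:
        results.append(entry)
    while ret == 0:
        ret, entry = get_next()
        if entry:
            results.append(entry)
    return results

feed = iter([(0, 'svc1'), (0, 'svc2'), (1, None)])
out = collect_entries(lambda: (0, 'svc0'), lambda: next(feed))
```

Note that, as in the original, the entry returned alongside the final non-zero return code is still inspected, so a terminating call that carries data is not lost.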
def package_load_instructions(inst_distributions): """Load instructions, displayed in the package notes""" per_package_inst = '' for dist in inst_distributions: if dist.type == 'zip': per_package_inst += dedent( """ # Loading the ZIP Package Zip packages are compressed, so large resources may load faster. import metapack as mp pkg = mp.open_package('{url}') """.format(url=dist.package_url.inner)) elif dist.type == 'csv': per_package_inst += dedent( """ # Loading the CSV Package CSV packages load resources individually, so small resources may load faster. import metapack as mp pkg = mp.open_package('{url}') """.format(url=dist.package_url.inner)) if per_package_inst: return '\n---\n'+per_package_inst else: return ''
Load instructions, displayed in the package notes
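The per-distribution dispatch above can be exercised with stand-in objects. The bare `url` attribute below is illustrative — real metapack distribution objects carry a `package_url` with an `inner` property:

```python
from types import SimpleNamespace

def load_note(dist):
    # Minimal sketch of the type dispatch: zip and csv distributions get
    # a load note, anything else contributes nothing.
    if dist.type == 'zip':
        return "ZIP package: mp.open_package({0!r})".format(dist.url)
    if dist.type == 'csv':
        return "CSV package: mp.open_package({0!r})".format(dist.url)
    return ''

dists = [SimpleNamespace(type='zip', url='http://example.com/p.zip'),
         SimpleNamespace(type='xlsx', url='unused')]
notes = [n for n in (load_note(d) for d in dists) if n]
```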
def _req(self): """ List of required fields for each template. Format is [tmpl_idx, "all"|"any", [req_field_1, req_field_2, ...]]. Partial reimplementation of req computing logic from Anki. We use pystache instead of Anki's custom mustache implementation. The goal is to figure out which fields are "required", i.e. if they are missing then the front side of the note doesn't contain any meaningful content. """ sentinel = 'SeNtInEl' field_names = [field['name'] for field in self.fields] req = [] for template_ord, template in enumerate(self.templates): field_values = {field: sentinel for field in field_names} required_fields = [] for field_ord, field in enumerate(field_names): fvcopy = copy(field_values) fvcopy[field] = '' rendered = pystache.render(template['qfmt'], fvcopy) if sentinel not in rendered: # when this field is missing, there is no meaningful content (no field values) in the question, so this field # is required required_fields.append(field_ord) if required_fields: req.append([template_ord, 'all', required_fields]) continue # there are no required fields, so an "all" is not appropriate, switch to checking for "any" field_values = {field: '' for field in field_names} for field_ord, field in enumerate(field_names): fvcopy = copy(field_values) fvcopy[field] = sentinel rendered = pystache.render(template['qfmt'], fvcopy) if sentinel in rendered: # when this field is present, there is meaningful content in the question required_fields.append(field_ord) if not required_fields: raise Exception( 'Could not compute required fields for this template; please check the formatting of "qfmt": {}'.format( template)) req.append([template_ord, 'any', required_fields]) return req
List of required fields for each template. Format is [tmpl_idx, "all"|"any", [req_field_1, req_field_2, ...]]. Partial reimplementation of req computing logic from Anki. We use pystache instead of Anki's custom mustache implementation. The goal is to figure out which fields are "required", i.e. if they are missing then the front side of the note doesn't contain any meaningful content.
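The sentinel probe described above can be sketched with the pystache rendering replaced by any callable that maps a field dict to a string (so the sketch has no third-party dependency). Unlike the real method, this sketch does not raise when the "any" pass finds nothing:

```python
from copy import copy

SENTINEL = 'SeNtInEl'

def required_fields(render, field_names):
    # Pass 1 ("all"): fill every field with the sentinel, blank one at a
    # time; if blanking a field removes all sentinel text from the front,
    # that field is required.
    full = {f: SENTINEL for f in field_names}
    req = []
    for i, f in enumerate(field_names):
        probe = copy(full)
        probe[f] = ''
        if SENTINEL not in render(probe):
            req.append(i)  # blanking this one field blanked the whole front
    if req:
        return ('all', req)
    # Pass 2 ("any"): start empty, set one field at a time; if setting a
    # field makes sentinel text appear, that field alone yields content.
    empty = {f: '' for f in field_names}
    for i, f in enumerate(field_names):
        probe = copy(empty)
        probe[f] = SENTINEL
        if SENTINEL in render(probe):
            req.append(i)
    return ('any', req)

qfmt = lambda fields: '{Front}'.format(**fields)  # stand-in for a "{{Front}}" template
```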
Below is the the instruction that describes the task: ### Input: List of required fields for each template. Format is [tmpl_idx, "all"|"any", [req_field_1, req_field_2, ...]]. Partial reimplementation of req computing logic from Anki. We use pystache instead of Anki's custom mustache implementation. The goal is to figure out which fields are "required", i.e. if they are missing then the front side of the note doesn't contain any meaningful content. ### Response: def _req(self): """ List of required fields for each template. Format is [tmpl_idx, "all"|"any", [req_field_1, req_field_2, ...]]. Partial reimplementation of req computing logic from Anki. We use pystache instead of Anki's custom mustache implementation. The goal is to figure out which fields are "required", i.e. if they are missing then the front side of the note doesn't contain any meaningful content. """ sentinel = 'SeNtInEl' field_names = [field['name'] for field in self.fields] req = [] for template_ord, template in enumerate(self.templates): field_values = {field: sentinel for field in field_names} required_fields = [] for field_ord, field in enumerate(field_names): fvcopy = copy(field_values) fvcopy[field] = '' rendered = pystache.render(template['qfmt'], fvcopy) if sentinel not in rendered: # when this field is missing, there is no meaningful content (no field values) in the question, so this field # is required required_fields.append(field_ord) if required_fields: req.append([template_ord, 'all', required_fields]) continue # there are no required fields, so an "all" is not appropriate, switch to checking for "any" field_values = {field: '' for field in field_names} for field_ord, field in enumerate(field_names): fvcopy = copy(field_values) fvcopy[field] = sentinel rendered = pystache.render(template['qfmt'], fvcopy) if sentinel in rendered: # when this field is present, there is meaningful content in the question required_fields.append(field_ord) if not required_fields: raise Exception( 'Could not 
compute required fields for this template; please check the formatting of "qfmt": {}'.format( template)) req.append([template_ord, 'any', required_fields]) return req
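The sentinel trick used by `_req` above can be sketched with a toy renderer standing in for pystache; the `{{field}}`-only `render` below is an assumption for illustration, not pystache's or Anki's actual template semantics.

```python
import re
from copy import copy

def render(template, context):
    # Toy stand-in for pystache.render: substitute {{field}} tags only.
    return re.sub(r'\{\{(\w+)\}\}',
                  lambda m: context.get(m.group(1), ''), template)

def required_fields(qfmt, field_names):
    sentinel = 'SeNtInEl'
    field_values = {field: sentinel for field in field_names}
    required = []
    for field_ord, field in enumerate(field_names):
        fvcopy = copy(field_values)
        fvcopy[field] = ''  # blank out just this one field
        if sentinel not in render(qfmt, fvcopy):
            # With this field missing, the front side has no content,
            # so the field is required.
            required.append(field_ord)
    return required

print(required_fields('{{Front}}', ['Front', 'Back']))  # -> [0]
```

A template that shows either field yields no "all"-required fields, which is exactly the case where `_req` falls back to its "any" pass.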
def stats(self, input_filepath):
    '''Display time domain statistical information about the audio
    channels. Audio is passed unmodified through the SoX processing chain.

    Statistics are calculated and displayed for each audio channel.

    Unlike other Transformer methods, this does not modify the transformer
    effects chain. Instead it computes statistics on the output file that
    would be created if the build command were invoked.

    Note: The file is downmixed to mono prior to computation.

    Parameters
    ----------
    input_filepath : str
        Path to input file to compute stats on.

    Returns
    -------
    stats_dict : dict
        Dictionary of statistic names mapped to their values.

    See Also
    --------
    stat, sox.file_info

    '''
    effect_args = ['channels', '1', 'stats']

    _, _, stats_output = self.build(
        input_filepath, None, extra_args=effect_args, return_output=True
    )

    stats_dict = {}
    lines = stats_output.split('\n')
    for line in lines:
        split_line = line.split()
        if len(split_line) == 0:
            continue
        value = split_line[-1]
        key = ' '.join(split_line[:-1])
        stats_dict[key] = value

    return stats_dict
Display time domain statistical information about the audio
channels. Audio is passed unmodified through the SoX processing chain.

Statistics are calculated and displayed for each audio channel.

Unlike other Transformer methods, this does not modify the transformer
effects chain. Instead it computes statistics on the output file that
would be created if the build command were invoked.

Note: The file is downmixed to mono prior to computation.

Parameters
----------
input_filepath : str
    Path to input file to compute stats on.

Returns
-------
stats_dict : dict
    Dictionary of statistic names mapped to their values.

See Also
--------
stat, sox.file_info
Below is the instruction that describes the task:
### Input:
Display time domain statistical information about the audio
channels. Audio is passed unmodified through the SoX processing chain.

Statistics are calculated and displayed for each audio channel.

Unlike other Transformer methods, this does not modify the transformer
effects chain. Instead it computes statistics on the output file that
would be created if the build command were invoked.

Note: The file is downmixed to mono prior to computation.

Parameters
----------
input_filepath : str
    Path to input file to compute stats on.

Returns
-------
stats_dict : dict
    Dictionary of statistic names mapped to their values.

See Also
--------
stat, sox.file_info
### Response:
def stats(self, input_filepath):
    '''Display time domain statistical information about the audio
    channels. Audio is passed unmodified through the SoX processing chain.

    Statistics are calculated and displayed for each audio channel.

    Unlike other Transformer methods, this does not modify the transformer
    effects chain. Instead it computes statistics on the output file that
    would be created if the build command were invoked.

    Note: The file is downmixed to mono prior to computation.

    Parameters
    ----------
    input_filepath : str
        Path to input file to compute stats on.

    Returns
    -------
    stats_dict : dict
        Dictionary of statistic names mapped to their values.

    See Also
    --------
    stat, sox.file_info

    '''
    effect_args = ['channels', '1', 'stats']

    _, _, stats_output = self.build(
        input_filepath, None, extra_args=effect_args, return_output=True
    )

    stats_dict = {}
    lines = stats_output.split('\n')
    for line in lines:
        split_line = line.split()
        if len(split_line) == 0:
            continue
        value = split_line[-1]
        key = ' '.join(split_line[:-1])
        stats_dict[key] = value

    return stats_dict
def GetRosettaResidueMap(self, ConvertMSEToAtom = False, RemoveIncompleteFinalResidues = False, RemoveIncompleteResidues = False): '''Note: This function ignores any DNA.''' raise Exception('This code looks to be deprecated. Use construct_pdb_to_rosetta_residue_map instead.') chain = None sequences = {} residue_map = {} resid_set = set() resid_list = [] DNA_residues = set([' DA', ' DC', ' DG', ' DT']) chains = [] self.RAW_ATOM_SEQUENCE = [] essential_atoms_1 = set(['CA', 'C', 'N'])#, 'O']) essential_atoms_2 = set(['CA', 'C', 'N'])#, 'OG']) current_atoms = set() atoms_read = {} oldchainID = None removed_residue = {} for line in self.lines: if line[0:4] == 'ATOM' or (ConvertMSEToAtom and (line[0:6] == 'HETATM') and (line[17:20] == 'MSE')): chainID = line[21] if missing_chain_ids.get(self.pdb_id): chainID = missing_chain_ids[self.pdb_id] if chainID not in chains: chains.append(chainID) residue_longname = line[17:20] if residue_longname in DNA_residues: # Skip DNA continue if residue_longname == 'UNK': # Skip unknown residues continue if residue_longname not in allowed_PDB_residues_types and not(ConvertMSEToAtom and residue_longname == 'MSE'): if not self.strict: # Skip unknown residues continue else: raise NonCanonicalResidueException("Residue %s encountered: %s" % (line[17:20], line)) else: resid = line[21:27] #print(chainID, residue_longname, resid) #print(line) #print(resid_list) if resid not in resid_set: removed_residue[chainID] = False add_residue = True if current_atoms: if RemoveIncompleteResidues and essential_atoms_1.intersection(current_atoms) != essential_atoms_1 and essential_atoms_2.intersection(current_atoms) != essential_atoms_2: oldChain = resid_list[-1][0] oldResidueID = resid_list[-1][1:] print("The last residue '%s', %s, in chain %s is missing these atoms: %s." 
% (resid_list[-1], residue_longname, oldChain, essential_atoms_1.difference(current_atoms) or essential_atoms_2.difference(current_atoms))) resid_set.remove(resid_list[-1]) #print("".join(resid_list)) resid_list = resid_list[:-1] if oldchainID: removed_residue[oldchainID] = True #print("".join(resid_list)) #print(sequences[oldChain]) if sequences.get(oldChain): sequences[oldChain] = sequences[oldChain][:-1] if residue_map.get(oldChain): residue_map[oldChain] = residue_map[oldChain][:-1] #print(sequences[oldChain] else: assert(not(resid_set)) current_atoms = set() atoms_read[chainID] = set() atoms_read[chainID].add(line[12:15].strip()) resid_set.add(resid) resid_list.append(resid) chainID = line[21] sequences[chainID] = sequences.get(chainID, []) if residue_longname in non_canonical_amino_acids: sequences[chainID].append(non_canonical_amino_acids[residue_longname]) else: sequences[chainID].append(residue_type_3to1_map[residue_longname]) residue_map[chainID] = residue_map.get(chainID, []) if residue_longname in non_canonical_amino_acids: residue_map[chainID].append((resid, non_canonical_amino_acids[residue_longname])) else: residue_map[chainID].append((resid, residue_type_3to1_map[residue_longname])) oldchainID = chainID else: #atoms_read[chainID] = atoms_read.get(chainID, set()) atoms_read[chainID].add(line[12:15].strip()) current_atoms.add(line[12:15].strip()) if RemoveIncompleteFinalResidues: # These are (probably) necessary for Rosetta to keep the residue. Rosetta does throw away residues where only the N atom is present if that residue is at the end of a chain. for chainID, sequence_list in sequences.iteritems(): if not(removed_residue[chainID]): if essential_atoms_1.intersection(atoms_read[chainID]) != essential_atoms_1 and essential_atoms_2.intersection(atoms_read[chainID]) != essential_atoms_2: print("The last residue %s of chain %s is missing these atoms: %s." 
% (sequence_list[-1], chainID, essential_atoms_1.difference(atoms_read[chainID]) or essential_atoms_2.difference(atoms_read[chainID]))) oldResidueID = sequence_list[-1][1:] residue_map[chainID] = residue_map[chainID][0:-1] sequences[chainID] = sequence_list[0:-1] for chainID, sequence_list in sequences.iteritems(): sequences[chainID] = "".join(sequence_list) assert(sequences[chainID] == "".join([res_details[1] for res_details in residue_map[chainID]])) for chainID in chains: for a_acid in sequences.get(chainID, ""): self.RAW_ATOM_SEQUENCE.append((chainID, a_acid)) residue_objects = {} for chainID in residue_map.keys(): residue_objects[chainID] = [] for chainID, residue_list in residue_map.iteritems(): for res_pair in residue_list: resid = res_pair[0] resaa = res_pair[1] assert(resid[0] == chainID) residue_objects[chainID].append((resid[1:].strip(), resaa)) return sequences, residue_objects
Note: This function ignores any DNA.
Below is the the instruction that describes the task: ### Input: Note: This function ignores any DNA. ### Response: def GetRosettaResidueMap(self, ConvertMSEToAtom = False, RemoveIncompleteFinalResidues = False, RemoveIncompleteResidues = False): '''Note: This function ignores any DNA.''' raise Exception('This code looks to be deprecated. Use construct_pdb_to_rosetta_residue_map instead.') chain = None sequences = {} residue_map = {} resid_set = set() resid_list = [] DNA_residues = set([' DA', ' DC', ' DG', ' DT']) chains = [] self.RAW_ATOM_SEQUENCE = [] essential_atoms_1 = set(['CA', 'C', 'N'])#, 'O']) essential_atoms_2 = set(['CA', 'C', 'N'])#, 'OG']) current_atoms = set() atoms_read = {} oldchainID = None removed_residue = {} for line in self.lines: if line[0:4] == 'ATOM' or (ConvertMSEToAtom and (line[0:6] == 'HETATM') and (line[17:20] == 'MSE')): chainID = line[21] if missing_chain_ids.get(self.pdb_id): chainID = missing_chain_ids[self.pdb_id] if chainID not in chains: chains.append(chainID) residue_longname = line[17:20] if residue_longname in DNA_residues: # Skip DNA continue if residue_longname == 'UNK': # Skip unknown residues continue if residue_longname not in allowed_PDB_residues_types and not(ConvertMSEToAtom and residue_longname == 'MSE'): if not self.strict: # Skip unknown residues continue else: raise NonCanonicalResidueException("Residue %s encountered: %s" % (line[17:20], line)) else: resid = line[21:27] #print(chainID, residue_longname, resid) #print(line) #print(resid_list) if resid not in resid_set: removed_residue[chainID] = False add_residue = True if current_atoms: if RemoveIncompleteResidues and essential_atoms_1.intersection(current_atoms) != essential_atoms_1 and essential_atoms_2.intersection(current_atoms) != essential_atoms_2: oldChain = resid_list[-1][0] oldResidueID = resid_list[-1][1:] print("The last residue '%s', %s, in chain %s is missing these atoms: %s." 
% (resid_list[-1], residue_longname, oldChain, essential_atoms_1.difference(current_atoms) or essential_atoms_2.difference(current_atoms))) resid_set.remove(resid_list[-1]) #print("".join(resid_list)) resid_list = resid_list[:-1] if oldchainID: removed_residue[oldchainID] = True #print("".join(resid_list)) #print(sequences[oldChain]) if sequences.get(oldChain): sequences[oldChain] = sequences[oldChain][:-1] if residue_map.get(oldChain): residue_map[oldChain] = residue_map[oldChain][:-1] #print(sequences[oldChain] else: assert(not(resid_set)) current_atoms = set() atoms_read[chainID] = set() atoms_read[chainID].add(line[12:15].strip()) resid_set.add(resid) resid_list.append(resid) chainID = line[21] sequences[chainID] = sequences.get(chainID, []) if residue_longname in non_canonical_amino_acids: sequences[chainID].append(non_canonical_amino_acids[residue_longname]) else: sequences[chainID].append(residue_type_3to1_map[residue_longname]) residue_map[chainID] = residue_map.get(chainID, []) if residue_longname in non_canonical_amino_acids: residue_map[chainID].append((resid, non_canonical_amino_acids[residue_longname])) else: residue_map[chainID].append((resid, residue_type_3to1_map[residue_longname])) oldchainID = chainID else: #atoms_read[chainID] = atoms_read.get(chainID, set()) atoms_read[chainID].add(line[12:15].strip()) current_atoms.add(line[12:15].strip()) if RemoveIncompleteFinalResidues: # These are (probably) necessary for Rosetta to keep the residue. Rosetta does throw away residues where only the N atom is present if that residue is at the end of a chain. for chainID, sequence_list in sequences.iteritems(): if not(removed_residue[chainID]): if essential_atoms_1.intersection(atoms_read[chainID]) != essential_atoms_1 and essential_atoms_2.intersection(atoms_read[chainID]) != essential_atoms_2: print("The last residue %s of chain %s is missing these atoms: %s." 
% (sequence_list[-1], chainID, essential_atoms_1.difference(atoms_read[chainID]) or essential_atoms_2.difference(atoms_read[chainID]))) oldResidueID = sequence_list[-1][1:] residue_map[chainID] = residue_map[chainID][0:-1] sequences[chainID] = sequence_list[0:-1] for chainID, sequence_list in sequences.iteritems(): sequences[chainID] = "".join(sequence_list) assert(sequences[chainID] == "".join([res_details[1] for res_details in residue_map[chainID]])) for chainID in chains: for a_acid in sequences.get(chainID, ""): self.RAW_ATOM_SEQUENCE.append((chainID, a_acid)) residue_objects = {} for chainID in residue_map.keys(): residue_objects[chainID] = [] for chainID, residue_list in residue_map.iteritems(): for res_pair in residue_list: resid = res_pair[0] resaa = res_pair[1] assert(resid[0] == chainID) residue_objects[chainID].append((resid[1:].strip(), resaa)) return sequences, residue_objects
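The parser above leans entirely on fixed-width PDB column slices (`line[21]` for the chain, `line[17:20]` for the residue name, and so on). The sketch below exercises those same slices on a hypothetical ATOM record; the coordinates and serial number are made up.

```python
# Hypothetical ATOM record laid out on the fixed PDB columns the parser
# above slices into; the coordinate values are made up.
atom_line = ("ATOM      2  CA  MET A   1      "
             "38.263  13.277  10.813  1.00 40.23           C")

record    = atom_line[0:4]             # 'ATOM'
atom_name = atom_line[12:15].strip()   # same slice the code uses for atoms_read
res_name  = atom_line[17:20]           # three-letter residue name
chain_id  = atom_line[21]              # one-character chain identifier
resid     = atom_line[21:27]           # chain + residue number, used as a key

print(record, atom_name, res_name, chain_id, repr(resid))
```

Note that `resid` deliberately keeps the chain character as its first byte, which is why the code later asserts `resid[0] == chainID` before stripping it off.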
async def on_connect(self): """ Initialize the connection, authenticate and select a database and send READONLY if it is set during object initialization. """ if self.db: warnings.warn('SELECT DB is not allowed in cluster mode') self.db = '' await super(ClusterConnection, self).on_connect() if self.readonly: await self.send_command('READONLY') if nativestr(await self.read_response()) != 'OK': raise ConnectionError('READONLY command failed')
Initialize the connection, authenticate and select a database and send READONLY if it is set during object initialization.
Below is the instruction that describes the task:
### Input:
Initialize the connection, authenticate and select a database and send
 READONLY if it is set during object initialization.
### Response:
async def on_connect(self):
    """
    Initialize the connection, authenticate and select a database and send
     READONLY if it is set during object initialization.
    """
    if self.db:
        warnings.warn('SELECT DB is not allowed in cluster mode')
        self.db = ''
    await super(ClusterConnection, self).on_connect()

    if self.readonly:
        await self.send_command('READONLY')
        if nativestr(await self.read_response()) != 'OK':
            raise ConnectionError('READONLY command failed')
def get_summary_files(dirnames):
    """Gets the TeX summary files for each test.

    :param dirnames: the list of directories to merge data from.

    :type dirnames: list

    :returns: a list of summary file names.

    :rtype: list

    """
    # A useful regular expression to get the step number of a directory
    step_nb_re = re.compile(r"^([0-9]+)_\S+")

    # The final list of summary files
    final_summary_files = []

    # For each of the directories
    for dn in dirnames:
        # Getting the step directories
        step_dir = [
            n for n in os.listdir(dn)
            if os.path.isdir(os.path.join(dn, n)) and step_nb_re.match(n)
        ]

        # Sorting the step directories numerically
        step_dir.sort(key=lambda x: int(step_nb_re.match(x).group(1)))

        # Getting the name of the summary file for each step directory
        step_summary_files = [
            glob(os.path.join(dn, sn, "*.summary.tex")) for sn in step_dir
        ]

        # Checking we have exactly one summary file per step
        for sn, summary_file in zip(step_dir, step_summary_files):
            if len(summary_file) > 1:
                raise ProgramError("{}: multiple summary files".format(
                    os.path.join(dn, sn),
                ))

            if not summary_file:
                raise ProgramError("{}: missing summary file".format(
                    os.path.join(dn, sn),
                ))

        final_summary_files.extend(i[0] for i in step_summary_files)

    return [os.path.abspath(fn) for fn in final_summary_files]
Gets the TeX summary files for each test. :param dirnames: the list of directories to merge data from. :type dirnames: list :returns: a list of summary file names. :rtype: list
Below is the instruction that describes the task:
### Input:
Gets the TeX summary files for each test.

:param dirnames: the list of directories to merge data from.

:type dirnames: list

:returns: a list of summary file names.

:rtype: list
### Response:
def get_summary_files(dirnames):
    """Gets the TeX summary files for each test.

    :param dirnames: the list of directories to merge data from.

    :type dirnames: list

    :returns: a list of summary file names.

    :rtype: list

    """
    # A useful regular expression to get the step number of a directory
    step_nb_re = re.compile(r"^([0-9]+)_\S+")

    # The final list of summary files
    final_summary_files = []

    # For each of the directories
    for dn in dirnames:
        # Getting the step directories
        step_dir = [
            n for n in os.listdir(dn)
            if os.path.isdir(os.path.join(dn, n)) and step_nb_re.match(n)
        ]

        # Sorting the step directories numerically
        step_dir.sort(key=lambda x: int(step_nb_re.match(x).group(1)))

        # Getting the name of the summary file for each step directory
        step_summary_files = [
            glob(os.path.join(dn, sn, "*.summary.tex")) for sn in step_dir
        ]

        # Checking we have exactly one summary file per step
        for sn, summary_file in zip(step_dir, step_summary_files):
            if len(summary_file) > 1:
                raise ProgramError("{}: multiple summary files".format(
                    os.path.join(dn, sn),
                ))

            if not summary_file:
                raise ProgramError("{}: missing summary file".format(
                    os.path.join(dn, sn),
                ))

        final_summary_files.extend(i[0] for i in step_summary_files)

    return [os.path.abspath(fn) for fn in final_summary_files]
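The numeric-prefix sort in `get_summary_files` above is worth isolating: a plain lexicographic sort would order step `10` before step `2`. A minimal sketch with hypothetical directory names:

```python
import re

step_nb_re = re.compile(r"^([0-9]+)_\S+")

# Hypothetical step directories; a lexicographic sort would put
# '10_impute' before '2_clean'.
names = ["10_impute", "2_clean", "1_check", "notes"]
step_dir = [n for n in names if step_nb_re.match(n)]
step_dir.sort(key=lambda x: int(step_nb_re.match(x).group(1)))
print(step_dir)  # -> ['1_check', '2_clean', '10_impute']
```

Non-matching names such as `notes` are filtered out before sorting, mirroring the `step_nb_re.match(n)` guard in the list comprehension.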
def write_branch_data(self, file): """ Writes branch data to an Excel spreadsheet. """ branch_sheet = self.book.add_sheet("Branches") for i, branch in enumerate(self.case.branches): for j, attr in enumerate(BRANCH_ATTRS): branch_sheet.write(i, j, getattr(branch, attr))
Writes branch data to an Excel spreadsheet.
Below is the instruction that describes the task:
### Input:
Writes branch data to an Excel spreadsheet.
### Response:
def write_branch_data(self, file):
    """ Writes branch data to an Excel spreadsheet.
    """
    branch_sheet = self.book.add_sheet("Branches")

    for i, branch in enumerate(self.case.branches):
        for j, attr in enumerate(BRANCH_ATTRS):
            branch_sheet.write(i, j, getattr(branch, attr))
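The row/column layout that `write_branch_data` produces can be sketched without a workbook: one row per branch, one `getattr` lookup per attribute. `BRANCH_ATTRS` and the branch objects below are hypothetical stand-ins for the case model.

```python
from types import SimpleNamespace

# Hypothetical stand-ins for BRANCH_ATTRS and the case's branch objects.
BRANCH_ATTRS = ('name', 'r', 'x')
branches = [SimpleNamespace(name='L1', r=0.01, x=0.1),
            SimpleNamespace(name='L2', r=0.02, x=0.2)]

# Same layout as the sheet: one row per branch, one column per attribute.
rows = [[getattr(branch, attr) for attr in BRANCH_ATTRS]
        for branch in branches]
print(rows)
```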
def login(self, command='su -', user=None, password=None, prompt_prefix=None, expect=None, timeout=shutit_global.shutit_global_object.default_timeout, escape=False, echo=None, note=None, go_home=True, fail_on_fail=True, is_ssh=True, check_sudo=True, loglevel=logging.DEBUG): """Logs user in on default child. """ shutit_global.shutit_global_object.yield_to_draw() shutit_pexpect_session = self.get_current_shutit_pexpect_session() return shutit_pexpect_session.login(ShutItSendSpec(shutit_pexpect_session, user=user, send=command, password=password, prompt_prefix=prompt_prefix, expect=expect, timeout=timeout, escape=escape, echo=echo, note=note, go_home=go_home, fail_on_fail=fail_on_fail, is_ssh=is_ssh, check_sudo=check_sudo, loglevel=loglevel))
Logs user in on default child.
Below is the instruction that describes the task:
### Input:
Logs user in on default child.
### Response:
def login(self,
          command='su -',
          user=None,
          password=None,
          prompt_prefix=None,
          expect=None,
          timeout=shutit_global.shutit_global_object.default_timeout,
          escape=False,
          echo=None,
          note=None,
          go_home=True,
          fail_on_fail=True,
          is_ssh=True,
          check_sudo=True,
          loglevel=logging.DEBUG):
    """Logs user in on default child.
    """
    shutit_global.shutit_global_object.yield_to_draw()
    shutit_pexpect_session = self.get_current_shutit_pexpect_session()
    return shutit_pexpect_session.login(ShutItSendSpec(shutit_pexpect_session,
                                                       user=user,
                                                       send=command,
                                                       password=password,
                                                       prompt_prefix=prompt_prefix,
                                                       expect=expect,
                                                       timeout=timeout,
                                                       escape=escape,
                                                       echo=echo,
                                                       note=note,
                                                       go_home=go_home,
                                                       fail_on_fail=fail_on_fail,
                                                       is_ssh=is_ssh,
                                                       check_sudo=check_sudo,
                                                       loglevel=loglevel))
def salience(S, freqs, h_range, weights=None, aggregate=None, filter_peaks=True, fill_value=np.nan, kind='linear', axis=0): """Harmonic salience function. Parameters ---------- S : np.ndarray [shape=(d, n)] input time frequency magnitude representation (stft, ifgram, etc). Must be real-valued and non-negative. freqs : np.ndarray, shape=(S.shape[axis]) The frequency values corresponding to S's elements along the chosen axis. h_range : list-like, non-negative Harmonics to include in salience computation. The first harmonic (1) corresponds to `S` itself. Values less than one (e.g., 1/2) correspond to sub-harmonics. weights : list-like The weight to apply to each harmonic in the summation. (default: uniform weights). Must be the same length as `harmonics`. aggregate : function aggregation function (default: `np.average`) If `aggregate=np.average`, then a weighted average is computed per-harmonic according to the specified weights. For all other aggregation functions, all harmonics are treated equally. filter_peaks : bool If true, returns harmonic summation only on frequencies of peak magnitude. Otherwise returns harmonic summation over the full spectrum. Defaults to True. fill_value : float The value to fill non-peaks in the output representation. (default: np.nan) Only used if `filter_peaks == True`. kind : str Interpolation type for harmonic estimation. See `scipy.interpolate.interp1d`. axis : int The axis along which to compute harmonics Returns ------- S_sal : np.ndarray, shape=(len(h_range), [x.shape]) `S_sal` will have the same shape as `S`, and measure the overal harmonic energy at each frequency. See Also -------- interp_harmonics Examples -------- >>> y, sr = librosa.load(librosa.util.example_audio_file(), ... 
duration=15, offset=30) >>> S = np.abs(librosa.stft(y)) >>> freqs = librosa.core.fft_frequencies(sr) >>> harms = [1, 2, 3, 4] >>> weights = [1.0, 0.5, 0.33, 0.25] >>> S_sal = librosa.salience(S, freqs, harms, weights, fill_value=0) >>> print(S_sal.shape) (1025, 646) >>> import matplotlib.pyplot as plt >>> plt.figure() >>> librosa.display.specshow(librosa.amplitude_to_db(S_sal, ... ref=np.max), ... sr=sr, y_axis='log', x_axis='time') >>> plt.colorbar() >>> plt.title('Salience spectrogram') >>> plt.tight_layout() """ if aggregate is None: aggregate = np.average if weights is None: weights = np.ones((len(h_range), )) else: weights = np.array(weights, dtype=float) S_harm = interp_harmonics(S, freqs, h_range, kind=kind, axis=axis) if aggregate is np.average: S_sal = aggregate(S_harm, axis=0, weights=weights) else: S_sal = aggregate(S_harm, axis=0) if filter_peaks: S_peaks = scipy.signal.argrelmax(S, axis=0) S_out = np.empty(S.shape) S_out.fill(fill_value) S_out[S_peaks[0], S_peaks[1]] = S_sal[S_peaks[0], S_peaks[1]] S_sal = S_out return S_sal
Harmonic salience function. Parameters ---------- S : np.ndarray [shape=(d, n)] input time frequency magnitude representation (stft, ifgram, etc). Must be real-valued and non-negative. freqs : np.ndarray, shape=(S.shape[axis]) The frequency values corresponding to S's elements along the chosen axis. h_range : list-like, non-negative Harmonics to include in salience computation. The first harmonic (1) corresponds to `S` itself. Values less than one (e.g., 1/2) correspond to sub-harmonics. weights : list-like The weight to apply to each harmonic in the summation. (default: uniform weights). Must be the same length as `harmonics`. aggregate : function aggregation function (default: `np.average`) If `aggregate=np.average`, then a weighted average is computed per-harmonic according to the specified weights. For all other aggregation functions, all harmonics are treated equally. filter_peaks : bool If true, returns harmonic summation only on frequencies of peak magnitude. Otherwise returns harmonic summation over the full spectrum. Defaults to True. fill_value : float The value to fill non-peaks in the output representation. (default: np.nan) Only used if `filter_peaks == True`. kind : str Interpolation type for harmonic estimation. See `scipy.interpolate.interp1d`. axis : int The axis along which to compute harmonics Returns ------- S_sal : np.ndarray, shape=(len(h_range), [x.shape]) `S_sal` will have the same shape as `S`, and measure the overal harmonic energy at each frequency. See Also -------- interp_harmonics Examples -------- >>> y, sr = librosa.load(librosa.util.example_audio_file(), ... 
duration=15, offset=30) >>> S = np.abs(librosa.stft(y)) >>> freqs = librosa.core.fft_frequencies(sr) >>> harms = [1, 2, 3, 4] >>> weights = [1.0, 0.5, 0.33, 0.25] >>> S_sal = librosa.salience(S, freqs, harms, weights, fill_value=0) >>> print(S_sal.shape) (1025, 646) >>> import matplotlib.pyplot as plt >>> plt.figure() >>> librosa.display.specshow(librosa.amplitude_to_db(S_sal, ... ref=np.max), ... sr=sr, y_axis='log', x_axis='time') >>> plt.colorbar() >>> plt.title('Salience spectrogram') >>> plt.tight_layout()
Below is the the instruction that describes the task: ### Input: Harmonic salience function. Parameters ---------- S : np.ndarray [shape=(d, n)] input time frequency magnitude representation (stft, ifgram, etc). Must be real-valued and non-negative. freqs : np.ndarray, shape=(S.shape[axis]) The frequency values corresponding to S's elements along the chosen axis. h_range : list-like, non-negative Harmonics to include in salience computation. The first harmonic (1) corresponds to `S` itself. Values less than one (e.g., 1/2) correspond to sub-harmonics. weights : list-like The weight to apply to each harmonic in the summation. (default: uniform weights). Must be the same length as `harmonics`. aggregate : function aggregation function (default: `np.average`) If `aggregate=np.average`, then a weighted average is computed per-harmonic according to the specified weights. For all other aggregation functions, all harmonics are treated equally. filter_peaks : bool If true, returns harmonic summation only on frequencies of peak magnitude. Otherwise returns harmonic summation over the full spectrum. Defaults to True. fill_value : float The value to fill non-peaks in the output representation. (default: np.nan) Only used if `filter_peaks == True`. kind : str Interpolation type for harmonic estimation. See `scipy.interpolate.interp1d`. axis : int The axis along which to compute harmonics Returns ------- S_sal : np.ndarray, shape=(len(h_range), [x.shape]) `S_sal` will have the same shape as `S`, and measure the overal harmonic energy at each frequency. See Also -------- interp_harmonics Examples -------- >>> y, sr = librosa.load(librosa.util.example_audio_file(), ... 
duration=15, offset=30) >>> S = np.abs(librosa.stft(y)) >>> freqs = librosa.core.fft_frequencies(sr) >>> harms = [1, 2, 3, 4] >>> weights = [1.0, 0.5, 0.33, 0.25] >>> S_sal = librosa.salience(S, freqs, harms, weights, fill_value=0) >>> print(S_sal.shape) (1025, 646) >>> import matplotlib.pyplot as plt >>> plt.figure() >>> librosa.display.specshow(librosa.amplitude_to_db(S_sal, ... ref=np.max), ... sr=sr, y_axis='log', x_axis='time') >>> plt.colorbar() >>> plt.title('Salience spectrogram') >>> plt.tight_layout() ### Response: def salience(S, freqs, h_range, weights=None, aggregate=None, filter_peaks=True, fill_value=np.nan, kind='linear', axis=0): """Harmonic salience function. Parameters ---------- S : np.ndarray [shape=(d, n)] input time frequency magnitude representation (stft, ifgram, etc). Must be real-valued and non-negative. freqs : np.ndarray, shape=(S.shape[axis]) The frequency values corresponding to S's elements along the chosen axis. h_range : list-like, non-negative Harmonics to include in salience computation. The first harmonic (1) corresponds to `S` itself. Values less than one (e.g., 1/2) correspond to sub-harmonics. weights : list-like The weight to apply to each harmonic in the summation. (default: uniform weights). Must be the same length as `harmonics`. aggregate : function aggregation function (default: `np.average`) If `aggregate=np.average`, then a weighted average is computed per-harmonic according to the specified weights. For all other aggregation functions, all harmonics are treated equally. filter_peaks : bool If true, returns harmonic summation only on frequencies of peak magnitude. Otherwise returns harmonic summation over the full spectrum. Defaults to True. fill_value : float The value to fill non-peaks in the output representation. (default: np.nan) Only used if `filter_peaks == True`. kind : str Interpolation type for harmonic estimation. See `scipy.interpolate.interp1d`. 
axis : int The axis along which to compute harmonics Returns ------- S_sal : np.ndarray, shape=(len(h_range), [x.shape]) `S_sal` will have the same shape as `S`, and measure the overal harmonic energy at each frequency. See Also -------- interp_harmonics Examples -------- >>> y, sr = librosa.load(librosa.util.example_audio_file(), ... duration=15, offset=30) >>> S = np.abs(librosa.stft(y)) >>> freqs = librosa.core.fft_frequencies(sr) >>> harms = [1, 2, 3, 4] >>> weights = [1.0, 0.5, 0.33, 0.25] >>> S_sal = librosa.salience(S, freqs, harms, weights, fill_value=0) >>> print(S_sal.shape) (1025, 646) >>> import matplotlib.pyplot as plt >>> plt.figure() >>> librosa.display.specshow(librosa.amplitude_to_db(S_sal, ... ref=np.max), ... sr=sr, y_axis='log', x_axis='time') >>> plt.colorbar() >>> plt.title('Salience spectrogram') >>> plt.tight_layout() """ if aggregate is None: aggregate = np.average if weights is None: weights = np.ones((len(h_range), )) else: weights = np.array(weights, dtype=float) S_harm = interp_harmonics(S, freqs, h_range, kind=kind, axis=axis) if aggregate is np.average: S_sal = aggregate(S_harm, axis=0, weights=weights) else: S_sal = aggregate(S_harm, axis=0) if filter_peaks: S_peaks = scipy.signal.argrelmax(S, axis=0) S_out = np.empty(S.shape) S_out.fill(fill_value) S_out[S_peaks[0], S_peaks[1]] = S_sal[S_peaks[0], S_peaks[1]] S_sal = S_out return S_sal
def do_forceescape(value): """Enforce HTML escaping. This will probably double escape variables.""" if hasattr(value, '__html__'): value = value.__html__() return escape(text_type(value))
Enforce HTML escaping. This will probably double escape variables.
Below is the instruction that describes the task:
### Input:
Enforce HTML escaping. This will probably double escape variables.
### Response:
def do_forceescape(value):
    """Enforce HTML escaping. This will probably double escape variables."""
    if hasattr(value, '__html__'):
        value = value.__html__()
    return escape(text_type(value))
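The "double escape" behavior is easy to demonstrate. The sketch below uses the stdlib `html.escape` as a stand-in for the Markup-aware `escape` that Jinja2 actually uses, which is an assumption made purely so the example is self-contained:

```python
from html import escape  # stand-in for the markupsafe escape Jinja2 uses

def forceescape(value):
    # Escaping is applied unconditionally, so input that was already
    # escaped (or marked safe) gets escaped a second time.
    return escape(str(value))

once = forceescape('<b>hi</b>')
twice = forceescape(once)
print(once)   # &lt;b&gt;hi&lt;/b&gt;
print(twice)  # &amp;lt;b&amp;gt;hi&amp;lt;/b&amp;gt;
```

The second pass turns each `&` of the first pass's entities into `&amp;`, which is exactly the doubling the docstring warns about.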
def list_nodes_full(call=None): ''' Return a list of the BareMetal servers that are on the provider. ''' if call == 'action': raise SaltCloudSystemExit( 'list_nodes_full must be called with -f or --function' ) items = query(method='servers') # For each server, iterate on its parameters. ret = {} for node in items['servers']: ret[node['name']] = {} for item in node: value = node[item] ret[node['name']][item] = value return ret
Return a list of the BareMetal servers that are on the provider.
Below is the instruction that describes the task:
### Input:
Return a list of the BareMetal servers that are on the provider.
### Response:
def list_nodes_full(call=None):
    '''
    Return a list of the BareMetal servers that are on the provider.
    '''
    if call == 'action':
        raise SaltCloudSystemExit(
            'list_nodes_full must be called with -f or --function'
        )

    items = query(method='servers')

    # For each server, iterate on its parameters.
    ret = {}
    for node in items['servers']:
        ret[node['name']] = {}
        for item in node:
            value = node[item]
            ret[node['name']][item] = value
    return ret
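The nested per-parameter loop in `list_nodes_full` builds a shallow copy of each node keyed by its name; it collapses to a single dict comprehension. The payload below is hypothetical, shaped like a `query(method='servers')` result:

```python
# Hypothetical API payload shaped like the query(method='servers') result.
items = {'servers': [
    {'name': 'bm-1', 'state': 'running', 'ip': '10.0.0.5'},
    {'name': 'bm-2', 'state': 'stopped', 'ip': '10.0.0.6'},
]}

# The nested per-parameter loop collapses to a dict comprehension.
ret = {node['name']: dict(node) for node in items['servers']}
print(sorted(ret))  # -> ['bm-1', 'bm-2']
```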
def calcuate_bboxes(im_shape, patch_size):
    """Calculate bounding boxes based on the image shape and the size of the
    bounding box given by `patch_size`."""
    h, w = im_shape
    ph, pw = patch_size
    steps_h = chain(range(0, h - ph, ph), [h - ph])
    steps_w = chain(range(0, w - pw, pw), [w - pw])
    return product(steps_h, steps_w)
Calculate bounding boxes based on the image shape and the size of the bounding box given by `patch_size`.
Below is the instruction that describes the task:
### Input:
Calculate bounding boxes based on the image shape and the size of the bounding box given by `patch_size`.
### Response:
def calcuate_bboxes(im_shape, patch_size):
    """Calculate bounding boxes based on the image shape and the size of the
    bounding box given by `patch_size`."""
    h, w = im_shape
    ph, pw = patch_size
    steps_h = chain(range(0, h - ph, ph), [h - ph])
    steps_w = chain(range(0, w - pw, pw), [w - pw])
    return product(steps_h, steps_w)
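The appended `[h - ph]` / `[w - pw]` step deserves a concrete run: it pins the last patch to the image edge, so when the image size is not a multiple of the patch size the final patch overlaps its neighbor instead of running past the border. Self-contained usage:

```python
from itertools import chain, product

def calcuate_bboxes(im_shape, patch_size):
    h, w = im_shape
    ph, pw = patch_size
    steps_h = chain(range(0, h - ph, ph), [h - ph])
    steps_w = chain(range(0, w - pw, pw), [w - pw])
    return product(steps_h, steps_w)

# A 5x5 image tiled with 2x2 patches: the final row/column starts at
# h - ph (w - pw), so edge patches overlap instead of running off the image.
print(list(calcuate_bboxes((5, 5), (2, 2))))
```

Each tuple is the top-left corner of one patch; the starts `0, 2, 3` show the overlap between the patch at 2 and the edge-pinned patch at 3.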
def drawHotspots(self, painter):
    """
    Draws all the hotspots for the renderer.

    :param      painter | <QPainter>
    """
    # draw hotspots
    for hotspot in (self._hotspots + self._dropzones):
        hstyle = hotspot.style()
        if hstyle == XNode.HotspotStyle.Invisible:
            continue

        hotspot.render(painter, self)
Draws all the hotspots for the renderer. :param painter | <QPainter>
def unpack(self, token): """ Unpack a received signed or signed and encrypted Json Web Token :param token: The Json Web Token :return: If decryption and signature verification work the payload will be returned as a Message instance if possible. """ if not token: raise KeyError _jwe_header = _jws_header = None # Check if it's an encrypted JWT darg = {} if self.allowed_enc_encs: darg['enc'] = self.allowed_enc_encs if self.allowed_enc_algs: darg['alg'] = self.allowed_enc_algs try: _decryptor = jwe_factory(token, **darg) except (KeyError, HeaderError): _decryptor = None if _decryptor: # Yes, try to decode _info = self._decrypt(_decryptor, token) _jwe_header = _decryptor.jwt.headers # Try to find out if the information encrypted was a signed JWT try: _content_type = _decryptor.jwt.headers['cty'] except KeyError: _content_type = '' else: _content_type = 'jwt' _info = token # If I have reason to believe the information I have is a signed JWT if _content_type.lower() == 'jwt': # Check that is a signed JWT if self.allowed_sign_algs: _verifier = jws_factory(_info, alg=self.allowed_sign_algs) else: _verifier = jws_factory(_info) if _verifier: _info = self._verify(_verifier, _info) else: raise Exception() _jws_header = _verifier.jwt.headers else: # So, not a signed JWT try: # A JSON document ? _info = json.loads(_info) except JSONDecodeError: # Oh, no ! Not JSON return _info except TypeError: try: _info = as_unicode(_info) _info = json.loads(_info) except JSONDecodeError: # Oh, no ! Not JSON return _info # If I know what message class the info should be mapped into if self.msg_cls: _msg_cls = self.msg_cls else: try: # try to find a issuer specific message class _msg_cls = self.iss2msg_cls[_info['iss']] except KeyError: _msg_cls = None if _msg_cls: vp_args = {'skew': self.skew} if self.iss: vp_args['aud'] = self.iss _info = self.verify_profile(_msg_cls, _info, **vp_args) _info.jwe_header = _jwe_header _info.jws_header = _jws_header return _info else: return _info
Unpack a received signed or signed and encrypted Json Web Token :param token: The Json Web Token :return: If decryption and signature verification work the payload will be returned as a Message instance if possible.
def translate_alias(self, alias, namespace=None, target_namespaces=None, translate_ncbi_namespace=None): """given an alias and optional namespace, return a list of all other aliases for same sequence """ if translate_ncbi_namespace is None: translate_ncbi_namespace = self.translate_ncbi_namespace seq_id = self._get_unique_seqid(alias=alias, namespace=namespace) aliases = self.aliases.fetch_aliases(seq_id=seq_id, translate_ncbi_namespace=translate_ncbi_namespace) if target_namespaces: aliases = [a for a in aliases if a["namespace"] in target_namespaces] return aliases
given an alias and optional namespace, return a list of all other aliases for same sequence
def _make_unique_title(self, title): """Make the title unique. Adds a counter to the title to prevent duplicates. Prior to IDA 6.8, two graphs with the same title could crash IDA. This has been fixed (https://www.hex-rays.com/products/ida/6.8/index.shtml). The code will not change for support of older versions and as it is more usable this way. """ unique_title = title for counter in itertools.count(): unique_title = "{}-{}".format(title, counter) if not idaapi.find_tform(unique_title): break return unique_title
Make the title unique. Adds a counter to the title to prevent duplicates. Prior to IDA 6.8, two graphs with the same title could crash IDA. This has been fixed (https://www.hex-rays.com/products/ida/6.8/index.shtml). The code will not change for support of older versions and as it is more usable this way.
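The counter-suffix idea in `_make_unique_title` does not depend on IDA itself. A minimal sketch that swaps `idaapi.find_tform` for a plain membership test (the `existing_titles` set is an assumption of this sketch, not part of the IDA API):

```python
import itertools

def make_unique_title(title, existing_titles):
    # Append "-0", "-1", ... until a candidate is free. The suffix is
    # always added, even when the bare title is unused -- this mirrors
    # the original loop, which never returns the title unchanged.
    for counter in itertools.count():
        candidate = "{}-{}".format(title, counter)
        if candidate not in existing_titles:
            return candidate

first = make_unique_title("Graph", set())                   # "Graph-0"
third = make_unique_title("Graph", {"Graph-0", "Graph-1"})  # "Graph-2"
```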
def _env_runner(base_env, extra_batch_callback, policies, policy_mapping_fn, unroll_length, horizon, preprocessors, obs_filters, clip_rewards, clip_actions, pack, callbacks, tf_sess, perf_stats, soft_horizon): """This implements the common experience collection logic. Args: base_env (BaseEnv): env implementing BaseEnv. extra_batch_callback (fn): function to send extra batch data to. policies (dict): Map of policy ids to PolicyGraph instances. policy_mapping_fn (func): Function that maps agent ids to policy ids. This is called when an agent first enters the environment. The agent is then "bound" to the returned policy for the episode. unroll_length (int): Number of episode steps before `SampleBatch` is yielded. Set to infinity to yield complete episodes. horizon (int): Horizon of the episode. preprocessors (dict): Map of policy id to preprocessor for the observations prior to filtering. obs_filters (dict): Map of policy id to filter used to process observations for the policy. clip_rewards (bool): Whether to clip rewards before postprocessing. pack (bool): Whether to pack multiple episodes into each batch. This guarantees batches will be exactly `unroll_length` in size. clip_actions (bool): Whether to clip actions to the space range. callbacks (dict): User callbacks to run on episode events. tf_sess (Session|None): Optional tensorflow session to use for batching TF policy evaluations. perf_stats (PerfStats): Record perf stats into this object. soft_horizon (bool): Calculate rewards but don't reset the environment when the horizon is hit. Yields: rollout (SampleBatch): Object containing state, action, reward, terminal condition, and other fields as dictated by `policy`. """ try: if not horizon: horizon = (base_env.get_unwrapped()[0].spec.max_episode_steps) except Exception: logger.debug("no episode horizon specified, assuming inf") if not horizon: horizon = float("inf") # Pool of batch builders, which can be shared across episodes to pack # trajectory data. 
batch_builder_pool = [] def get_batch_builder(): if batch_builder_pool: return batch_builder_pool.pop() else: return MultiAgentSampleBatchBuilder( policies, clip_rewards, callbacks.get("on_postprocess_traj")) def new_episode(): episode = MultiAgentEpisode(policies, policy_mapping_fn, get_batch_builder, extra_batch_callback) if callbacks.get("on_episode_start"): callbacks["on_episode_start"]({ "env": base_env, "policy": policies, "episode": episode, }) return episode active_episodes = defaultdict(new_episode) while True: perf_stats.iters += 1 t0 = time.time() # Get observations from all ready agents unfiltered_obs, rewards, dones, infos, off_policy_actions = \ base_env.poll() perf_stats.env_wait_time += time.time() - t0 if log_once("env_returns"): logger.info("Raw obs from env: {}".format( summarize(unfiltered_obs))) logger.info("Info return from env: {}".format(summarize(infos))) # Process observations and prepare for policy evaluation t1 = time.time() active_envs, to_eval, outputs = _process_observations( base_env, policies, batch_builder_pool, active_episodes, unfiltered_obs, rewards, dones, infos, off_policy_actions, horizon, preprocessors, obs_filters, unroll_length, pack, callbacks, soft_horizon) perf_stats.processing_time += time.time() - t1 for o in outputs: yield o # Do batched policy eval t2 = time.time() eval_results = _do_policy_eval(tf_sess, to_eval, policies, active_episodes) perf_stats.inference_time += time.time() - t2 # Process results and update episode state t3 = time.time() actions_to_send = _process_policy_eval_results( to_eval, eval_results, active_episodes, active_envs, off_policy_actions, policies, clip_actions) perf_stats.processing_time += time.time() - t3 # Return computed actions to ready envs. We also send to envs that have # taken off-policy actions; those envs are free to ignore the action. t4 = time.time() base_env.send_actions(actions_to_send) perf_stats.env_wait_time += time.time() - t4
This implements the common experience collection logic. Args: base_env (BaseEnv): env implementing BaseEnv. extra_batch_callback (fn): function to send extra batch data to. policies (dict): Map of policy ids to PolicyGraph instances. policy_mapping_fn (func): Function that maps agent ids to policy ids. This is called when an agent first enters the environment. The agent is then "bound" to the returned policy for the episode. unroll_length (int): Number of episode steps before `SampleBatch` is yielded. Set to infinity to yield complete episodes. horizon (int): Horizon of the episode. preprocessors (dict): Map of policy id to preprocessor for the observations prior to filtering. obs_filters (dict): Map of policy id to filter used to process observations for the policy. clip_rewards (bool): Whether to clip rewards before postprocessing. pack (bool): Whether to pack multiple episodes into each batch. This guarantees batches will be exactly `unroll_length` in size. clip_actions (bool): Whether to clip actions to the space range. callbacks (dict): User callbacks to run on episode events. tf_sess (Session|None): Optional tensorflow session to use for batching TF policy evaluations. perf_stats (PerfStats): Record perf stats into this object. soft_horizon (bool): Calculate rewards but don't reset the environment when the horizon is hit. Yields: rollout (SampleBatch): Object containing state, action, reward, terminal condition, and other fields as dictated by `policy`.
def ScanForWindowsVolume(self, source_path): """Scans for a Windows volume. Args: source_path (str): source path. Returns: bool: True if a Windows volume was found. Raises: ScannerError: if the source path does not exists, or if the source path is not a file or directory, or if the format of or within the source file is not supported. """ windows_path_specs = self.GetBasePathSpecs(source_path) if (not windows_path_specs or self._source_type == definitions.SOURCE_TYPE_FILE): return False file_system_path_spec = windows_path_specs[0] self._file_system = resolver.Resolver.OpenFileSystem(file_system_path_spec) if file_system_path_spec.type_indicator == definitions.TYPE_INDICATOR_OS: mount_point = file_system_path_spec else: mount_point = file_system_path_spec.parent self._path_resolver = windows_path_resolver.WindowsPathResolver( self._file_system, mount_point) # The source is a directory or single volume storage media image. if not self._windows_directory: self._ScanFileSystemForWindowsDirectory(self._path_resolver) if not self._windows_directory: return False self._path_resolver.SetEnvironmentVariable( 'SystemRoot', self._windows_directory) self._path_resolver.SetEnvironmentVariable( 'WinDir', self._windows_directory) return True
Scans for a Windows volume. Args: source_path (str): source path. Returns: bool: True if a Windows volume was found. Raises: ScannerError: if the source path does not exist, or if the source path is not a file or directory, or if the format of or within the source file is not supported.
def _fix_unsafe(shell_input): """Find characters used to escape from a string into a shell, and wrap them in quotes if they exist. Regex pilfered from Python3 :mod:`shlex` module. :param str shell_input: The input intended for the GnuPG process. """ _unsafe = re.compile(r'[^\w@%+=:,./-]', 256) try: if len(_unsafe.findall(shell_input)) == 0: return shell_input.strip() else: clean = "'" + shell_input.replace("'", "'\"'\"'") + "'" return clean except TypeError: return None
Find characters used to escape from a string into a shell, and wrap them in quotes if they exist. Regex pilfered from Python3 :mod:`shlex` module. :param str shell_input: The input intended for the GnuPG process.
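The quoting behaviour of `_fix_unsafe` is easy to demonstrate. This standalone sketch keeps the same character class and the standard `'"'"'` single-quote escape, but drops the original's numeric flag argument to `re.compile` as a simplification:

```python
import re

# Anything outside this set is treated as shell-unsafe.
_unsafe = re.compile(r'[^\w@%+=:,./-]')

def fix_unsafe(shell_input):
    # Safe strings pass through (stripped); unsafe ones are wrapped in
    # single quotes, with embedded quotes escaped as '"'"'.
    if not _unsafe.search(shell_input):
        return shell_input.strip()
    return "'" + shell_input.replace("'", "'\"'\"'") + "'"

safe = fix_unsafe("file.txt")       # no unsafe characters: unchanged
quoted = fix_unsafe("rm -rf /tmp")  # spaces are unsafe: wrapped
```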
def verify(self, verify_locations: str) -> None: """Verify that the OCSP response is trusted. Args: verify_locations: The file path to a trust store containing pem-formatted certificates, to be used for validating the OCSP response. Raises OcspResponseNotTrustedError if the validation failed ie. the OCSP response is not trusted. """ # Ensure the file exists with open(verify_locations): pass try: self._ocsp_response.basic_verify(verify_locations) except _nassl.OpenSSLError as e: if 'certificate verify error' in str(e): raise OcspResponseNotTrustedError(verify_locations) raise
Verify that the OCSP response is trusted. Args: verify_locations: The file path to a trust store containing pem-formatted certificates, to be used for validating the OCSP response. Raises OcspResponseNotTrustedError if the validation failed ie. the OCSP response is not trusted.
def get_url(cls, url, uid, **kwargs): """ Construct the URL for talking to an individual resource. http://myapi.com/api/resource/1 Args: url: The url for this resource uid: The unique identifier for an individual resource kwargs: Additional keyword arguments returns: final_url: The URL for this individual resource """ if uid: url = '{}/{}'.format(url, uid) else: url = url return cls._parse_url_and_validate(url)
Construct the URL for talking to an individual resource. http://myapi.com/api/resource/1 Args: url: The url for this resource uid: The unique identifier for an individual resource kwargs: Additional keyword arguments returns: final_url: The URL for this individual resource
async def sdiff(self, keys, *args): "Return the difference of sets specified by ``keys``" args = list_or_args(keys, args) return await self.execute_command('SDIFF', *args)
Return the difference of sets specified by ``keys``
def indent(text, prefix): """ Adds `prefix` to the beginning of non-empty lines in `text`. """ # Based on Python 3's textwrap.indent def prefixed_lines(): for line in text.splitlines(True): yield (prefix + line if line.strip() else line) return u"".join(prefixed_lines())
Adds `prefix` to the beginning of non-empty lines in `text`.
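As the comment notes, this backport mirrors Python 3's `textwrap.indent` default behaviour: only lines with non-whitespace content get the prefix, so blank lines stay blank. A runnable copy:

```python
def indent(text, prefix):
    # Prefix only lines that have non-whitespace content; blank lines
    # are passed through untouched (textwrap.indent's default).
    def prefixed_lines():
        for line in text.splitlines(True):
            yield (prefix + line if line.strip() else line)
    return u"".join(prefixed_lines())

result = indent("a\n\nb\n", "  ")  # blank middle line stays unprefixed
```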
def markers_to_events(self, keep_name=False): """Copy all markers in dataset to event type. """ markers = self.parent.info.markers if markers is None: self.parent.statusBar.showMessage('No markers in dataset.') return if not keep_name: name, ok = self.new_eventtype() if not ok: return else: name = None self.annot.add_events(markers, name=name, chan='') if keep_name: self.display_eventtype() n_eventtype = self.idx_eventtype.count() self.idx_eventtype.setCurrentIndex(n_eventtype - 1) self.update_annotations()
Copy all markers in dataset to event type.
def patch_namespaced_stateful_set_scale(self, name, namespace, body, **kwargs): # noqa: E501 """patch_namespaced_stateful_set_scale # noqa: E501 partially update scale of the specified StatefulSet # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.patch_namespaced_stateful_set_scale(name, namespace, body, async_req=True) >>> result = thread.get() :param async_req bool :param str name: name of the Scale (required) :param str namespace: object name and auth scope, such as for teams and projects (required) :param UNKNOWN_BASE_TYPE body: (required) :param str pretty: If 'true', then the output is pretty printed. :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed :return: V1Scale If the method is called asynchronously, returns the request thread. """ kwargs['_return_http_data_only'] = True if kwargs.get('async_req'): return self.patch_namespaced_stateful_set_scale_with_http_info(name, namespace, body, **kwargs) # noqa: E501 else: (data) = self.patch_namespaced_stateful_set_scale_with_http_info(name, namespace, body, **kwargs) # noqa: E501 return data
patch_namespaced_stateful_set_scale # noqa: E501 partially update scale of the specified StatefulSet # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.patch_namespaced_stateful_set_scale(name, namespace, body, async_req=True) >>> result = thread.get() :param async_req bool :param str name: name of the Scale (required) :param str namespace: object name and auth scope, such as for teams and projects (required) :param UNKNOWN_BASE_TYPE body: (required) :param str pretty: If 'true', then the output is pretty printed. :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed :return: V1Scale If the method is called asynchronously, returns the request thread.
def switch(stage):
    """
    Switch to given stage (dev/qa/production) + pull
    """
    stage = stage.lower()

    local("git pull")

    if stage in ['dev', 'devel', 'develop']:
        branch_name = 'develop'
    elif stage in ['qa', 'release']:
        branches = local('git branch -r', capture=True)
        possible_branches = []
        for b in branches.split("\n"):
            b_parts = b.split('/')
            if len(b_parts) < 3:
                # Skip lines that don't look like "origin/release/<version>".
                continue
            if b_parts[1] == 'release':
                possible_branches.append(b_parts[2])
        if len(possible_branches) == 0:
            raise Exception('No release branches found. Please create a new release first.')
        possible_branches = sorted(possible_branches, reverse=True)
        branch_name = 'release/%s' % possible_branches[0]
    elif stage in ['production', 'master']:
        branch_name = 'master'
    else:
        raise NotImplementedError('Unknown stage: %s' % stage)

    local("git checkout %s" % branch_name)
    local("git pull")
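The `qa` branch selection parses `git branch -r` output of the form `origin/release/<version>`. The same selection logic can be sketched without running git (the listing below is hypothetical):

```python
def latest_release_branch(branch_listing):
    """Pick the newest release branch from `git branch -r`-style output."""
    versions = []
    for line in branch_listing.splitlines():
        parts = line.strip().split('/')
        if len(parts) == 3 and parts[1] == 'release':  # e.g. origin/release/1.2.0
            versions.append(parts[2])
    if not versions:
        raise ValueError('No release branches found.')
    # Lexicographic sort, same caveat as the original code
    # (e.g. "1.10.0" sorts before "1.9.0").
    return 'release/%s' % sorted(versions, reverse=True)[0]

listing = "  origin/develop\n  origin/release/1.1.0\n  origin/release/1.2.0"
branch = latest_release_branch(listing)
```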
Switch to given stage (dev/qa/production) + pull
def list_scheduled_queries(self): """ List all scheduled_queries :return: A list of all scheduled query dicts :rtype: list of dict :raises: This will raise a :class:`ServerException<logentries_api.exceptions.ServerException>` if there is an error from Logentries """ url = 'https://logentries.com/rest/{account_id}/api/scheduled_queries/'.format( account_id=self.account_id) return self._api_get(url=url).get('scheduled_searches')
List all scheduled_queries :return: A list of all scheduled query dicts :rtype: list of dict :raises: This will raise a :class:`ServerException<logentries_api.exceptions.ServerException>` if there is an error from Logentries
def task_postponed(self): """ Track (if required) postponing event and do the same job as :meth:`.WScheduleRecord.task_postponed` method do :return: None """ tracker = self.task().tracker_storage() if tracker is not None and self.track_wait() is True: details = self.task().event_details(WTrackerEvents.wait) tracker.register_wait(self.task(), event_details=details) WScheduleRecord.task_postponed(self)
Track (if required) postponing event and do the same job as :meth:`.WScheduleRecord.task_postponed` method do :return: None
def insert(self, idx, value): """ Inserts a value in the ``ListVariable`` at an appropriate index. :param idx: The index before which to insert the new value. :param value: The value to insert. """ self._value.insert(idx, value) self._rebuild()
Inserts a value in the ``ListVariable`` at an appropriate index. :param idx: The index before which to insert the new value. :param value: The value to insert.
def p_OperationRest(p): """OperationRest : ReturnType OptionalIdentifier "(" ArgumentList ")" ";" """ p[0] = model.Operation(return_type=p[1], name=p[2], arguments=p[4])
OperationRest : ReturnType OptionalIdentifier "(" ArgumentList ")" ";"
def nslookup(cls):
    """
    Implementation of UNIX nslookup.
    """

    try:
        # We try to get the address information of the given domain or IP.

        if "current_test_data" in PyFunceble.INTERN:  # pragma: no cover
            # The end-user wants more information with their test.

            if not Check().is_ip_valid():
                # The element we are testing is not an IP.

                # We request the address information.
                request = PyFunceble.socket.getaddrinfo(
                    PyFunceble.INTERN["to_test"],
                    80,
                    0,
                    0,
                    PyFunceble.socket.IPPROTO_TCP,
                )

                for sequence in request:
                    # We loop through the sequence returned by the request.

                    # We append the NS information into the nslookup index.
                    PyFunceble.INTERN["current_test_data"]["nslookup"].append(
                        sequence[-1][0]
                    )
            else:
                # The element we are testing is an IP.

                request = PyFunceble.socket.gethostbyaddr(
                    PyFunceble.INTERN["to_test"]
                )

                # We append the NS information into the nslookup index.
                PyFunceble.INTERN["current_test_data"]["nslookup"][
                    "hostname"
                ] = request[0]
                PyFunceble.INTERN["current_test_data"]["nslookup"][
                    "aliases"
                ] = request[1]
                PyFunceble.INTERN["current_test_data"]["nslookup"]["ips"] = request[
                    2
                ]
        else:
            if not Check().is_ip_valid():
                # The element we are testing is not an IP.

                PyFunceble.socket.getaddrinfo(
                    PyFunceble.INTERN["to_test"],
                    80,
                    0,
                    0,
                    PyFunceble.socket.IPPROTO_TCP,
                )
            else:
                # The element we are testing is an IP.

                PyFunceble.socket.gethostbyaddr(PyFunceble.INTERN["to_test"])

        # It was done successfully, we return True.
        # Note: we don't need to read the addresses, so we consider the lookup
        # successful as long as there is no error.
        return True
    except (OSError, PyFunceble.socket.herror, PyFunceble.socket.gaierror):
        # One of the listed exceptions was raised.

        # It was done unsuccessfully, we return False.
        return False
Implementation of UNIX nslookup.
def connect_async(self, connection_id, connection_string, callback):
    """Connect to a device by its connection_string

    This function asynchronously connects to a device by its BLE address
    passed in the connection_string parameter and calls callback when finished.
    Callback is called on either success or failure with the signature:

    callback(connection_id, adapter_id, success: bool, failure_reason: string or None)

    Args:
        connection_string (string): A unique connection string that identifies which
            device to connect to, if many are possible.
        connection_id (int): A unique integer set by the caller for referring to this connection
            once created
        callback (callable): A callback function called when the connection has succeeded or
            failed
    """

    self._try_connect(connection_string)

    def _on_finished(_name, control_info, exception):
        if exception is not None:
            callback(connection_id, self.id, False, str(exception))
            return

        if control_info is not None:
            self._control_info = control_info

        callback(connection_id, self.id, True, None)

    self._connection_id = connection_id
    self._control_thread.command(JLinkControlThread.VERIFY_CONTROL, _on_finished, self._device_info, self._control_info)
Connect to a device by its connection_string This function asynchronously connects to a device by its BLE address passed in the connection_string parameter and calls callback when finished. Callback is called on either success or failure with the signature: callback(connection_id, adapter_id, success: bool, failure_reason: string or None) Args: connection_string (string): A unique connection string that identifies which device to connect to, if many are possible. connection_id (int): A unique integer set by the caller for referring to this connection once created callback (callable): A callback function called when the connection has succeeded or failed
def get_validated_object(self, field_type, value): """ Returns the value validated by the field_type """ if field_type.check_value(value) or field_type.can_use_value(value): data = field_type.use_value(value) self._prepare_child(data) return data else: return None
Returns the value validated by the field_type
def get_data_times_for_job_legacy(self, num_job): """ Get the data that this job will need to read in. """ # Should all be integers, so no rounding needed shift_dur = self.curr_seg[0] + int(self.job_time_shift * num_job) job_data_seg = self.data_chunk.shift(shift_dur) # If this is the last job, push the end back if num_job == (self.num_jobs - 1): dataPushBack = job_data_seg[1] - self.curr_seg[1] assert dataPushBack >= 0 job_data_seg = segments.segment(job_data_seg[0] - dataPushBack, self.curr_seg[1]) assert (abs(job_data_seg) == self.data_length) return job_data_seg
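A plain-tuple sketch of the shift-and-push-back arithmetic above, assuming the data chunk starts at offset 0; `curr_seg` and the result are hypothetical `(start, end)` tuples standing in for ligo `segments` objects:

```python
def job_data_times(curr_seg, data_length, job_time_shift, num_jobs, num_job):
    """Shift-and-push-back logic over plain (start, end) tuples."""
    start = curr_seg[0] + job_time_shift * num_job
    seg = (start, start + data_length)
    if num_job == num_jobs - 1:
        # Last job: pull the segment back so it ends with the current segment.
        push_back = seg[1] - curr_seg[1]
        assert push_back >= 0
        seg = (seg[0] - push_back, curr_seg[1])
    assert seg[1] - seg[0] == data_length
    return seg
```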
Get the data that this job will need to read in.
def validate(self, graph): """ Validate the graph by checking whether it is a directed acyclic graph. Args: graph (DiGraph): Reference to a DiGraph object from NetworkX. Raises: DirectedAcyclicGraphInvalid: If the graph is not a valid dag. """ if not nx.is_directed_acyclic_graph(graph): raise DirectedAcyclicGraphInvalid(graph_name=self._name)
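`nx.is_directed_acyclic_graph` boils down to a cycle check. As an illustration of what the validation rejects, here is a three-color iterative DFS over a plain adjacency dict (a sketch, not NetworkX's implementation):

```python
def is_dag(adj):
    """Return True if the adjacency-dict graph has no directed cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in adj}
    for root in adj:
        if color[root] != WHITE:
            continue
        color[root] = GRAY
        stack = [(root, iter(adj[root]))]
        while stack:
            node, children = stack[-1]
            for child in children:
                if color.get(child, WHITE) == GRAY:
                    return False  # back edge -> cycle
                if color.get(child, WHITE) == WHITE:
                    color[child] = GRAY
                    stack.append((child, iter(adj.get(child, ()))))
                    break
            else:
                # All children visited: mark finished and backtrack.
                color[node] = BLACK
                stack.pop()
    return True
```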
Validate the graph by checking whether it is a directed acyclic graph. Args: graph (DiGraph): Reference to a DiGraph object from NetworkX. Raises: DirectedAcyclicGraphInvalid: If the graph is not a valid dag.
def _get_kind_and_names(attributes): """Gets kind and possible names for :tl:`DocumentAttribute`.""" kind = 'document' possible_names = [] for attr in attributes: if isinstance(attr, types.DocumentAttributeFilename): possible_names.insert(0, attr.file_name) elif isinstance(attr, types.DocumentAttributeAudio): kind = 'audio' if attr.performer and attr.title: possible_names.append('{} - {}'.format( attr.performer, attr.title )) elif attr.performer: possible_names.append(attr.performer) elif attr.title: possible_names.append(attr.title) elif attr.voice: kind = 'voice' return kind, possible_names
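The dispatch above can be exercised with hypothetical stand-in classes (not Telethon's real `types`), showing how an explicit filename wins the first slot and how `voice` only applies when no performer/title is present:

```python
class Filename:
    def __init__(self, file_name):
        self.file_name = file_name

class Audio:
    def __init__(self, performer=None, title=None, voice=False):
        self.performer, self.title, self.voice = performer, title, voice

def kind_and_names(attributes):
    kind, names = 'document', []
    for attr in attributes:
        if isinstance(attr, Filename):
            names.insert(0, attr.file_name)  # an explicit filename goes first
        elif isinstance(attr, Audio):
            kind = 'audio'
            if attr.performer and attr.title:
                names.append('{} - {}'.format(attr.performer, attr.title))
            elif attr.performer or attr.title:
                names.append(attr.performer or attr.title)
            elif attr.voice:
                kind = 'voice'
    return kind, names
```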
Gets kind and possible names for :tl:`DocumentAttribute`.
def delete_events(self, event_collection, timeframe=None, timezone=None, filters=None): """ Deletes events. :param event_collection: string, the event collection from which event are being deleted :param timeframe: string or dict, the timeframe in which the events happened example: "previous_7_days" :param timezone: int, the timezone you'd like to use for the timeframe and interval in seconds :param filters: array of dict, contains the filters you'd like to apply to the data example: [{"property_name":"device", "operator":"eq", "property_value":"iPhone"}] """ params = self.get_params(timeframe=timeframe, timezone=timezone, filters=filters) return self.api.delete_events(event_collection, params)
Deletes events. :param event_collection: string, the event collection from which event are being deleted :param timeframe: string or dict, the timeframe in which the events happened example: "previous_7_days" :param timezone: int, the timezone you'd like to use for the timeframe and interval in seconds :param filters: array of dict, contains the filters you'd like to apply to the data example: [{"property_name":"device", "operator":"eq", "property_value":"iPhone"}]
def returns(self, val): """Set the last call to return a value. Set a static value to return when a method is called. I.E.:: >>> f = Fake().provides('get_number').returns(64) >>> f.get_number() 64 """ exp = self._get_current_call() exp.return_val = val return self
Set the last call to return a value. Set a static value to return when a method is called. I.E.:: >>> f = Fake().provides('get_number').returns(64) >>> f.get_number() 64
def is_subscriber(self): """Returns whether the user is a subscriber or not. True or False.""" doc = self._request(self.ws_prefix + ".getInfo", True) return _extract(doc, "subscriber") == "1"
Returns whether the user is a subscriber or not. True or False.
async def build(self, building: UnitTypeId, near: Union[Point2, Point3], max_distance: int=20, unit: Optional[Unit]=None, random_alternative: bool=True, placement_step: int=2): """Build a building.""" if isinstance(near, Unit): near = near.position.to2 elif near is not None: near = near.to2 else: return p = await self.find_placement(building, near.rounded, max_distance, random_alternative, placement_step) if p is None: return ActionResult.CantFindPlacementLocation unit = unit or self.select_build_worker(p) if unit is None or not self.can_afford(building): return ActionResult.Error return await self.do(unit.build(building, p))
Build a building.
def format_diff_xml(a_xml, b_xml): """Create a diff between two XML documents. Args: a_xml: str b_xml: str Returns: str : `Differ`-style delta """ return '\n'.join( difflib.ndiff( reformat_to_pretty_xml(a_xml).splitlines(), reformat_to_pretty_xml(b_xml).splitlines(), ) )
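`difflib.ndiff` is what produces the `Differ`-style delta; a minimal stdlib demonstration, leaving out the `reformat_to_pretty_xml` step:

```python
import difflib

a_lines = "<a>\n  <b>1</b>\n</a>".splitlines()
b_lines = "<a>\n  <b>2</b>\n</a>".splitlines()

# Unchanged lines are prefixed "  ", removed "- ", added "+ ",
# and "? " lines point at the intraline change.
delta = '\n'.join(difflib.ndiff(a_lines, b_lines))
print(delta)
```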
Create a diff between two XML documents. Args: a_xml: str b_xml: str Returns: str : `Differ`-style delta
def calcTm(seq, mv_conc=50, dv_conc=0, dntp_conc=0.8, dna_conc=50, max_nn_length=60, tm_method='santalucia', salt_corrections_method='santalucia'): ''' Return the tm of `seq` as a float. ''' tm_meth = _tm_methods.get(tm_method) if tm_meth is None: raise ValueError('{} is not a valid tm calculation method'.format( tm_method)) salt_meth = _salt_corrections_methods.get(salt_corrections_method) if salt_meth is None: raise ValueError('{} is not a valid salt correction method'.format( salt_corrections_method)) # For whatever reason mv_conc and dna_conc have to be ints args = [pjoin(PRIMER3_HOME, 'oligotm'), '-mv', str(mv_conc), '-dv', str(dv_conc), '-n', str(dntp_conc), '-d', str(dna_conc), '-tp', str(tm_meth), '-sc', str(salt_meth), seq] tm = subprocess.check_output(args, stderr=DEV_NULL, env=os.environ) return float(tm)
Return the tm of `seq` as a float.
def p_range(self, p): """range : value DOT_DOT value | value""" n = len(p) if n == 2: p[0] = (p[1],) elif n == 4: p[0] = (p[1], p[3])
range : value DOT_DOT value | value
def channel_portion(image, channel): '''Estimates the amount of a color relative to other colors. :param image: numpy.ndarray :param channel: int :returns: portion of a channel in an image :rtype: float ''' # Separate color channels rgb = [] for i in range(3): rgb.append(image[:, :, i].astype(int)) ch = rgb.pop(channel) relative_values = ch - np.sum(rgb, axis=0) / 2 relative_values = np.maximum(np.zeros(ch.shape), relative_values) return float(np.average(relative_values) / 255)
Estimates the amount of a color relative to other colors. :param image: numpy.ndarray :param channel: int :returns: portion of a channel in an image :rtype: float
def transform(self, func): """ Apply a transformation to tokens in this :class:`.FeatureSet`\. Parameters ---------- func : callable Should take four parameters: token, value in document (e.g. count), value in :class:`.FeatureSet` (e.g. overall count), and document count (i.e. number of documents in which the token occurs). Should return a new numeric (int or float) value, or None. If value is 0 or None, the token will be excluded. Returns ------- :class:`.FeatureSet` Examples -------- Apply a tf*idf transformation. .. code-block:: python >>> words = corpus.features['words'] >>> def tfidf(f, c, C, DC): ... tf = float(c) ... idf = log(float(len(words.features))/float(DC)) ... return tf*idf >>> corpus.features['words_tfidf'] = words.transform(tfidf) """ features = {} for i, feature in self.features.iteritems(): feature_ = [] for f, v in feature: t = self.lookup[f] v_ = func(f, v, self.counts[t], self.documentCounts[t]) if v_: feature_.append((f, v_)) features[i] = Feature(feature_) return FeatureSet(features)
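The docstring's tf*idf example can be checked outside the class machinery: a self-contained sketch over a hypothetical token → count corpus, with plain dicts standing in for `Feature`/`FeatureSet`:

```python
from math import log

# Hypothetical corpus: each document is a token -> count mapping.
docs = [{'cat': 3, 'dog': 1}, {'cat': 1}, {'fish': 2}]

# DC in the docstring: number of documents containing each token.
doc_counts = {}
for doc in docs:
    for token in doc:
        doc_counts[token] = doc_counts.get(token, 0) + 1

def tfidf(count, token):
    # tf * idf, mirroring the docstring's tfidf(f, c, C, DC) body.
    return count * log(len(docs) / doc_counts[token])

weights = {token: tfidf(count, token) for token, count in docs[0].items()}
```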
Apply a transformation to tokens in this :class:`.FeatureSet`\. Parameters ---------- func : callable Should take four parameters: token, value in document (e.g. count), value in :class:`.FeatureSet` (e.g. overall count), and document count (i.e. number of documents in which the token occurs). Should return a new numeric (int or float) value, or None. If value is 0 or None, the token will be excluded. Returns ------- :class:`.FeatureSet` Examples -------- Apply a tf*idf transformation. .. code-block:: python >>> words = corpus.features['words'] >>> def tfidf(f, c, C, DC): ... tf = float(c) ... idf = log(float(len(words.features))/float(DC)) ... return tf*idf >>> corpus.features['words_tfidf'] = words.transform(tfidf)
def _get_starting_population(initial_population, initial_position, population_size, population_stddev, seed): """Constructs the initial population. If an initial population is not already provided, this function constructs a population by adding random normal noise to the initial position. Args: initial_population: None or a list of `Tensor`s. The initial population. initial_position: None or a list of `Tensor`s. The initial position. If initial_population is None, this argument must not be None. population_size: Scalar integer `Tensor`. The number of members in the population. If the initial population is not None, this parameter is ignored. population_stddev: A positive scalar real `Tensor` of the same dtype as `initial_position` or `initial_population` (whichever is not None). This parameter is ignored if `initial_population` is specified. Used to generate the population from the `initial_position` by adding random normal noise with zero mean and the specified standard deviation. seed: Seed for random number generation. Returns: A list of `Tensor`s. The initial population. """ if initial_population is not None: return [tf.convert_to_tensor(value=part) for part in initial_population] # Constructs the population by adding normal noise to the initial position. seed_stream = distributions.SeedStream(seed, salt='get_starting_population') population = [] for part in initial_position: part = tf.convert_to_tensor(value=part) part_event_shape = tf.shape(input=part) # We only draw population_size-1 random vectors because we want to ensure # that the supplied position is part of the population. The first member # is set to be the initial_position. 
population_part_shape = tf.concat([[population_size-1], part_event_shape], axis=0) population_part = tf.random.normal(population_part_shape, stddev=population_stddev, dtype=part.dtype.base_dtype, seed=seed_stream()) population_part += part population_part = tf.concat([[part], population_part], axis=0) population.append(population_part) return population
Constructs the initial population. If an initial population is not already provided, this function constructs a population by adding random normal noise to the initial position. Args: initial_population: None or a list of `Tensor`s. The initial population. initial_position: None or a list of `Tensor`s. The initial position. If initial_population is None, this argument must not be None. population_size: Scalar integer `Tensor`. The number of members in the population. If the initial population is not None, this parameter is ignored. population_stddev: A positive scalar real `Tensor` of the same dtype as `initial_position` or `initial_population` (whichever is not None). This parameter is ignored if `initial_population` is specified. Used to generate the population from the `initial_position` by adding random normal noise with zero mean and the specified standard deviation. seed: Seed for random number generation. Returns: A list of `Tensor`s. The initial population.
Below is the instruction that describes the task: ### Input: Constructs the initial population. If an initial population is not already provided, this function constructs a population by adding random normal noise to the initial position. Args: initial_population: None or a list of `Tensor`s. The initial population. initial_position: None or a list of `Tensor`s. The initial position. If initial_population is None, this argument must not be None. population_size: Scalar integer `Tensor`. The number of members in the population. If the initial population is not None, this parameter is ignored. population_stddev: A positive scalar real `Tensor` of the same dtype as `initial_position` or `initial_population` (whichever is not None). This parameter is ignored if `initial_population` is specified. Used to generate the population from the `initial_position` by adding random normal noise with zero mean and the specified standard deviation. seed: Seed for random number generation. Returns: A list of `Tensor`s. The initial population. ### Response: def _get_starting_population(initial_population, initial_position, population_size, population_stddev, seed): """Constructs the initial population. If an initial population is not already provided, this function constructs a population by adding random normal noise to the initial position. Args: initial_population: None or a list of `Tensor`s. The initial population. initial_position: None or a list of `Tensor`s. The initial position. If initial_population is None, this argument must not be None. population_size: Scalar integer `Tensor`. The number of members in the population. If the initial population is not None, this parameter is ignored. population_stddev: A positive scalar real `Tensor` of the same dtype as `initial_position` or `initial_population` (whichever is not None). This parameter is ignored if `initial_population` is specified.
Used to generate the population from the `initial_position` by adding random normal noise with zero mean and the specified standard deviation. seed: Seed for random number generation. Returns: A list of `Tensor`s. The initial population. """ if initial_population is not None: return [tf.convert_to_tensor(value=part) for part in initial_population] # Constructs the population by adding normal noise to the initial position. seed_stream = distributions.SeedStream(seed, salt='get_starting_population') population = [] for part in initial_position: part = tf.convert_to_tensor(value=part) part_event_shape = tf.shape(input=part) # We only draw population_size-1 random vectors because we want to ensure # that the supplied position is part of the population. The first member # is set to be the initial_position. population_part_shape = tf.concat([[population_size-1], part_event_shape], axis=0) population_part = tf.random.normal(population_part_shape, stddev=population_stddev, dtype=part.dtype.base_dtype, seed=seed_stream()) population_part += part population_part = tf.concat([[part], population_part], axis=0) population.append(population_part) return population
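The construction above — `population_size - 1` noisy copies plus the untouched initial position as the first member — is easy to mirror without TensorFlow. A dependency-free sketch using stdlib `random` (illustrative, not TensorFlow Probability's implementation):

```python
import random

def get_starting_population(initial_position, population_size, stddev, rnd):
    """Return the initial position plus population_size - 1 noisy copies.

    The first member is the supplied position itself, guaranteeing it is
    part of the population, as in the TF version.
    """
    first = list(initial_position)
    population = [first]
    for _ in range(population_size - 1):
        population.append([x + rnd.gauss(0.0, stddev) for x in first])
    return population

rnd = random.Random(0)
pop = get_starting_population([1.0, 2.0, 3.0], population_size=5,
                              stddev=0.1, rnd=rnd)
# len(pop) == 5, and pop[0] is exactly the supplied position.
```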
def read_vcpu_struct_field(self, field_name, x, y, p): """Read a value out of the VCPU struct for a specific core. Similar to :py:meth:`.read_struct_field` except this method accesses the individual VCPU struct for each core and contains application runtime status. Parameters ---------- field_name : string Name of the field to read from the struct (e.g. `"cpu_state"`) Returns ------- value A value of the type contained in the specified struct field. """ # Get the base address of the VCPU struct for this chip, then advance # to get the correct VCPU struct for the requested core. field, address, pack_chars = \ self._get_vcpu_field_and_address(field_name, x, y, p) # Perform the read length = struct.calcsize(pack_chars) data = self.read(address, length, x, y) # Unpack and return unpacked = struct.unpack(pack_chars, data) if field.length == 1: return unpacked[0] else: # If the field is a string then truncate it and return if b"s" in pack_chars: return unpacked[0].strip(b"\x00").decode("utf-8") # Otherwise just return. (Note: at the time of writing, no fields # in the VCPU struct are of this form.) return unpacked
Read a value out of the VCPU struct for a specific core. Similar to :py:meth:`.read_struct_field` except this method accesses the individual VCPU struct for each core and contains application runtime status. Parameters ---------- field_name : string Name of the field to read from the struct (e.g. `"cpu_state"`) Returns ------- value A value of the type contained in the specified struct field.
Below is the instruction that describes the task: ### Input: Read a value out of the VCPU struct for a specific core. Similar to :py:meth:`.read_struct_field` except this method accesses the individual VCPU struct for each core and contains application runtime status. Parameters ---------- field_name : string Name of the field to read from the struct (e.g. `"cpu_state"`) Returns ------- value A value of the type contained in the specified struct field. ### Response: def read_vcpu_struct_field(self, field_name, x, y, p): """Read a value out of the VCPU struct for a specific core. Similar to :py:meth:`.read_struct_field` except this method accesses the individual VCPU struct for each core and contains application runtime status. Parameters ---------- field_name : string Name of the field to read from the struct (e.g. `"cpu_state"`) Returns ------- value A value of the type contained in the specified struct field. """ # Get the base address of the VCPU struct for this chip, then advance # to get the correct VCPU struct for the requested core. field, address, pack_chars = \ self._get_vcpu_field_and_address(field_name, x, y, p) # Perform the read length = struct.calcsize(pack_chars) data = self.read(address, length, x, y) # Unpack and return unpacked = struct.unpack(pack_chars, data) if field.length == 1: return unpacked[0] else: # If the field is a string then truncate it and return if b"s" in pack_chars: return unpacked[0].strip(b"\x00").decode("utf-8") # Otherwise just return. (Note: at the time of writing, no fields # in the VCPU struct are of this form.) return unpacked
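The unpack-and-truncate logic in that response — `struct.calcsize` to size the read, then stripping trailing NULs from string fields — can be exercised against an in-memory buffer. A sketch with hypothetical pack characters, not SpiNNaker's actual VCPU layout:

```python
import struct

def read_field(buffer, offset, pack_chars):
    """Read one struct field from a bytes buffer, mirroring the VCPU logic."""
    length = struct.calcsize(pack_chars)
    unpacked = struct.unpack(pack_chars, buffer[offset:offset + length])
    if "s" in pack_chars:
        # String fields are NUL-padded: truncate and decode, as the method does.
        return unpacked[0].strip(b"\x00").decode("utf-8")
    if len(unpacked) == 1:
        return unpacked[0]
    return unpacked

# struct.pack pads the 8-byte string field with NULs automatically.
buf = struct.pack("<I8s", 7, b"run")
state = read_field(buf, 0, "<I")   # a 4-byte unsigned int field
name = read_field(buf, 4, "8s")    # a NUL-padded string field
```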
def rename_model(self, old_model, new_model): """ Change the label of a model attached to the Bundle :parameter str old_model: the current name of the model (must exist) :parameter str new_model: the desired new name of the model (must not exist) :return: None :raises ValueError: if the new_model is forbidden """ # TODO: raise error if old_feature not found? self._check_label(new_model) self._rename_label('model', old_model, new_model)
Change the label of a model attached to the Bundle :parameter str old_model: the current name of the model (must exist) :parameter str new_model: the desired new name of the model (must not exist) :return: None :raises ValueError: if the new_model is forbidden
Below is the instruction that describes the task: ### Input: Change the label of a model attached to the Bundle :parameter str old_model: the current name of the model (must exist) :parameter str new_model: the desired new name of the model (must not exist) :return: None :raises ValueError: if the new_model is forbidden ### Response: def rename_model(self, old_model, new_model): """ Change the label of a model attached to the Bundle :parameter str old_model: the current name of the model (must exist) :parameter str new_model: the desired new name of the model (must not exist) :return: None :raises ValueError: if the new_model is forbidden """ # TODO: raise error if old_feature not found? self._check_label(new_model) self._rename_label('model', old_model, new_model)
def interfaces(device=None, interface=None, title=None, pattern=None, ipnet=None, best=True, display=_DEFAULT_DISPLAY): ''' Search for interfaces details in the following mine functions: - net.interfaces - net.ipaddrs Optional arguments: device Return interface data from a certain device only. interface Return data selecting by interface name. pattern Return interfaces that contain a certain pattern in their description. ipnet Return interfaces whose IP networks associated include this IP network. best: ``True`` When ``ipnet`` is specified, this argument says if the runner should return only the best match (the output will contain at most one row). Default: ``True`` (return only the best match). display: True Display on the screen or return structured object? Default: ``True`` (return on the CLI). title Display a custom title for the table. CLI Example: .. code-block:: bash $ sudo salt-run net.interfaces interface=vt-0/0/10 Output Example: .. code-block:: text Details for interface xe-0/0/0 _________________________________________________________________________________________________________________ | Device | Interface | Interface Description | UP | Enabled | Speed [Mbps] | MAC Address | IP Addresses | _________________________________________________________________________________________________________________ | edge01.bjm01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.flw01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.pos01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.oua01 | vt-0/0/10 | | True | True | 1000 | | | 
_________________________________________________________________________________________________________________ ''' def _ipnet_belongs(net): ''' Helper to tell if a IP address or network belong to a certain network. ''' if net == '0.0.0.0/0': return False net_obj = _get_network_obj(net) if not net_obj: return False return ipnet in net_obj or net_obj in ipnet labels = { 'device': 'Device', 'interface': 'Interface', 'interface_description': 'Interface Description', 'is_up': 'UP', 'is_enabled': 'Enabled', 'speed': 'Speed [Mbps]', 'mac': 'MAC Address', 'ips': 'IP Addresses' } rows = [] net_runner_opts = _get_net_runner_opts() if pattern: title = 'Pattern "{0}" found in the description of the following interfaces'.format(pattern) if not title: title = 'Details' if interface: title += ' for interface {0}'.format(interface) else: title += ' for all interfaces' if device: title += ' on device {0}'.format(device) if ipnet: title += ' that include network {net}'.format(net=six.text_type(ipnet)) if best: title += ' - only best match returned' all_interfaces = _get_mine('net.interfaces') all_ipaddrs = _get_mine('net.ipaddrs') if device: all_interfaces = {device: all_interfaces.get(device, {})} if ipnet and not isinstance(ipnet, IPNetwork): ipnet = _get_network_obj(ipnet) best_row = {} best_net_match = None for device, net_interfaces_out in six.iteritems(all_interfaces): # pylint: disable=too-many-nested-blocks if not net_interfaces_out: continue if not net_interfaces_out.get('result', False): continue selected_device_interfaces = net_interfaces_out.get('out', {}) if interface: selected_device_interfaces = {interface: selected_device_interfaces.get(interface, {})} for interface_name, interface_details in six.iteritems(selected_device_interfaces): if not interface_details: continue if ipnet and interface_name in net_runner_opts.get('ignore_interfaces'): continue interface_description = (interface_details.get('description', '') or '') if pattern: if pattern.lower() not in 
interface_description.lower(): continue if not all_ipaddrs.get(device, {}).get('result', False): continue ips = [] device_entry = { 'device': device, 'interface': interface_name, 'interface_description': interface_description, 'is_up': (interface_details.get('is_up', '') or ''), 'is_enabled': (interface_details.get('is_enabled', '') or ''), 'speed': (interface_details.get('speed', '') or ''), 'mac': napalm_helpers.convert(napalm_helpers.mac, (interface_details.get('mac_address', '') or '')), 'ips': [] } intf_entry_found = False for intrf, interface_ips in six.iteritems(all_ipaddrs.get(device, {}).get('out', {})): if intrf.split('.')[0] == interface_name: ip_addresses = interface_ips.get('ipv4', {}) # all IPv4 addresses ip_addresses.update(interface_ips.get('ipv6', {})) # and all IPv6 addresses ips = [ '{0}/{1}'.format( ip_addr, addr_details.get('prefix_length', '32') ) for ip_addr, addr_details in six.iteritems(ip_addresses) ] interf_entry = {} interf_entry.update(device_entry) interf_entry['ips'] = ips if display: interf_entry['ips'] = '\n'.join(interf_entry['ips']) if ipnet: inet_ips = [ six.text_type(ip) for ip in ips if _ipnet_belongs(ip) ] # filter and get only IP include ipnet if inet_ips: # if any if best: # determine the global best match compare = [best_net_match] compare.extend(list(map(_get_network_obj, inet_ips))) new_best_net_match = max(compare) if new_best_net_match != best_net_match: best_net_match = new_best_net_match best_row = interf_entry else: # or include all intf_entry_found = True rows.append(interf_entry) else: intf_entry_found = True rows.append(interf_entry) if not intf_entry_found and not ipnet: interf_entry = {} interf_entry.update(device_entry) if display: interf_entry['ips'] = '' rows.append(interf_entry) if ipnet and best and best_row: rows = [best_row] return _display_runner(rows, labels, title, display=display)
Search for interfaces details in the following mine functions: - net.interfaces - net.ipaddrs Optional arguments: device Return interface data from a certain device only. interface Return data selecting by interface name. pattern Return interfaces that contain a certain pattern in their description. ipnet Return interfaces whose IP networks associated include this IP network. best: ``True`` When ``ipnet`` is specified, this argument says if the runner should return only the best match (the output will contain at most one row). Default: ``True`` (return only the best match). display: True Display on the screen or return structured object? Default: ``True`` (return on the CLI). title Display a custom title for the table. CLI Example: .. code-block:: bash $ sudo salt-run net.interfaces interface=vt-0/0/10 Output Example: .. code-block:: text Details for interface xe-0/0/0 _________________________________________________________________________________________________________________ | Device | Interface | Interface Description | UP | Enabled | Speed [Mbps] | MAC Address | IP Addresses | _________________________________________________________________________________________________________________ | edge01.bjm01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.flw01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.pos01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.oua01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________
Below is the instruction that describes the task: ### Input: Search for interfaces details in the following mine functions: - net.interfaces - net.ipaddrs Optional arguments: device Return interface data from a certain device only. interface Return data selecting by interface name. pattern Return interfaces that contain a certain pattern in their description. ipnet Return interfaces whose IP networks associated include this IP network. best: ``True`` When ``ipnet`` is specified, this argument says if the runner should return only the best match (the output will contain at most one row). Default: ``True`` (return only the best match). display: True Display on the screen or return structured object? Default: ``True`` (return on the CLI). title Display a custom title for the table. CLI Example: .. code-block:: bash $ sudo salt-run net.interfaces interface=vt-0/0/10 Output Example: .. code-block:: text Details for interface xe-0/0/0 _________________________________________________________________________________________________________________ | Device | Interface | Interface Description | UP | Enabled | Speed [Mbps] | MAC Address | IP Addresses | _________________________________________________________________________________________________________________ | edge01.bjm01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.flw01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.pos01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.oua01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ ### Response: def
interfaces(device=None, interface=None, title=None, pattern=None, ipnet=None, best=True, display=_DEFAULT_DISPLAY): ''' Search for interfaces details in the following mine functions: - net.interfaces - net.ipaddrs Optional arguments: device Return interface data from a certain device only. interface Return data selecting by interface name. pattern Return interfaces that contain a certain pattern in their description. ipnet Return interfaces whose IP networks associated include this IP network. best: ``True`` When ``ipnet`` is specified, this argument says if the runner should return only the best match (the output will contain at most one row). Default: ``True`` (return only the best match). display: True Display on the screen or return structured object? Default: ``True`` (return on the CLI). title Display a custom title for the table. CLI Example: .. code-block:: bash $ sudo salt-run net.interfaces interface=vt-0/0/10 Output Example: .. code-block:: text Details for interface xe-0/0/0 _________________________________________________________________________________________________________________ | Device | Interface | Interface Description | UP | Enabled | Speed [Mbps] | MAC Address | IP Addresses | _________________________________________________________________________________________________________________ | edge01.bjm01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.flw01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.pos01 | vt-0/0/10 | | True | True | 1000 | | | _________________________________________________________________________________________________________________ | edge01.oua01 | vt-0/0/10 | | True | True | 1000 | | | 
_________________________________________________________________________________________________________________ ''' def _ipnet_belongs(net): ''' Helper to tell if a IP address or network belong to a certain network. ''' if net == '0.0.0.0/0': return False net_obj = _get_network_obj(net) if not net_obj: return False return ipnet in net_obj or net_obj in ipnet labels = { 'device': 'Device', 'interface': 'Interface', 'interface_description': 'Interface Description', 'is_up': 'UP', 'is_enabled': 'Enabled', 'speed': 'Speed [Mbps]', 'mac': 'MAC Address', 'ips': 'IP Addresses' } rows = [] net_runner_opts = _get_net_runner_opts() if pattern: title = 'Pattern "{0}" found in the description of the following interfaces'.format(pattern) if not title: title = 'Details' if interface: title += ' for interface {0}'.format(interface) else: title += ' for all interfaces' if device: title += ' on device {0}'.format(device) if ipnet: title += ' that include network {net}'.format(net=six.text_type(ipnet)) if best: title += ' - only best match returned' all_interfaces = _get_mine('net.interfaces') all_ipaddrs = _get_mine('net.ipaddrs') if device: all_interfaces = {device: all_interfaces.get(device, {})} if ipnet and not isinstance(ipnet, IPNetwork): ipnet = _get_network_obj(ipnet) best_row = {} best_net_match = None for device, net_interfaces_out in six.iteritems(all_interfaces): # pylint: disable=too-many-nested-blocks if not net_interfaces_out: continue if not net_interfaces_out.get('result', False): continue selected_device_interfaces = net_interfaces_out.get('out', {}) if interface: selected_device_interfaces = {interface: selected_device_interfaces.get(interface, {})} for interface_name, interface_details in six.iteritems(selected_device_interfaces): if not interface_details: continue if ipnet and interface_name in net_runner_opts.get('ignore_interfaces'): continue interface_description = (interface_details.get('description', '') or '') if pattern: if pattern.lower() not in 
interface_description.lower(): continue if not all_ipaddrs.get(device, {}).get('result', False): continue ips = [] device_entry = { 'device': device, 'interface': interface_name, 'interface_description': interface_description, 'is_up': (interface_details.get('is_up', '') or ''), 'is_enabled': (interface_details.get('is_enabled', '') or ''), 'speed': (interface_details.get('speed', '') or ''), 'mac': napalm_helpers.convert(napalm_helpers.mac, (interface_details.get('mac_address', '') or '')), 'ips': [] } intf_entry_found = False for intrf, interface_ips in six.iteritems(all_ipaddrs.get(device, {}).get('out', {})): if intrf.split('.')[0] == interface_name: ip_addresses = interface_ips.get('ipv4', {}) # all IPv4 addresses ip_addresses.update(interface_ips.get('ipv6', {})) # and all IPv6 addresses ips = [ '{0}/{1}'.format( ip_addr, addr_details.get('prefix_length', '32') ) for ip_addr, addr_details in six.iteritems(ip_addresses) ] interf_entry = {} interf_entry.update(device_entry) interf_entry['ips'] = ips if display: interf_entry['ips'] = '\n'.join(interf_entry['ips']) if ipnet: inet_ips = [ six.text_type(ip) for ip in ips if _ipnet_belongs(ip) ] # filter and get only IP include ipnet if inet_ips: # if any if best: # determine the global best match compare = [best_net_match] compare.extend(list(map(_get_network_obj, inet_ips))) new_best_net_match = max(compare) if new_best_net_match != best_net_match: best_net_match = new_best_net_match best_row = interf_entry else: # or include all intf_entry_found = True rows.append(interf_entry) else: intf_entry_found = True rows.append(interf_entry) if not intf_entry_found and not ipnet: interf_entry = {} interf_entry.update(device_entry) if display: interf_entry['ips'] = '' rows.append(interf_entry) if ipnet and best and best_row: rows = [best_row] return _display_runner(rows, labels, title, display=display)
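The best-match selection in that runner — keep only the interface whose associated network most specifically contains the target, skipping the default route — reduces to longest-prefix matching, which the standard-library `ipaddress` module can sketch (illustrative helper names, not the runner's API):

```python
import ipaddress

def best_match(target, candidate_nets):
    """Return the candidate network containing `target` with the longest prefix."""
    target = ipaddress.ip_network(target, strict=False)
    best = None
    for net in candidate_nets:
        net = ipaddress.ip_network(net, strict=False)
        if net == ipaddress.ip_network("0.0.0.0/0"):
            continue  # the default route matches everything; skip it, as the runner does
        if target.subnet_of(net):
            if best is None or net.prefixlen > best.prefixlen:
                best = net  # a longer prefix is a more specific, hence better, match
    return best

nets = ["0.0.0.0/0", "10.0.0.0/8", "10.10.0.0/16", "192.168.1.0/24"]
m = best_match("10.10.3.7/32", nets)  # both 10/8 and 10.10/16 contain it; /16 wins
```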
def shard_id(self): """Returns the shard ID for this guild if applicable.""" count = self._state.shard_count if count is None: return None return (self.id >> 22) % count
Returns the shard ID for this guild if applicable.
Below is the instruction that describes the task: ### Input: Returns the shard ID for this guild if applicable. ### Response: def shard_id(self): """Returns the shard ID for this guild if applicable.""" count = self._state.shard_count if count is None: return None return (self.id >> 22) % count
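The one-liner in that response relies on Discord snowflake IDs: the bits above bit 22 are a timestamp, while the low 22 bits hold worker/process/increment counters, so `(guild_id >> 22) % shard_count` spreads guilds across shards independently of those low bits. A quick standalone check:

```python
def shard_id(guild_id, shard_count):
    """Discord's guild-to-shard mapping: drop the 22 low snowflake bits, then mod."""
    if shard_count is None:
        return None
    return (guild_id >> 22) % shard_count

# A real-looking snowflake: timestamp bits above the low 22 counter bits.
snowflake = (123456789 << 22) | 4095
s = shard_id(snowflake, 4)
# Changing only the low 22 bits never changes the shard assignment.
```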
def segment(f, output, target_duration, mpegts): """Segment command.""" try: target_duration = int(target_duration) except ValueError: exit('Error: Invalid target duration.') try: mpegts = int(mpegts) except ValueError: exit('Error: Invalid MPEGTS value.') WebVTTSegmenter().segment(f, output, target_duration, mpegts)
Segment command.
Below is the instruction that describes the task: ### Input: Segment command. ### Response: def segment(f, output, target_duration, mpegts): """Segment command.""" try: target_duration = int(target_duration) except ValueError: exit('Error: Invalid target duration.') try: mpegts = int(mpegts) except ValueError: exit('Error: Invalid MPEGTS value.') WebVTTSegmenter().segment(f, output, target_duration, mpegts)
def get_info(self): ''' Get info regarding the current template state :return: info dictionary ''' self.render() info = super(Template, self).get_info() res = {} res['name'] = self.get_name() res['mutation'] = { 'current_index': self._current_index, 'total_number': self.num_mutations() } res['value'] = { 'rendered': { 'base64': b64encode(self._current_rendered.tobytes()).decode(), 'length_in_bytes': len(self._current_rendered.tobytes()), } } res['hash'] = self.hash() res['field'] = info return res
Get info regarding the current template state :return: info dictionary
Below is the instruction that describes the task: ### Input: Get info regarding the current template state :return: info dictionary ### Response: def get_info(self): ''' Get info regarding the current template state :return: info dictionary ''' self.render() info = super(Template, self).get_info() res = {} res['name'] = self.get_name() res['mutation'] = { 'current_index': self._current_index, 'total_number': self.num_mutations() } res['value'] = { 'rendered': { 'base64': b64encode(self._current_rendered.tobytes()).decode(), 'length_in_bytes': len(self._current_rendered.tobytes()), } } res['hash'] = self.hash() res['field'] = info return res
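The `rendered` sub-dictionary in that response pairs a base64-encoded payload with its byte length so the info dictionary stays JSON-safe; the encoding round-trip is plain stdlib `base64`. A sketch with a made-up payload:

```python
from base64 import b64encode, b64decode

rendered = b"\x01\x02GET /index HTTP/1.1"  # hypothetical rendered bytes

report = {
    "rendered": {
        # .decode() turns the base64 bytes into a JSON-friendly str.
        "base64": b64encode(rendered).decode(),
        "length_in_bytes": len(rendered),
    }
}

# A consumer can recover the exact payload from the report.
recovered = b64decode(report["rendered"]["base64"])
```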
def cost_sampling(X, y, cost_mat, method='RejectionSampling', oversampling_norm=0.1, max_wc=97.5): """Cost-proportionate sampling. Parameters ---------- X : array-like of shape = [n_samples, n_features] The input samples. y : array-like of shape = [n_samples] Ground truth (correct) labels. cost_mat : array-like of shape = [n_samples, 4] Cost matrix of the classification problem Where the columns represent the costs of: false positives, false negatives, true positives and true negatives, for each example. method : str, optional (default = RejectionSampling) Method to perform the cost-proportionate sampling, either 'RejectionSampling' or 'OverSampling'. oversampling_norm: float, optional (default = 0.1) normalization value for wc; the smaller the value, the larger the resulting dataset. max_wc: float, optional (default = 97.5) outlier adjustment for the cost. References ---------- .. [1] B. Zadrozny, J. Langford, N. Abe, "Cost-sensitive learning by cost-proportionate example weighting", in Proceedings of the Third IEEE International Conference on Data Mining, 435-442, 2003. .. [2] C. Elkan, "The foundations of Cost-Sensitive Learning", in Seventeenth International Joint Conference on Artificial Intelligence, 973-978, 2001.
Examples -------- >>> from sklearn.ensemble import RandomForestClassifier >>> from sklearn.cross_validation import train_test_split >>> from costcla.datasets import load_creditscoring1 >>> from costcla.sampling import cost_sampling, undersampling >>> from costcla.metrics import savings_score >>> data = load_creditscoring1() >>> sets = train_test_split(data.data, data.target, data.cost_mat, test_size=0.33, random_state=0) >>> X_train, X_test, y_train, y_test, cost_mat_train, cost_mat_test = sets >>> X_cps_o, y_cps_o, cost_mat_cps_o = cost_sampling(X_train, y_train, cost_mat_train, method='OverSampling') >>> X_cps_r, y_cps_r, cost_mat_cps_r = cost_sampling(X_train, y_train, cost_mat_train, method='RejectionSampling') >>> X_u, y_u, cost_mat_u = undersampling(X_train, y_train, cost_mat_train) >>> y_pred_test_rf = RandomForestClassifier(random_state=0).fit(X_train, y_train).predict(X_test) >>> y_pred_test_rf_cps_o = RandomForestClassifier(random_state=0).fit(X_cps_o, y_cps_o).predict(X_test) >>> y_pred_test_rf_cps_r = RandomForestClassifier(random_state=0).fit(X_cps_r, y_cps_r).predict(X_test) >>> y_pred_test_rf_u = RandomForestClassifier(random_state=0).fit(X_u, y_u).predict(X_test) >>> # Savings using only RandomForest >>> print(savings_score(y_test, y_pred_test_rf, cost_mat_test)) 0.12454256594 >>> # Savings using RandomForest with cost-proportionate over-sampling >>> print(savings_score(y_test, y_pred_test_rf_cps_o, cost_mat_test)) 0.192480226286 >>> # Savings using RandomForest with cost-proportionate rejection-sampling >>> print(savings_score(y_test, y_pred_test_rf_cps_r, cost_mat_test)) 0.465830173459 >>> # Savings using RandomForest with under-sampling >>> print(savings_score(y_test, y_pred_test_rf_u, cost_mat_test)) 0.466630646543 >>> # Size of each training set >>> print(X_train.shape[0], X_cps_o.shape[0], X_cps_r.shape[0], X_u.shape[0]) 75653 109975 8690 10191 >>> # Percentage of positives in each training set >>> print(y_train.mean(), y_cps_o.mean(), 
y_cps_r.mean(), y_u.mean()) 0.0668182358928 0.358054103205 0.436939010357 0.49602590521 """ #TODO: Check consistency of input # The methods are construct only for the misclassification costs, not the full cost matrix. cost_mis = cost_mat[:, 0] cost_mis[y == 1] = cost_mat[y == 1, 1] # wc = cost_mis / cost_mis.max() wc = np.minimum(cost_mis / np.percentile(cost_mis, max_wc), 1) n_samples = X.shape[0] filter_ = list(range(n_samples)) if method == 'RejectionSampling': # under-sampling by rejection [1] #TODO: Add random state rej_rand = np.random.rand(n_samples) filter_ = rej_rand <= wc elif method == 'OverSampling': # over-sampling with normalized wn [2] wc_n = np.ceil(wc / oversampling_norm).astype(np.int) new_n = wc_n.sum() filter_ = np.ones(new_n, dtype=np.int) e = 0 #TODO replace for for i in range(n_samples): filter_[e: e + wc_n[i]] = i e += wc_n[i] x_cps = X[filter_] y_cps = y[filter_] cost_mat_cps = cost_mat[filter_] return x_cps, y_cps, cost_mat_cps
Cost-proportionate sampling. Parameters ---------- X : array-like of shape = [n_samples, n_features] The input samples. y : array-like of shape = [n_samples] Ground truth (correct) labels. cost_mat : array-like of shape = [n_samples, 4] Cost matrix of the classification problem Where the columns represent the costs of: false positives, false negatives, true positives and true negatives, for each example. method : str, optional (default = RejectionSampling) Method to perform the cost-proportionate sampling, either 'RejectionSampling' or 'OverSampling'. oversampling_norm: float, optional (default = 0.1) normalization value for wc; the smaller the value, the larger the resulting dataset. max_wc: float, optional (default = 97.5) outlier adjustment for the cost. References ---------- .. [1] B. Zadrozny, J. Langford, N. Abe, "Cost-sensitive learning by cost-proportionate example weighting", in Proceedings of the Third IEEE International Conference on Data Mining, 435-442, 2003. .. [2] C. Elkan, "The foundations of Cost-Sensitive Learning", in Seventeenth International Joint Conference on Artificial Intelligence, 973-978, 2001.
Examples -------- >>> from sklearn.ensemble import RandomForestClassifier >>> from sklearn.cross_validation import train_test_split >>> from costcla.datasets import load_creditscoring1 >>> from costcla.sampling import cost_sampling, undersampling >>> from costcla.metrics import savings_score >>> data = load_creditscoring1() >>> sets = train_test_split(data.data, data.target, data.cost_mat, test_size=0.33, random_state=0) >>> X_train, X_test, y_train, y_test, cost_mat_train, cost_mat_test = sets >>> X_cps_o, y_cps_o, cost_mat_cps_o = cost_sampling(X_train, y_train, cost_mat_train, method='OverSampling') >>> X_cps_r, y_cps_r, cost_mat_cps_r = cost_sampling(X_train, y_train, cost_mat_train, method='RejectionSampling') >>> X_u, y_u, cost_mat_u = undersampling(X_train, y_train, cost_mat_train) >>> y_pred_test_rf = RandomForestClassifier(random_state=0).fit(X_train, y_train).predict(X_test) >>> y_pred_test_rf_cps_o = RandomForestClassifier(random_state=0).fit(X_cps_o, y_cps_o).predict(X_test) >>> y_pred_test_rf_cps_r = RandomForestClassifier(random_state=0).fit(X_cps_r, y_cps_r).predict(X_test) >>> y_pred_test_rf_u = RandomForestClassifier(random_state=0).fit(X_u, y_u).predict(X_test) >>> # Savings using only RandomForest >>> print(savings_score(y_test, y_pred_test_rf, cost_mat_test)) 0.12454256594 >>> # Savings using RandomForest with cost-proportionate over-sampling >>> print(savings_score(y_test, y_pred_test_rf_cps_o, cost_mat_test)) 0.192480226286 >>> # Savings using RandomForest with cost-proportionate rejection-sampling >>> print(savings_score(y_test, y_pred_test_rf_cps_r, cost_mat_test)) 0.465830173459 >>> # Savings using RandomForest with under-sampling >>> print(savings_score(y_test, y_pred_test_rf_u, cost_mat_test)) 0.466630646543 >>> # Size of each training set >>> print(X_train.shape[0], X_cps_o.shape[0], X_cps_r.shape[0], X_u.shape[0]) 75653 109975 8690 10191 >>> # Percentage of positives in each training set >>> print(y_train.mean(), y_cps_o.mean(), 
y_cps_r.mean(), y_u.mean()) 0.0668182358928 0.358054103205 0.436939010357 0.49602590521
Below is the instruction that describes the task: ### Input: Cost-proportionate sampling. Parameters ---------- X : array-like of shape = [n_samples, n_features] The input samples. y : array-like of shape = [n_samples] Ground truth (correct) labels. cost_mat : array-like of shape = [n_samples, 4] Cost matrix of the classification problem, where the columns represent the costs of: false positives, false negatives, true positives and true negatives, for each example. method : str, optional (default = RejectionSampling) Method to perform the cost-proportionate sampling, either 'RejectionSampling' or 'OverSampling'. oversampling_norm : float, optional (default = 0.1) Normalization value of wc; the smaller the value, the larger the resulting dataset. max_wc : float, optional (default = 97.5) Percentile of the cost used to cap outliers. References ---------- .. [1] B. Zadrozny, J. Langford, N. Abe, "Cost-sensitive learning by cost-proportionate example weighting", in Proceedings of the Third IEEE International Conference on Data Mining, 435-442, 2003. .. [2] C. Elkan, "The foundations of Cost-Sensitive Learning", in Seventeenth International Joint Conference on Artificial Intelligence, 973-978, 2001. 
Examples -------- >>> from sklearn.ensemble import RandomForestClassifier >>> from sklearn.cross_validation import train_test_split >>> from costcla.datasets import load_creditscoring1 >>> from costcla.sampling import cost_sampling, undersampling >>> from costcla.metrics import savings_score >>> data = load_creditscoring1() >>> sets = train_test_split(data.data, data.target, data.cost_mat, test_size=0.33, random_state=0) >>> X_train, X_test, y_train, y_test, cost_mat_train, cost_mat_test = sets >>> X_cps_o, y_cps_o, cost_mat_cps_o = cost_sampling(X_train, y_train, cost_mat_train, method='OverSampling') >>> X_cps_r, y_cps_r, cost_mat_cps_r = cost_sampling(X_train, y_train, cost_mat_train, method='RejectionSampling') >>> X_u, y_u, cost_mat_u = undersampling(X_train, y_train, cost_mat_train) >>> y_pred_test_rf = RandomForestClassifier(random_state=0).fit(X_train, y_train).predict(X_test) >>> y_pred_test_rf_cps_o = RandomForestClassifier(random_state=0).fit(X_cps_o, y_cps_o).predict(X_test) >>> y_pred_test_rf_cps_r = RandomForestClassifier(random_state=0).fit(X_cps_r, y_cps_r).predict(X_test) >>> y_pred_test_rf_u = RandomForestClassifier(random_state=0).fit(X_u, y_u).predict(X_test) >>> # Savings using only RandomForest >>> print(savings_score(y_test, y_pred_test_rf, cost_mat_test)) 0.12454256594 >>> # Savings using RandomForest with cost-proportionate over-sampling >>> print(savings_score(y_test, y_pred_test_rf_cps_o, cost_mat_test)) 0.192480226286 >>> # Savings using RandomForest with cost-proportionate rejection-sampling >>> print(savings_score(y_test, y_pred_test_rf_cps_r, cost_mat_test)) 0.465830173459 >>> # Savings using RandomForest with under-sampling >>> print(savings_score(y_test, y_pred_test_rf_u, cost_mat_test)) 0.466630646543 >>> # Size of each training set >>> print(X_train.shape[0], X_cps_o.shape[0], X_cps_r.shape[0], X_u.shape[0]) 75653 109975 8690 10191 >>> # Percentage of positives in each training set >>> print(y_train.mean(), y_cps_o.mean(), 
y_cps_r.mean(), y_u.mean()) 0.0668182358928 0.358054103205 0.436939010357 0.49602590521 ### Response: def cost_sampling(X, y, cost_mat, method='RejectionSampling', oversampling_norm=0.1, max_wc=97.5): """Cost-proportionate sampling. Parameters ---------- X : array-like of shape = [n_samples, n_features] The input samples. y : array-like of shape = [n_samples] Ground truth (correct) labels. cost_mat : array-like of shape = [n_samples, 4] Cost matrix of the classification problem, where the columns represent the costs of: false positives, false negatives, true positives and true negatives, for each example. method : str, optional (default = RejectionSampling) Method to perform the cost-proportionate sampling, either 'RejectionSampling' or 'OverSampling'. oversampling_norm : float, optional (default = 0.1) Normalization value of wc; the smaller the value, the larger the resulting dataset. max_wc : float, optional (default = 97.5) Percentile of the cost used to cap outliers. References ---------- .. [1] B. Zadrozny, J. Langford, N. Abe, "Cost-sensitive learning by cost-proportionate example weighting", in Proceedings of the Third IEEE International Conference on Data Mining, 435-442, 2003. .. [2] C. Elkan, "The foundations of Cost-Sensitive Learning", in Seventeenth International Joint Conference on Artificial Intelligence, 973-978, 2001. 
Examples -------- >>> from sklearn.ensemble import RandomForestClassifier >>> from sklearn.cross_validation import train_test_split >>> from costcla.datasets import load_creditscoring1 >>> from costcla.sampling import cost_sampling, undersampling >>> from costcla.metrics import savings_score >>> data = load_creditscoring1() >>> sets = train_test_split(data.data, data.target, data.cost_mat, test_size=0.33, random_state=0) >>> X_train, X_test, y_train, y_test, cost_mat_train, cost_mat_test = sets >>> X_cps_o, y_cps_o, cost_mat_cps_o = cost_sampling(X_train, y_train, cost_mat_train, method='OverSampling') >>> X_cps_r, y_cps_r, cost_mat_cps_r = cost_sampling(X_train, y_train, cost_mat_train, method='RejectionSampling') >>> X_u, y_u, cost_mat_u = undersampling(X_train, y_train, cost_mat_train) >>> y_pred_test_rf = RandomForestClassifier(random_state=0).fit(X_train, y_train).predict(X_test) >>> y_pred_test_rf_cps_o = RandomForestClassifier(random_state=0).fit(X_cps_o, y_cps_o).predict(X_test) >>> y_pred_test_rf_cps_r = RandomForestClassifier(random_state=0).fit(X_cps_r, y_cps_r).predict(X_test) >>> y_pred_test_rf_u = RandomForestClassifier(random_state=0).fit(X_u, y_u).predict(X_test) >>> # Savings using only RandomForest >>> print(savings_score(y_test, y_pred_test_rf, cost_mat_test)) 0.12454256594 >>> # Savings using RandomForest with cost-proportionate over-sampling >>> print(savings_score(y_test, y_pred_test_rf_cps_o, cost_mat_test)) 0.192480226286 >>> # Savings using RandomForest with cost-proportionate rejection-sampling >>> print(savings_score(y_test, y_pred_test_rf_cps_r, cost_mat_test)) 0.465830173459 >>> # Savings using RandomForest with under-sampling >>> print(savings_score(y_test, y_pred_test_rf_u, cost_mat_test)) 0.466630646543 >>> # Size of each training set >>> print(X_train.shape[0], X_cps_o.shape[0], X_cps_r.shape[0], X_u.shape[0]) 75653 109975 8690 10191 >>> # Percentage of positives in each training set >>> print(y_train.mean(), y_cps_o.mean(), 
y_cps_r.mean(), y_u.mean()) 0.0668182358928 0.358054103205 0.436939010357 0.49602590521 """ #TODO: Check consistency of input # The methods are constructed only for the misclassification costs, not the full cost matrix. cost_mis = cost_mat[:, 0] cost_mis[y == 1] = cost_mat[y == 1, 1] # wc = cost_mis / cost_mis.max() wc = np.minimum(cost_mis / np.percentile(cost_mis, max_wc), 1) n_samples = X.shape[0] filter_ = list(range(n_samples)) if method == 'RejectionSampling': # under-sampling by rejection [1] #TODO: Add random state rej_rand = np.random.rand(n_samples) filter_ = rej_rand <= wc elif method == 'OverSampling': # over-sampling with normalized wc_n [2] wc_n = np.ceil(wc / oversampling_norm).astype(int) new_n = wc_n.sum() filter_ = np.ones(new_n, dtype=int) e = 0 #TODO: replace the for loop for i in range(n_samples): filter_[e: e + wc_n[i]] = i e += wc_n[i] x_cps = X[filter_] y_cps = y[filter_] cost_mat_cps = cost_mat[filter_] return x_cps, y_cps, cost_mat_cps
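The rejection-sampling branch above can be exercised in isolation. The following is a minimal sketch, not part of costcla: the function name, toy data, and the exposed seed are illustrative. Each example is kept with probability wc, so high-cost examples dominate the sample, which is how the positive rate climbs from 0.067 to 0.44 in the doctest.

```python
import numpy as np

def rejection_sample(X, y, cost_mis, max_wc=97.5, seed=0):
    # Normalize costs by the max_wc percentile and cap at 1,
    # mirroring cost_sampling's wc computation.
    wc = np.minimum(cost_mis / np.percentile(cost_mis, max_wc), 1)
    # Keep each example with probability wc (the rejection step).
    # Unlike the original (which has a TODO for it), a seed is exposed.
    keep = np.random.RandomState(seed).rand(X.shape[0]) <= wc
    return X[keep], y[keep]

X = np.arange(10, dtype=float).reshape(10, 1)
y = np.array([0] * 5 + [1] * 5)
cost_mis = np.array([0.1] * 5 + [1.0] * 5)  # positives are 10x costlier
X_s, y_s = rejection_sample(X, y, cost_mis)
```

With these toy costs each cheap negative is rejected with probability 0.9, so the sampled set is dominated by positives.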
def set_hex_color(self, color, *, index=0, transition_time=None): """Set hex color of the light.""" values = { ATTR_LIGHT_COLOR_HEX: color, } if transition_time is not None: values[ATTR_TRANSITION_TIME] = transition_time return self.set_values(values, index=index)
Set hex color of the light.
Below is the instruction that describes the task: ### Input: Set hex color of the light. ### Response: def set_hex_color(self, color, *, index=0, transition_time=None): """Set hex color of the light.""" values = { ATTR_LIGHT_COLOR_HEX: color, } if transition_time is not None: values[ATTR_TRANSITION_TIME] = transition_time return self.set_values(values, index=index)
def relation_set(relation_id=None, relation_settings=None, **kwargs): """Attempt to use leader-set if supported in the current version of Juju, otherwise falls back on relation-set. Note that we only attempt to use leader-set if the provided relation_id is a peer relation id or no relation id is provided (in which case we assume we are within the peer relation context). """ try: if relation_id in relation_ids('cluster'): return leader_set(settings=relation_settings, **kwargs) else: raise NotImplementedError except NotImplementedError: return _relation_set(relation_id=relation_id, relation_settings=relation_settings, **kwargs)
Attempt to use leader-set if supported in the current version of Juju, otherwise falls back on relation-set. Note that we only attempt to use leader-set if the provided relation_id is a peer relation id or no relation id is provided (in which case we assume we are within the peer relation context).
Below is the instruction that describes the task: ### Input: Attempt to use leader-set if supported in the current version of Juju, otherwise falls back on relation-set. Note that we only attempt to use leader-set if the provided relation_id is a peer relation id or no relation id is provided (in which case we assume we are within the peer relation context). ### Response: def relation_set(relation_id=None, relation_settings=None, **kwargs): """Attempt to use leader-set if supported in the current version of Juju, otherwise falls back on relation-set. Note that we only attempt to use leader-set if the provided relation_id is a peer relation id or no relation id is provided (in which case we assume we are within the peer relation context). """ try: if relation_id in relation_ids('cluster'): return leader_set(settings=relation_settings, **kwargs) else: raise NotImplementedError except NotImplementedError: return _relation_set(relation_id=relation_id, relation_settings=relation_settings, **kwargs)
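The try/raise/except shape of relation_set is a generic capability-fallback pattern. A standalone sketch with injected stand-ins for leader_set and relation_set (the real ones are charmhelpers module-level functions; the function and argument names here are illustrative):

```python
def set_settings(settings, peer_ids, relation_id, leader_set, relation_set):
    """Prefer leader_set for peer relations; fall back to relation_set
    when the relation is not a peer relation, or when leader_set itself
    raises NotImplementedError (e.g. Juju without leadership support)."""
    try:
        if relation_id in peer_ids:
            return leader_set(settings)
        raise NotImplementedError
    except NotImplementedError:
        return relation_set(relation_id, settings)

calls = []
ls = lambda s: calls.append(("leader", s))
rs = lambda r, s: calls.append(("relation", r, s))

set_settings({"a": 1}, ["cluster:1"], "cluster:1", ls, rs)  # peer -> leader_set
set_settings({"b": 2}, ["cluster:1"], "db:2", ls, rs)       # non-peer -> relation_set

def old_leader_set(settings):
    raise NotImplementedError  # leadership not supported

set_settings({"c": 3}, ["cluster:1"], "cluster:1", old_leader_set, rs)
```

The same except clause absorbs both the deliberate raise (non-peer relation) and a genuine NotImplementedError from leader_set, so one fallback path covers both cases.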
def scale_and_center(mol): """Center and Scale molecule 2D coordinates. This method changes mol coordinates directly to center but not scale. This method returns width, height and MLB(median length of bond) and scaling will be done by drawer method with these values. Returns: width: float height: float mlb: median length of bond """ cnt = mol.atom_count() if cnt < 2: mol.size2d = (0, 0, 1) mol.descriptors.add("ScaleAndCenter") return xs = [] ys = [] for _, atom in mol.atoms_iter(): xs.append(atom.coords[0]) ys.append(atom.coords[1]) xmin, xmax = (min(xs), max(xs)) ymin, ymax = (min(ys), max(ys)) width = xmax - xmin height = ymax - ymin x_offset = width / 2 + xmin y_offset = height / 2 + ymin dists = [] for u, v, _ in mol.bonds_iter(): dists.append(geometry.distance(mol.atom(u).coords, mol.atom(v).coords)) try: mlb = statistics.median(dists) except statistics.StatisticsError: # No connection mlb = math.sqrt(max([width, height]) / cnt) # empirical if not mlb: # Many of connected atoms are overlapped mol.size2d = (0, 0, 1) mol.descriptors.add("ScaleAndCenter") return # Centering for _, atom in mol.atoms_iter(): atom.coords = (atom.coords[0] - x_offset, atom.coords[1] - y_offset) mol.size2d = (width, height, mlb) mol.descriptors.add("ScaleAndCenter")
Center and Scale molecule 2D coordinates. This method changes mol coordinates directly to center but not scale. This method returns width, height and MLB(median length of bond) and scaling will be done by drawer method with these values. Returns: width: float height: float mlb: median length of bond
Below is the instruction that describes the task: ### Input: Center and Scale molecule 2D coordinates. This method changes mol coordinates directly to center but not scale. This method returns width, height and MLB(median length of bond) and scaling will be done by drawer method with these values. Returns: width: float height: float mlb: median length of bond ### Response: def scale_and_center(mol): """Center and Scale molecule 2D coordinates. This method changes mol coordinates directly to center but not scale. This method returns width, height and MLB(median length of bond) and scaling will be done by drawer method with these values. Returns: width: float height: float mlb: median length of bond """ cnt = mol.atom_count() if cnt < 2: mol.size2d = (0, 0, 1) mol.descriptors.add("ScaleAndCenter") return xs = [] ys = [] for _, atom in mol.atoms_iter(): xs.append(atom.coords[0]) ys.append(atom.coords[1]) xmin, xmax = (min(xs), max(xs)) ymin, ymax = (min(ys), max(ys)) width = xmax - xmin height = ymax - ymin x_offset = width / 2 + xmin y_offset = height / 2 + ymin dists = [] for u, v, _ in mol.bonds_iter(): dists.append(geometry.distance(mol.atom(u).coords, mol.atom(v).coords)) try: mlb = statistics.median(dists) except statistics.StatisticsError: # No connection mlb = math.sqrt(max([width, height]) / cnt) # empirical if not mlb: # Many of connected atoms are overlapped mol.size2d = (0, 0, 1) mol.descriptors.add("ScaleAndCenter") return # Centering for _, atom in mol.atoms_iter(): atom.coords = (atom.coords[0] - x_offset, atom.coords[1] - y_offset) mol.size2d = (width, height, mlb) mol.descriptors.add("ScaleAndCenter")
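The geometry inside scale_and_center reduces to a bounding-box shift plus a median over bond lengths. A standalone sketch on plain tuples, without the mol object (the function name and data shapes here are illustrative): coords is a list of (x, y) pairs and bonds a list of (i, j) index pairs.

```python
import math
import statistics

def center_and_mlb(coords, bonds):
    # Bounding box and its midpoint (the centering offset).
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    x_off = width / 2 + min(xs)
    y_off = height / 2 + min(ys)
    # Shift every coordinate so the midpoint lands at the origin.
    centered = [(x - x_off, y - y_off) for x, y in coords]
    # Median length of bond; fall back to 1.0 when there are no bonds.
    dists = [math.dist(coords[i], coords[j]) for i, j in bonds]
    mlb = statistics.median(dists) if dists else 1.0
    return centered, (width, height, mlb)

coords = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
centered, (w, h, mlb) = center_and_mlb(coords, [(0, 1), (1, 2)])
```

The drawer can then divide all coordinates by mlb to get a uniform apparent bond length, which is the scaling step the docstring defers to the drawer method.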
def get_res_stats(self, nonzero=True): """ get some common residual stats from the current obsvals, weights and grouping in self.observation_data and the modelled values in self.res. The key here is 'current' because if obsval, weights and/or groupings have changed in self.observation_data since the res file was generated then the current values for obsval, weight and group are used Parameters ---------- nonzero : bool calculate stats using only nonzero-weighted observations. This may seem obvious to most users, but you never know.... Returns ------- df : pd.DataFrame a dataframe with columns for group names and indices of statistic name. Note ---- the normalized RMSE is normalized against the obsval range (max - min) """ res = self.res.copy() res.loc[:,"obsnme"] = res.pop("name") res.index = res.obsnme if nonzero: obs = self.observation_data.loc[self.nnz_obs_names,:] #print(obs.shape,res.shape) res = res.loc[obs.obsnme,:] #print(obs.shape, res.shape) #reset the res parts to current obs values and remove #duplicate attributes res.loc[:,"weight"] = obs.weight res.loc[:,"obsval"] = obs.obsval res.loc[:,"obgnme"] = obs.obgnme res.pop("group") res.pop("measured") #build these attribute lists for faster lookup later og_dict = {og:res.loc[res.obgnme==og,"obsnme"] for og in res.obgnme.unique()} og_names = list(og_dict.keys()) # the list of functions and names sfuncs = [self._stats_rss, self._stats_mean,self._stats_mae, self._stats_rmse,self._stats_nrmse] snames = ["rss","mean","mae","rmse","nrmse"] data = [] for sfunc,sname in zip(sfuncs,snames): full = sfunc(res) groups = [full] for og in og_names: onames = og_dict[og] res_og = res.loc[onames,:] groups.append(sfunc(res_og)) data.append(groups) og_names.insert(0,"all") df = pd.DataFrame(data,columns=og_names,index=snames) return df
get some common residual stats from the current obsvals, weights and grouping in self.observation_data and the modelled values in self.res. The key here is 'current' because if obsval, weights and/or groupings have changed in self.observation_data since the res file was generated then the current values for obsval, weight and group are used Parameters ---------- nonzero : bool calculate stats using only nonzero-weighted observations. This may seem obvious to most users, but you never know.... Returns ------- df : pd.DataFrame a dataframe with columns for group names and indices of statistic name. Note ---- the normalized RMSE is normalized against the obsval range (max - min)
Below is the instruction that describes the task: ### Input: get some common residual stats from the current obsvals, weights and grouping in self.observation_data and the modelled values in self.res. The key here is 'current' because if obsval, weights and/or groupings have changed in self.observation_data since the res file was generated then the current values for obsval, weight and group are used Parameters ---------- nonzero : bool calculate stats using only nonzero-weighted observations. This may seem obvious to most users, but you never know.... Returns ------- df : pd.DataFrame a dataframe with columns for group names and indices of statistic name. Note ---- the normalized RMSE is normalized against the obsval range (max - min) ### Response: def get_res_stats(self, nonzero=True): """ get some common residual stats from the current obsvals, weights and grouping in self.observation_data and the modelled values in self.res. The key here is 'current' because if obsval, weights and/or groupings have changed in self.observation_data since the res file was generated then the current values for obsval, weight and group are used Parameters ---------- nonzero : bool calculate stats using only nonzero-weighted observations. This may seem obvious to most users, but you never know.... Returns ------- df : pd.DataFrame a dataframe with columns for group names and indices of statistic name. 
Note ---- the normalized RMSE is normalized against the obsval range (max - min) """ res = self.res.copy() res.loc[:,"obsnme"] = res.pop("name") res.index = res.obsnme if nonzero: obs = self.observation_data.loc[self.nnz_obs_names,:] #print(obs.shape,res.shape) res = res.loc[obs.obsnme,:] #print(obs.shape, res.shape) #reset the res parts to current obs values and remove #duplicate attributes res.loc[:,"weight"] = obs.weight res.loc[:,"obsval"] = obs.obsval res.loc[:,"obgnme"] = obs.obgnme res.pop("group") res.pop("measured") #build these attribute lists for faster lookup later og_dict = {og:res.loc[res.obgnme==og,"obsnme"] for og in res.obgnme.unique()} og_names = list(og_dict.keys()) # the list of functions and names sfuncs = [self._stats_rss, self._stats_mean,self._stats_mae, self._stats_rmse,self._stats_nrmse] snames = ["rss","mean","mae","rmse","nrmse"] data = [] for sfunc,sname in zip(sfuncs,snames): full = sfunc(res) groups = [full] for og in og_names: onames = og_dict[og] res_og = res.loc[onames,:] groups.append(sfunc(res_og)) data.append(groups) og_names.insert(0,"all") df = pd.DataFrame(data,columns=og_names,index=snames) return df
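The five statistics get_res_stats tabulates are standard residual summaries. A sketch for a single unweighted group of residuals (an assumption: the real pyemu method also applies observation weights and splits by obgnme, which is omitted here); nrmse follows the docstring's note, RMSE divided by the obsval range.

```python
import numpy as np

def residual_stats(obsval, modelled):
    # Residuals: observed minus modelled.
    res = obsval - modelled
    rss = float(np.sum(res ** 2))          # residual sum of squares
    mean = float(np.mean(res))             # mean error (bias)
    mae = float(np.mean(np.abs(res)))      # mean absolute error
    rmse = float(np.sqrt(np.mean(res ** 2)))
    # Normalized against the obsval range (max - min), per the note.
    nrmse = rmse / (obsval.max() - obsval.min())
    return {"rss": rss, "mean": mean, "mae": mae,
            "rmse": rmse, "nrmse": nrmse}

stats = residual_stats(np.array([1.0, 2.0, 3.0]),
                       np.array([1.5, 2.0, 2.5]))
```

Computing one such dict per observation group and stacking them column-wise reproduces the DataFrame layout described above: one column per group plus "all", one row per statistic.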
def do_py(self, arg): """ :: Usage: py py COMMAND Arguments: COMMAND the command to be executed Description: The command without a parameter will be executed and the interactive python mode is entered. The python mode can be ended with ``Ctrl-D`` (Unix) / ``Ctrl-Z`` (Windows), ``quit()``, ``exit()``. Non-python commands can be issued with ``cmd("your command")``. If the python code is located in an external file it can be run with ``run("filename.py")``. In case a COMMAND is provided it will be executed and the python interpreter will return to the command shell. This code is copied from Cmd2. """ self.pystate['self'] = self arg = arg.strip() localvars = (self.locals_in_py and self.pystate) or {} interp = InteractiveConsole(locals=localvars) interp.runcode('import sys, os;sys.path.insert(0, os.getcwd())') if arg: interp.runcode(arg) else: def quit(): raise EmbeddedConsoleExit def onecmd(arg): return self.onecmd(arg + '\n') def run(arg): try: f = open(arg) interp.runcode(f.read()) f.close() except IOError as e: self.perror(e) self.pystate['quit'] = quit self.pystate['exit'] = quit self.pystate['cmd'] = onecmd self.pystate['run'] = run try: cprt = 'Type "help", "copyright", "credits" or "license" for more information.' keepstate = Statekeeper(sys, ('stdin', 'stdout')) sys.stdout = self.stdout sys.stdin = self.stdin interp.interact(banner="Python %s on %s\n%s\n(%s)\n%s" % (sys.version, sys.platform, cprt, self.__class__.__name__, self.do_py.__doc__)) except EmbeddedConsoleExit: pass keepstate.restore()
:: Usage: py py COMMAND Arguments: COMMAND the command to be executed Description: The command without a parameter will be executed and the interactive python mode is entered. The python mode can be ended with ``Ctrl-D`` (Unix) / ``Ctrl-Z`` (Windows), ``quit()``, ``exit()``. Non-python commands can be issued with ``cmd("your command")``. If the python code is located in an external file it can be run with ``run("filename.py")``. In case a COMMAND is provided it will be executed and the python interpreter will return to the command shell. This code is copied from Cmd2.
Below is the instruction that describes the task: ### Input: :: Usage: py py COMMAND Arguments: COMMAND the command to be executed Description: The command without a parameter will be executed and the interactive python mode is entered. The python mode can be ended with ``Ctrl-D`` (Unix) / ``Ctrl-Z`` (Windows), ``quit()``, ``exit()``. Non-python commands can be issued with ``cmd("your command")``. If the python code is located in an external file it can be run with ``run("filename.py")``. In case a COMMAND is provided it will be executed and the python interpreter will return to the command shell. This code is copied from Cmd2. ### Response: def do_py(self, arg): """ :: Usage: py py COMMAND Arguments: COMMAND the command to be executed Description: The command without a parameter will be executed and the interactive python mode is entered. The python mode can be ended with ``Ctrl-D`` (Unix) / ``Ctrl-Z`` (Windows), ``quit()``, ``exit()``. Non-python commands can be issued with ``cmd("your command")``. If the python code is located in an external file it can be run with ``run("filename.py")``. In case a COMMAND is provided it will be executed and the python interpreter will return to the command shell. This code is copied from Cmd2. """ self.pystate['self'] = self arg = arg.strip() localvars = (self.locals_in_py and self.pystate) or {} interp = InteractiveConsole(locals=localvars) interp.runcode('import sys, os;sys.path.insert(0, os.getcwd())') if arg: interp.runcode(arg) else: def quit(): raise EmbeddedConsoleExit def onecmd(arg): return self.onecmd(arg + '\n') def run(arg): try: f = open(arg) interp.runcode(f.read()) f.close() except IOError as e: self.perror(e) self.pystate['quit'] = quit self.pystate['exit'] = quit self.pystate['cmd'] = onecmd self.pystate['run'] = run try: cprt = 'Type "help", "copyright", "credits" or "license" for more information.' 
keepstate = Statekeeper(sys, ('stdin', 'stdout')) sys.stdout = self.stdout sys.stdin = self.stdin interp.interact(banner="Python %s on %s\n%s\n(%s)\n%s" % (sys.version, sys.platform, cprt, self.__class__.__name__, self.do_py.__doc__)) except EmbeddedConsoleExit: pass keepstate.restore()
def upload_file(self, file): """The method posts a file to the remote server""" url = self._get_url('/api/1.0/upload/post') fcontent = FileContent(file) binary_data = fcontent.get_binary() headers = self._get_request_headers() req = urllib.request.Request(url, binary_data, headers) req.add_header('Content-type', fcontent.get_content_type()) req.add_header('Content-length', len(binary_data)) resp = urllib.request.urlopen(req) return definition.UploadPostResponse(_response_to_json(resp))
The method posts a file to the remote server
Below is the instruction that describes the task: ### Input: The method posts a file to the remote server ### Response: def upload_file(self, file): """The method posts a file to the remote server""" url = self._get_url('/api/1.0/upload/post') fcontent = FileContent(file) binary_data = fcontent.get_binary() headers = self._get_request_headers() req = urllib.request.Request(url, binary_data, headers) req.add_header('Content-type', fcontent.get_content_type()) req.add_header('Content-length', len(binary_data)) resp = urllib.request.urlopen(req) return definition.UploadPostResponse(_response_to_json(resp))
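The request construction in upload_file can be exercised without a server. A sketch that builds, but never sends, the same kind of POST; the URL and content type are placeholders, and Content-length is passed as a string here (one deviation from the original, which passes an int; urllib accepts either for outgoing headers):

```python
import urllib.request

def build_upload_request(url, binary_data, content_type):
    # Supplying a data payload makes urllib infer the POST method.
    req = urllib.request.Request(url, binary_data)
    req.add_header("Content-type", content_type)
    req.add_header("Content-length", str(len(binary_data)))
    return req

req = build_upload_request("https://example.com/api/1.0/upload/post",
                           b"hello", "application/octet-stream")
```

Only the final urllib.request.urlopen(req) call performs network I/O, so everything up to that point is cheap to unit-test.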
def transpose(self, *axes): """Permute the dimensions of a Timeseries.""" if self.ndim <= 1: return self ar = np.asarray(self).transpose(*axes) if axes[0] == 0: # then axis 0 (time) is unaffected by the transposition newlabels = [self.labels[ax] for ax in axes] return Timeseries(ar, self.tspan, newlabels) else: # the time axis moved, so tspan no longer applies: plain ndarray return ar
Permute the dimensions of a Timeseries.
Below is the instruction that describes the task: ### Input: Permute the dimensions of a Timeseries. ### Response: def transpose(self, *axes): """Permute the dimensions of a Timeseries.""" if self.ndim <= 1: return self ar = np.asarray(self).transpose(*axes) if axes[0] == 0: # then axis 0 (time) is unaffected by the transposition newlabels = [self.labels[ax] for ax in axes] return Timeseries(ar, self.tspan, newlabels) else: # the time axis moved, so tspan no longer applies: plain ndarray return ar
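The label bookkeeping in Timeseries.transpose is just a parallel permutation of per-axis labels. A plain-ndarray sketch without the Timeseries class (a None label list stands in for "fall back to a bare array" when the time axis moves):

```python
import numpy as np

def transpose_with_labels(arr, labels, axes):
    # Permute the array axes as usual.
    out = np.asarray(arr).transpose(*axes)
    if axes[0] != 0:
        # The time axis moved, so the time base no longer lines up
        # with axis 0; drop the label bookkeeping entirely.
        return out, None
    # Axis 0 stayed put: permute labels with the same index order.
    return out, [labels[ax] for ax in axes]

a = np.zeros((4, 2, 3))
out, lab = transpose_with_labels(a, ["t", "rows", "cols"], (0, 2, 1))
```

Swapping the two trailing axes of a (4, 2, 3) array yields shape (4, 3, 2), and the labels follow the same permutation.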
async def export_tree(self, tree, dest, previous_tree=None, *, force=False, previous_index_file=None): '''This method is the core of `peru sync`. If the contents of "dest" match "previous_tree", then export_tree() updates them to match "tree". If not, it raises an error and doesn't touch any files. Because it's important for the no-op `peru sync` to be fast, we make an extra optimization for this case. The caller passes in the path to the index file used during the last sync, which should already reflect "previous_tree". That allows us to skip the read-tree and update-index calls, so all we have to do is a single diff-files operation to check for cleanliness. It's difficult to predict all the different states the index file might end up in under different error conditions, not only now but also in past and future git versions. For safety and simplicity, if any operation returns an error code, we delete the supplied index file. Right now this includes expected errors, like "sync would overwrite existing files," and unexpected errors, like "index is on fire."''' tree = tree or (await self.get_empty_tree()) previous_tree = previous_tree or (await self.get_empty_tree()) makedirs(dest) with contextlib.ExitStack() as stack: # If the caller gave us an index file, create a git session around # it. Otherwise, create a clean one. Note that because we delete # the index file whenever there are errors, we also allow the # caller to pass in a path to a nonexistent file. In that case we # have to pay the cost to recreate it. did_refresh = False if previous_index_file: session = GitSession(self.trees_path, previous_index_file, dest) stack.enter_context(delete_if_error(previous_index_file)) if not os.path.exists(previous_index_file): did_refresh = True await session.read_tree_and_stats_into_index(previous_tree) else: session = stack.enter_context(self.clean_git_session(dest)) did_refresh = True await session.read_tree_and_stats_into_index(previous_tree) # The fast path. 
If the previous tree is the same as the current # one, and no files have changed at all, short-circuit. if previous_tree == tree: if (await session.working_copy_matches_index()): return # Everything below is the slow path. Some files have changed, or # the tree has changed, or both. If we didn't refresh the index # file above, we must do so now. if not did_refresh: await session.read_tree_and_stats_into_index(previous_tree) modified = await session.get_modified_files_skipping_deletes() if modified and not force: raise DirtyWorkingCopyError( 'Imported files have been modified ' + '(use --force to overwrite):\n\n' + _format_file_lines(modified)) # Do all the file updates and deletions needed to produce `tree`. try: await session.read_tree_updating_working_copy(tree, force) except GitError: # Give a more informative error if we failed because files that # are new in `tree` already existed in the working copy. new_files = await session.get_new_files_in_tree( previous_tree, tree) existing_new_files = [ f for f in new_files if f and os.path.exists(os.path.join(dest, f)) ] existing_new_files.sort() if existing_new_files: raise DirtyWorkingCopyError( 'Imports would overwrite preexisting files ' '(use --force to write anyway):\n\n' + _format_file_lines(existing_new_files)) else: # We must've failed for some other reason. Let the error # keep going. raise # Recreate any missing files. await session.checkout_files_from_index()
This method is the core of `peru sync`. If the contents of "dest" match "previous_tree", then export_tree() updates them to match "tree". If not, it raises an error and doesn't touch any files. Because it's important for the no-op `peru sync` to be fast, we make an extra optimization for this case. The caller passes in the path to the index file used during the last sync, which should already reflect "previous_tree". That allows us to skip the read-tree and update-index calls, so all we have to do is a single diff-files operation to check for cleanliness. It's difficult to predict all the different states the index file might end up in under different error conditions, not only now but also in past and future git versions. For safety and simplicity, if any operation returns an error code, we delete the supplied index file. Right now this includes expected errors, like "sync would overwrite existing files," and unexpected errors, like "index is on fire."
Below is the instruction that describes the task: ### Input: This method is the core of `peru sync`. If the contents of "dest" match "previous_tree", then export_tree() updates them to match "tree". If not, it raises an error and doesn't touch any files. Because it's important for the no-op `peru sync` to be fast, we make an extra optimization for this case. The caller passes in the path to the index file used during the last sync, which should already reflect "previous_tree". That allows us to skip the read-tree and update-index calls, so all we have to do is a single diff-files operation to check for cleanliness. It's difficult to predict all the different states the index file might end up in under different error conditions, not only now but also in past and future git versions. For safety and simplicity, if any operation returns an error code, we delete the supplied index file. Right now this includes expected errors, like "sync would overwrite existing files," and unexpected errors, like "index is on fire." ### Response: async def export_tree(self, tree, dest, previous_tree=None, *, force=False, previous_index_file=None): '''This method is the core of `peru sync`. If the contents of "dest" match "previous_tree", then export_tree() updates them to match "tree". If not, it raises an error and doesn't touch any files. Because it's important for the no-op `peru sync` to be fast, we make an extra optimization for this case. The caller passes in the path to the index file used during the last sync, which should already reflect "previous_tree". That allows us to skip the read-tree and update-index calls, so all we have to do is a single diff-files operation to check for cleanliness. It's difficult to predict all the different states the index file might end up in under different error conditions, not only now but also in past and future git versions. For safety and simplicity, if any operation returns an error code, we delete the supplied index file. 
Right now this includes expected errors, like "sync would overwrite existing files," and unexpected errors, like "index is on fire."''' tree = tree or (await self.get_empty_tree()) previous_tree = previous_tree or (await self.get_empty_tree()) makedirs(dest) with contextlib.ExitStack() as stack: # If the caller gave us an index file, create a git session around # it. Otherwise, create a clean one. Note that because we delete # the index file whenever there are errors, we also allow the # caller to pass in a path to a nonexistent file. In that case we # have to pay the cost to recreate it. did_refresh = False if previous_index_file: session = GitSession(self.trees_path, previous_index_file, dest) stack.enter_context(delete_if_error(previous_index_file)) if not os.path.exists(previous_index_file): did_refresh = True await session.read_tree_and_stats_into_index(previous_tree) else: session = stack.enter_context(self.clean_git_session(dest)) did_refresh = True await session.read_tree_and_stats_into_index(previous_tree) # The fast path. If the previous tree is the same as the current # one, and no files have changed at all, short-circuit. if previous_tree == tree: if (await session.working_copy_matches_index()): return # Everything below is the slow path. Some files have changed, or # the tree has changed, or both. If we didn't refresh the index # file above, we must do so now. if not did_refresh: await session.read_tree_and_stats_into_index(previous_tree) modified = await session.get_modified_files_skipping_deletes() if modified and not force: raise DirtyWorkingCopyError( 'Imported files have been modified ' + '(use --force to overwrite):\n\n' + _format_file_lines(modified)) # Do all the file updates and deletions needed to produce `tree`. try: await session.read_tree_updating_working_copy(tree, force) except GitError: # Give a more informative error if we failed because files that # are new in `tree` already existed in the working copy. 
new_files = await session.get_new_files_in_tree( previous_tree, tree) existing_new_files = [ f for f in new_files if f and os.path.exists(os.path.join(dest, f)) ] existing_new_files.sort() if existing_new_files: raise DirtyWorkingCopyError( 'Imports would overwrite preexisting files ' '(use --force to write anyway):\n\n' + _format_file_lines(existing_new_files)) else: # We must've failed for some other reason. Let the error # keep going. raise # Recreate any missing files. await session.checkout_files_from_index()
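The `delete_if_error` context manager that protects the index file isn't shown in this record. A minimal sketch consistent with the behavior the docstring describes (remove the possibly-corrupt index file if any operation inside the block fails, then let the error propagate) might look like this — the implementation details are an assumption, not peru's actual code:

```python
import contextlib
import os

@contextlib.contextmanager
def delete_if_error(path):
    # Sketch only: if anything in the body raises, delete the file at
    # `path` before re-raising, so a half-written index never survives.
    try:
        yield
    except Exception:
        with contextlib.suppress(FileNotFoundError):
            os.remove(path)
        raise
```

Used as `stack.enter_context(delete_if_error(previous_index_file))`, this guarantees the "for safety and simplicity, delete the index file on any error" contract without having to enumerate every possible failure mode.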
def debug(self, topLeftIndex, bottomRightIndex):
    """ Temporary debug to test the dataChanged signal. TODO: remove. """
    if topLeftIndex.isValid() and bottomRightIndex.isValid():
        topRow = topLeftIndex.row()
        bottomRow = bottomRightIndex.row()
        for row in range(topRow, bottomRow + 1):
            index = topLeftIndex.sibling(row, 0)
            childItem = self.getItem(index)
            logger.debug("Data changed in: {}".format(childItem.nodePath))
Temporary debug to test the dataChanged signal. TODO: remove.
Below is the instruction that describes the task:

### Input:
Temporary debug to test the dataChanged signal. TODO: remove.

### Response:
def debug(self, topLeftIndex, bottomRightIndex):
    """ Temporary debug to test the dataChanged signal. TODO: remove. """
    if topLeftIndex.isValid() and bottomRightIndex.isValid():
        topRow = topLeftIndex.row()
        bottomRow = bottomRightIndex.row()
        for row in range(topRow, bottomRow + 1):
            index = topLeftIndex.sibling(row, 0)
            childItem = self.getItem(index)
            logger.debug("Data changed in: {}".format(childItem.nodePath))
def delete(key, service=None, profile=None):  # pylint: disable=W0613
    '''
    Delete a value from the cache service
    '''
    key, profile = _parse_key(key, profile)
    cache = salt.cache.Cache(__opts__)
    try:
        cache.flush(profile['bank'], key=key)
        return True
    except Exception:
        return False
Delete a value from the cache service
Below is the instruction that describes the task:

### Input:
Delete a value from the cache service

### Response:
def delete(key, service=None, profile=None):  # pylint: disable=W0613
    '''
    Delete a value from the cache service
    '''
    key, profile = _parse_key(key, profile)
    cache = salt.cache.Cache(__opts__)
    try:
        cache.flush(profile['bank'], key=key)
        return True
    except Exception:
        return False
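The function's contract — swallow any exception and report success as a bare boolean — can be seen in isolation with a stand-in cache. `FakeCache` below is hypothetical, not part of salt; it only mimics the one behavior that matters here (`flush` raising when the key is absent):

```python
class FakeCache:
    """Stand-in for salt.cache.Cache, just enough to show the contract."""
    def __init__(self):
        self.banks = {"prod": {"k1": "v1"}}

    def flush(self, bank, key=None):
        del self.banks[bank][key]  # raises KeyError when the key is absent

def delete(cache, bank, key):
    # Mirrors the module's error-swallowing contract:
    # True if the flush succeeded, False on any exception.
    try:
        cache.flush(bank, key=key)
        return True
    except Exception:
        return False

cache = FakeCache()
print(delete(cache, "prod", "k1"))    # True
print(delete(cache, "prod", "k1"))    # False: already gone
```

The broad `except Exception` means callers can never distinguish "key didn't exist" from "backend unreachable" — a deliberate trade-off for a simple True/False API.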
def get_version(version):
    """Dynamically calculate the version based on VERSION tuple."""
    if len(version) > 2 and version[2] is not None:
        if isinstance(version[2], int):
            str_version = "%s.%s.%s" % version[:3]
        else:
            str_version = "%s.%s_%s" % version[:3]
    else:
        str_version = "%s.%s" % version[:2]
    return str_version
Dynamically calculate the version based on VERSION tuple.
Below is the instruction that describes the task:

### Input:
Dynamically calculate the version based on VERSION tuple.

### Response:
def get_version(version):
    """Dynamically calculate the version based on VERSION tuple."""
    if len(version) > 2 and version[2] is not None:
        if isinstance(version[2], int):
            str_version = "%s.%s.%s" % version[:3]
        else:
            str_version = "%s.%s_%s" % version[:3]
    else:
        str_version = "%s.%s" % version[:2]
    return str_version
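The function's three branches (third component present as an int, present as a string, absent or None) are easy to exercise directly. The function is copied verbatim from the record so the snippet runs standalone:

```python
def get_version(version):
    """Dynamically calculate the version based on VERSION tuple."""
    if len(version) > 2 and version[2] is not None:
        if isinstance(version[2], int):
            str_version = "%s.%s.%s" % version[:3]
        else:
            str_version = "%s.%s_%s" % version[:3]
    else:
        str_version = "%s.%s" % version[:2]
    return str_version

print(get_version((1, 2, 3)))       # 1.2.3
print(get_version((1, 2, "beta")))  # 1.2_beta
print(get_version((1, 2)))          # 1.2
print(get_version((1, 2, None)))    # 1.2
```

Note the underscore separator for non-integer third components: `(1, 2, "beta")` becomes `1.2_beta`, a common convention for pre-release tags.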
def mknod(self, req, parent, name, mode, rdev):
    """Create file node

    Valid replies:
        reply_entry
        reply_err
    """
    self.reply_err(req, errno.EROFS)
Create file node

Valid replies:
    reply_entry
    reply_err
Below is the instruction that describes the task:

### Input:
Create file node

Valid replies:
    reply_entry
    reply_err

### Response:
def mknod(self, req, parent, name, mode, rdev):
    """Create file node

    Valid replies:
        reply_entry
        reply_err
    """
    self.reply_err(req, errno.EROFS)
def _places(client, url_part, query=None, location=None, radius=None,
            keyword=None, language=None, min_price=0, max_price=4,
            name=None, open_now=False, rank_by=None, type=None,
            region=None, page_token=None):
    """
    Internal handler for ``places``, ``places_nearby``, and
    ``places_radar``. See each method's docs for arg details.
    """
    params = {"minprice": min_price, "maxprice": max_price}

    if query:
        params["query"] = query
    if location:
        params["location"] = convert.latlng(location)
    if radius:
        params["radius"] = radius
    if keyword:
        params["keyword"] = keyword
    if language:
        params["language"] = language
    if name:
        params["name"] = convert.join_list(" ", name)
    if open_now:
        params["opennow"] = "true"
    if rank_by:
        params["rankby"] = rank_by
    if type:
        params["type"] = type
    if region:
        params["region"] = region
    if page_token:
        params["pagetoken"] = page_token

    url = "/maps/api/place/%ssearch/json" % url_part
    return client._request(url, params)
Internal handler for ``places``, ``places_nearby``, and ``places_radar``. See each method's docs for arg details.
Below is the instruction that describes the task:

### Input:
Internal handler for ``places``, ``places_nearby``, and ``places_radar``. See each method's docs for arg details.

### Response:
def _places(client, url_part, query=None, location=None, radius=None,
            keyword=None, language=None, min_price=0, max_price=4,
            name=None, open_now=False, rank_by=None, type=None,
            region=None, page_token=None):
    """
    Internal handler for ``places``, ``places_nearby``, and
    ``places_radar``. See each method's docs for arg details.
    """
    params = {"minprice": min_price, "maxprice": max_price}

    if query:
        params["query"] = query
    if location:
        params["location"] = convert.latlng(location)
    if radius:
        params["radius"] = radius
    if keyword:
        params["keyword"] = keyword
    if language:
        params["language"] = language
    if name:
        params["name"] = convert.join_list(" ", name)
    if open_now:
        params["opennow"] = "true"
    if rank_by:
        params["rankby"] = rank_by
    if type:
        params["type"] = type
    if region:
        params["region"] = region
    if page_token:
        params["pagetoken"] = page_token

    url = "/maps/api/place/%ssearch/json" % url_part
    return client._request(url, params)
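The function's shape — a dict of always-present parameters followed by a run of `if arg:` guards — is a common way to assemble sparse query strings. A reduced, library-free sketch of the same pattern (`build_params` and its argument names are illustrative, not part of the client library):

```python
def build_params(query=None, location=None, open_now=False,
                 min_price=0, max_price=4):
    # Always-present parameters go in up front; each optional argument
    # is added only when the caller actually supplied a truthy value.
    params = {"minprice": min_price, "maxprice": max_price}
    if query:
        params["query"] = query
    if location:
        params["location"] = ",".join(map(str, location))
    if open_now:
        params["opennow"] = "true"
    return params

print(build_params(query="pizza", open_now=True))
```

One subtlety: the truthiness guards would silently drop a legitimate zero (e.g. `radius=0` in the real function), which is presumably why `min_price=0` lives in the initial dict rather than behind an `if`.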
def route_handler(context, content, pargs, kwargs):
    """
    Route shortcode works a lot like rendering a page based on the url or
    route. This allows inserting in rendered HTML within another page.

    Activate it with the 'shortcodes' template filter. Within the content
    use the chill route shortcode: "[chill route /path/to/something/]"
    where the '[chill' and ']' are the shortcode starting and ending tags.
    And 'route' is this route handler that takes one argument which is the
    url.
    """
    (node, rule_kw) = node_from_uri(pargs[0])

    if node is None:
        return u"<!-- 404 '{0}' -->".format(pargs[0])

    rule_kw.update(node)
    values = rule_kw
    values.update(request.form.to_dict(flat=True))
    values.update(request.args.to_dict(flat=True))
    values['method'] = request.method
    noderequest = values.copy()
    noderequest.pop('node_id')
    noderequest.pop('name')
    noderequest.pop('value')

    rendered = render_node(node['id'], noderequest=noderequest, **values)
    if rendered:
        if not isinstance(rendered, (str, unicode, int, float)):
            # return a json string
            return encoder.encode(rendered)
        return rendered

    # Nothing to show, so nothing found
    return "<!-- 404 '{0}' -->".format(pargs[0])
Route shortcode works a lot like rendering a page based on the url or route. This allows inserting in rendered HTML within another page. Activate it with the 'shortcodes' template filter. Within the content use the chill route shortcode: "[chill route /path/to/something/]" where the '[chill' and ']' are the shortcode starting and ending tags. And 'route' is this route handler that takes one argument which is the url.
Below is the instruction that describes the task:

### Input:
Route shortcode works a lot like rendering a page based on the url or route. This allows inserting in rendered HTML within another page.

Activate it with the 'shortcodes' template filter. Within the content use the chill route shortcode: "[chill route /path/to/something/]" where the '[chill' and ']' are the shortcode starting and ending tags. And 'route' is this route handler that takes one argument which is the url.

### Response:
def route_handler(context, content, pargs, kwargs):
    """
    Route shortcode works a lot like rendering a page based on the url or
    route. This allows inserting in rendered HTML within another page.

    Activate it with the 'shortcodes' template filter. Within the content
    use the chill route shortcode: "[chill route /path/to/something/]"
    where the '[chill' and ']' are the shortcode starting and ending tags.
    And 'route' is this route handler that takes one argument which is the
    url.
    """
    (node, rule_kw) = node_from_uri(pargs[0])

    if node is None:
        return u"<!-- 404 '{0}' -->".format(pargs[0])

    rule_kw.update(node)
    values = rule_kw
    values.update(request.form.to_dict(flat=True))
    values.update(request.args.to_dict(flat=True))
    values['method'] = request.method
    noderequest = values.copy()
    noderequest.pop('node_id')
    noderequest.pop('name')
    noderequest.pop('value')

    rendered = render_node(node['id'], noderequest=noderequest, **values)
    if rendered:
        if not isinstance(rendered, (str, unicode, int, float)):
            # return a json string
            return encoder.encode(rendered)
        return rendered

    # Nothing to show, so nothing found
    return "<!-- 404 '{0}' -->".format(pargs[0])
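For reference, the shortcode syntax the docstring describes — `[chill route /path/to/something/]` — can be matched with a small regex. This is an illustrative sketch of how a shortcode filter might split the handler name from its positional arguments; it is not Chill's actual parser:

```python
import re

# "[chill <handler> <args...>]" — handler is a word, args run to the "]".
SHORTCODE = re.compile(r"\[chill\s+(\w+)\s+([^\]]*?)\s*\]")

def parse_shortcode(text):
    """Return (handler_name, args) for the first [chill ...] tag, or None."""
    m = SHORTCODE.search(text)
    if not m:
        return None
    return m.group(1), m.group(2).split()

print(parse_shortcode("intro [chill route /path/to/something/] outro"))
```

A dispatcher would then look up `route` and call the handler above with `pargs = ['/path/to/something/']`, splicing the returned HTML (or the `<!-- 404 ... -->` comment) back into the page.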
def p_sum_lvl_1(self, p):
    """ sum_lvl_1 : script_lvl_1
                  | script_lvl_1 PLUS sum_lvl_1"""
    if len(p) == 4:
        p[3].append(p[1])
        p[0] = p[3]
    else:
        p[0] = [p[1]]
sum_lvl_1 : script_lvl_1
          | script_lvl_1 PLUS sum_lvl_1
Below is the instruction that describes the task:

### Input:
sum_lvl_1 : script_lvl_1
          | script_lvl_1 PLUS sum_lvl_1

### Response:
def p_sum_lvl_1(self, p):
    """ sum_lvl_1 : script_lvl_1
                  | script_lvl_1 PLUS sum_lvl_1"""
    if len(p) == 4:
        p[3].append(p[1])
        p[0] = p[3]
    else:
        p[0] = [p[1]]
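Because the grammar rule is right-recursive and appends the left operand onto the already-reduced tail, the operands accumulate in reverse source order. The reductions can be simulated without PLY to make that visible (`reduce_sum` is a stand-in for the rule body, not part of the parser):

```python
def reduce_sum(script, rest=None):
    # Mirrors p_sum_lvl_1: with a PLUS tail, append the left operand to
    # the tail's list; otherwise start a fresh single-item list.
    if rest is not None:
        rest.append(script)
        return rest
    return [script]

# Bottom-up parse of "a + b + c": the rightmost sum_lvl_1 reduces first.
result = reduce_sum("a", reduce_sum("b", reduce_sum("c")))
print(result)  # ['c', 'b', 'a'] — reverse source order
```

If source order matters downstream, the caller of this rule would need to reverse the list (or the rule could use `insert(0, ...)` at the cost of O(n²) behavior).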