# File: blocks/bricks/sequence_generators.py (repo: KIKOcaoyue/blocks, license: BSD-3-Clause)

"""Sequence generation framework.
Recurrent networks are often used to generate/model sequences.
Examples include language modelling, machine translation, handwriting
synthesis, etc. A typical pattern in this context is that
sequence elements are generated one after another, and every generated
element is fed back into the recurrent network state. Sometimes
also an attention mechanism is used to condition sequence generation
on some structured input like another sequence or an image.
This module provides :class:`SequenceGenerator` that builds a sequence
generating network from three main components:
* a core recurrent transition, e.g. :class:`~blocks.bricks.recurrent.LSTM`
or :class:`~blocks.bricks.recurrent.GatedRecurrent`
* a readout component that can produce sequence elements using
the network state and the information from the attention mechanism
* an attention mechanism (see :mod:`~blocks.bricks.attention` for
more information)
Implementation-wise :class:`SequenceGenerator` fully relies on
:class:`BaseSequenceGenerator`. At the level of the latter an
attention is mandatory, moreover it must be a part of the recurrent
transition (see :class:`~blocks.bricks.attention.AttentionRecurrent`).
To simulate optional attention, :class:`SequenceGenerator` wraps the
pure recurrent network in :class:`FakeAttentionRecurrent`.
"""
from abc import ABCMeta, abstractmethod
from six import add_metaclass
from theano import tensor
from blocks.bricks import Initializable, Random, Bias, NDimensionalSoftmax
from blocks.bricks.base import application, Brick, lazy
from blocks.bricks.parallel import Fork, Merge
from blocks.bricks.lookup import LookupTable
from blocks.bricks.recurrent import recurrent
from blocks.bricks.attention import (
AbstractAttentionRecurrent, AttentionRecurrent)
from blocks.roles import add_role, COST
from blocks.utils import dict_union, dict_subset
class BaseSequenceGenerator(Initializable):
r"""A generic sequence generator.
This class combines two components, a readout network and an
attention-equipped recurrent transition, into a context-dependent
sequence generator. A third component must also be given, which
forks feedback from the readout network to obtain inputs for the
transition.
The class provides two methods: :meth:`generate` and :meth:`cost`. The
former is to actually generate sequences and the latter is to compute
the cost of generating given sequences.
The generation algorithm description follows.
**Definitions and notation:**
* States :math:`s_i` of the generator are the states of the transition
as specified in `transition.state_names`.
* Contexts of the generator are the contexts of the
transition as specified in `transition.context_names`.
* Glimpses :math:`g_i` are intermediate entities computed at every
generation step from states, contexts and the previous step glimpses.
They are computed in the transition's `apply` method when not given
or by explicitly calling the transition's `take_glimpses` method. The
set of glimpses considered is specified in
`transition.glimpse_names`.
* Outputs :math:`y_i` are produced at every step and form the output
sequence. A generation cost :math:`c_i` is assigned to each output.
**Algorithm:**
1. Initialization.
.. math::
y_0 = readout.initial\_outputs(contexts)\\
s_0, g_0 = transition.initial\_states(contexts)\\
i = 1\\
By default all recurrent bricks from :mod:`~blocks.bricks.recurrent`
have trainable initial states initialized with zeros. Subclass them
or :class:`~blocks.bricks.recurrent.BaseRecurrent` directly to get
custom initial states.
2. New glimpses are computed:
.. math:: g_i = transition.take\_glimpses(
s_{i-1}, g_{i-1}, contexts)
3. A new output is generated by the readout and its cost is
computed:
.. math::
f_{i-1} = readout.feedback(y_{i-1}) \\
r_i = readout.readout(f_{i-1}, s_{i-1}, g_i, contexts) \\
y_i = readout.emit(r_i) \\
c_i = readout.cost(r_i, y_i)
Note that the *new* glimpses and the *old* states are used at this
step. The reason for not merging all readout methods into one is
to make an efficient implementation of :meth:`cost` possible.
4. New states are computed and iteration is done:
.. math::
f_i = readout.feedback(y_i) \\
s_i = transition.compute\_states(s_{i-1}, g_i,
fork.apply(f_i), contexts) \\
i = i + 1
5. Back to step 2 if the desired sequence
length has not been yet reached.
| A scheme of the algorithm described above follows.
.. image:: /_static/sequence_generator_scheme.png
:height: 500px
:width: 500px
..
Parameters
----------
readout : instance of :class:`AbstractReadout`
The readout component of the sequence generator.
transition : instance of :class:`AbstractAttentionRecurrent`
The transition component of the sequence generator.
fork : :class:`~.bricks.Brick`
The brick to compute the transition's inputs from the feedback.
See Also
--------
:class:`.Initializable` : for initialization parameters
    :class:`SequenceGenerator` : more user-friendly interface to this\
brick
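
    Examples
    --------
    A minimal construction sketch using :class:`SequenceGenerator` defined
    below (illustrative only, not taken from the library's documentation;
    all dimensions and brick choices here are assumptions)::

        generator = SequenceGenerator(
            Readout(readout_dim=alphabet_size, source_names=['states'],
                    emitter=SoftmaxEmitter(name='emitter'),
                    feedback_brick=LookupFeedback(alphabet_size,
                                                  feedback_dim),
                    name='readout'),
            SimpleRecurrent(dim=state_dim, activation=Tanh(),
                            name='transition'),
            name='generator')
        generator.initialize()
        cost = generator.cost(outputs, mask=mask)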
"""
@lazy()
def __init__(self, readout, transition, fork, **kwargs):
self.readout = readout
self.transition = transition
self.fork = fork
children = [self.readout, self.fork, self.transition]
kwargs.setdefault('children', []).extend(children)
super(BaseSequenceGenerator, self).__init__(**kwargs)
@property
def _state_names(self):
return self.transition.compute_states.outputs
@property
def _context_names(self):
return self.transition.apply.contexts
@property
def _glimpse_names(self):
return self.transition.take_glimpses.outputs
def _push_allocation_config(self):
# Configure readout. That involves `get_dim` requests
# to the transition. To make sure that it answers
# correctly we should finish its configuration first.
self.transition.push_allocation_config()
transition_sources = (self._state_names + self._context_names +
self._glimpse_names)
self.readout.source_dims = [self.transition.get_dim(name)
if name in transition_sources
else self.readout.get_dim(name)
for name in self.readout.source_names]
# Configure fork. For similar reasons as outlined above,
# first push `readout` configuration.
self.readout.push_allocation_config()
feedback_name, = self.readout.feedback.outputs
self.fork.input_dim = self.readout.get_dim(feedback_name)
self.fork.output_dims = self.transition.get_dims(
self.fork.apply.outputs)
@application
def cost(self, application_call, outputs, mask=None, **kwargs):
"""Returns the average cost over the minibatch.
        The cost is computed by averaging the sum of per-token costs for
        each sequence over the minibatch.
        .. warning::
            Note that the computed cost can be problematic when batches
consist of vastly different sequence lengths.
Parameters
----------
outputs : :class:`~tensor.TensorVariable`
            The 3- (or 2-) dimensional tensor containing output sequences.
The axis 0 must stand for time, the axis 1 for the
position in the batch.
mask : :class:`~tensor.TensorVariable`
The binary matrix identifying fake outputs.
Returns
-------
cost : :class:`~tensor.Variable`
Theano variable for cost, computed by summing over timesteps
and then averaging over the minibatch.
Notes
-----
The contexts are expected as keyword arguments.
Adds average cost per sequence element `AUXILIARY` variable to
the computational graph with name ``per_sequence_element``.
"""
# Compute the sum of costs
costs = self.cost_matrix(outputs, mask=mask, **kwargs)
cost = tensor.mean(costs.sum(axis=0))
add_role(cost, COST)
# Add auxiliary variable for per sequence element cost
application_call.add_auxiliary_variable(
(costs.sum() / mask.sum()) if mask is not None else costs.mean(),
name='per_sequence_element')
return cost
@application
def cost_matrix(self, application_call, outputs, mask=None, **kwargs):
"""Returns generation costs for output sequences.
See Also
--------
:meth:`cost` : Scalar cost.
"""
# We assume the data has axes (time, batch, features, ...)
batch_size = outputs.shape[1]
# Prepare input for the iterative part
states = dict_subset(kwargs, self._state_names, must_have=False)
# masks in context are optional (e.g. `attended_mask`)
contexts = dict_subset(kwargs, self._context_names, must_have=False)
feedback = self.readout.feedback(outputs)
inputs = self.fork.apply(feedback, as_dict=True)
# Run the recurrent network
results = self.transition.apply(
mask=mask, return_initial_states=True, as_dict=True,
**dict_union(inputs, states, contexts))
# Separate the deliverables. The last states are discarded: they
# are not used to predict any output symbol. The initial glimpses
# are discarded because they are not used for prediction.
# Remember, glimpses are computed _before_ output stage, states are
# computed after.
states = {name: results[name][:-1] for name in self._state_names}
glimpses = {name: results[name][1:] for name in self._glimpse_names}
# Compute the cost
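        # Shift `feedback` one step forward in time so that position i
        # holds the feedback computed from output i-1; slot 0 is then
        # overwritten with the feedback of the initial outputs.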
feedback = tensor.roll(feedback, 1, 0)
feedback = tensor.set_subtensor(
feedback[0],
self.readout.feedback(self.readout.initial_outputs(batch_size)))
readouts = self.readout.readout(
feedback=feedback, **dict_union(states, glimpses, contexts))
costs = self.readout.cost(readouts, outputs)
if mask is not None:
costs *= mask
for name, variable in list(glimpses.items()) + list(states.items()):
application_call.add_auxiliary_variable(
variable.copy(), name=name)
        # These variables can be used to initialize the initial states of the
        # next batch with the last states of the current batch.
for name in self._state_names + self._glimpse_names:
application_call.add_auxiliary_variable(
results[name][-1].copy(), name=name+"_final_value")
return costs
@recurrent
def generate(self, outputs, **kwargs):
"""A sequence generation step.
Parameters
----------
outputs : :class:`~tensor.TensorVariable`
The outputs from the previous step.
Notes
-----
The contexts, previous states and glimpses are expected as keyword
arguments.
"""
states = dict_subset(kwargs, self._state_names)
# masks in context are optional (e.g. `attended_mask`)
contexts = dict_subset(kwargs, self._context_names, must_have=False)
glimpses = dict_subset(kwargs, self._glimpse_names)
next_glimpses = self.transition.take_glimpses(
as_dict=True, **dict_union(states, glimpses, contexts))
next_readouts = self.readout.readout(
feedback=self.readout.feedback(outputs),
**dict_union(states, next_glimpses, contexts))
next_outputs = self.readout.emit(next_readouts)
next_costs = self.readout.cost(next_readouts, next_outputs)
next_feedback = self.readout.feedback(next_outputs)
next_inputs = (self.fork.apply(next_feedback, as_dict=True)
if self.fork else {'feedback': next_feedback})
next_states = self.transition.compute_states(
as_list=True,
**dict_union(next_inputs, states, next_glimpses, contexts))
return (next_states + [next_outputs] +
list(next_glimpses.values()) + [next_costs])
@generate.delegate
def generate_delegate(self):
return self.transition.apply
@generate.property('states')
def generate_states(self):
return self._state_names + ['outputs'] + self._glimpse_names
@generate.property('outputs')
def generate_outputs(self):
return (self._state_names + ['outputs'] +
self._glimpse_names + ['costs'])
def get_dim(self, name):
if name in (self._state_names + self._context_names +
self._glimpse_names):
return self.transition.get_dim(name)
elif name == 'outputs':
return self.readout.get_dim(name)
return super(BaseSequenceGenerator, self).get_dim(name)
@application
def initial_states(self, batch_size, *args, **kwargs):
# TODO: support dict of outputs for application methods
# to simplify this code.
state_dict = dict(
self.transition.initial_states(
batch_size, as_dict=True, *args, **kwargs),
outputs=self.readout.initial_outputs(batch_size))
return [state_dict[state_name]
for state_name in self.generate.states]
@initial_states.property('outputs')
def initial_states_outputs(self):
return self.generate.states
@add_metaclass(ABCMeta)
class AbstractReadout(Initializable):
"""The interface for the readout component of a sequence generator.
The readout component of a sequence generator is a bridge between
the core recurrent network and the output sequence.
Parameters
----------
source_names : list
A list of the source names (outputs) that are needed for the
readout part e.g. ``['states']`` or
``['states', 'weighted_averages']`` or ``['states', 'feedback']``.
readout_dim : int
The dimension of the readout.
Attributes
----------
source_names : list
readout_dim : int
See Also
--------
:class:`BaseSequenceGenerator` : see how exactly a readout is used
:class:`Readout` : the typically used readout brick
"""
@lazy(allocation=['source_names', 'readout_dim'])
def __init__(self, source_names, readout_dim, **kwargs):
self.source_names = source_names
self.readout_dim = readout_dim
super(AbstractReadout, self).__init__(**kwargs)
@abstractmethod
def emit(self, readouts):
"""Produce outputs from readouts.
Parameters
----------
readouts : :class:`~theano.Variable`
Readouts produced by the :meth:`readout` method of
a `(batch_size, readout_dim)` shape.
"""
pass
@abstractmethod
def cost(self, readouts, outputs):
"""Compute generation cost of outputs given readouts.
Parameters
----------
readouts : :class:`~theano.Variable`
Readouts produced by the :meth:`readout` method
of a `(..., readout dim)` shape.
outputs : :class:`~theano.Variable`
            Outputs whose cost should be computed. Should have as many
            dimensions as `readouts`, or one fewer. If `readouts` has
            `n` dimensions, the first `n - 1` dimensions of `outputs`
            should match those of `readouts`.
"""
pass
@abstractmethod
def initial_outputs(self, batch_size):
"""Compute initial outputs for the generator's first step.
In the notation from the :class:`BaseSequenceGenerator`
documentation this method should compute :math:`y_0`.
"""
pass
@abstractmethod
def readout(self, **kwargs):
r"""Compute the readout vector from states, glimpses, etc.
Parameters
----------
\*\*kwargs: dict
Contains sequence generator states, glimpses,
contexts and feedback from the previous outputs.
"""
pass
@abstractmethod
def feedback(self, outputs):
"""Feeds outputs back to be used as inputs of the transition."""
pass
class Readout(AbstractReadout):
r"""Readout brick with separated emitter and feedback parts.
:class:`Readout` combines a few bits and pieces into an object
that can be used as the readout component in
:class:`BaseSequenceGenerator`. This includes an emitter brick,
to which :meth:`emit`, :meth:`cost` and :meth:`initial_outputs`
calls are delegated, a feedback brick to which :meth:`feedback`
functionality is delegated, and a pipeline to actually compute
readouts from all the sources (see the `source_names` attribute
of :class:`AbstractReadout`).
    The readout computation pipeline is constructed from the `merge` and
    `post_merge` bricks, whose responsibilities are described in the
    respective docstrings.
Parameters
----------
emitter : an instance of :class:`AbstractEmitter`
The emitter component.
feedback_brick : an instance of :class:`AbstractFeedback`
The feedback component.
merge : :class:`~.bricks.Brick`, optional
A brick that takes the sources given in `source_names` as an input
and combines them into a single output. If given, `merge_prototype`
cannot be given.
merge_prototype : :class:`.FeedForward`, optional
If `merge` isn't given, the transformation given by
`merge_prototype` is applied to each input before being summed. By
default a :class:`.Linear` transformation without biases is used.
If given, `merge` cannot be given.
post_merge : :class:`.Feedforward`, optional
This transformation is applied to the merged inputs. By default
:class:`.Bias` is used.
merged_dim : int, optional
The input dimension of `post_merge` i.e. the output dimension of
        `merge` (or `merge_prototype`). If not given, it is assumed to be
the same as `readout_dim` (i.e. `post_merge` is assumed to not
change dimensions).
\*\*kwargs : dict
Passed to the parent's constructor.
See Also
--------
:class:`BaseSequenceGenerator` : see how exactly a readout is used
:class:`AbstractEmitter`, :class:`AbstractFeedback`
"""
def __init__(self, emitter=None, feedback_brick=None,
merge=None, merge_prototype=None,
post_merge=None, merged_dim=None, **kwargs):
if not emitter:
emitter = TrivialEmitter(kwargs['readout_dim'])
if not feedback_brick:
feedback_brick = TrivialFeedback(kwargs['readout_dim'])
if not merge:
merge = Merge(input_names=kwargs['source_names'],
prototype=merge_prototype)
if not post_merge:
post_merge = Bias(dim=kwargs['readout_dim'])
if not merged_dim:
merged_dim = kwargs['readout_dim']
self.emitter = emitter
self.feedback_brick = feedback_brick
self.merge = merge
self.post_merge = post_merge
self.merged_dim = merged_dim
children = [self.emitter, self.feedback_brick, self.merge,
self.post_merge]
kwargs.setdefault('children', []).extend(children)
super(Readout, self).__init__(**kwargs)
def _push_allocation_config(self):
self.emitter.readout_dim = self.get_dim('readouts')
self.feedback_brick.output_dim = self.get_dim('outputs')
self.merge.input_names = self.source_names
self.merge.input_dims = self.source_dims
self.merge.output_dim = self.merged_dim
self.post_merge.input_dim = self.merged_dim
self.post_merge.output_dim = self.readout_dim
@application
def readout(self, **kwargs):
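        # Combine the requested sources (states, glimpses, feedback, ...)
        # into a single vector with `merge`, then apply `post_merge`
        # (a Bias by default) to produce the readouts.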
merged = self.merge.apply(**{name: kwargs[name]
for name in self.merge.input_names})
merged = self.post_merge.apply(merged)
return merged
@application
def emit(self, readouts):
return self.emitter.emit(readouts)
@application
def cost(self, readouts, outputs):
return self.emitter.cost(readouts, outputs)
@application
def initial_outputs(self, batch_size):
return self.emitter.initial_outputs(batch_size)
@application(outputs=['feedback'])
def feedback(self, outputs):
return self.feedback_brick.feedback(outputs)
def get_dim(self, name):
if name == 'outputs':
return self.emitter.get_dim(name)
elif name == 'feedback':
return self.feedback_brick.get_dim(name)
elif name == 'readouts':
return self.readout_dim
return super(Readout, self).get_dim(name)
@add_metaclass(ABCMeta)
class AbstractEmitter(Brick):
"""The interface for the emitter component of a readout.
Attributes
----------
readout_dim : int
The dimension of the readout. Is given by the
:class:`Readout` brick when allocation configuration
is pushed.
See Also
--------
:class:`Readout`
:class:`SoftmaxEmitter` : for integer outputs
Notes
-----
An important detail about the emitter cost is that it will be
evaluated with inputs of different dimensions so it has to be
flexible enough to handle this. The two ways in which it can be
applied are:
    1. In :meth:`BaseSequenceGenerator.cost_matrix` where it will
       be applied to the whole sequence at once.
    2. In :meth:`BaseSequenceGenerator.generate` where it will be
       applied to only one step of the sequence.
"""
@abstractmethod
def emit(self, readouts):
"""Implements the respective method of :class:`Readout`."""
pass
@abstractmethod
def cost(self, readouts, outputs):
"""Implements the respective method of :class:`Readout`."""
pass
@abstractmethod
def initial_outputs(self, batch_size):
"""Implements the respective method of :class:`Readout`."""
pass
@add_metaclass(ABCMeta)
class AbstractFeedback(Brick):
"""The interface for the feedback component of a readout.
See Also
--------
:class:`Readout`
:class:`LookupFeedback` for integer outputs
"""
@abstractmethod
def feedback(self, outputs):
"""Implements the respective method of :class:`Readout`."""
pass
class TrivialEmitter(AbstractEmitter):
"""An emitter for the trivial case when readouts are outputs.
Parameters
----------
readout_dim : int
The dimension of the readout.
Notes
-----
    By default :meth:`cost` always returns a zero tensor.
"""
@lazy(allocation=['readout_dim'])
def __init__(self, readout_dim, **kwargs):
super(TrivialEmitter, self).__init__(**kwargs)
self.readout_dim = readout_dim
@application
def emit(self, readouts):
return readouts
@application
def cost(self, readouts, outputs):
return tensor.zeros_like(outputs)
@application
def initial_outputs(self, batch_size):
return tensor.zeros((batch_size, self.readout_dim))
def get_dim(self, name):
if name == 'outputs':
return self.readout_dim
return super(TrivialEmitter, self).get_dim(name)
class SoftmaxEmitter(AbstractEmitter, Initializable, Random):
"""A softmax emitter for the case of integer outputs.
Interprets readout elements as energies corresponding to their indices.
Parameters
----------
initial_output : int or a scalar :class:`~theano.Variable`
The initial output.
"""
def __init__(self, initial_output=0, **kwargs):
self.initial_output = initial_output
self.softmax = NDimensionalSoftmax()
children = [self.softmax]
kwargs.setdefault('children', []).extend(children)
super(SoftmaxEmitter, self).__init__(**kwargs)
@application
def probs(self, readouts):
return self.softmax.apply(readouts, extra_ndim=readouts.ndim - 2)
@application
def emit(self, readouts):
probs = self.probs(readouts)
batch_size = probs.shape[0]
pvals_flat = probs.reshape((batch_size, -1))
generated = self.theano_rng.multinomial(pvals=pvals_flat)
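        # The multinomial sample is one-hot along the class axis; reshaping
        # back and taking argmax converts it to integer class indices.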
return generated.reshape(probs.shape).argmax(axis=-1)
@application
def cost(self, readouts, outputs):
# WARNING: unfortunately this application method works
# just fine when `readouts` and `outputs` have
# different dimensions. Be careful!
return self.softmax.categorical_cross_entropy(
outputs, readouts, extra_ndim=readouts.ndim - 2)
@application
def initial_outputs(self, batch_size):
return self.initial_output * tensor.ones((batch_size,), dtype='int64')
def get_dim(self, name):
if name == 'outputs':
return 0
return super(SoftmaxEmitter, self).get_dim(name)
class TrivialFeedback(AbstractFeedback):
"""A feedback brick for the case when readout are outputs."""
@lazy(allocation=['output_dim'])
def __init__(self, output_dim, **kwargs):
super(TrivialFeedback, self).__init__(**kwargs)
self.output_dim = output_dim
@application(outputs=['feedback'])
def feedback(self, outputs):
return outputs
def get_dim(self, name):
if name == 'feedback':
return self.output_dim
return super(TrivialFeedback, self).get_dim(name)
class LookupFeedback(AbstractFeedback, Initializable):
"""A feedback brick for the case when readout are integers.
Stores and retrieves distributed representations of integers.
"""
def __init__(self, num_outputs=None, feedback_dim=None, **kwargs):
self.num_outputs = num_outputs
self.feedback_dim = feedback_dim
self.lookup = LookupTable(num_outputs, feedback_dim)
children = [self.lookup]
kwargs.setdefault('children', []).extend(children)
super(LookupFeedback, self).__init__(**kwargs)
def _push_allocation_config(self):
self.lookup.length = self.num_outputs
self.lookup.dim = self.feedback_dim
@application
def feedback(self, outputs):
assert self.output_dim == 0
return self.lookup.apply(outputs)
def get_dim(self, name):
if name == 'feedback':
return self.feedback_dim
return super(LookupFeedback, self).get_dim(name)
class FakeAttentionRecurrent(AbstractAttentionRecurrent, Initializable):
"""Adds fake attention interface to a transition.
:class:`BaseSequenceGenerator` requires its transition brick to support
:class:`~blocks.bricks.attention.AbstractAttentionRecurrent` interface,
that is to have an embedded attention mechanism. For the cases when no
attention is required (e.g. language modeling or encoder-decoder
models), :class:`FakeAttentionRecurrent` is used to wrap a usual
recurrent brick. The resulting brick has no glimpses and simply
passes all states and contexts to the wrapped one.
.. todo::
Get rid of this brick and support attention-less transitions
in :class:`BaseSequenceGenerator`.
"""
def __init__(self, transition, **kwargs):
self.transition = transition
self.state_names = transition.apply.states
self.context_names = transition.apply.contexts
self.glimpse_names = []
children = [self.transition]
kwargs.setdefault('children', []).extend(children)
super(FakeAttentionRecurrent, self).__init__(**kwargs)
@application
def apply(self, *args, **kwargs):
return self.transition.apply(*args, **kwargs)
@apply.delegate
def apply_delegate(self):
return self.transition.apply
@application
def compute_states(self, *args, **kwargs):
return self.transition.apply(iterate=False, *args, **kwargs)
@compute_states.delegate
def compute_states_delegate(self):
return self.transition.apply
@application(outputs=[])
def take_glimpses(self, *args, **kwargs):
return None
@application
def initial_states(self, batch_size, *args, **kwargs):
return self.transition.initial_states(batch_size,
*args, **kwargs)
@initial_states.property('outputs')
def initial_states_outputs(self):
return self.transition.apply.states
def get_dim(self, name):
return self.transition.get_dim(name)
class SequenceGenerator(BaseSequenceGenerator):
r"""A more user-friendly interface for :class:`BaseSequenceGenerator`.
Parameters
----------
readout : instance of :class:`AbstractReadout`
The readout component for the sequence generator.
transition : instance of :class:`.BaseRecurrent`
The recurrent transition to be used in the sequence generator.
Will be combined with `attention`, if that one is given.
attention : object, optional
The attention mechanism to be added to ``transition``,
an instance of
:class:`~blocks.bricks.attention.AbstractAttention`.
add_contexts : bool
If ``True``, the
:class:`.AttentionRecurrent` wrapping the
`transition` will add additional contexts for the attended and its
mask.
\*\*kwargs : dict
All keywords arguments are passed to the base class. If `fork`
keyword argument is not provided, :class:`.Fork` is created
that forks all transition sequential inputs without a "mask"
substring in them.
"""
def __init__(self, readout, transition, attention=None,
add_contexts=True, **kwargs):
normal_inputs = [name for name in transition.apply.sequences
if 'mask' not in name]
kwargs.setdefault('fork', Fork(normal_inputs))
if attention:
transition = AttentionRecurrent(
transition, attention,
add_contexts=add_contexts, name="att_trans")
else:
transition = FakeAttentionRecurrent(transition,
name="with_fake_attention")
super(SequenceGenerator, self).__init__(
readout, transition, **kwargs)
# File: lambdata-mkhalil/my_script.py (repo: mkhalil7625/lambdata-mkhalil, license: MIT)

import pandas as pd
from my_mod import enlarge
print("Hello!")
df = pd.DataFrame({"a":[1,2,3], "b":[4,5,6]})
print(df.head())
x = 11
print(enlarge(x))
# File: src/aiotube/playlist.py (repo: jnsougata/AioTube, license: MIT)

from ._threads import _Thread
from .utils import filter
from .videobulk import _VideoBulk
from ._http import _get_playlist_data
from ._rgxs import _PlaylistPatterns as rgx
from typing import List, Optional, Dict, Any
class Playlist:
__HEAD = 'https://www.youtube.com/playlist?list='
def __init__(self, playlist_id: str):
"""
:param str playlist_id: the _id of the playlist
"""
if 'youtube.com' in playlist_id:
self.id = playlist_id.split('list=')[-1]
else:
self.id = playlist_id
self.__playlist_data = _get_playlist_data(self.id)
def __repr__(self):
return f'<Playlist {self.url}>'
@property
def name(self) -> Optional[str]:
"""
:return: the name of the playlist
"""
names = rgx.name.findall(self.__playlist_data)
return names[0] if names else None
@property
def url(self) -> Optional[str]:
"""
:return: url of the playlist
"""
return f'https://www.youtube.com/playlist?list={self.id}'
@property
def video_count(self) -> Optional[str]:
"""
:return: total number of videos in that playlist
"""
video_count = rgx.video_count.findall(self.__playlist_data)
return video_count[0] if video_count else None
@property
def videos(self) -> _VideoBulk:
"""
:return: list of < video objects > for each video in the playlist (consider limit)
"""
videos = rgx.video_id.findall(self.__playlist_data)
return _VideoBulk(filter(iterable=videos))
@property
def thumbnail(self) -> Optional[str]:
"""
:return: url of the thumbnail of the playlist
"""
thumbnails = rgx.thumbnail.findall(self.__playlist_data)
return thumbnails[0] if thumbnails else None
@property
def info(self) -> Dict[str, Any]:
"""
:return: a dict containing playlist info
"""
def _get_data(pattern):
data = pattern.findall(self.__playlist_data)
return data[0] if data else None
patterns = [rgx.name, rgx.video_count, rgx.thumbnail]
data = _Thread.run(_get_data, patterns)
return {
'name': data[0],
'video_count': data[1],
            'videos': filter(rgx.video_id.findall(self.__playlist_data)),
'url': self.__HEAD + self.id,
'thumbnail': data[2]
}
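
# Usage sketch (illustrative; assumes Playlist is exported at the package
# level and that a valid public playlist id or URL is supplied):
#
#     from aiotube import Playlist
#     playlist = Playlist('<playlist id or full playlist URL>')
#     print(playlist.name, playlist.video_count)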
# File: download_stock_data.py (repo: dabideee13/Price-Pattern-Prediction, license: MIT)

#!/opt/anaconda3/bin/python
# -*- coding: utf-8 -*-
"""
Get Stock Data
"""
import time
import pandas as pd
import yfinance as yf
if __name__ == '__main__':
# Path to file
# TODO: make directory if directory doesn't exist
f_file = "/Users/d.e.magno/Datasets/raw_stocks_new.csv"
# TODO: need to check which is already downloaded
stock_file = pd.read_csv('/Users/d.e.magno/Datasets/tickers/generic.csv')
stock_list = stock_file.Ticker
start_timeA = time.time()
for stock in stock_list:
try:
start_timeB = time.time()
print("Downloading {}...".format(stock))
yf.Ticker(stock).history(period="max").to_csv(
f_file.format(stock))
time.sleep(10)
end_timeB = time.time()
print("Time elapsed:", end_timeB - start_timeB)
print()
        except Exception:
            # skip tickers that fail to download and move on
            pass
        except KeyboardInterrupt:
            break
print("Finished.")
end_timeA = time.time()
print("Total time elapsed:", end_timeA - start_timeA)
| 24.266667 | 77 | 0.598901 | 140 | 1,092 | 4.485714 | 0.528571 | 0.050955 | 0.062102 | 0.038217 | 0.063694 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005109 | 0.282967 | 1,092 | 44 | 78 | 24.818182 | 0.796935 | 0.157509 | 0 | 0 | 0 | 0 | 0.174393 | 0.098234 | 0 | 0 | 0 | 0.022727 | 0 | 1 | 0 | false | 0.04 | 0.12 | 0 | 0.12 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# File: pyinstagram/base.py (repo: alessandrocucci/PyInstagram, license: MIT)

# -*- coding: utf-8 -*-
import random
from datetime import datetime
from operator import itemgetter
import requests
import time
from pyinstagram.model import Media
from .exceptions import OAuthException, PyInstagramException
from .oauth import OAuth
from .constants import API_URL
from .utils import DESAdapter
class InstagramApiClient(object):
"""
    Base class for calls to the official Instagram API.
"""
def __init__(self, access_token=None):
self.access_token = access_token
if isinstance(access_token, OAuth):
self.access_token = access_token.access_token
if not self.access_token:
            # TODO: handle the case where the access token expires
            raise OAuthException("You must authenticate before using this library!")
@staticmethod
def go_to_sleep(seconds=3600):
"""
        Called when the API rate limit has been reached; when that
        happens, the program is paused for an hour.
        :param seconds: int - number of seconds to wait
:return: None
"""
time.sleep(seconds)
def _make_request(self, uri, method='get', data=None):
"""
        Performs the actual request against the Instagram API.
        :param uri: str - the URI to call
        :param method: str - HTTP method to use for the request
        :param data: dict - data to pass with the request
        :return: tuple - (response data, next page URL)
"""
next_url = "" # per la paginazione
res = []
retry = 1 # serve per ripetere la chiamata dopo un ora se supero il limite di richieste
while retry:
res = getattr(requests, method)(uri, data=data)
res, next_url = self._handle_response(res)
if res == 0:
                # the call failed because the rate limit was reached;
                # we have already waited an hour, so try again.
                continue
retry = 0
return res, next_url
def _handle_response(self, request):
"""
        Interprets the response once the call has been made.
        If the request succeeded, the list of data is returned;
        otherwise the program is either paused (if the API rate
        limit was reached) or an appropriate exception is raised.
        :param request: requests - the response of the call
        :return: tuple - (data, next page URL), or (0, None) when rate limited
"""
if request.status_code == 200:
            # All good!
try:
res = request.json()
except Exception:
raise Exception(request.text)
else:
data = res['data']
next_url = res.get('pagination', {}).get('next_url')
return data, next_url
elif request.status_code == 429:
# OAuthRateLimitException
self.go_to_sleep()
            return 0, None  # tells _make_request to retry after the pause
elif request.status_code == 400:
raise OAuthException(request.json()['meta']['error_message'])
elif "<!DOCTYPE html>" in request.text:
raise PyInstagramException("Page not found")
else:
raise PyInstagramException
def get_by_user(self, id_user=None, count=0):
"""
        Fetches the most recent posts of a user.
        If the id_user parameter is not passed, the posts of the
        user who authorized the app are requested.
        :param id_user: str - id of the user whose posts to fetch
        :param count: int - limit to {count} results (0 means no limit)
        :return: list - list of data
"""
all_media = []
id_user = id_user or "self"
url = API_URL + "users/{0}/media/recent/?access_token={1}".format(id_user, self.access_token)
if count:
url += "&count={}".format(count)
raw_list, next_url = self._make_request(url)
all_media.extend(raw_list)
        if count and len(all_media) > count:
            return all_media[:count]
        while next_url:
            raw_list, next_url = self._make_request(next_url)
            all_media.extend(raw_list)
        # count == 0 means no limit
        return all_media[:count] if count else all_media
def get_by_hashtag(self, tags=(), count=0):
"""
        Fetches posts tagged with one or more hashtags.
        :param tags: iterable - the hashtags to search for
        :param count: int - maximum number of results to return (0 means no limit)
        :return: list - list of data
"""
if isinstance(tags, str):
tags = (tags, )
all_media = []
for tag in tags:
url = API_URL + "tags/{0}/media/recent?access_token={1}".format(tag, self.access_token)
if count:
url += "&count={}".format(count)
raw_list, next_url = self._make_request(url)
all_media.extend(raw_list)
while next_url:
raw_list, next_url = self._make_request(next_url)
all_media.extend(raw_list)
        return all_media[:count] if count else all_media
def search_for_tag(self, tag, count=3):
"""
        Searches for hashtags similar to a given one.
        :param tag: str - the hashtag to search for
        :param count: int - limit the number of hashtags returned
        :return: dict - {name: media_count}
"""
url = API_URL + "tags/search?q={0}&access_token={1}".format(tag, self.access_token)
res, _ = self._make_request(url)
        res = sorted(res, key=itemgetter('media_count'), reverse=True)  # most-used tags first
names = {r['name']: r['media_count'] for r in res[:count]}
return names
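
# Usage sketch (illustrative; '<token>' is a placeholder for a valid
# OAuth access token):
#
#     client = InstagramApiClient(access_token='<token>')
#     media = client.get_by_hashtag(('python',), count=20)
#     similar = client.search_for_tag('python')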
class InstagramJsonClient(object):
"""
    Client for making simple GET requests without an access token
    or the official API. Relies heavily on query-string URLs.
"""
def __init__(self):
self.base_url = "https://www.instagram.com/"
self.session = self._init_session()
def _init_session(self):
"""Abilita il supporto 3DES su Instagram"""
s = requests.Session()
s.mount(self.base_url, DESAdapter())
return s
def get_user_info(self, user):
"""
        Returns a user's profile information.
        :param user: Instagram username
        :return: dict with the user's info
"""
base_url = "{base}{user}/?__a=1".format(
base=self.base_url,
user=user
)
res = self.session.get(base_url)
try:
res = res.json()
except Exception:
raise PyInstagramException("Impossibile scaricare i dati dall'indirizzo: {}".format(base_url))
return res.get('user', {})
def get_by_user(self, user, count=None, since=None, until=None):
"""
        Fetches the (public) posts of a user.
        Handles pagination automatically.
        Returns a list of dictionaries shaped as follows:
[
{
id: "1606977067425770236_528817151",
code: "BZNISDyHKr8",
user: {
id: "528817151",
full_name: "NASA",
profile_picture: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-19/11375151_392132304319140_1291663475_a.jpg",
username: "nasa"
},
images: {
thumbnail: {
width: 150,
height: 150,
url: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-15/s150x150/e15/21690201_1801206810171539_7249344908006260736_n.jpg"
},
low_resolution: {
width: 320,
height: 320,
url: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-15/s320x320/e15/21690201_1801206810171539_7249344908006260736_n.jpg"
},
standard_resolution: {
width: 640,
height: 640,
url: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-15/s640x640/e15/21690201_1801206810171539_7249344908006260736_n.jpg"
}
},
created_time: "1505786616",
caption: {
id: "17887172635109592",
text: "Look up in the sky tonight and see Saturn! This month Saturn is the only prominent evening planet low in the southwest sky. Look for it near the constellation Sagittarius. Above and below Saturn--from a dark sky--you can't miss the summer Milky Way spanning the sky from northeast to southwest! Grab a pair of binoculars and scan the teapot-shaped Sagittarius, where stars and some brighter clumps appear as steam from the teapot. Those bright clumps are near the center of our galaxy, which is full of gas, dust and stars. Credit: NASA #nasa #space #astronomy #september #whatsup #night #nightsky #stars #stargazing #saturn #planet",
created_time: "1505786616",
from: {
id: "528817151",
full_name: "NASA",
profile_picture: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-19/11375151_392132304319140_1291663475_a.jpg",
username: "nasa"
}
},
user_has_liked: false,
likes: {
data: [
{
id: "4010977557",
full_name: "Natalia",
profile_picture: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-19/s150x150/14482183_140565769737733_5249004653428867072_a.jpg",
username: "nata.barata"
},
{
id: "2055640911",
full_name: "S@brin@ Lec○cq ♡☆♡",
profile_picture: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-19/s150x150/13534211_1557747037863158_1773299287_a.jpg",
username: "melsab19"
},
{
id: "752521983",
full_name: "Laura Álvarez Peláez",
profile_picture: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-19/10624147_809215025765686_985825156_a.jpg",
username: "lauriwushu"
},
{
id: "1719376530",
full_name: "Julia Paniti",
profile_picture: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-19/10985984_1575721159312127_239135761_a.jpg",
username: "julia_paniti"
}
],
count: 204038
},
comments: {
data: [
{
id: "17876620534138631",
text: "@jennytried ❤️",
created_time: "1505855823",
from: {
id: "4610349",
full_name: "",
profile_picture: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-19/10932285_747424172021124_1089839988_a.jpg",
username: "siskascherz"
}
},
{
id: "17899664473040297",
text: "@a.hm.ed.1",
created_time: "1505855825",
from: {
id: "416900232",
full_name: "Maryem BenKh",
profile_picture: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-19/s150x150/16907969_415736022127336_8841431139366207488_a.jpg",
username: "maariam_bk"
}
},
{
id: "17871962107174729",
text: "Wonderful 😍",
created_time: "1505855872",
from: {
id: "2982243595",
full_name: "Smit Raj",
profile_picture: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-19/s150x150/21690360_117321958944805_772082897589895168_n.jpg",
username: "smit_raj_"
}
}
],
count: 1564
},
can_view_comments: true,
can_delete_comments: false,
type: "video",
link: "https://www.instagram.com/p/BZNISDyHKr8/",
location: null,
alt_media_url: "https://scontent-mxp1-1.cdninstagram.com/t50.2886-16/21904634_340030459792492_153261372472295424_n.mp4",
videos: {
standard_resolution: {
width: 640,
height: 640,
url: "https://scontent-mxp1-1.cdninstagram.com/t50.2886-16/21904634_340030459792492_153261372472295424_n.mp4"
},
low_bandwidth: {
width: 480,
height: 480,
url: "https://scontent-mxp1-1.cdninstagram.com/t50.2886-16/21868687_149708205622876_4737472794344816640_n.mp4"
},
low_resolution: {
width: 480,
height: 480,
url: "https://scontent-mxp1-1.cdninstagram.com/t50.2886-16/21868687_149708205622876_4737472794344816640_n.mp4"
}
},
video_views: 1012473
},
]
        :param user: str - Instagram username
        :param count: int - limit the number of results
        :param since: str - only results from this date on, e.g. "20170101000000"
        :param until: str - only results up to this date, e.g. "20171231235959"
        :return:
"""
if since:
try:
since = datetime.strptime(since, "%Y%m%d%H%M%S")
except ValueError:
raise ValueError("Il parametro since non è in un formato corretto (es. '20170101000000')")
if until:
try:
until = datetime.strptime(until, "%Y%m%d%H%M%S")
except ValueError:
raise ValueError("Il parametro until non è in un formato corretto (es. '20170101000000')")
all_data = []
base_url = "{base}{user}?__a=1{{max}}".format(
base=self.base_url,
user=user
)
max_id = ""
next_url = base_url.format(max=max_id)
while True:
res = self.session.get(next_url)
if not res.status_code == 200:
return all_data[:count]
try:
res = res.json()
except Exception:
raise PyInstagramException("Impossibile scaricare i dati dall'indirizzo: {}".format(next_url))
for media_res in res['user']['media']['nodes']:
                # Instagram does not allow searching by date, but it provides the
                # creation date of each post as a Unix timestamp. So, to handle
                # the case where only results in a given interval are wanted,
                # check that the post was created within that time span.
created_at = int(media_res['date'])
if since and created_at < time.mktime(since.timetuple()):
                    # gone too far back in time, we can stop
return all_data[:count]
if until and created_at > time.mktime(until.timetuple()):
continue
all_data.append(media_res)
if res['user']['media']['nodes'] and (not len(all_data) > count if count else True):
                # we have items, more pages to download, and the result limit is not reached
try:
max_id = res['user']['media']['nodes'][-1]['id']
next_url = base_url.format(max="&max_id={}".format(max_id))
                except IndexError:
                    # wait a bit: the index is empty and Instagram is throttling us
                    time.sleep(random.randint(10, 60))
                else:
                    # all good, there is more data to download
                    continue
            else:
                # no more data, or we already have more than requested
                break
return all_data[:count]
def get_by_hashtag(self, tags=(), count=1000000, top_posts=True, since=None, until=None):
"""
        Searches by hashtag.
        Handles pagination automatically.
        Returns a list of SQLAlchemy objects built from
        a list of dictionaries shaped as follows:
[
{
comments_disabled: false,
id: "1607551655901147333",
dimensions: {
height: 640,
width: 640
},
owner: {
id: "981246989"
},
thumbnail_src: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-15/e35/21820166_125621088095492_8628217971971457024_n.jpg",
thumbnail_resources: [
{
src: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-15/s150x150/e35/21820166_125621088095492_8628217971971457024_n.jpg",
config_width: 150,
config_height: 150
},
{
src: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-15/s240x240/e35/21820166_125621088095492_8628217971971457024_n.jpg",
config_width: 240,
config_height: 240
},
{
src: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-15/s320x320/e35/21820166_125621088095492_8628217971971457024_n.jpg",
config_width: 320,
config_height: 320
},
{
src: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-15/s480x480/e35/21820166_125621088095492_8628217971971457024_n.jpg",
config_width: 480,
config_height: 480
}
],
is_video: false,
code: "BZPK7bAFDDF",
date: 1505855112,
display_src: "https://scontent-mxp1-1.cdninstagram.com/t51.2885-15/e35/21820166_125621088095492_8628217971971457024_n.jpg",
caption: "Tommy Hilfiger London Fashion Week Spring_Summer 2018 @londonfashionweek @britishfashioncouncil @tommyhilfiger #londonfashionweek#LFW#fashion#paris#fashionblogger#tehran#fashioneditor#fashionweek#style#streetstyle##milan#london#newyork#mfw#lfw#nyfw#vogue#gq#art#love#fashionshow#blogger#life#event#ss2018#instafashion#runway#fashionmoment0#TOMMYNOW",
comments: {
count: 1
},
likes: {
count: 24
}
},
]
        :param tags: str or tuple - hashtag (without the #) or tuple of hashtags
        :param count: int - limit the number of results
        :param top_posts: bool - restrict to top posts, otherwise return everything
        :param since: str - only results from this date on, e.g. "20170101000000"
        :param until: str - only results up to this date, e.g. "20171231235959"
        :return: list - list of SQLAlchemy Media objects
"""
if isinstance(tags, str):
tags = (tags, )
if since:
try:
since = datetime.strptime(since, "%Y%m%d%H%M%S")
except ValueError:
raise ValueError("Il parametro since non è in un formato corretto (es. '20170101000000')")
if until:
try:
until = datetime.strptime(until, "%Y%m%d%H%M%S")
except ValueError:
raise ValueError("Il parametro until non è in un formato corretto (es. '20170101000000')")
mapper = {
'id': 'id',
'comments': 'edge_media_to_comment.count',
'unix_datetime': 'taken_at_timestamp',
'user': 'owner.id',
'likes': 'edge_liked_by.count',
'is_video': 'is_video',
'url': 'display_src',
'height': 'dimensions.height',
'width': 'dimensions.width',
'code': 'shortcode'
}
all_data = []
for tag in tags:
all_data_tag = []
base_url = "{base}explore/tags/{tag}?__a=1{{max}}".format(
base=self.base_url,
tag=tag
)
max_id = ""
next_url = base_url.format(max=max_id)
while True:
res = self.session.get(next_url)
try:
res = res.json()
except Exception:
if "Sorry, this page isn't available" in res.text:
# Post rimosso o non più raggiungibile
continue
else:
                        raise PyInstagramException("Could not download data from: {}".format(next_url))
res_media = res['graphql']['hashtag']['edge_hashtag_to_top_posts'] if top_posts else res['graphql']['hashtag']['edge_hashtag_to_media']
has_next_page = res['graphql']['hashtag']['edge_hashtag_to_media']['page_info']['has_next_page']
                # convert to SQLAlchemy objects
sqlalchemy_media = []
for element in res_media['edges']:
                    # Instagram does not allow searching by date, but it provides the
                    # creation date of each post as a Unix timestamp. So, to handle
                    # the case where only results in a given interval are wanted,
                    # check that the post was created within that time span.
created_at = int(element['node']['taken_at_timestamp'])
if since and created_at < time.mktime(since.timetuple()):
                        # gone too far back in time, we can stop
break
if until and created_at > time.mktime(until.timetuple()):
continue
model = Media()
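                    # Walk each dotted path in `mapper` (e.g. 'owner.id')
                    # through the nested JSON node to fill the model fields.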
for field_to, getter in mapper.items():
path = getter.split('.')
val = element['node']
for key in path:
val = val.get(key, {})
if isinstance(val, dict):
val = None
setattr(model, field_to, val)
model.json = element['node']
model.caption = element['node']['edge_media_to_caption']['edges'][0]['node']['text']
sqlalchemy_media.append(model)
all_data_tag.extend(sqlalchemy_media)
if res_media['edges'] and has_next_page and not len(all_data_tag) > count and not top_posts:
try:
max_id = res['graphql']['hashtag']['edge_hashtag_to_media']['page_info']['end_cursor']
next_url = base_url.format(max="&max_id={}".format(max_id))
                    except IndexError:
                        # wait a bit: the index is empty and Instagram is throttling us
                        time.sleep(random.randint(10, 60))
                    else:
                        # all good, there is more data to download
                        continue
                else:
                    # no more data, or we already have more than requested
                    break
all_data.extend(all_data_tag)
return all_data[:count]
def get_by_media_codes(self, codes=(), all_comments=False):
"""
        Returns a list with the data of the requested posts
        (identified by each post's 'code' string). When the
        all_comments flag is enabled, additional requests are made
        to page through the comments. The comments are appended
        to the original JSON so that the final result is a list
        with one element per requested post.
        :param codes: code string or tuple of post codes
        :param all_comments: bool - if enabled, download all comments
        :return: list of JSON dicts with the requested posts' data
"""
if isinstance(codes, str):
codes = (codes,)
all_data = []
for code in codes:
url = "{base}p/{code}?__a=1".format(
base=self.base_url,
code=code
)
res = self.session.get(url)
try:
res = res.json()
except Exception:
if "Sorry, this page isn't available" in res.text:
                    # post removed or no longer reachable
                    continue
                else:
                    raise PyInstagramException("Could not download data from: {}".format(url))
            if all_comments:
                while True:
                    comments = res['graphql']['shortcode_media']['edge_media_to_comment']
                    if comments['page_info']['has_next_page']:
                        next_url = url + "&max_id={}".format(comments['page_info']['end_cursor'])
                        next_res = self.session.get(next_url)
                        next_res = next_res.json()
                        next_comments = next_res['graphql']['shortcode_media']['edge_media_to_comment']
                        comments['edges'].extend(next_comments['edges'])
                        # advance the cursor, otherwise the same page is re-fetched forever
                        comments['page_info'] = next_comments['page_info']
                    else:
                        break
all_data.append(res)
return all_data
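# --- Illustrative usage (hedged): the `PyInstagram` constructor below is an
# assumption made for this sketch; only the methods above are defined here. ---
# client = PyInstagram()
# posts = client.get_by_media_codes(codes=('BxYzAbC',), all_comments=True)
# for post in posts:
#     comments = post['graphql']['shortcode_media']['edge_media_to_comment']
#     print(len(comments['edges']))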
| 44.271973 | 661 | 0.523599 | 2,756 | 26,696 | 4.933962 | 0.234035 | 0.012355 | 0.027504 | 0.029122 | 0.470952 | 0.440285 | 0.412855 | 0.395279 | 0.353876 | 0.340491 | 0 | 0.104754 | 0.388523 | 26,696 | 602 | 662 | 44.345515 | 0.727824 | 0.480784 | 0 | 0.454902 | 0 | 0 | 0.144811 | 0.031886 | 0 | 0 | 0 | 0.011628 | 0 | 1 | 0.05098 | false | 0 | 0.039216 | 0 | 0.152941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3bc47ccda883fce926fb879e2f171e425ac7191d | 1,959 | py | Python | login/models.py | zcw576020095/netsysyconfig_platform | d47be2c5b3418d59a226cb9e135972160e51df00 | [
"Unlicense"
] | 1 | 2022-03-25T07:49:10.000Z | 2022-03-25T07:49:10.000Z | login/models.py | zcw576020095/netsysyconfig_platform | d47be2c5b3418d59a226cb9e135972160e51df00 | [
"Unlicense"
] | null | null | null | login/models.py | zcw576020095/netsysyconfig_platform | d47be2c5b3418d59a226cb9e135972160e51df00 | [
"Unlicense"
] | null | null | null | from django.db import models
# Create your models here.
class User(models.Model):
gender = (
('male',"男"),
('female',"女")
)
name = models.CharField(max_length=128,unique=True)
password = models.CharField(max_length=256)
email = models.EmailField(unique=True)
sex = models.CharField(max_length=32, choices=gender,default="男")
create_time = models.DateTimeField(auto_now_add=True)
has_confirmed = models.BooleanField(default=False)
def __str__(self):
return self.name
class Meta:
ordering = ["-create_time"]
verbose_name = "用户"
verbose_name_plural = "用户"
class ConfirmString(models.Model):
code = models.CharField(max_length=256)
user = models.OneToOneField('User',on_delete=models.CASCADE)
create_time = models.DateTimeField(auto_now_add=True)
def __str__(self):
return self.user.name + ": " + self.code
class Meta:
ordering = ["-create_time"]
verbose_name = "确认码"
verbose_name_plural = "确认码"
## Network-disconnection records
class ClickHistory(models.Model):
clicknet_areaname = models.CharField(max_length=128,verbose_name='断网区域')
    clicknet_date = models.DateTimeField(verbose_name="断网时间")
def __str__(self):
return '{} {}'.format(self.clicknet_areaname,self.clicknet_date)
class Meta:
db_table = 'click_history'
ordering = ["-clicknet_date"]
verbose_name = "断网记录"
verbose_name_plural = "断网记录"
## Network-connection records
class ConnectHistory(models.Model):
connectnet_areaname = models.CharField(max_length=128,verbose_name='联网区域')
    connectnet_date = models.DateTimeField(verbose_name="联网时间")
def __str__(self):
return '{} {}'.format(self.connectnet_areaname,self.connectnet_date)
class Meta:
db_table = 'connect_history'
ordering = ["-connectnet_date"]
verbose_name = "联网记录"
verbose_name_plural = "联网记录" | 27.591549 | 78 | 0.669219 | 230 | 1,959 | 5.421739 | 0.321739 | 0.105854 | 0.086608 | 0.115477 | 0.446672 | 0.317562 | 0.275862 | 0.214916 | 0 | 0 | 0 | 0.013566 | 0.209801 | 1,959 | 71 | 79 | 27.591549 | 0.79199 | 0.017356 | 0 | 0.25 | 0 | 0 | 0.079688 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0.020833 | 0.020833 | 0.083333 | 0.645833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3bc648d7577a48d53c343d95dc1ac69b209de7c4 | 11,380 | py | Python | subscribe/models.py | jonge-democraten/dyonisos | bebc5b28761bd5e036e4e6e219b5474d901026c3 | [
"MIT"
] | null | null | null | subscribe/models.py | jonge-democraten/dyonisos | bebc5b28761bd5e036e4e6e219b5474d901026c3 | [
"MIT"
] | 10 | 2016-10-31T21:14:06.000Z | 2021-01-07T22:34:42.000Z | subscribe/models.py | jonge-democraten/dyonisos | bebc5b28761bd5e036e4e6e219b5474d901026c3 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2011,2014 Floor Terra <floort@gmail.com>
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
import datetime
import logging
import traceback
from django.core.mail import EmailMessage
from django.db import models
from django.template import Context, Template
logger = logging.getLogger(__name__)
AFDELINGEN = (
("AMS", "Amsterdam"),
("AN", "Arnhem-Nijmegen"),
("BB", "Brabant"),
("FR", "Friesland"),
("GR", "Groningen"),
("LH", "Leiden-Haaglanden"),
("MS", "Limburg"),
("RD", "Rotterdam"),
("TW", "Overijssel"),
("UT", "Utrecht"),
("WN", "Wageningen"),
("INT", "Internationaal"),
)
def afdeling_text(afd):
for key, value in AFDELINGEN:
if key == afd:
return value
return None
QUESTION_TYPES = (
("INT", "Integer"),
("TXT", "Text Input"),
("AFD", "Afdeling"),
("BOOL", "Ja/Nee"),
("CHOICE", "Multiple Choice"),
("TEXT", "HTML Text"),
)
class Event(models.Model):
name = models.CharField(max_length=200)
slug = models.SlugField()
start_registration = models.DateTimeField()
end_registration = models.DateTimeField()
description = models.TextField()
contact_email = models.EmailField()
email_template = models.TextField(help_text="Enkele placeholders: {{voornaam}}, {{achternaam}}, {{inschrijf_opties}}")
price = models.IntegerField(help_text="Eurocenten", default=0)
max_registrations = models.IntegerField(default=0, help_text="Als groter dan 0, bepaalt maximaal aantal inschrijvingen")
class Meta:
ordering = ('-end_registration',)
def __str__(self):
return self.name
def subscribed(self):
return len(Registration.objects.filter(event=self))
def paid(self):
return len(Registration.objects.filter(event=self).filter(paid=True))
def total_paid(self):
return "\u20AC %.2f" % (sum([e.price for e in self.registrations.filter(paid=True)]) / 100.)
def form_link(self):
return "<a href=\"https://events.jongedemocraten.nl/inschrijven/%s/\">Inschrijven</a>" % (self.slug)
form_link.allow_tags = True
def all_free(self):
"""Are all event options free?"""
if self.price != 0:
return False
if len(EventOption.objects.filter(price__gt=0).filter(question__event=self)):
return False
return True
def active(self):
now = datetime.datetime.now()
if self.start_registration > now or self.end_registration < now:
return False
return True
# active.boolean = True
def price_str(self):
return "\u20AC %.2f" % (float(self.price) / 100)
def is_full(self):
if self.max_registrations <= 0:
return False
return self.registrations.count() >= self.max_registrations
is_full.boolean = True
def get_registrations_over_limit(self):
results = []
if self.max_registrations > 0:
results += self.registrations.order_by('pk')[int(self.max_registrations):]
for question in self.eventquestion_set.all():
for option in question.options.all():
results += option.get_registrations_over_limit()
return results
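# --- Hedged example of the eurocent convention used by Event: a EUR 12.50 event
# is stored as price=1250 (values are illustrative, not from this project) ---
# event = Event(name="Congress", slug="congress", price=1250)
# event.price_str()   # -> '€ 12.50'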
class EventQuestion(models.Model):
event = models.ForeignKey(Event)
name = models.CharField(max_length=64)
question_type = models.CharField(max_length=16, choices=QUESTION_TYPES)
required = models.BooleanField(default=False, help_text='Bij Ja/Nee: verplicht aanvinken; bij andere: verplicht invullen')
radio = models.BooleanField(default=False, help_text='Voor multiple-choice/afdeling: geen dropdown maar radio buttons')
order = models.IntegerField(default=0, help_text='Bepaalt volgorde op formulier; gebruik order<0 voor elementen vooraf aan voornaam, achternaam en email')
text = models.TextField(blank=True, default='', help_text='Voor "HTML Text"; geldige HTML tags: a, b/strong, code, em/i, h3, img, ul, ol, li, p, br; Geldige HTML attributen: class, style, a.href, a.target, img.src, img.alt')
def __str__(self):
return "%s (%s)" % (self.name, self.question_type)
def form_id(self):
return "q%d" % (self.id)
def delete_event_question(self):
return '<a href="/deleteEventQuestion/?optionId=%d">Delete</a>' % (self.id)
delete_event_question.allow_tags = True
class EventOption(models.Model):
question = models.ForeignKey('EventQuestion', related_name="options")
name = models.CharField(max_length=200)
price = models.IntegerField(help_text="Eurocenten", default=0)
active = models.BooleanField(default=True)
order = models.IntegerField(default=0)
limit = models.IntegerField(default=0, help_text="Aantal beschikbare plekken (0 = geen limiet)")
def __str__(self):
if self.price < 0:
return "%s: \u20AC %.2f korting" % (self.name, float(-self.price) / 100)
if self.price > 0:
return "%s: \u20AC %.2f" % (self.name, float(self.price) / 100)
else:
return "%s" % (self.name,)
def price_str(self):
return "\u20AC %.2f" % (float(self.price) / 100)
def delete_event_option(self):
return '<a href="/deleteEventOption/?optionId=%d">Delete</a>' % (self.id)
delete_event_option.allow_tags = True
def get_related_registrations(self):
return Registration.objects.filter(answers__option=self).order_by('pk')
def num_registrations(self):
registrations = self.get_related_registrations()
return registrations.count()
def is_full(self):
if self.limit <= 0:
return False
return self.num_registrations() >= self.limit
is_full.boolean = True
def limit_str(self):
if self.limit <= 0:
return "-"
return "{}/{}".format(self.num_registrations(), self.limit)
limit_str.short_description = "Limit usage"
def get_registrations_over_limit(self):
if self.limit <= 0:
return []
registrations = self.get_related_registrations()
return registrations[int(self.limit):]
def limit_reached(self):
return self.is_full()
limit_reached.boolean = True
class Registration(models.Model):
registration_date = models.DateTimeField(auto_now_add=True)
first_name = models.CharField(max_length=64)
last_name = models.CharField(max_length=64)
email = models.EmailField(blank=True)
event = models.ForeignKey(Event, related_name='registrations')
price = models.IntegerField(default=0)
paid = models.BooleanField(default=False)
status = models.CharField(max_length=64, default="", blank=True)
trxid = models.CharField(max_length=128, default="", blank=True)
def calculate_price(self):
self.price = self.event.price + sum([answer.option.price for answer in self.answers.exclude(option=None)])
def get_options_text(self):
results = []
added_default_fields = False
answers = {a.question: a.get_answer() for a in self.answers.all()}
for question in self.event.eventquestion_set.order_by('order'):
if question.order >= 0 and not added_default_fields:
results += ["Voornaam: {}".format(self.first_name)]
results += ["Achternaam: {}".format(self.last_name)]
results += ["Email: {}".format(self.email)]
added_default_fields = True
if question in answers:
results += ["{}: {}".format(question.name, answers[question])]
if not added_default_fields:
results += ["Voornaam: {}".format(self.first_name)]
results += ["Achternaam: {}".format(self.last_name)]
results += ["Email: {}".format(self.email)]
return '\n'.join(results)
def __str__(self):
return "%s %s - %s - %s" % (self.first_name, self.last_name, self.event, str(self.price))
    def gen_subscription_id(self):
        num_id = str(self.id)
        safe = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
        # keep only filename-safe characters from the options text, capped at 15 chars total
        options = "".join(c for c in self.get_options_text() if c in safe)
        return num_id + "x" + options[:15 - len(num_id)]
def send_confirmation_email(self):
t = Template(self.event.email_template)
c = Context({
"voornaam": self.first_name,
"achternaam": self.last_name,
"inschrijf_opties": self.get_options_text(),
})
rendered_mail = t.render(c)
email = EmailMessage(
subject="Inschrijfbevestiging: %s" % (self.event.name),
body=rendered_mail,
from_email=self.event.contact_email,
to=[self.email],
)
try:
email.send()
        except Exception:
logger.error("Could not send welcome mail to %s" % (self.email))
logger.error(traceback.format_exc())
raise
return rendered_mail
class Answer(models.Model):
# This should maybe be a "through" model
registration = models.ForeignKey(Registration, related_name='answers')
question = models.ForeignKey(EventQuestion)
int_field = models.IntegerField(default=0, null=True)
txt_field = models.CharField(max_length=256, blank=True)
bool_field = models.BooleanField(default=False)
option = models.ForeignKey(EventOption, default=None, null=True, blank=True)
def __str__(self):
return "%s - %s" % (self.question, self.get_answer())
def set_answer(self, ans):
if self.question.question_type == "INT":
self.int_field = ans
elif self.question.question_type == "TXT":
self.txt_field = ans
elif self.question.question_type == "AFD":
self.txt_field = ans
elif self.question.question_type == "BOOL":
self.bool_field = ans
if self.bool_field and len(self.question.options.all()):
self.option = self.question.options.all()[0]
else:
self.option = None
elif self.question.question_type == "CHOICE":
self.option = ans
def get_answer(self):
if self.question.question_type == "INT":
return self.int_field
elif self.question.question_type == "TXT":
return self.txt_field
elif self.question.question_type == "AFD":
return afdeling_text(self.txt_field)
elif self.question.question_type == "BOOL":
if self.option is not None:
return self.option
else:
                return 'Ja' if self.bool_field else 'Nee'
elif self.question.question_type == "CHOICE":
return self.option
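# --- Illustrative flow (hedged; assumes a configured Django project and existing
# `event`, `question` and `option` objects, which are hypothetical here) ---
# reg = Registration(first_name='Ada', last_name='Lovelace',
#                    email='ada@example.org', event=event)
# reg.save()
# ans = Answer(registration=reg, question=question)
# ans.set_answer(option)           # CHOICE question: stores an EventOption
# ans.save()
# reg.calculate_price()            # event.price plus the chosen options' prices
# reg.save()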
| 37.682119 | 228 | 0.644991 | 1,381 | 11,380 | 5.177408 | 0.242578 | 0.022378 | 0.027972 | 0.033566 | 0.276783 | 0.242238 | 0.14042 | 0.112727 | 0.058182 | 0.046154 | 0 | 0.010979 | 0.231634 | 11,380 | 301 | 229 | 37.807309 | 0.806725 | 0.075747 | 0 | 0.239316 | 0 | 0.004274 | 0.133657 | 0.019815 | 0 | 0 | 0 | 0 | 0 | 1 | 0.132479 | false | 0 | 0.025641 | 0.064103 | 0.529915 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3bcd4cf0614ab6f7c88bcffd7170ce176a5a3489 | 305 | py | Python | tests/text_processors/test_json_text_processor.py | lyteloli/NekoGram | f077471000b40a74e0eb4e98dfb570b5e34d23ab | [
"MIT"
] | 8 | 2020-08-21T07:43:52.000Z | 2022-01-27T06:48:01.000Z | tests/text_processors/test_json_text_processor.py | lyteloli/NekoGram | f077471000b40a74e0eb4e98dfb570b5e34d23ab | [
"MIT"
] | null | null | null | tests/text_processors/test_json_text_processor.py | lyteloli/NekoGram | f077471000b40a74e0eb4e98dfb570b5e34d23ab | [
"MIT"
] | 1 | 2022-01-27T06:48:02.000Z | 2022-01-27T06:48:02.000Z | from NekoGram import Neko, Bot
import json
def test_json_text_processor():
neko = Neko(bot=Bot(token='0:0', validate_token=False), validate_text_names=False)
raw_json = '{"x": {"text": "hello"} }'
neko.add_texts(texts=raw_json, lang='en')
assert neko.texts['en'] == json.loads(raw_json)
| 30.5 | 86 | 0.688525 | 47 | 305 | 4.255319 | 0.510638 | 0.105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007663 | 0.144262 | 305 | 9 | 87 | 33.888889 | 0.758621 | 0 | 0 | 0 | 0 | 0 | 0.104918 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3bcdd1ca315307c12c5399ab4a8df2ed64ad6708 | 7,960 | py | Python | itdagene/app/meetings/migrations/0001_initial.py | itdagene-ntnu/itdagene | b972cd3d803debccebbc33641397a39834b8d69a | [
"MIT"
] | 9 | 2018-10-17T20:58:09.000Z | 2021-12-16T16:16:45.000Z | itdagene/app/meetings/migrations/0001_initial.py | itdagene-ntnu/itdagene | b972cd3d803debccebbc33641397a39834b8d69a | [
"MIT"
] | 177 | 2018-10-27T18:15:56.000Z | 2022-03-28T04:29:06.000Z | itdagene/app/meetings/migrations/0001_initial.py | itdagene-ntnu/itdagene | b972cd3d803debccebbc33641397a39834b8d69a | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [migrations.swappable_dependency(settings.AUTH_USER_MODEL)]
operations = [
migrations.CreateModel(
name="Meeting",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
("date_created", models.DateTimeField(editable=False)),
("date_saved", models.DateTimeField(editable=False)),
("date", models.DateField(verbose_name="date")),
("start_time", models.TimeField(verbose_name="from time")),
(
"end_time",
models.TimeField(null=True, verbose_name="to time", blank=True),
),
(
"type",
models.PositiveIntegerField(
default=0,
verbose_name="type",
choices=[
(0, "Board meeting"),
(1, "Web"),
(2, "Banquet"),
(3, "Logistics"),
(4, "Marketing"),
(5, "Other"),
],
),
),
(
"location",
models.CharField(
max_length=40, verbose_name="location", blank=True
),
),
(
"abstract",
models.TextField(null=True, verbose_name="abstract", blank=True),
),
(
"is_board_meeting",
models.BooleanField(default=True, verbose_name="is board meeting"),
),
(
"creator",
models.ForeignKey(
related_name="meeting_creator",
editable=False,
to=settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
),
),
(
"referee",
models.ForeignKey(
related_name="refereed_meetings",
verbose_name="referee",
blank=True,
on_delete=models.CASCADE,
to=settings.AUTH_USER_MODEL,
null=True,
),
),
(
"saved_by",
models.ForeignKey(
related_name="meeting_saved_by",
editable=False,
on_delete=models.CASCADE,
to=settings.AUTH_USER_MODEL,
),
),
],
options={"verbose_name": "meeting", "verbose_name_plural": "meetings"},
bases=(models.Model,),
),
migrations.CreateModel(
name="Penalty",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
("date_created", models.DateTimeField(editable=False)),
("date_saved", models.DateTimeField(editable=False)),
(
"type",
models.CharField(
default="beer",
max_length=10,
verbose_name="type",
choices=[(b"beer", "Beer"), (b"wine", "Wine")],
),
),
(
"bottles",
models.PositiveIntegerField(
default=2, verbose_name="number of bottles"
),
),
("reason", models.TextField(verbose_name="reason")),
(
"creator",
models.ForeignKey(
related_name="penalty_creator",
editable=False,
to=settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
),
),
(
"meeting",
models.ForeignKey(
verbose_name="meeting",
blank=True,
to="meetings.Meeting",
null=True,
on_delete=models.SET_NULL,
),
),
(
"saved_by",
models.ForeignKey(
related_name="penalty_saved_by",
editable=False,
to=settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
),
),
(
"user",
models.ForeignKey(
related_name="penalties",
verbose_name="person",
to=settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
),
),
],
options={"abstract": False},
bases=(models.Model,),
),
migrations.CreateModel(
name="ReplyMeeting",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
("date_created", models.DateTimeField(editable=False)),
("date_saved", models.DateTimeField(editable=False)),
(
"is_attending",
models.NullBooleanField(default=False, verbose_name="attending"),
),
(
"creator",
models.ForeignKey(
related_name="replymeeting_creator",
editable=False,
to=settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
),
),
(
"meeting",
models.ForeignKey(
related_name="replies",
verbose_name="meeting",
to="meetings.Meeting",
on_delete=models.CASCADE,
),
),
(
"saved_by",
models.ForeignKey(
related_name="replymeeting_saved_by",
editable=False,
to=settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
),
),
(
"user",
models.ForeignKey(
verbose_name="user",
to=settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
),
),
],
options={"abstract": False},
bases=(models.Model,),
),
]
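# --- Illustrative usage (standard Django management commands; hedged) ---
# $ python manage.py migrate meetings 0001    # apply this migration
# $ python manage.py migrate meetings zero    # roll it back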
| 35.695067 | 87 | 0.358417 | 473 | 7,960 | 5.818182 | 0.211416 | 0.083939 | 0.055959 | 0.076308 | 0.533794 | 0.476381 | 0.415334 | 0.415334 | 0.415334 | 0.383358 | 0 | 0.003375 | 0.553266 | 7,960 | 222 | 88 | 35.855856 | 0.770529 | 0 | 0 | 0.663594 | 0 | 0 | 0.08593 | 0.002638 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.013825 | 0 | 0.02765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3bd3be55e7aed0ec74f1031095e8ca063b4aa8fd | 515 | py | Python | Applied Project/Web/app/config.py | rebeccabernie/CurrencyAnalyser | 1f57e5b5fee854912c205cb98f57c980027f0a03 | [
"MIT"
] | 27 | 2018-06-22T18:49:52.000Z | 2022-02-18T07:58:48.000Z | Applied Project/Web/app/config.py | taraokelly/CurrencyAnalyser | 1f57e5b5fee854912c205cb98f57c980027f0a03 | [
"MIT"
] | 10 | 2020-01-28T22:24:22.000Z | 2022-02-10T13:11:32.000Z | Applied Project/Web/app/config.py | taraokelly/CurrencyAnalyser | 1f57e5b5fee854912c205cb98f57c980027f0a03 | [
"MIT"
] | 6 | 2018-05-02T16:43:45.000Z | 2020-11-17T18:00:36.000Z | """ Global Flask Application Settings """
import os
from app import app
class Config(object):
DEBUG = False
TESTING = False
PRODUCTION = False
class Development(Config):
MODE = 'Development'
DEBUG = True
class Production(Config):
MODE = 'Production'
DEBUG = False
PRODUCTION = True
# Set FLASK_CONFIG env to 'Production' or 'Development' to set Config
flask_config = os.environ.get('FLASK_CONFIG', 'Development')
app.config.from_object('app.config.{}'.format(flask_config))
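# --- Illustrative usage (hedged; `run.py` is a hypothetical entry point) ---
# $ FLASK_CONFIG=Production python run.py
# afterwards app.config['MODE'] == 'Production' and app.config['DEBUG'] is False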
| 19.074074 | 69 | 0.700971 | 63 | 515 | 5.650794 | 0.396825 | 0.123596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192233 | 515 | 26 | 70 | 19.807692 | 0.855769 | 0.2 | 0 | 0.133333 | 0 | 0 | 0.140741 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.133333 | 0 | 0.866667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3bd4ef311d3ceb65a757cf2dcd1641a9fa9f94c6 | 5,420 | py | Python | app/v2/resources/users.py | fabischolasi/fast-food-fast-v1 | 492f0bdaaeadf12089a200a9b64bdfc22cd03d0c | [
"MIT"
] | 1 | 2019-10-16T07:56:31.000Z | 2019-10-16T07:56:31.000Z | app/v2/resources/users.py | fabzer0/FastFoodAPI | 492f0bdaaeadf12089a200a9b64bdfc22cd03d0c | [
"MIT"
] | null | null | null | app/v2/resources/users.py | fabzer0/FastFoodAPI | 492f0bdaaeadf12089a200a9b64bdfc22cd03d0c | [
"MIT"
] | null | null | null | from flask import Blueprint, jsonify, make_response
from flask_restful import Resource, Api, reqparse, inputs
from ..models.decorators import admin_required
from ..models.models import UserModel
import os
class SignUp(Resource):
def __init__(self):
"""
Validates both json and form-data input
"""
self.reqparse = reqparse.RequestParser()
self.reqparse.add_argument(
'username',
required=True,
help='kindly provide a valid username',
type=inputs.regex(r"(.*\S.*)"),
location=['form', 'json'])
self.reqparse.add_argument(
'email',
required=True,
help='kindly provide a valid email address',
location=['form', 'json'],
type=inputs.regex(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)"))
self.reqparse.add_argument(
'password',
required=True,
trim=True,
help='kindly provide a valid password',
location=['form', 'json'])
self.reqparse.add_argument(
'confirm_password',
required=True,
trim=True,
help='kindly provide a valid confirmation password',
location=['form', 'json'])
super(SignUp, self).__init__()
def post(self):
"""
Register a new user
"""
kwargs = self.reqparse.parse_args()
username = kwargs.get('username')
email = kwargs.get('email')
password = kwargs.get('password')
confirm_password = kwargs.get('confirm_password')
username_exist = UserModel.get_one('users', username=username)
if username_exist:
return make_response(jsonify({'message': 'username already taken'}), 400)
if password == confirm_password:
if len(password) >= 8:
email_exists = UserModel.get_one('users', email=email)
if not email_exists:
if username == os.getenv('ADMIN'):
user = UserModel(username=username, email=email, password=password)
user.create_user()
fetch_admin = UserModel.get_one('users', username=username)
data = {'admin': True}
UserModel.update('users', id=fetch_admin[0], data=data)
user = UserModel.get_one('users', id=fetch_admin[0])
return jsonify({'admin': UserModel.user_details(user)})
user = UserModel(username=username, email=email, password=password)
user.create_user()
user = UserModel.get_one('users', username=username)
return make_response(jsonify({'message': 'successfully registered', 'user': UserModel.user_details(user)}), 201)
                return make_response(jsonify({'message': 'email already taken'}), 409)  # 409 Conflict: duplicate e-mail
return make_response(jsonify({'message': 'password should be atleast 8 characters'}), 400)
return make_response(jsonify({"message" : "password and confirm password should be identical"}), 400)
class AllUsers(Resource):
@admin_required
def get(self):
users = UserModel.get_all('users')
if not users:
return jsonify({'message': 'no users found yet'})
return make_response(jsonify({'all_users': [UserModel.user_details(user) for user in users]}))
class PromoteUser(Resource):
@admin_required
def put(self, user_id):
user = UserModel.get_one('users', id=user_id)
if not user:
return jsonify({'message': 'user not found'})
data = {'admin': True}
UserModel.update('users', id=user[0], data=data)
user = UserModel.get_one('users', id=user_id)
return jsonify({'user': UserModel.user_details(user)})
class Login(Resource):
def __init__(self):
self.reqparse = reqparse.RequestParser()
self.reqparse.add_argument(
'email',
required=True,
help='kindly provide a valid email address',
location=['form', 'json'],
type=inputs.regex(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)"))
self.reqparse.add_argument(
'password',
required=True,
trim=True,
help='kindly provide a valid password',
location=['form', 'json'])
super(Login, self).__init__()
def post(self):
kwargs = self.reqparse.parse_args()
email = kwargs.get('email')
password = kwargs.get('password')
user = UserModel.get_one('users', email=email)
if user is None:
return make_response(jsonify({'message': 'invalid email or password'}), 404)
if UserModel.validate_password(password=password, email=user[2]):
token = UserModel.generate_token(user)
return make_response(jsonify({'message': 'you are successfully logged in', 'token': token}), 200)
return make_response(jsonify({'message': 'invalid email or password'}), 401)
users_api = Blueprint('resources.users', __name__)
api = Api(users_api)
api.add_resource(SignUp, '/auth/signup', endpoint='signup')
api.add_resource(AllUsers, '/users')
api.add_resource(PromoteUser, '/users/<int:user_id>')
api.add_resource(Login, '/auth/login', endpoint='login')
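# --- Illustrative request (hedged: the '/api/v2' prefix under which this
# blueprint is registered is an assumption) ---
# $ curl -X POST http://localhost:5000/api/v2/auth/signup \
#     -d 'username=jane' -d 'email=jane@example.com' \
#     -d 'password=secret123' -d 'confirm_password=secret123'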
| 38.992806 | 132 | 0.587823 | 596 | 5,420 | 5.206376 | 0.20302 | 0.038672 | 0.052208 | 0.07251 | 0.56816 | 0.458911 | 0.398324 | 0.331292 | 0.262005 | 0.204641 | 0 | 0.010753 | 0.279336 | 5,420 | 138 | 133 | 39.275362 | 0.783666 | 0.010886 | 0 | 0.477477 | 0 | 0.018018 | 0.182725 | 0.018818 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054054 | false | 0.162162 | 0.045045 | 0 | 0.252252 | 0.018018 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3bdfdc921f29e9f07e8dacf34bfc075882611de3 | 1,368 | py | Python | syd/syd_stitch_image.py | OpenSyd/syd | 0f7478c7dedb623ab955e906c103cb64a7abb4b3 | [
"Apache-2.0"
] | 4 | 2015-07-29T19:10:35.000Z | 2020-11-17T07:48:41.000Z | syd/syd_stitch_image.py | OpenSyd/syd | 0f7478c7dedb623ab955e906c103cb64a7abb4b3 | [
"Apache-2.0"
] | 9 | 2015-05-14T09:07:37.000Z | 2022-03-15T10:13:59.000Z | syd/syd_stitch_image.py | OpenSyd/syd | 0f7478c7dedb623ab955e906c103cb64a7abb4b3 | [
"Apache-2.0"
] | 3 | 2016-09-07T06:26:52.000Z | 2016-10-04T12:29:03.000Z | #!/usr/bin/env python3
import itk
import syd
# -----------------------------------------------------------------------------
def stitch_image(db, image1, image2):
print('image1', image1)
print('image2', image2)
im1 = syd.read_itk_image(db, image1)
im2 = syd.read_itk_image(db, image2)
im = stitch_itk_image(im1, im2)
print('TODO: insert an image')
return im
# -----------------------------------------------------------------------------
def stitch_itk_image(im1, im2):
# FIXME -> to put in a external file itk related
# check image size and type
# FIXME
# create an image
ImageType = type(im1)
print(ImageType)
image = ImageType.New()
# get sizes
region1 = im1.GetLargestPossibleRegion()
region2 = im2.GetLargestPossibleRegion()
a1 = im1.TransformIndexToPhysicalPoint(region1.GetIndex())
b1 = im1.TransformIndexToPhysicalPoint(region1.GetSize())
a2 = im2.TransformIndexToPhysicalPoint(region2.GetIndex())
b2 = im2.TransformIndexToPhysicalPoint(region2.GetSize())
print(a1, b1)
print(a2, b2)
# create new size
za = min(a1[2], a2[2], b1[2], b2[2])
zb = max(a1[2], a2[2], b1[2], b2[2])
# swap if decreasing coordinates
    if a1[2] > b1[2]:
        zb, za = za, zb
a = a1
a[2] = za
b = b1
b[2] = zb
print(a, b)
return image
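# --- Illustrative usage (hedged; stitch_itk_image is still incomplete per the
# FIXMEs above, and the file names below are assumptions) ---
# im1 = itk.imread('volume_upper.mhd')
# im2 = itk.imread('volume_lower.mhd')
# out = stitch_itk_image(im1, im2)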
| 23.186441 | 79 | 0.557749 | 164 | 1,368 | 4.597561 | 0.359756 | 0.04244 | 0.015915 | 0.039788 | 0.129973 | 0.03183 | 0.03183 | 0.03183 | 0 | 0 | 0 | 0.05597 | 0.216374 | 1,368 | 58 | 80 | 23.586207 | 0.647388 | 0.240497 | 0 | 0 | 0 | 0 | 0.032132 | 0 | 0 | 0 | 0 | 0.017241 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.1875 | 0.21875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3be559b23f04ad4fbb4310964aaa62522258d721 | 8,529 | py | Python | mayan/apps/linking/api_views.py | darrenflexxu/Mayan-EDMS | 6707365bfacd137e625ddc1b990168012246fa07 | [
"Apache-2.0"
] | null | null | null | mayan/apps/linking/api_views.py | darrenflexxu/Mayan-EDMS | 6707365bfacd137e625ddc1b990168012246fa07 | [
"Apache-2.0"
] | 5 | 2021-03-19T22:59:52.000Z | 2022-03-12T00:13:16.000Z | mayan/apps/linking/api_views.py | Sumit-Kumar-Jha/mayan | 5b7ddeccf080b9e41cc1074c70e27dfe447be19f | [
"Apache-2.0"
] | 1 | 2020-07-29T21:03:27.000Z | 2020-07-29T21:03:27.000Z | from __future__ import absolute_import, unicode_literals
from django.shortcuts import get_object_or_404
from mayan.apps.acls.models import AccessControlList
from mayan.apps.documents.models import Document
from mayan.apps.documents.permissions import permission_document_view
from mayan.apps.rest_api import generics
from .models import SmartLink
from .permissions import (
permission_smart_link_create, permission_smart_link_delete,
permission_smart_link_edit, permission_smart_link_view
)
from .serializers import (
ResolvedSmartLinkDocumentSerializer, ResolvedSmartLinkSerializer,
SmartLinkConditionSerializer, SmartLinkSerializer,
WritableSmartLinkSerializer
)
class APIResolvedSmartLinkDocumentListView(generics.ListAPIView):
"""
get: Returns a list of the smart link documents that apply to the document.
"""
mayan_object_permissions = {'GET': (permission_document_view,)}
serializer_class = ResolvedSmartLinkDocumentSerializer
def get_document(self):
document = get_object_or_404(klass=Document, pk=self.kwargs['pk'])
AccessControlList.objects.check_access(
obj=document, permissions=(permission_document_view,),
user=self.request.user
)
return document
def get_smart_link(self):
smart_link = get_object_or_404(
klass=SmartLink.objects.get_for(document=self.get_document()),
pk=self.kwargs['smart_link_pk']
)
AccessControlList.objects.check_access(
obj=smart_link, permissions=(permission_smart_link_view,),
user=self.request.user
)
return smart_link
def get_serializer_context(self):
"""
Extra context provided to the serializer class.
"""
context = super(
APIResolvedSmartLinkDocumentListView, self
).get_serializer_context()
if self.kwargs:
context.update(
{
'document': self.get_document(),
'smart_link': self.get_smart_link(),
}
)
return context
def get_queryset(self):
return self.get_smart_link().get_linked_document_for(
document=self.get_document()
)
class APIResolvedSmartLinkView(generics.RetrieveAPIView):
"""
get: Return the details of the selected resolved smart link.
"""
lookup_url_kwarg = 'smart_link_pk'
mayan_object_permissions = {'GET': (permission_smart_link_view,)}
serializer_class = ResolvedSmartLinkSerializer
def get_document(self):
document = get_object_or_404(klass=Document, pk=self.kwargs['pk'])
AccessControlList.objects.check_access(
obj=document, permissions=(permission_document_view,),
user=self.request.user
)
return document
def get_serializer_context(self):
"""
Extra context provided to the serializer class.
"""
context = super(APIResolvedSmartLinkView, self).get_serializer_context()
if self.kwargs:
context.update(
{
'document': self.get_document(),
}
)
return context
def get_queryset(self):
return SmartLink.objects.get_for(document=self.get_document())
class APIResolvedSmartLinkListView(generics.ListAPIView):
"""
get: Returns a list of the smart links that apply to the document.
"""
mayan_object_permissions = {'GET': (permission_smart_link_view,)}
serializer_class = ResolvedSmartLinkSerializer
def get_document(self):
document = get_object_or_404(klass=Document, pk=self.kwargs['pk'])
AccessControlList.objects.check_access(
obj=document, permissions=(permission_document_view,),
user=self.request.user
)
return document
def get_serializer_context(self):
"""
Extra context provided to the serializer class.
"""
context = super(APIResolvedSmartLinkListView, self).get_serializer_context()
if self.kwargs:
context.update(
{
'document': self.get_document(),
}
)
return context
def get_queryset(self):
return SmartLink.objects.filter(
document_types=self.get_document().document_type
)
class APISmartLinkConditionListView(generics.ListCreateAPIView):
"""
get: Returns a list of all the smart link conditions.
post: Create a new smart link condition.
"""
serializer_class = SmartLinkConditionSerializer
def get_queryset(self):
return self.get_smart_link().conditions.all()
def get_serializer_context(self):
"""
Extra context provided to the serializer class.
"""
context = super(APISmartLinkConditionListView, self).get_serializer_context()
if self.kwargs:
context.update(
{
'smart_link': self.get_smart_link(),
}
)
return context
def get_smart_link(self):
if self.request.method == 'GET':
permission_required = permission_smart_link_view
else:
permission_required = permission_smart_link_edit
smart_link = get_object_or_404(klass=SmartLink, pk=self.kwargs['pk'])
AccessControlList.objects.check_access(
obj=smart_link, permissions=(permission_required,),
user=self.request.user
)
return smart_link
class APISmartLinkConditionView(generics.RetrieveUpdateDestroyAPIView):
"""
delete: Delete the selected smart link condition.
get: Return the details of the selected smart link condition.
patch: Edit the selected smart link condition.
put: Edit the selected smart link condition.
"""
lookup_url_kwarg = 'condition_pk'
serializer_class = SmartLinkConditionSerializer
def get_queryset(self):
return self.get_smart_link().conditions.all()
def get_serializer_context(self):
"""
Extra context provided to the serializer class.
"""
context = super(APISmartLinkConditionView, self).get_serializer_context()
if self.kwargs:
context.update(
{
'smart_link': self.get_smart_link(),
}
)
return context
def get_smart_link(self):
if self.request.method == 'GET':
permission_required = permission_smart_link_view
else:
permission_required = permission_smart_link_edit
smart_link = get_object_or_404(klass=SmartLink, pk=self.kwargs['pk'])
AccessControlList.objects.check_access(
obj=smart_link, permissions=(permission_required,),
user=self.request.user
)
return smart_link
class APISmartLinkListView(generics.ListCreateAPIView):
"""
get: Returns a list of all the smart links.
post: Create a new smart link.
"""
mayan_object_permissions = {'GET': (permission_smart_link_view,)}
mayan_view_permissions = {'POST': (permission_smart_link_create,)}
queryset = SmartLink.objects.all()
def get_serializer(self, *args, **kwargs):
if not self.request:
return None
return super(APISmartLinkListView, self).get_serializer(*args, **kwargs)
def get_serializer_class(self):
if self.request.method == 'GET':
return SmartLinkSerializer
else:
return WritableSmartLinkSerializer
class APISmartLinkView(generics.RetrieveUpdateDestroyAPIView):
"""
delete: Delete the selected smart link.
get: Return the details of the selected smart link.
patch: Edit the selected smart link.
put: Edit the selected smart link.
"""
mayan_object_permissions = {
'DELETE': (permission_smart_link_delete,),
'GET': (permission_smart_link_view,),
'PATCH': (permission_smart_link_edit,),
'PUT': (permission_smart_link_edit,)
}
queryset = SmartLink.objects.all()
def get_serializer(self, *args, **kwargs):
if not self.request:
return None
return super(APISmartLinkView, self).get_serializer(*args, **kwargs)
def get_serializer_class(self):
if self.request.method == 'GET':
return SmartLinkSerializer
else:
return WritableSmartLinkSerializer
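# --- Hedged example (the mount point below is an assumption; the actual routes
# that expose these API views live in this app's urls.py) ---
# import requests
# resp = requests.get('http://127.0.0.1/api/smart_links/', auth=('admin', 'password'))
# for entry in resp.json().get('results', []):
#     print(entry)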
| 30.569892 | 85 | 0.653183 | 866 | 8,529 | 6.193995 | 0.117783 | 0.088926 | 0.060216 | 0.034303 | 0.746085 | 0.731357 | 0.691089 | 0.678784 | 0.625466 | 0.587435 | 0 | 0.003349 | 0.264861 | 8,529 | 278 | 86 | 30.679856 | 0.852153 | 0.115019 | 0 | 0.568182 | 0 | 0 | 0.020151 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113636 | false | 0 | 0.051136 | 0.028409 | 0.426136 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3becb3cb8a9347c5c892e9c12331df179e27be0f | 406 | py | Python | game/migrations/0011_onlinegame_playersready.py | dimamelnik22/drawfulru | da2d21ef4c0b6776fc7c1059dbdf617f591c4ef8 | [
"Apache-2.0"
] | null | null | null | game/migrations/0011_onlinegame_playersready.py | dimamelnik22/drawfulru | da2d21ef4c0b6776fc7c1059dbdf617f591c4ef8 | [
"Apache-2.0"
] | 7 | 2020-06-05T20:14:47.000Z | 2021-09-22T18:18:06.000Z | game/migrations/0011_onlinegame_playersready.py | dimamelnik22/drawfulru | da2d21ef4c0b6776fc7c1059dbdf617f591c4ef8 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.0 on 2019-12-23 06:19
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('game', '0010_auto_20191223_0818'),
]
operations = [
migrations.AddField(
model_name='onlinegame',
name='playersready',
field=models.IntegerField(default=0),
),
]
| 21.368421 | 50 | 0.576355 | 40 | 406 | 5.75 | 0.825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111913 | 0.317734 | 406 | 18 | 51 | 22.555556 | 0.718412 | 0.105911 | 0 | 0 | 1 | 0 | 0.142857 | 0.067055 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3bf70a1a9f2bab5e2d13cf95f5bb6e7cbc23fec9 | 3,582 | py | Python | examples/pipeline.py | nicolay-r/AREk | 19c39ec0dc9a17464cade03b9c4da0c6d1d21191 | [
"MIT"
] | null | null | null | examples/pipeline.py | nicolay-r/AREk | 19c39ec0dc9a17464cade03b9c4da0c6d1d21191 | [
"MIT"
] | null | null | null | examples/pipeline.py | nicolay-r/AREk | 19c39ec0dc9a17464cade03b9c4da0c6d1d21191 | [
"MIT"
] | null | null | null | from arekit.common.data.input.providers.label.multiple import MultipleLabelProvider
from arekit.common.data.row_ids.multiple import MultipleIDProvider
from arekit.common.data.storages.base import BaseRowsStorage
from arekit.common.data.views.samples import BaseSampleStorageView
from arekit.common.experiment.data_type import DataType
from arekit.common.labels.scaler import BaseLabelScaler
from arekit.contrib.experiment_rusentrel.labels.scalers.three import ThreeLabelScaler
from arekit.contrib.networks.context.architectures.pcnn import PiecewiseCNN
from arekit.contrib.networks.context.configurations.cnn import CNNConfig
from arekit.contrib.networks.core.ctx_inference import InferenceContext
from arekit.contrib.networks.core.feeding.bags.collection.single import SingleBagsCollection
from arekit.contrib.networks.core.input.helper_embedding import EmbeddingHelper
from arekit.contrib.networks.core.model import BaseTensorflowModel
from arekit.contrib.networks.core.model_io import NeuralNetworkModelIO
from arekit.contrib.networks.core.predict.provider import BasePredictProvider
from arekit.contrib.networks.core.predict.tsv_writer import TsvPredictWriter
from arekit.contrib.networks.shapes import NetworkInputShapes
from examples.input import EXAMPLES
from examples.repository import pipeline_serialize
def pipeline_infer(labels_scaler):
assert(isinstance(labels_scaler, BaseLabelScaler))
# Step 4. Deserialize data
network = PiecewiseCNN()
config = CNNConfig()
config.set_term_embedding(EmbeddingHelper.load_vocab("embedding.txt"))
inference_ctx = InferenceContext.create_empty()
inference_ctx.initialize(
dtypes=[DataType.Test],
create_samples_view_func=lambda data_type: BaseSampleStorageView(
storage=BaseRowsStorage.from_tsv("samples.txt"),
row_ids_provider=MultipleIDProvider()),
has_model_predefined_state=True,
vocab=EmbeddingHelper.load_vocab("vocab.txt"),
labels_count=3,
input_shapes=NetworkInputShapes(iter_pairs=[
(NetworkInputShapes.FRAMES_PER_CONTEXT, config.FramesPerContext),
(NetworkInputShapes.TERMS_PER_CONTEXT, config.TermsPerContext),
(NetworkInputShapes.SYNONYMS_PER_CONTEXT, config.SynonymsPerContext),
]),
bag_size=config.BagSize)
# Step 5. Model preparation.
model = BaseTensorflowModel(
nn_io=NeuralNetworkModelIO(
target_dir=".model",
full_model_name="PCNN",
model_name_tag="_"),
network=network,
config=config,
inference_ctx=inference_ctx,
        bags_collection_type=SingleBagsCollection,  # a single example is used as input
)
model.predict()
# Step 6. Gather annotated contexts onto document level.
labeled_samples = model.get_labeled_samples_collection(data_type=DataType.Test)
predict_provider = BasePredictProvider()
# TODO. For now it is limited to tsv.
with TsvPredictWriter(filepath="out.txt") as out:
title, contents_it = predict_provider.provide(
sample_id_with_uint_labels_iter=labeled_samples.iter_non_duplicated_labeled_sample_row_ids(),
labels_scaler=labels_scaler)
out.write(title=title,
contents_it=contents_it)
if __name__ == '__main__':
text = EXAMPLES["simple"]
labels_scaler = ThreeLabelScaler()
label_provider = MultipleLabelProvider(label_scaler=labels_scaler)
pipeline_serialize(sentences_text_list=text, label_provider=label_provider)
pipeline_infer(labels_scaler)
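# --- Illustrative pipeline order (hedged): the artifact names "samples.txt",
# "vocab.txt" and "embedding.txt" come from the code above, and a pretrained
# checkpoint is expected under ".model/"; serialize arguments are elided ---
# pipeline_serialize(...)          # writes the serialized inputs
# pipeline_infer(labels_scaler)    # restores the network and writes "out.txt"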
| 40.704545 | 105 | 0.769961 | 397 | 3,582 | 6.715365 | 0.390428 | 0.063766 | 0.070143 | 0.093773 | 0.109152 | 0.052513 | 0 | 0 | 0 | 0 | 0 | 0.001652 | 0.155221 | 3,582 | 87 | 106 | 41.172414 | 0.879379 | 0.047739 | 0 | 0 | 0 | 0 | 0.01909 | 0 | 0 | 0 | 0 | 0.011494 | 0.015625 | 1 | 0.015625 | false | 0 | 0.296875 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3bf87ad7597d41df2c5bff20fab72d6e34dbefa1 | 2,443 | py | Python | src/PointClasses/Bisector.py | Lovely-XPP/tkzgeom | bf68e139dc05f759542d6611f4dc07f4f2727b92 | [
"MIT"
] | 41 | 2021-11-24T05:54:08.000Z | 2022-03-26T10:19:30.000Z | src/PointClasses/Bisector.py | Lovely-XPP/tkzgeom | bf68e139dc05f759542d6611f4dc07f4f2727b92 | [
"MIT"
] | 1 | 2022-02-28T04:34:51.000Z | 2022-03-07T10:49:27.000Z | src/PointClasses/Bisector.py | Lovely-XPP/tkzgeom | bf68e139dc05f759542d6611f4dc07f4f2727b92 | [
"MIT"
] | 10 | 2021-11-24T07:35:17.000Z | 2022-03-25T18:42:14.000Z | from Point import Point
import Constant as c
from GeometryMath import bisector_point
class Bisector(Point):
def __init__(self, item):
"""Construct Bisector."""
Point.__init__(self, item)
self.item["sub_type"] = c.Point.Definition.BISECTOR
def tikzify(self):
return '\\tkzDefLine[bisector](%s,%s,%s)\\tkzGetPoint{%s}' % (self.item["definition"]["A"],
self.item["definition"]["B"],
self.item["definition"]["C"],
self.get_id())
def recompute_canvas(self, items, window, width, height):
A = items[self.depends_on()[0]].get_canvas_coordinates()
B = items[self.depends_on()[1]].get_canvas_coordinates()
C = items[self.depends_on()[2]].get_canvas_coordinates()
self.set_canvas_coordinates(*bisector_point(A, B, C))
def __str__(self):
return "Bisector point (%s) of angle %s"\
% (self.item["id"], self.item["definition"]["A"]+self.item["definition"]["B"]+self.item["definition"]["C"])
def definition_builder(self, data, items=None):
if len(data) == 3:
return dict(zip(["A", "B", "C"], data))
def parse_into_definition(self, arguments, items):
# arguments length condition
if len(arguments) != 3:
return None
# all arguments are members of the regular expression for argument name
if not all(map(lambda x: self.name_pattern(x), arguments)):
return None
# all arguments are items that already exist
if not all(map(lambda x: x in items, arguments)):
return None
# the type of all arguments is of a certain type
if not all(map(lambda x: items[x].item["type"] == 'point', arguments)):
return None
# self-reference condition (self-reference is not permitted)
if self.get_id() in arguments:
return None
# condition for cross reference
for id in arguments:
deep_depends = items[id].deep_depends_on(items)
if self.get_id() in deep_depends:
return None
return self.definition_builder(arguments)
@staticmethod
def static_patterns():
return ["ppp"]
def patterns(self):
return ["ppp"]
| 40.04918 | 119 | 0.56447 | 287 | 2,443 | 4.665505 | 0.289199 | 0.059746 | 0.080657 | 0.040329 | 0.182226 | 0.125467 | 0.085138 | 0.085138 | 0.085138 | 0.085138 | 0 | 0.002989 | 0.315186 | 2,443 | 60 | 120 | 40.716667 | 0.79737 | 0.121163 | 0 | 0.181818 | 0 | 0 | 0.081461 | 0.02294 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.068182 | 0.090909 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3bfc6525bf99e8218a93653bc016cb8baae15ea1 | 3,803 | py | Python | networkx/classes/tests/test_digraph_historical.py | KyleBenson/networkx | 26ccb4a380ba0e5304d7bbff53eb9859c6e4c93a | [
"BSD-3-Clause"
] | null | null | null | networkx/classes/tests/test_digraph_historical.py | KyleBenson/networkx | 26ccb4a380ba0e5304d7bbff53eb9859c6e4c93a | [
"BSD-3-Clause"
] | null | null | null | networkx/classes/tests/test_digraph_historical.py | KyleBenson/networkx | 26ccb4a380ba0e5304d7bbff53eb9859c6e4c93a | [
"BSD-3-Clause"
] | 1 | 2019-01-30T17:57:36.000Z | 2019-01-30T17:57:36.000Z | #!/usr/bin/env python
"""Original NetworkX graph tests"""
from nose.tools import *
import networkx
import networkx as nx
from networkx.testing.utils import *
from historical_tests import HistoricalTests
class TestDiGraphHistorical(HistoricalTests):
def setUp(self):
HistoricalTests.setUp(self)
self.G=nx.DiGraph
def test_in_degree(self):
G=self.G()
G.add_nodes_from('GJK')
G.add_edges_from([('A', 'B'), ('A', 'C'), ('B', 'D'),
('B', 'C'), ('C', 'D')])
assert_equal(sorted(d for n, d in G.in_degree()),[0, 0, 0, 0, 1, 2, 2])
assert_equal(dict(G.in_degree()),
{'A': 0, 'C': 2, 'B': 1, 'D': 2, 'G': 0, 'K': 0, 'J': 0})
def test_out_degree(self):
G=self.G()
G.add_nodes_from('GJK')
G.add_edges_from([('A', 'B'), ('A', 'C'), ('B', 'D'),
('B', 'C'), ('C', 'D')])
assert_equal(sorted([v for k,v in G.in_degree()]),
[0, 0, 0, 0, 1, 2, 2])
assert_equal(dict(G.out_degree()),
{'A': 2, 'C': 1, 'B': 2, 'D': 0, 'G': 0, 'K': 0, 'J': 0})
def test_degree_digraph(self):
H=nx.DiGraph()
H.add_edges_from([(1,24),(1,2)])
assert_equal(sorted(d for n, d in H.in_degree([1,24])), [0, 1])
assert_equal(sorted(d for n, d in H.out_degree([1,24])), [0, 2])
assert_equal(sorted(d for n, d in H.degree([1,24])), [1, 2])
def test_neighbors(self):
G=self.G()
G.add_nodes_from('GJK')
G.add_edges_from([('A', 'B'), ('A', 'C'), ('B', 'D'),
('B', 'C'), ('C', 'D')])
assert_equal(sorted(G.neighbors('C')),['D'])
assert_equal(sorted(G['C']),['D'])
assert_equal(sorted(G.neighbors('A')),['B', 'C'])
assert_raises(nx.NetworkXError,G.neighbors,'j')
assert_raises(nx.NetworkXError,G.neighbors,'j')
def test_successors(self):
G=self.G()
G.add_nodes_from('GJK')
G.add_edges_from([('A', 'B'), ('A', 'C'), ('B', 'D'),
('B', 'C'), ('C', 'D')])
assert_equal(sorted(G.successors('A')),['B', 'C'])
assert_equal(sorted(G.successors('A')),['B', 'C'])
assert_equal(sorted(G.successors('G')),[])
assert_equal(sorted(G.successors('D')),[])
assert_equal(sorted(G.successors('G')),[])
assert_raises(nx.NetworkXError,G.successors,'j')
assert_raises(nx.NetworkXError,G.successors,'j')
def test_predecessors(self):
G=self.G()
G.add_nodes_from('GJK')
G.add_edges_from([('A', 'B'), ('A', 'C'), ('B', 'D'),
('B', 'C'), ('C', 'D')])
assert_equal(sorted(G.predecessors('C')),['A', 'B'])
assert_equal(sorted(G.predecessors('C')),['A', 'B'])
assert_equal(sorted(G.predecessors('G')),[])
assert_equal(sorted(G.predecessors('A')),[])
assert_equal(sorted(G.predecessors('G')),[])
assert_equal(sorted(G.predecessors('A')),[])
assert_equal(sorted(G.successors('D')),[])
assert_raises(nx.NetworkXError,G.predecessors,'j')
assert_raises(nx.NetworkXError,G.predecessors,'j')
def test_reverse(self):
G=nx.complete_graph(10)
H=G.to_directed()
HR=H.reverse()
assert_true(nx.is_isomorphic(H,HR))
assert_equal(sorted(H.edges()),sorted(HR.edges()))
def test_reverse2(self):
H=nx.DiGraph()
        for u in range(0, 5):
            H.add_edge(u, u + 1)
HR=H.reverse()
for u in range(0,5):
assert_true(HR.has_edge(u+1,u))
def test_reverse3(self):
H=nx.DiGraph()
H.add_nodes_from([1,2,3,4])
HR=H.reverse()
assert_equal(sorted(HR.nodes()),[1, 2, 3, 4])
| 34.889908 | 79 | 0.519853 | 547 | 3,803 | 3.468007 | 0.137112 | 0.139167 | 0.197153 | 0.14233 | 0.639431 | 0.639431 | 0.596205 | 0.414866 | 0.397997 | 0.384291 | 0 | 0.023454 | 0.260058 | 3,803 | 108 | 80 | 35.212963 | 0.650675 | 0.013148 | 0 | 0.517647 | 0 | 0 | 0.029899 | 0 | 0 | 0 | 0 | 0 | 0.376471 | 1 | 0.117647 | false | 0 | 0.058824 | 0 | 0.188235 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ce017638896c04f18c2cb7532f41f9850780cdae | 28,484 | py | Python | nimbleclient/v1/api/groups.py | prachiruparelia-hpe/nimble-python-sdk | a3e99d89e647291caf7936300ae853d21d94d6e5 | [
"Apache-2.0"
] | 1 | 2020-05-28T19:48:59.000Z | 2020-05-28T19:48:59.000Z | nimbleclient/v1/api/groups.py | prachiruparelia-hpe/nimble-python-sdk | a3e99d89e647291caf7936300ae853d21d94d6e5 | [
"Apache-2.0"
] | null | null | null | nimbleclient/v1/api/groups.py | prachiruparelia-hpe/nimble-python-sdk | a3e99d89e647291caf7936300ae853d21d94d6e5 | [
"Apache-2.0"
] | null | null | null | #
# © Copyright 2020 Hewlett Packard Enterprise Development LP
#
# This file was auto-generated by the Python SDK generator; DO NOT EDIT.
#
from ...resource import Resource, Collection
from ...exceptions import NimOSAPIOperationUnsupported
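# --- Illustrative usage (hedged: the client construction below follows the
# SDK's usual pattern, but the exact import path and credentials are
# assumptions, not taken from this file) ---
# from nimbleclient import NimOSClient
# api = NimOSClient('array.example.com', 'admin', 'admin_password')
# group = api.groups.get(name='group-array1')
# print(group.attrs.get('usable_capacity_bytes'))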
class Group(Resource):
"""Group is a collection of arrays operating together organized into storage pools.
# Parameters
id : Identifier of the group.
name : Name of the group.
smtp_server : Hostname or IP Address of SMTP Server.
smtp_port : Port number of SMTP Server.
smtp_auth_enabled : Whether SMTP Server requires authentication.
smtp_auth_username : Username to authenticate with SMTP Server.
smtp_auth_password : Password to authenticate with SMTP Server.
smtp_encrypt_type : Level of encryption for SMTP. Requires use of SMTP Authentication if encryption is enabled.
autosupport_enabled : Whether to send autosupport.
allow_analytics_gui : Specify whether to allow HPE Nimble Storage to use Google Analytics in the GUI. HPE Nimble Storage uses Google Analytics to gather
data related to GUI usage. The data gathered is used to evaluate and improve the product.
allow_support_tunnel : Whether to allow support tunnel.
proxy_server : Hostname or IP Address of HTTP Proxy Server. Setting this attribute to an empty string will unset all proxy settings.
proxy_port : Proxy Port of HTTP Proxy Server.
proxy_username : Username to authenticate with HTTP Proxy Server.
proxy_password : Password to authenticate with HTTP Proxy Server.
alert_to_email_addrs : Comma-separated list of email addresss to receive emails.
send_alert_to_support : Whether to send alert to Support.
alert_from_email_addr : From email address to use while sending emails.
alert_min_level : Minimum level of alert to be notified.
isns_enabled : Whether iSNS is enabled.
isns_server : Hostname or IP Address of iSNS Server.
isns_port : Port number for iSNS Server.
snmp_trap_enabled : Whether to enable SNMP traps.
snmp_trap_host : Hostname or IP Address to send SNMP traps.
snmp_trap_port : Port number of SNMP trap host.
snmp_get_enabled : Whether to accept SNMP get commands.
snmp_community : Community string to be used with SNMP.
snmp_get_port : Port number to which SNMP get requests should be sent.
snmp_sys_contact : Name of the SNMP administrator.
snmp_sys_location : Location of the group.
domain_name : Domain name for this group.
dns_servers : IP addresses for this group's dns servers.
ntp_server : Either IP address or hostname of the NTP server for this group.
timezone : Timezone in which this group is located.
user_inactivity_timeout : The amount of time in seconds that the user session is inactive before timing out.
syslogd_enabled : Is syslogd enabled on this system.
syslogd_server : Hostname of the syslogd server.
syslogd_port : Port number for syslogd server.
syslogd_servers : Hostname and/or port of the syslogd servers.
vvol_enabled : Are vvols enabled on this group.
iscsi_enabled : Whether iSCSI is enabled on this group.
fc_enabled : Whether FC is enabled on this group.
unique_name_enabled : Are new volume and volume collection names transformed on this group.
access_protocol_list : Protocol used to access this group.
group_target_enabled : Is group_target enabled on this group.
default_iscsi_target_scope : Newly created volumes are exported under iSCSI Group Target or iSCSI Volume Target.
tdz_enabled : Is Target Driven Zoning (TDZ) enabled on this group.
tdz_prefix : Target Driven Zoning (TDZ) prefix for peer zones created by TDZ.
group_target_name : Iscsi target name for this group.
default_volume_reserve : Amount of space to reserve for a volume as a percentage of volume size.
default_volume_warn_level : Default threshold for volume space usage as a percentage of volume size above which an alert is raised.
default_volume_limit : Default limit for a volume space usage as a percentage of volume size. Volume will be taken offline/made non-writable on exceeding its
limit.
default_snap_reserve : Amount of space to reserve for snapshots of a volume as a percentage of volume size.
default_snap_warn_level : Default threshold for snapshot space usage of a volume as a percentage of volume size above which an alert is raised.
default_snap_limit : This attribute is deprecated. The array does not limit a volume's snapshot space usage. The attribute is ignored on input and returns
max int64 value on output.
default_snap_limit_percent : This attribute is deprecated. The array does not limit a volume's snapshot space usage. The attribute is ignored on input and returns
-1 on output.
alarms_enabled : Whether alarm feature is enabled.
vss_validation_timeout : The amount of time in seconds to validate Microsoft VSS application synchronization before timing out.
auto_switchover_enabled : Whether automatic switchover of Group management services feature is enabled.
auto_switchover_messages : List of validation messages for automatic switchover of Group Management. This will be empty when there are no conflicts found.
merge_state : State of group merge.
merge_group_name : Group that we're being merged with.
tlsv1_enabled : Enable or disable TLSv1.0 and TLSv1.1.
cc_mode_enabled : Enable or disable Common Criteria mode.
group_snapshot_ttl : Snapshot Time-to-live(TTL) configured at group level for automatic deletion of unmanaged snapshots. Value 0 indicates unlimited TTL.
autoclean_unmanaged_snapshots_ttl_unit : Unit for unmanaged snapshot time to live.
autoclean_unmanaged_snapshots_enabled : Whether autoclean unmanaged snapshots feature is enabled.
leader_array_name : Name of the array where the group Management Service is running.
leader_array_serial : Serial number of the array where the group Management Service is running.
management_service_backup_array_name : Name of the array where backup the group Management Service is running.
management_service_backup_status : HA status of the group Management Service.
failover_mode : Failover mode of the group Management Service.
witness_status : Witness status from group Management Service array and group Management Service backup array.
member_list : Members of this group.
compressed_vol_usage_bytes : Compressed usage of volumes in the group.
compressed_snap_usage_bytes : Compressed usage of snapshots in the group.
uncompressed_vol_usage_bytes : Uncompressed usage of volumes in the group.
uncompressed_snap_usage_bytes : Uncompressed usage of snapshots in the group.
usable_capacity_bytes : Usable capacity bytes of the group.
usage : Used space of the group in bytes.
raw_capacity : Total capacity of the group.
usable_cache_capacity : Usable cache capacity of the group.
raw_cache_capacity : Total cache capacity of the group.
    snap_usage_populated : Total snapshot usage as if each snapshot were a deep copy of the volume.
pending_deletes : Usage for blocks that are not yet deleted.
num_connections : Number of connections to the group.
vol_compression_ratio : Compression ratio of volumes in the group.
snap_compression_ratio : Compression ratio of snapshots in the group.
compression_ratio : Compression savings for the group expressed as ratio.
dedupe_ratio : Dedupe savings for the group expressed as ratio.
clone_ratio : Clone savings for the group expressed as ratio.
vol_thin_provisioning_ratio : Thin provisioning savings for volumes in the group expressed as ratio.
savings_ratio : Overall savings in the group expressed as ratio.
data_reduction_ratio : Space savings in the group that does not include thin-provisioning savings expressed as ratio.
savings_dedupe : Space usage savings in the group due to deduplication.
savings_compression : Space usage savings in the group due to compression.
savings_clone : Space usage savings in the group due to cloning of volumes.
savings_vol_thin_provisioning : Space usage savings in the group due to thin provisioning of volumes.
savings_data_reduction : Space usage savings in the group that does not include thin-provisioning savings.
savings : Overall space usage savings in the group.
free_space : Free space of the pool in bytes.
unused_reserve_bytes : Reserved space that is not utilized.
    usage_valid : Indicates whether the usage of the group is valid.
space_info_valid : Is space info for this group valid.
version_current : Version of software running on the group.
version_target : Desired software version for the group.
version_rollback : Rollback software version for the group.
update_state : Group update state.
update_start_time : Start time of last update.
update_end_time : End time of last update.
update_array_names : Arrays in the group undergoing update.
update_progress_msg : Group update detailed progress message.
update_error_code : If the software update has failed, this indicates the error code corresponding to the failure.
    update_downloading : Is the software update package currently downloading.
update_download_error_code : If the software download has failed, this indicates the error code corresponding to the failure.
    update_download_start_time : Start time of the last software download.
    update_download_end_time : End time of the last software download.
    iscsi_automatic_connection_method : Is iSCSI reconnection automatic.
    iscsi_connection_rebalancing : Does iSCSI automatically rebalance connections.
repl_throttled_bandwidth : Current bandwidth throttle for replication, expressed either as megabits per second or as -1 to indicate that there is no throttle.
repl_throttled_bandwidth_kbps : Current bandwidth throttle for replication, expressed either as kilobits per second or as -1 to indicate that there is no throttle.
repl_throttle_list : All the replication bandwidth limits on the system.
volume_migration_status : Status of data migration activity related to volumes being relocated to different pools.
array_unassign_migration_status : Data migration status for arrays being removed from their pool.
data_rebalance_status : Status of data rebalancing operations for pools in the group.
scsi_vendor_id : SCSI vendor ID.
encryption_config : How encryption is configured for this group.
last_login : Time and user of last login to this group.
num_snaps : Number of snapshots in the group.
num_snapcolls : Number of snapshot collections in this group.
date : Unix epoch time local to the group.
login_banner_message : The message for the login banner that is displayed during user login activity.
    login_banner_after_auth : Whether the banner is displayed before or after the user credentials are prompted.
    login_banner_reset : This will reset the banner to the version of the installed NOS. When login_banner_after_auth is specified, login_banner_reset cannot
    be set to true.
snap_retn_meter_high : Threshold for considering a volume as high retention.
snap_retn_meter_very_high : Threshold for considering a volume as very high retention.
"""
def reboot(self, **kwargs):
"""Reboot all arrays in the group.
# Parameters
id : ID of the group to reboot.
        job_timeout : Job timeout in seconds.
"""
return self._collection.reboot(
self.id,
**kwargs
)
def halt(self, **kwargs):
"""Halt all arrays in the group.
# Parameters
id : ID of the group to halt.
force : Halt remaining arrays when one or more is unreachable.
        job_timeout : Job timeout in seconds.
"""
return self._collection.halt(
self.id,
**kwargs
)
def test_alert(self, level, **kwargs):
"""Generate a test alert.
# Parameters
id : ID of the group.
level : Level of the test alert.
"""
return self._collection.test_alert(
self.id,
level,
**kwargs
)
def software_update_precheck(self, **kwargs):
"""Run software update precheck.
# Parameters
id : ID of the group.
skip_precheck_mask : Flag to allow skipping certain types of prechecks.
"""
return self._collection.software_update_precheck(
self.id,
**kwargs
)
def software_update_start(self, **kwargs):
"""Update the group software to the downloaded version.
# Parameters
id : ID of the group.
skip_start_check_mask : Flag to allow skipping certain types of checks.
"""
return self._collection.software_update_start(
self.id,
**kwargs
)
def software_download(self, version, **kwargs):
"""Download software update package.
# Parameters
id : ID of the group.
version : Version string to download.
force : Flag to force download.
"""
return self._collection.software_download(
self.id,
version,
**kwargs
)
def software_cancel_download(self, **kwargs):
"""Cancel ongoing download of software.
# Parameters
id : ID of the group.
"""
return self._collection.software_cancel_download(
self.id,
**kwargs
)
def software_update_resume(self, **kwargs):
"""Resume stopped software update.
# Parameters
id : ID of the group.
"""
return self._collection.software_update_resume(
self.id,
**kwargs
)
def get_group_discovered_list(self, **kwargs):
"""Get list of discovered groups with arrays that are initialized.
# Parameters
id : ID of the group.
group_name : Name of the group requested to be discovered.
"""
return self._collection.get_group_discovered_list(
self.id,
**kwargs
)
def validate_merge(self, src_group_ip, src_group_name, src_password, src_username, **kwargs):
"""Perform group merge validation.
# Parameters
id : ID of the group.
src_group_name : Name of the source group.
src_group_ip : IP address of the source group.
src_username : Username of the source group.
src_password : Password of the source group.
src_passphrase : Source group encryption passphrase.
skip_secondary_mgmt_ip : Skip check for secondary management IP address.
"""
return self._collection.validate_merge(
self.id,
src_group_ip,
src_group_name,
src_password,
src_username,
**kwargs
)
def merge(self, src_group_ip, src_group_name, src_password, src_username, **kwargs):
"""Perform group merge with the specified group.
# Parameters
id : ID of the group.
src_group_name : Name of the source group.
src_group_ip : IP address of the source group.
src_username : Username of the source group.
src_password : Password of the source group.
src_passphrase : Source group encryption passphrase.
force : Ignore warnings and forcibly merge specified group with this group.
skip_secondary_mgmt_ip : Skip check for secondary management IP address.
job_timeout : Job timeout in seconds.
"""
return self._collection.merge(
self.id,
src_group_ip,
src_group_name,
src_password,
src_username,
**kwargs
)
def get_eula(self, **kwargs):
"""Get URL to download EULA contents.
# Parameters
id : ID of the group.
locale : Locale of EULA contents. Default is en.
format : Format of EULA contents. Default is HTML.
phase : Phase of EULA contents. Default is setup.
force : Flag to force EULA.
"""
return self._collection.get_eula(
self.id,
**kwargs
)
def check_migrate(self, **kwargs):
"""Check if the group Management Service can be migrated to the group Management Service backup array.
# Parameters
id : ID of the group.
"""
return self._collection.check_migrate(
self.id,
**kwargs
)
def migrate(self, **kwargs):
"""Migrate the group Management Service to the current group Management Service backup array.
# Parameters
id : ID of the group.
"""
return self._collection.migrate(
self.id,
**kwargs
)
def get_timezone_list(self, **kwargs):
"""Get list of group timezones.
# Parameters
id : ID of the group.
"""
return self._collection.get_timezone_list(
self.id,
**kwargs
)
def create(self, **kwargs):
raise NimOSAPIOperationUnsupported("create operation not supported")
def delete(self, **kwargs):
raise NimOSAPIOperationUnsupported("delete operation not supported")
class GroupList(Collection):
resource = Group
resource_type = "groups"
def reboot(self, id, **kwargs):
"""Reboot all arrays in the group.
# Parameters
id : ID of the group to reboot.
        job_timeout : Job timeout in seconds.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'reboot',
id=id,
**kwargs
)
def halt(self, id, **kwargs):
"""Halt all arrays in the group.
# Parameters
id : ID of the group to halt.
force : Halt remaining arrays when one or more is unreachable.
        job_timeout : Job timeout in seconds.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'halt',
id=id,
**kwargs
)
def test_alert(self, id, level, **kwargs):
"""Generate a test alert.
# Parameters
id : ID of the group.
level : Level of the test alert.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'test_alert',
id=id,
level=level,
**kwargs
)
def software_update_precheck(self, id, **kwargs):
"""Run software update precheck.
# Parameters
id : ID of the group.
skip_precheck_mask : Flag to allow skipping certain types of prechecks.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'software_update_precheck',
id=id,
**kwargs
)
def software_update_start(self, id, **kwargs):
"""Update the group software to the downloaded version.
# Parameters
id : ID of the group.
skip_start_check_mask : Flag to allow skipping certain types of checks.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'software_update_start',
id=id,
**kwargs
)
def software_download(self, id, version, **kwargs):
"""Download software update package.
# Parameters
id : ID of the group.
version : Version string to download.
force : Flag to force download.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'software_download',
id=id,
version=version,
**kwargs
)
def software_cancel_download(self, id, **kwargs):
"""Cancel ongoing download of software.
# Parameters
id : ID of the group.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'software_cancel_download',
id=id,
**kwargs
)
def software_update_resume(self, id, **kwargs):
"""Resume stopped software update.
# Parameters
id : ID of the group.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'software_update_resume',
id=id,
**kwargs
)
def get_group_discovered_list(self, id, **kwargs):
"""Get list of discovered groups with arrays that are initialized.
# Parameters
id : ID of the group.
group_name : Name of the group requested to be discovered.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'get_group_discovered_list',
id=id,
**kwargs
)
def validate_merge(self, id, src_group_ip, src_group_name, src_password, src_username, **kwargs):
"""Perform group merge validation.
# Parameters
id : ID of the group.
src_group_name : Name of the source group.
src_group_ip : IP address of the source group.
src_username : Username of the source group.
src_password : Password of the source group.
src_passphrase : Source group encryption passphrase.
skip_secondary_mgmt_ip : Skip check for secondary management IP address.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'validate_merge',
id=id,
src_group_ip=src_group_ip,
src_group_name=src_group_name,
src_password=src_password,
src_username=src_username,
**kwargs
)
def merge(self, id, src_group_ip, src_group_name, src_password, src_username, **kwargs):
"""Perform group merge with the specified group.
# Parameters
id : ID of the group.
src_group_name : Name of the source group.
src_group_ip : IP address of the source group.
src_username : Username of the source group.
src_password : Password of the source group.
src_passphrase : Source group encryption passphrase.
force : Ignore warnings and forcibly merge specified group with this group.
skip_secondary_mgmt_ip : Skip check for secondary management IP address.
job_timeout : Job timeout in seconds.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'merge',
id=id,
src_group_ip=src_group_ip,
src_group_name=src_group_name,
src_password=src_password,
src_username=src_username,
**kwargs
)
def get_eula(self, id, **kwargs):
"""Get URL to download EULA contents.
# Parameters
id : ID of the group.
locale : Locale of EULA contents. Default is en.
format : Format of EULA contents. Default is HTML.
phase : Phase of EULA contents. Default is setup.
force : Flag to force EULA.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'get_eula',
id=id,
**kwargs
)
def check_migrate(self, id, **kwargs):
"""Check if the group Management Service can be migrated to the group Management Service backup array.
# Parameters
id : ID of the group.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'check_migrate',
id=id,
**kwargs
)
def migrate(self, id, **kwargs):
"""Migrate the group Management Service to the current group Management Service backup array.
# Parameters
id : ID of the group.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'migrate',
id=id,
**kwargs
)
def get_timezone_list(self, id, **kwargs):
"""Get list of group timezones.
# Parameters
id : ID of the group.
"""
return self._client.perform_resource_action(
self.resource_type,
id,
'get_timezone_list',
id=id,
**kwargs
)
def create(self, **kwargs):
raise NimOSAPIOperationUnsupported("create operation not supported")
def delete(self, **kwargs):
raise NimOSAPIOperationUnsupported("delete operation not supported")
| 44.5759 | 179 | 0.584082 | 3,172 | 28,484 | 5.084174 | 0.144388 | 0.041173 | 0.026043 | 0.029764 | 0.621132 | 0.568178 | 0.520804 | 0.470267 | 0.446456 | 0.429962 | 0 | 0.000833 | 0.367645 | 28,484 | 638 | 180 | 44.645768 | 0.894459 | 0.674168 | 0 | 0.563319 | 1 | 0 | 0.046565 | 0.015748 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148472 | false | 0.034935 | 0.008734 | 0 | 0.305677 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ce0ca8f2fe98f3ab332870eee82d60c59dac39aa | 719 | py | Python | setup.py | DewMaple/toolkit | a1f04d1b53420c64e15f684c83acb54276031346 | [
"BSD-3-Clause"
] | null | null | null | setup.py | DewMaple/toolkit | a1f04d1b53420c64e15f684c83acb54276031346 | [
"BSD-3-Clause"
] | null | null | null | setup.py | DewMaple/toolkit | a1f04d1b53420c64e15f684c83acb54276031346 | [
"BSD-3-Clause"
] | null | null | null | # from distutils.core import setup
from setuptools import setup, find_packages
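
# The standard setuptools workflow this file enables (commands run from the
# repository root; nothing project-specific is assumed):
#   pip install -e .        # editable install for development
#   python setup.py sdist   # build a source distribution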
setup(
name='py-toolkit',
version='0.0.3',
packages=find_packages(exclude=("tests",)),
url='https://github.com/DewMaple/toolkit',
description='python toolkit for common usage',
author='DewMaple',
author_email='dewmaple@gmail.com',
license='',
keywords=['python', "schema meta"],
classifiers=['Programming Language :: Python :: 3.6'],
project_urls={
'Bug Reports': 'https://github.com/DewMaple/toolkit/issues',
'Source': 'https://github.com/DewMaple/toolkit',
},
tests_require=[
"pytest",
"pytest-cov",
"pytest-xprocess",
],
zip_safe=True
) | 28.76 | 68 | 0.631433 | 80 | 719 | 5.6 | 0.625 | 0.073661 | 0.09375 | 0.147321 | 0.194196 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008726 | 0.20306 | 719 | 25 | 69 | 28.76 | 0.773124 | 0.044506 | 0 | 0 | 0 | 0 | 0.424198 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.043478 | 0 | 0.043478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ce0da279383850a16ffabcd3fe15ce7341142e46 | 3,934 | py | Python | ui.py | xKynn/zerox-assistant | 292525bf55cd08f930338310869dba1c25a00cf4 | [
"MIT"
] | 1 | 2021-11-07T14:49:13.000Z | 2021-11-07T14:49:13.000Z | ui.py | xKynn/pyTunes | 292525bf55cd08f930338310869dba1c25a00cf4 | [
"MIT"
] | null | null | null | ui.py | xKynn/pyTunes | 292525bf55cd08f930338310869dba1c25a00cf4 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'hnc.ui'
#
# Created by: PyQt5 UI code generator 5.15.6
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
from PyQt5 import QtCore, QtGui, QtWidgets
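
# Assumption documented for readers: `cc` throughout this module behaves like
# one end of a multiprocessing.Pipe connection -- poll()/recv() pull status
# strings from a worker process, and send("listen") asks the worker to start
# listening. `tm` is the QTimer that drives the polling.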
class Ui_MainWindow(QtWidgets.QMainWindow):
def __init__(self, cc, tm, parent=None):
super().__init__(parent)
self.cc = cc
self.setupUi(self)
self.tm = tm
self.connect_signals()
def update_stlbl(self):
if self.cc.poll():
msg = self.cc.recv()
self.statuslbl.setText(msg)
def comm(self):
self.cc.send("listen")
def connect_signals(self):
self.pushButton.clicked.connect(self.comm)
self.tm.timeout.connect(self.update_stlbl)
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(365, 403)
MainWindow.setStyleSheet("background-color: rgb(53, 53, 53);")
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.pushButton = QtWidgets.QPushButton(self.centralwidget)
self.pushButton.setGeometry(QtCore.QRect(110, 230, 150, 150))
self.pushButton.setMinimumSize(QtCore.QSize(40, 40))
self.pushButton.setMaximumSize(QtCore.QSize(16777215, 16777215))
self.pushButton.setStyleSheet("QPushButton {\n"
" color: rgb(39, 212, 111);\n"
" border: 2px solid #555;\n"
" border-radius: 75px;\n"
" border-style: outset;\n"
" background:rgb(58, 255, 81);\n"
" padding: 5px;\n"
" }\n"
"\n"
"QPushButton:hover {\n"
" background: rgb(63, 231, 74)\n"
" }\n"
"\n"
"QPushButton:pressed {\n"
" border-style: inset;\n"
" background: rgb(53, 221, 64)\n"
" }")
self.pushButton.setText("")
self.pushButton.setIconSize(QtCore.QSize(40, 40))
self.pushButton.setObjectName("pushButton")
self.label = QtWidgets.QLabel(self.centralwidget)
self.label.setGeometry(QtCore.QRect(29, 30, 81, 31))
self.label.setStyleSheet("font: 25 20pt \"Bahnschrift Light Condensed\";\n"
"color: rgb(96, 255, 60);")
self.label.setObjectName("label")
self.statuslbl = QtWidgets.QLabel(self.centralwidget)
self.statuslbl.setGeometry(QtCore.QRect(30, 180, 500, 31))
self.statuslbl.setStyleSheet("font: 25 15pt \"Bahnschrift Light Condensed\";\n"
"color: rgb(96, 255, 60);")
self.statuslbl.setText("")
self.statuslbl.setObjectName("statuslbl")
MainWindow.setCentralWidget(self.centralwidget)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", " "))
self.label.setText(_translate("MainWindow", "pyTunes"))
self.statuslbl.setText(_translate("MainWindow", "Press the button to begin.."))
def uifunc(cc):
app = QtWidgets.QApplication([])
tm = QtCore.QTimer()
win = Ui_MainWindow(cc, tm)
win.show()
tm.start(1000)
app.exec() | 42.76087 | 87 | 0.543976 | 383 | 3,934 | 5.54047 | 0.417755 | 0.059378 | 0.039585 | 0.014138 | 0.103676 | 0.069746 | 0.042413 | 0.042413 | 0.042413 | 0.042413 | 0 | 0.05253 | 0.341891 | 3,934 | 92 | 88 | 42.76087 | 0.767092 | 0.068124 | 0 | 0.082192 | 1 | 0 | 0.15824 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09589 | false | 0 | 0.013699 | 0 | 0.123288 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ce1627eb06d19834ba84ea0cd7b1055080fe6187 | 595 | py | Python | oandapy/exceptions.py | extreme4all/oandapy | 48dcfbe154316a83ca6e62e6b939062165cabc3e | [
"MIT"
] | null | null | null | oandapy/exceptions.py | extreme4all/oandapy | 48dcfbe154316a83ca6e62e6b939062165cabc3e | [
"MIT"
] | null | null | null | oandapy/exceptions.py | extreme4all/oandapy | 48dcfbe154316a83ca6e62e6b939062165cabc3e | [
"MIT"
] | null | null | null | """Exceptions."""
class OandaError(Exception):
""" Generic error class, catches oanda response errors
"""
def __init__(self, error_response):
self.error_response = error_response
msg = f"OANDA API returned error code {error_response['code']} ({error_response['message']}) "
super(OandaError, self).__init__(msg)
class BadEnvironment(Exception):
"""environment should be: sandbox, practice or live."""
def __init__(self, environment):
msg = f"Environment '{environment}' does not exist"
super(BadEnvironment, self).__init__(msg)
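

# Minimal illustration of how these exceptions surface; the triggering call
# is assumed, not part of this module:
#
#   try:
#       raise BadEnvironment("staging")
#   except BadEnvironment as exc:
#       print(exc)  # Environment 'staging' does not exist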
| 29.75 | 103 | 0.67563 | 65 | 595 | 5.861538 | 0.476923 | 0.170604 | 0.057743 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198319 | 595 | 19 | 104 | 31.315789 | 0.798742 | 0.196639 | 0 | 0 | 0 | 0 | 0.275488 | 0.114967 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ce1a18c48b194d0b3451c941f83d9e8945a1714d | 4,139 | py | Python | tests/system/post_cars_positive_test.py | ikostan/REST_API_AUTOMATION | cdb4d30fbc7457b2a403b4dad6fe1efa2e754681 | [
"Unlicense"
] | 8 | 2020-03-17T09:15:28.000Z | 2022-01-29T19:50:45.000Z | tests/system/post_cars_positive_test.py | ikostan/REST_API_AUTOMATION | cdb4d30fbc7457b2a403b4dad6fe1efa2e754681 | [
"Unlicense"
] | 1 | 2021-06-02T00:26:58.000Z | 2021-06-02T00:26:58.000Z | tests/system/post_cars_positive_test.py | ikostan/REST_API_AUTOMATION | cdb4d30fbc7457b2a403b4dad6fe1efa2e754681 | [
"Unlicense"
] | 1 | 2021-11-22T16:10:27.000Z | 2021-11-22T16:10:27.000Z | #!/path/to/interpreter
"""
Flask App REST API testing: POST
"""
# Created by Egor Kostan.
# GitHub: https://github.com/ikostan
# LinkedIn: https://www.linkedin.com/in/egor-kostan/
import allure
import requests
from tests.system.base_test import BaseTestCase
from api.cars_app import USER_LIST
@allure.epic('Simple Flask App')
@allure.parent_suite('REST API')
@allure.suite("System Tests")
@allure.sub_suite("Positive Tests")
@allure.feature("POST")
@allure.story('Cars')
class PostCarsPositiveTestCase(BaseTestCase):
"""
Simple Flask App Positive Test: POST call
"""
def setUp(self) -> None:
"""
Test data preparation
:return:
"""
with allure.step("Prepare test data"):
self.cars_url = '/cars'
self.message = ''
self.new_car = {'name': 'Figo',
'brand': 'Ford',
'price_range': '2-3 lacs',
'car_type': 'hatchback'}
def tearDown(self) -> None:
"""
Post test procedure
:return:
"""
with allure.step("Remove new added car from the list"):
username = USER_LIST[0]['name']
password = USER_LIST[0]['password']
requests.delete(url=self.URL +
self.cars_url +
'/remove/' +
self.new_car['name'],
auth=(username,
password))
def test_post_car_admin(self):
"""
Add new car using admin user credentials.
:return:
"""
allure.dynamic.title("Add new car "
"using admin user credentials")
allure.dynamic.severity(allure.severity_level.BLOCKER)
with allure.step("Verify user permissions"):
username = USER_LIST[0]['name']
password = USER_LIST[0]['password']
self.assertEqual("admin",
USER_LIST[0]['perm'])
with allure.step("Send POST request"):
response = requests.post(self.URL +
self.cars_url +
'/add',
json=self.new_car,
auth=(username,
password))
with allure.step("Verify status code"):
self.assertEqual(200,
response.status_code)
with allure.step("Verify 'successful' flag"):
self.assertTrue(response.json()['successful'])
with allure.step("Verify retrieved cars list"):
self.assertDictEqual(self.new_car,
response.json()['car'])
def test_post_car_non_admin(self):
"""
Add new car using non admin user credentials.
:return:
"""
allure.dynamic.title("Add new car "
"using non admin user credentials")
allure.dynamic.severity(allure.severity_level.BLOCKER)
with allure.step("Verify user permissions"):
username = USER_LIST[1]['name']
password = USER_LIST[1]['password']
self.assertEqual("non_admin",
USER_LIST[1]['perm'])
with allure.step("Send POST request"):
response = requests.post(self.URL +
self.cars_url +
'/add',
json=self.new_car,
auth=(username,
password))
with allure.step("Verify status code"):
self.assertEqual(200,
response.status_code)
with allure.step("Verify 'successful' flag"):
self.assertTrue(response.json()['successful'])
with allure.step("Verify retrieved cars list"):
self.assertDictEqual(self.new_car,
response.json()['car'])
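

# Request/response contract exercised above (shape inferred from these tests):
#   POST {URL}/cars/add with a JSON car payload and HTTP basic auth
#   -> 200 {"successful": true, "car": {...}}
# tearDown() then issues DELETE {URL}/cars/remove/<name> to undo the change.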
| 31.59542 | 64 | 0.490457 | 390 | 4,139 | 5.110256 | 0.269231 | 0.060211 | 0.084295 | 0.080281 | 0.576016 | 0.566984 | 0.557953 | 0.550928 | 0.540893 | 0.540893 | 0 | 0.006444 | 0.400097 | 4,139 | 130 | 65 | 31.838462 | 0.796214 | 0.090602 | 0 | 0.558442 | 0 | 0 | 0.157997 | 0 | 0 | 0 | 0 | 0 | 0.103896 | 1 | 0.051948 | false | 0.077922 | 0.051948 | 0 | 0.116883 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ce1f98db217162180757b8a6044a17804f866924 | 4,794 | py | Python | imblearn/combine/tests/test_smote_enn.py | themrzmaster/imbalanced-learn | e1be8695b22ca58aa5443057b9ae3f2885a45d60 | [
"MIT"
] | 2 | 2019-09-14T23:23:35.000Z | 2019-09-16T18:17:19.000Z | imblearn/combine/tests/test_smote_enn.py | themrzmaster/imbalanced-learn | e1be8695b22ca58aa5443057b9ae3f2885a45d60 | [
"MIT"
] | null | null | null | imblearn/combine/tests/test_smote_enn.py | themrzmaster/imbalanced-learn | e1be8695b22ca58aa5443057b9ae3f2885a45d60 | [
"MIT"
] | 1 | 2021-04-23T04:46:10.000Z | 2021-04-23T04:46:10.000Z | """Test the module SMOTE ENN."""
# Authors: Guillaume Lemaitre <g.lemaitre58@gmail.com>
# Christos Aridas
# License: MIT
import pytest
import numpy as np
from sklearn.utils.testing import assert_allclose, assert_array_equal
from imblearn.combine import SMOTEENN
from imblearn.under_sampling import EditedNearestNeighbours
from imblearn.over_sampling import SMOTE
RND_SEED = 0
X = np.array([[0.11622591, -0.0317206], [0.77481731, 0.60935141], [
1.25192108, -0.22367336
], [0.53366841, -0.30312976], [1.52091956,
-0.49283504], [-0.28162401, -2.10400981],
[0.83680821,
1.72827342], [0.3084254, 0.33299982], [0.70472253, -0.73309052],
[0.28893132, -0.38761769], [1.15514042, 0.0129463], [
0.88407872, 0.35454207
], [1.31301027, -0.92648734], [-1.11515198, -0.93689695], [
-0.18410027, -0.45194484
], [0.9281014, 0.53085498], [-0.14374509, 0.27370049], [
-0.41635887, -0.38299653
], [0.08711622, 0.93259929], [1.70580611, -0.11219234]])
Y = np.array([0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0])
R_TOL = 1e-4
def test_sample_regular():
smote = SMOTEENN(random_state=RND_SEED)
X_resampled, y_resampled = smote.fit_resample(X, Y)
X_gt = np.array([[1.52091956, -0.49283504], [0.84976473, -0.15570176], [
0.61319159, -0.11571667
], [0.66052536, -0.28246518], [-0.28162401, -2.10400981],
[0.83680821, 1.72827342], [0.08711622, 0.93259929]])
y_gt = np.array([0, 0, 0, 0, 1, 1, 1])
assert_allclose(X_resampled, X_gt, rtol=R_TOL)
assert_array_equal(y_resampled, y_gt)
def test_sample_regular_pass_smote_enn():
smote = SMOTEENN(
smote=SMOTE(sampling_strategy='auto', random_state=RND_SEED),
enn=EditedNearestNeighbours(sampling_strategy='all'),
random_state=RND_SEED)
X_resampled, y_resampled = smote.fit_resample(X, Y)
X_gt = np.array([[1.52091956, -0.49283504], [0.84976473, -0.15570176], [
0.61319159, -0.11571667
], [0.66052536, -0.28246518], [-0.28162401, -2.10400981],
[0.83680821, 1.72827342], [0.08711622, 0.93259929]])
y_gt = np.array([0, 0, 0, 0, 1, 1, 1])
assert_allclose(X_resampled, X_gt, rtol=R_TOL)
assert_array_equal(y_resampled, y_gt)
def test_sample_regular_half():
sampling_strategy = {0: 10, 1: 12}
smote = SMOTEENN(
sampling_strategy=sampling_strategy, random_state=RND_SEED)
X_resampled, y_resampled = smote.fit_resample(X, Y)
X_gt = np.array([[1.52091956, -0.49283504], [-0.28162401, -2.10400981],
[0.83680821, 1.72827342], [0.08711622, 0.93259929]])
y_gt = np.array([0, 1, 1, 1])
assert_allclose(X_resampled, X_gt)
assert_array_equal(y_resampled, y_gt)
def test_validate_estimator_init():
smote = SMOTE(random_state=RND_SEED)
enn = EditedNearestNeighbours(sampling_strategy='all')
smt = SMOTEENN(smote=smote, enn=enn, random_state=RND_SEED)
X_resampled, y_resampled = smt.fit_resample(X, Y)
X_gt = np.array([[1.52091956, -0.49283504], [0.84976473, -0.15570176], [
0.61319159, -0.11571667
], [0.66052536, -0.28246518], [-0.28162401, -2.10400981],
[0.83680821, 1.72827342], [0.08711622, 0.93259929]])
y_gt = np.array([0, 0, 0, 0, 1, 1, 1])
assert_allclose(X_resampled, X_gt, rtol=R_TOL)
assert_array_equal(y_resampled, y_gt)
def test_validate_estimator_default():
smt = SMOTEENN(random_state=RND_SEED)
X_resampled, y_resampled = smt.fit_resample(X, Y)
X_gt = np.array([[1.52091956, -0.49283504], [0.84976473, -0.15570176], [
0.61319159, -0.11571667
], [0.66052536, -0.28246518], [-0.28162401, -2.10400981],
[0.83680821, 1.72827342], [0.08711622, 0.93259929]])
y_gt = np.array([0, 0, 0, 0, 1, 1, 1])
assert_allclose(X_resampled, X_gt, rtol=R_TOL)
assert_array_equal(y_resampled, y_gt)
def test_parallelisation():
# Check if default job count is 1
smt = SMOTEENN(random_state=RND_SEED)
smt._validate_estimator()
assert smt.n_jobs == 1
assert smt.smote_.n_jobs == 1
assert smt.enn_.n_jobs == 1
# Check if job count is set
smt = SMOTEENN(random_state=RND_SEED, n_jobs=8)
smt._validate_estimator()
assert smt.n_jobs == 8
assert smt.smote_.n_jobs == 8
assert smt.enn_.n_jobs == 8
@pytest.mark.parametrize(
"smote_params, err_msg",
[({'smote': 'rnd'}, "smote needs to be a SMOTE"),
({'enn': 'rnd'}, "enn needs to be an ")]
)
def test_error_wrong_object(smote_params, err_msg):
smt = SMOTEENN(**smote_params)
with pytest.raises(ValueError, match=err_msg):
smt.fit_resample(X, Y)
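

# Minimal usage sketch of the sampler under test: SMOTE first over-samples
# the minority class, then Edited Nearest Neighbours removes noisy samples.
#
#   from imblearn.combine import SMOTEENN
#   X_res, y_res = SMOTEENN(random_state=0).fit_resample(X, Y)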
| 38.047619 | 79 | 0.635378 | 697 | 4,794 | 4.173601 | 0.20373 | 0.012375 | 0.011344 | 0.055689 | 0.627363 | 0.594706 | 0.569268 | 0.545892 | 0.545892 | 0.497078 | 0 | 0.265414 | 0.211723 | 4,794 | 125 | 80 | 38.352 | 0.504366 | 0.036713 | 0 | 0.42268 | 0 | 0 | 0.019314 | 0 | 0 | 0 | 0 | 0 | 0.175258 | 1 | 0.072165 | false | 0.010309 | 0.061856 | 0 | 0.134021 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ce25004f312bc46b4d6a3d278373562bc87e4202 | 316 | py | Python | apps/listings/migrations/0002_remove_post_author.py | favours-io/favours | 6f26a207d2684e752857aa21e5fafa607a4707e6 | [
"MIT"
] | 11 | 2020-07-23T19:07:32.000Z | 2021-11-18T17:16:29.000Z | apps/listings/migrations/0002_remove_post_author.py | favours-io/favours | 6f26a207d2684e752857aa21e5fafa607a4707e6 | [
"MIT"
] | 16 | 2020-08-29T01:57:05.000Z | 2022-01-13T03:16:41.000Z | apps/listings/migrations/0002_remove_post_author.py | favours-io/favours | 6f26a207d2684e752857aa21e5fafa607a4707e6 | [
"MIT"
] | 4 | 2020-09-18T18:40:12.000Z | 2021-11-09T06:36:36.000Z | # Generated by Django 3.0.7 on 2020-09-22 05:14
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('listings', '0001_initial'),
]
operations = [
migrations.RemoveField(
model_name='post',
name='author',
),
]
| 17.555556 | 47 | 0.575949 | 33 | 316 | 5.454545 | 0.848485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086758 | 0.306962 | 316 | 17 | 48 | 18.588235 | 0.73516 | 0.142405 | 0 | 0 | 1 | 0 | 0.111524 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ce2810e264659103f1cf2c4c793eb498a673a023 | 2,990 | py | Python | workflower/services/workflow/loader.py | dmenezesgabriel/workflower | db2358abdd2d133b85baea726e013e71171e5cf3 | [
"MIT"
] | null | null | null | workflower/services/workflow/loader.py | dmenezesgabriel/workflower | db2358abdd2d133b85baea726e013e71171e5cf3 | [
"MIT"
] | null | null | null | workflower/services/workflow/loader.py | dmenezesgabriel/workflower | db2358abdd2d133b85baea726e013e71171e5cf3 | [
"MIT"
] | null | null | null | import logging
import os
import traceback
from typing import List
from workflower.adapters.sqlalchemy.setup import Session
from workflower.adapters.sqlalchemy.unit_of_work import SqlAlchemyUnitOfWork
from workflower.application.event.commands import CreateEventCommand
from workflower.application.workflow.commands import (
ActivateWorkflowCommand,
LoadWorkflowFromYamlFileCommand,
SetWorkflowTriggerCommand,
)
from workflower.domain.entities.workflow import Workflow
logger = logging.getLogger("workflower.loader")
class WorkflowLoaderService:
def __init__(self) -> None:
self._workflows = None
@property
def workflows(self) -> List[Workflow]:
return self._workflows
def load_one_workflow_file(self, path: str, trigger: str = "on_schedule"):
"""
Load one workflow from file.
Args:
- path (str): workflow file path
- trigger (str): expects "on_schedule" or "on_demand".
"""
session = Session()
uow = SqlAlchemyUnitOfWork(session)
# TODO
# Add strategy pattern
command = LoadWorkflowFromYamlFileCommand(uow, path)
workflow = None
try:
workflow = command.execute()
except Exception:
logger.error(f"Error loading {path}:" f" {traceback.format_exc()}")
create_event_command = CreateEventCommand(
uow,
model="workflow",
model_id=None,
name="workflow_load_error",
exception=traceback.format_exc(),
)
create_event_command.execute()
if workflow:
set_trigger_command = SetWorkflowTriggerCommand(
uow, workflow.id, trigger
)
set_trigger_command.execute()
activate_Workflow_command = ActivateWorkflowCommand(
uow, workflow.id
)
activate_Workflow_command.execute()
return workflow
def load_all_from_dir(
self, path: str, trigger: str = "on_schedule"
) -> List[Workflow]:
"""
Load all workflow files from a given directory
Args:
- path (str): workflows file path
- trigger (str): expects "on_schedule" or "on_demand".
"""
self._workflows = []
logger.info(f"Loading Workflows from directory: {path}")
counter = 0
for root, dirs, files in os.walk(path):
for file in files:
if file.endswith(".yml") or file.endswith(".yaml"):
workflow_path = os.path.join(root, file)
workflow = self.load_one_workflow_file(
workflow_path, trigger=trigger
)
if workflow:
self._workflows.append(workflow)
counter += 1
logger.info(f"Workflows Loaded {counter}")
return self._workflows
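

# Hypothetical usage (the directory path and trigger values are illustrative):
#   loader = WorkflowLoaderService()
#   workflows = loader.load_all_from_dir("/opt/workflows", trigger="on_demand")
#   one = loader.load_one_workflow_file("/opt/workflows/etl.yaml")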
| 32.5 | 79 | 0.596321 | 285 | 2,990 | 6.098246 | 0.326316 | 0.040276 | 0.025892 | 0.036824 | 0.128884 | 0.128884 | 0.087457 | 0.051784 | 0.051784 | 0.051784 | 0 | 0.000987 | 0.322408 | 2,990 | 91 | 80 | 32.857143 | 0.856861 | 0.103679 | 0 | 0.061538 | 0 | 0 | 0.072368 | 0.009288 | 0 | 0 | 0 | 0.010989 | 0 | 1 | 0.061538 | false | 0 | 0.138462 | 0.015385 | 0.261538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ce282fdf98dc253cf62921347890761e924022a6 | 1,211 | py | Python | lfs/portlet/models/pages.py | zhammami/django-lfs | b921295e71fe827377a67b5e7ae1a8bf7f72a1e6 | [
"BSD-3-Clause"
] | null | null | null | lfs/portlet/models/pages.py | zhammami/django-lfs | b921295e71fe827377a67b5e7ae1a8bf7f72a1e6 | [
"BSD-3-Clause"
] | null | null | null | lfs/portlet/models/pages.py | zhammami/django-lfs | b921295e71fe827377a67b5e7ae1a8bf7f72a1e6 | [
"BSD-3-Clause"
] | null | null | null | # django imports
from django import forms
from django.conf import settings
from django.core.cache import cache
from django.template.loader import render_to_string
# portlets imports
from portlets.models import Portlet
# lfs imports
from lfs.page.models import Page
class PagesPortlet(Portlet):
"""Portlet to display pages.
"""
class Meta:
app_label = 'portlet'
def __unicode__(self):
return u"%s" % self.id
def render(self, context):
"""Renders the portlet as html.
"""
request = context.get("request")
cache_key = "%s-pages" % settings.CACHE_MIDDLEWARE_KEY_PREFIX
pages = cache.get(cache_key)
if pages is None:
pages = Page.objects.filter(active=True, exclude_from_navigation=False)
cache.set(cache_key, pages)
return render_to_string("lfs/portlets/pages.html", request=request, context={
"title": self.title,
"pages": pages,
})
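
    # Note: the active-pages queryset is cached under
    # "<CACHE_MIDDLEWARE_KEY_PREFIX>-pages" with no explicit timeout; code
    # that edits Page rows is assumed to invalidate that key elsewhere.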
def form(self, **kwargs):
return PagesForm(instance=self, **kwargs)
class PagesForm(forms.ModelForm):
"""Form for the PagesPortlet.
"""
class Meta:
model = PagesPortlet
exclude = ()
| 24.714286 | 85 | 0.641618 | 144 | 1,211 | 5.277778 | 0.430556 | 0.052632 | 0.036842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255987 | 1,211 | 48 | 86 | 25.229167 | 0.843507 | 0.119736 | 0 | 0.071429 | 0 | 0 | 0.054389 | 0.021947 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0 | 0.214286 | 0.071429 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ce2b25ff23e864e881234a2380df580d2b3d114d | 829 | py | Python | feed/models.py | kassupto007/photo-sharing-app | 97ed237815134fd3d53431be348a050c505db499 | [
"Apache-2.0"
] | null | null | null | feed/models.py | kassupto007/photo-sharing-app | 97ed237815134fd3d53431be348a050c505db499 | [
"Apache-2.0"
] | null | null | null | feed/models.py | kassupto007/photo-sharing-app | 97ed237815134fd3d53431be348a050c505db499 | [
"Apache-2.0"
] | null | null | null | from django.conf import settings
from django.db import models
from django.utils import timezone
from users.models import Profile
class Post(models.Model):
description = models.CharField(max_length=255)
picture = models.ImageField(upload_to='posts', blank=True)
date_posted = models.DateTimeField(auto_now_add=True, auto_now=False)
owner = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
def __str__(self):
return self.description
class Comment(models.Model):
post = models.ForeignKey(Post, related_name='comments', on_delete=models.CASCADE)
owner = models.ForeignKey(settings.AUTH_USER_MODEL, related_name='comments', on_delete=models.CASCADE)
comment = models.CharField(max_length=255)
comment_date = models.DateTimeField(auto_now_add=True, auto_now=False)
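

# Illustrative ORM usage of the models above (the `user` object is assumed):
#   post = Post.objects.create(description="Sunset", owner=user)
#   Comment.objects.create(post=post, owner=user, comment="Nice shot!")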
| 37.681818 | 106 | 0.77684 | 112 | 829 | 5.535714 | 0.428571 | 0.045161 | 0.067742 | 0.101613 | 0.496774 | 0.409677 | 0.409677 | 0.145161 | 0.145161 | 0 | 0 | 0.008287 | 0.126659 | 829 | 21 | 107 | 39.47619 | 0.848066 | 0 | 0 | 0 | 0 | 0 | 0.025332 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.25 | 0.0625 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ce31a76d07584d9441c2b8024946e9ee56bc2a7f | 8,286 | py | Python | regulations/tests/layers_toc_applier_tests.py | contolini/regulations-site | c31a9ce3097910877657f61b4c19a4ccbd0f967f | [
"CC0-1.0"
] | 18 | 2015-01-14T15:58:45.000Z | 2019-08-17T06:15:59.000Z | regulations/tests/layers_toc_applier_tests.py | contolini/regulations-site | c31a9ce3097910877657f61b4c19a4ccbd0f967f | [
"CC0-1.0"
] | 142 | 2015-01-08T15:28:50.000Z | 2018-07-16T16:48:07.000Z | regulations/tests/layers_toc_applier_tests.py | contolini/regulations-site | c31a9ce3097910877657f61b4c19a4ccbd0f967f | [
"CC0-1.0"
] | 45 | 2015-01-26T16:24:46.000Z | 2021-02-20T10:50:59.000Z | from unittest import TestCase
from regulations.generator.layers.toc_applier import *
class TableOfContentsLayerTest(TestCase):
def test_section(self):
toc = TableOfContentsLayer(None)
el = {}
toc.section(el, {'index': ['1']})
self.assertEqual({}, el)
toc.section(el, {'index': ['1', '2', '3']})
self.assertEqual({}, el)
toc.section(el, {'index': ['1', 'B']})
self.assertEqual({}, el)
toc.section(el, {'index': ['1', 'Interpretations']})
self.assertEqual({}, el)
toc.section(el, {'index': ['1', '2'], 'title': '1.2 - Awesome'})
self.assertEqual(el, {
'is_section': True,
'section_id': '1-2',
'label': '1.2',
'sub_label': 'Awesome'
})
toc.section(el, {'index': ['2', '1'], 'title': '2.1Sauce'})
self.assertEqual(el, {
'is_section': True,
'section_id': '2-1',
'label': '2.1',
'sub_label': 'Sauce'
})
def test_appendix_supplement(self):
toc = TableOfContentsLayer(None)
el = {}
toc.appendix_supplement(el, {'index': ['1']})
self.assertEqual({}, el)
toc.appendix_supplement(el, {'index': ['1', '2', '3']})
self.assertEqual({}, el)
toc.appendix_supplement(el, {'index': ['1', 'B', '3']})
self.assertEqual({}, el)
toc.appendix_supplement(el, {'index': ['1', 'Interp', '3']})
self.assertEqual({}, el)
toc.appendix_supplement(el, {
'index': ['1', 'B'],
'title': 'Appendix B - Bologna'})
self.assertEqual(el, {
'is_appendix': True,
'is_first_appendix': True,
'label': 'Appendix B',
'sub_label': 'Bologna',
'section_id': '1-B'
})
el = {}
toc.appendix_supplement(el, {
'index': ['204', 'A'],
'title': 'Appendix A to Part 204 - Model Forms'})
self.assertEqual(el, {
'is_appendix': True,
'is_first_appendix': True,
'label': 'Appendix A to Part 204',
'sub_label': 'Model Forms',
'section_id': '204-A'
})
el = {}
toc.appendix_supplement(el, {
'index': ['1', 'Interp'],
'title': 'Supplement I to 8787 - I am Iron Man'})
self.assertEqual(el, {
'is_supplement': True,
'label': 'Supplement I to 8787',
'sub_label': 'I am Iron Man',
'section_id': '1-Interp'
})
def test_apply_layer_url(self):
toc = TableOfContentsLayer({'100': [
{'title': '100.1 Intro', 'index': ['100', '1']}]})
result = toc.apply_layer('100')
self.assertEqual('#100-1', result[1][0]['url'])
toc.sectional = True
toc.version = 'verver'
result = toc.apply_layer('100')
self.assertTrue('100-1/verver#100-1' in result[1][0]['url'])
def test_apply_layer_compatibility(self):
toc = TableOfContentsLayer({'100': [
{'title': '100.1 Intro', 'index': ['100', '1']},
{'title': 'Appendix A', 'index': ['100', 'A']},
{'title': 'Supplement I', 'index': ['100', 'Interp']}]})
_, result = toc.apply_layer('100')
self.assertEqual(3, len(result))
toc = TableOfContentsLayer({
'100': [
{'title': 'Subpart A', 'index': ['100', 'Subpart', 'A']},
{'title': 'Appendix A', 'index': ['100', 'A']},
{'title': 'Supplement I', 'index': ['100', 'Interp']}],
'100-Subpart-A': [
{'title': '100.1 Intro', 'index': ['100', '1']},
{'title': '100.2 Sec2', 'index': ['100', '2']},
{'title': '100.3 Sec3', 'index': ['100', '3']}]
})
_, result = toc.apply_layer('100')
self.assertEqual(3, len(result))
self.assertEqual(3, len(result[0]['sub_toc']))
def test_apply_layer_first_appendix(self):
toc = TableOfContentsLayer({'100': [
{'title': 'Appendix A', 'index': ['100', 'A']},
{'title': 'Appendix B', 'index': ['100', 'B']},
{'title': 'Appendix C', 'index': ['100', 'C']},
{'title': 'Supplement I', 'index': ['100', 'Interp']}]})
_, result = toc.apply_layer('100')
self.assertEqual(4, len(result))
aA, aB, aC, sI = result
self.assertTrue(aA['is_first_appendix'])
self.assertFalse(aB['is_first_appendix'])
self.assertFalse(aC['is_first_appendix'])
self.assertFalse(sI.get('is_first_appendix', False))
toc = TableOfContentsLayer({'100': [
{'title': 'Supplement I', 'index': ['100', 'Interp']}]})
_, result = toc.apply_layer('100')
self.assertEqual(1, len(result))
self.assertFalse(result[0].get('is_first_appendix', False))
def test_apply_layer_interp_emptysubpart(self):
toc = TableOfContentsLayer({'100': [
{'title': '100.1 Intro', 'index': ['100', '1']},
{'title': '100.2 Second', 'index': ['100', '2']},
{'title': 'Supplement I', 'index': ['100', 'Interp']}]})
_, result = toc.apply_layer('100')
self.assertEqual(3, len(result))
s1, s2, interp = result
self.assertEqual(1, len(interp['sub_toc']))
nosubpart = interp['sub_toc'][0]
self.assertEqual('Regulation Text', nosubpart['label'])
self.assertEqual(['100', 'Subpart', 'Interp'], nosubpart['index'])
toc = TableOfContentsLayer({'100': [
{'title': '100.1 Intro', 'index': ['100', '1']},
{'title': '100.2 Second', 'index': ['100', '2']},
{'title': 'Appendix A', 'index': ['100', 'A']},
{'title': 'Appendix C', 'index': ['100', 'C']},
{'title': 'Supplement I', 'index': ['100', 'Interp']}]})
_, result = toc.apply_layer('100')
self.assertEqual(5, len(result))
s1, s2, appA, appC, interp = result
self.assertEqual(2, len(interp['sub_toc']))
nosubpart, appendices = interp['sub_toc']
self.assertEqual('Regulation Text', nosubpart['label'])
self.assertEqual(['100', 'Subpart', 'Interp'], nosubpart['index'])
self.assertEqual('Appendices', appendices['label'])
self.assertEqual(['100', 'Appendices', 'Interp'], appendices['index'])
def test_apply_layer_interp_subparts(self):
toc = TableOfContentsLayer({
'100': [
{'title': 'Subpart A', 'index': ['100', 'Subpart', 'A']},
{'title': 'Supplement I', 'index': ['100', 'Interp']}],
'100-Subpart-A': [
{'title': '100.1 Intro', 'index': ['100', '1']},
{'title': '100.2 Second', 'index': ['100', '2']}]})
_, result = toc.apply_layer('100')
self.assertEqual(2, len(result))
subpartA, interp = result
self.assertEqual(2, len(subpartA['sub_toc']))
self.assertEqual(1, len(interp['sub_toc']))
nosubpart = interp['sub_toc'][0]
self.assertEqual('Subpart A', nosubpart['label'])
self.assertEqual(['100', 'Subpart', 'A', 'Interp'], nosubpart['index'])
toc = TableOfContentsLayer({
'100': [
{'title': 'Subpart A', 'index': ['100', 'Subpart', 'A']},
{'title': 'Appendix A', 'index': ['100', 'A']},
{'title': 'Appendix C', 'index': ['100', 'C']},
{'title': 'Supplement I', 'index': ['100', 'Interp']}],
'100-Subpart-A': [
{'title': '100.1 Intro', 'index': ['100', '1']},
{'title': '100.2 Second', 'index': ['100', '2']}]})
_, result = toc.apply_layer('100')
self.assertEqual(4, len(result))
subpartA, appA, appC, interp = result
self.assertEqual(2, len(interp['sub_toc']))
nosubpart, appendices = interp['sub_toc']
self.assertEqual('Subpart A', nosubpart['label'])
self.assertEqual(['100', 'Subpart', 'A', 'Interp'], nosubpart['index'])
self.assertEqual('Appendices', appendices['label'])
self.assertEqual(['100', 'Appendices', 'Interp'], appendices['index'])
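

# For orientation, the layer data these tests feed in looks like:
#   TableOfContentsLayer({'100': [{'title': '100.1 Intro',
#                                  'index': ['100', '1']}, ...]})
# apply_layer('100') returns a 2-tuple whose second element is the list of
# TOC entries, each carrying label/sub_label/section_id plus the
# is_section/is_appendix/is_supplement flags asserted above.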
| 40.223301 | 79 | 0.506758 | 867 | 8,286 | 4.743945 | 0.102653 | 0.145879 | 0.053732 | 0.046195 | 0.787503 | 0.722101 | 0.682956 | 0.664478 | 0.61415 | 0.580112 | 0 | 0.05777 | 0.285542 | 8,286 | 205 | 80 | 40.419512 | 0.636993 | 0 | 0 | 0.634831 | 0 | 0 | 0.240043 | 0 | 0 | 0 | 0 | 0 | 0.258427 | 1 | 0.039326 | false | 0 | 0.011236 | 0 | 0.05618 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ce35c483fa1d1e28e070fa3ddb8145549538c79c | 14,508 | py | Python | eventmanager/events/tests.py | karinakozarova/EventManager | b09fa7a788b4aa11761fc34096cc711304c288c7 | [
"MIT"
] | 4 | 2019-01-06T16:58:20.000Z | 2019-04-08T10:20:46.000Z | eventmanager/events/tests.py | EventManagerTeam/EventManager | b09fa7a788b4aa11761fc34096cc711304c288c7 | [
"MIT"
] | 297 | 2018-11-14T13:59:19.000Z | 2022-03-11T23:33:28.000Z | eventmanager/events/tests.py | karinakozarova/EventManager | b09fa7a788b4aa11761fc34096cc711304c288c7 | [
"MIT"
] | 1 | 2019-04-22T15:17:32.000Z | 2019-04-22T15:17:32.000Z | import datetime
import unittest
from accounts.models import AccountDetails
from categories.models import Category
from django.contrib.auth.models import User
from django.test import Client
from django.test import TestCase
from django.urls import reverse
from events.models import Comment
from events.models import Event
from events.models import Invite
from tasks.models import Task
class EventsTestCase(TestCase):
def setUp(self):
self.total_number_of_events = 25
self.client = Client()
self.client.login(username='john', password='johnpassword')
category = Category.objects.create(
name='test event category',
description='cool description',
slug='test',
)
for event_id in range(self.total_number_of_events):
eventstring = 'test' + str(event_id)
self.event = Event.objects.create(
title=eventstring,
description=eventstring,
)
self.event.save()
self.event.category.add(category)
self.event.save()
def test_list_events_lists_event(self):
url = reverse('events.list')
resp = self.client.get(url)
self.assertEqual(resp.status_code, 200)
self.assertIn(b'test1', resp.content)
def test_list_events_lists_categories(self):
url = reverse('events.list')
resp = self.client.get(url)
self.assertEqual(resp.status_code, 200)
self.assertIn(b'test event category', resp.content)
def test_create_event(self):
pass
def test_update_event(self):
pass
def test_delete_event(self):
pass
def test_view_event(self):
pass
def test_join_event(self):
pass
def test_unjoin_event(self):
pass
def test_add_teammate(self):
pass
class EventsFeedsTestCase(TestCase):
def setUp(self):
self.total_number_of_events = 25
self.client = Client()
self.client.login(username='john', password='johnpassword')
category = Category.objects.create(
name='test event',
description='cool description',
slug='test',
)
for event_id in range(self.total_number_of_events):
eventstring = 'test' + str(event_id)
self.event = Event.objects.create(
title=eventstring,
description=eventstring,
)
self.event.save()
self.event.category.add(category)
self.event.save()
def test_all_events_feed(self):
response = self.client.get(reverse('event_feed'))
latest_event = 'test' + str(self.total_number_of_events - 1)
self.assertContains(response, latest_event)
self.assertContains(response, 'test' + str(1))
def test_latest_events_feed(self):
response = self.client.get(reverse('latest_event_feed'))
first_event_title = 'test' + str(self.total_number_of_events)
self.assertNotContains(response, first_event_title)
latest_event_title = 'test' + str(1)
self.assertContains(response, latest_event_title)
class EventsUrlsTestClass(TestCase):
client = Client()
def setUp(self):
self.client = Client()
self.user = User.objects.create_user(
'john',
'lennon@thebeatles.com',
'johnpassword'
)
self.user.details = AccountDetails.objects.create(
user=self.user,
description='cool description',
slug='userslug'
)
self.client.login(username='john', password='johnpassword')
category = Category.objects.create(
name='test event',
description='cool description',
slug='test',
)
self.event = Event.objects.create(
title='testy',
description='cool description',
slug='event',
added_by=self.user,
)
self.event.save()
self.event.category.add(category)
self.event.team_members.add(self.user)
self.event.save()
def url_returns_200(self, url, status_code=200):
response = self.client.get(url)
self.assertEqual(response.status_code, status_code)
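
    # Helper shared by the URL tests below: GET the given URL and assert the
    # expected status code (200 unless overridden).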
def test_list_events_url(self):
self.url_returns_200(reverse('events.list'))
def test_create_event_url(self):
self.url_returns_200(reverse('events.create_event'))
def test_delete_event_url(self):
user = User.objects.create_user(
'johnaaaa',
'lennonaaa@thebeatles.com',
'johnpasswordaaa'
)
self.client.login(username='johnaaaa', password='johnpasswordaaa')
category = Category.objects.create(
name='unisdjsd',
description='cool description',
slug='tesddssst',
)
event = Event.objects.create(
title='delete',
description='cool description',
slug='delete',
added_by=user,
)
event.save()
event.category.add(category)
self.url_returns_200(reverse('events.del', kwargs={'slug': 'delete'}))
def test_delete_event_url_unsuccessful(self):
user = User.objects.create_user(
'johnaaaa',
'lennonaaa@thebeatles.com',
'johnpasswordaaa'
)
user2 = User.objects.create_user(
'johnaaaa2',
'lennonaaa2@thebeatles.com',
'johnpasswordaaa2'
)
self.client.login(username='johnaaaa', password='johnpasswordaaa')
category = Category.objects.create(
name='unisdjsd',
description='cool description',
slug='tesddssst',
)
event = Event.objects.create(
title='delete',
description='cool description',
slug='delete',
added_by=user2,
)
event.save()
event.category.add(category)
response = self.client.get(
reverse(
'events.del', kwargs={
'slug': 'delete'}))
self.assertEquals(response.status_code, 403)
def test_view_event_url(self):
user2 = User.objects.create_user(
username='testuser2',
password='12345'
)
user2.details = AccountDetails.objects.create(
user=user2,
description='cool description',
slug='userslug2'
)
self.user.details.friends.add(user2)
self.url_returns_200(reverse('event', kwargs={'slug': 'event'}))
def test_all_events_feed_url(self):
self.url_returns_200(reverse('event_feed'))
def test_latest_events_feed_url(self):
self.url_returns_200(reverse('latest_event_feed'))
def test_join_event(self):
self.url_returns_200(reverse('events.join', kwargs={'slug': 'event'}))
def test_unjoin_event(self):
self.url_returns_200(
reverse(
'events.rm_join',
kwargs={
'slug': 'event'}))
def test_event_settings_url(self):
self.url_returns_200(
reverse(
'events.settings',
kwargs={
'slug': 'event'}))
def test_event_invites_url(self):
self.url_returns_200(reverse('events.invites'))
def test_event_invite_url(self):
self.url_returns_200(
reverse(
'events.invite',
kwargs={
'slug': 'userslug',
'event': 'event'}))
def test_event_url(self):
self.url_returns_200(reverse('events.event', kwargs={'slug': 'event'}))
def test_add_team_member(self):
user = User.objects.create_user(
'johnaaaa',
'lennonaaa@thebeatles.com',
'johnpasswordaaa'
)
event = Event.objects.create(
title='testy',
description='cool description',
slug='eventааааа',
added_by=user,
)
self.client.login(username='johnaaaa', password='johnpasswordaaa')
self.url_returns_200('events/userslug/eventааааа/add_teammate')
def test_get_tasks_no_tasks(self):
response = self.client.get(reverse('events.tasks'))
self.assertNotContains(response, 'TO DO:')
self.assertNotContains(response, 'DOING:')
self.assertEqual(response.status_code, 200)
# def test_get_tasks(self):
# task_title = 'Very cooollll'
# task = Task.objects.create(
# title=task_title,
# event=self.event,
# slug='event',
# assignee=self.user,
# status='TODO'
# )
# self.client.login(username='john', password='johnpassword')
# response = self.client.get(reverse('events.tasks'))
# self.assertContains(response, task_title)
# self.assertEqual(response.status_code, 200)
def test_confirm_invite(self):
user2 = User.objects.create_user(
'johnaaaa',
'lennonaaa@thebeatles.com',
'johnpasswordaaa'
)
Invite.objects.create(
invited_user=self.user,
invited_by=user2,
event=self.event)
self.url_returns_200(
reverse(
'events.confirm_invite',
kwargs={
'slug': self.event.slug}))
def test_decline_invite(self):
user2 = User.objects.create_user(
'johnaaaa',
'lennonaaa@thebeatles.com',
'johnpasswordaaa'
)
Invite.objects.create(
invited_user=self.user,
invited_by=user2,
event=self.event)
self.url_returns_200(
reverse(
'invites.decline_invite',
kwargs={
'slug': self.event.slug}))
def test_add_teammate_no_friends(self):
self.url_returns_200(
reverse(
'events.add_teammate',
kwargs={
'slug': self.event.slug}))
response = self.client.get(
reverse(
'events.add_teammate',
kwargs={
'slug': self.event.slug}))
self.assertEqual(response.status_code, 200)
self.assertNotContains(response, 'Find')
def test_add_teammate(self):
user2 = User.objects.create_user(
'friendddddddd',
'lennon@thebeatles.com',
'johnpassword'
)
user2.details = AccountDetails.objects.create(
user=user2,
description='cool description',
slug='useddddrslug'
)
self.user.details.friends.add(user2)
self.user.save()
self.user.details.save()
self.url_returns_200(
reverse(
'events.add_teammate',
kwargs={
'slug': self.event.slug
}
)
)
response = self.client.get(
reverse(
'events.add_teammate',
kwargs={
'slug': self.event.slug}))
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'Find')
def test_event_team_add(self):
user2 = User.objects.create_user(
'johnaaaa',
'lennonaaa@thebeatles.com',
'johnpasswordaaa'
)
response = self.client.get(
reverse(
'events.event_team_add',
kwargs={
'slug': self.event.slug,
'user': user2
}))
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'Success')
self.assertContains(response, user2.username)
def test_delete_comment_by_slug(self):
Comment.objects.create(
event=self.event,
author=self.user,
title='opaaa',
content='sdasdsa')
comment = Comment.objects.first()
self.url_returns_200(
reverse(
'events.comment.del',
kwargs={
'slug': self.event.slug,
'comment': comment.pk}))
def test_edit_comment_by_slug(self):
Comment.objects.create(
event=self.event,
author=self.user,
title='opaaa',
content='sdasdsa')
comment = Comment.objects.first()
response = self.client.get(
reverse(
'events.comment.edit',
kwargs={
'slug': self.event.slug,
'comment': comment.pk}))
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'opaaa')
def test_event_board(self):
self.url_returns_200(
reverse(
'events.board', kwargs={
'slug': self.event.slug}))
def test_my_events(self):
event = Event.objects.create(
title='testy',
description='cool description',
slug='eventааааа',
added_by=self.user,
)
event.attendees.add(self.user)
response = self.client.get(reverse('events.my_events'))
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'testy')
def test_events_I_host(self):
event = Event.objects.create(
title='testy',
description='cool description',
slug='eventааааа',
added_by=self.user,
)
event.attendees.add(self.user)
response = self.client.get(reverse('events.events_I_host'))
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'testy')
def test_show_random_event(self):
event = Event.objects.create(
title='testy',
description='cool description',
slug='eventааааа',
added_by=self.user,
)
response = self.client.get(reverse('events.show_random_event'))
self.assertEqual(response.status_code, 302)
def test_search_json(self):
response = self.client.get(reverse('events.search_json', kwargs={
'category_id': 1,
'slug': 'test'}))
self.assertEqual(response.status_code, 200)
| 30.543158 | 79 | 0.565826 | 1,447 | 14,508 | 5.50311 | 0.10159 | 0.034284 | 0.032651 | 0.040563 | 0.756373 | 0.698857 | 0.615095 | 0.572523 | 0.488886 | 0.449705 | 0 | 0.01409 | 0.324924 | 14,508 | 474 | 80 | 30.607595 | 0.798959 | 0.029915 | 0 | 0.607692 | 0 | 0 | 0.128023 | 0.02404 | 0 | 0 | 0 | 0.00211 | 0.071795 | 1 | 0.107692 | false | 0.058974 | 0.030769 | 0 | 0.148718 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ce378179f8b40837991f7c71e128ec7eb52c6132 | 1,023 | py | Python | game.py | gustavonaldoni/command-line-hangman | a740a446ce1dfad2100ab7e6ea1db817c6a57a47 | [
"MIT"
] | null | null | null | game.py | gustavonaldoni/command-line-hangman | a740a446ce1dfad2100ab7e6ea1db817c6a57a47 | [
"MIT"
] | null | null | null | game.py | gustavonaldoni/command-line-hangman | a740a446ce1dfad2100ab7e6ea1db817c6a57a47 | [
"MIT"
] | null | null | null | from capture_words import capture_words_from_file
import random
def find_letter_indexes(letter, word):
indexes = []
for index, l in enumerate(word):
if l == letter:
indexes.append(index)
return indexes
def convert_word_to_asterisks(word):
final_word = '*' * len(word)
return final_word
def check_user_choice(word, user_choice):
if user_choice in word:
return True
return False
def get_last_formatted_word(real_word, new_formatted_word, user_choice):
new_formatted_word = list(new_formatted_word)
if check_user_choice(real_word, user_choice):
indexes = find_letter_indexes(user_choice, real_word)
for index in indexes:
new_formatted_word[index] = user_choice
return "".join(new_formatted_word)
def has_user_lost(user_chances):
if user_chances <= 0:
return True
return False
def has_user_won(real_word, user_word):
if real_word == user_word:
return True
return False
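

# Hedged usage sketch (not part of the original file): a minimal game loop
# built from the helpers above. It assumes capture_words_from_file() returns
# a list of candidate words; the chance count of 6 is illustrative.
def example_game():
    word = random.choice(capture_words_from_file())
    shown = convert_word_to_asterisks(word)
    chances = 6
    while not has_user_won(word, shown) and not has_user_lost(chances):
        guess = input("Guess a letter: ")
        if check_user_choice(word, guess):
            shown = get_last_formatted_word(word, shown, guess)
        else:
            chances -= 1
        print(shown)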
| 21.765957 | 72 | 0.691105 | 141 | 1,023 | 4.666667 | 0.29078 | 0.121581 | 0.121581 | 0.095745 | 0.117021 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001287 | 0.240469 | 1,023 | 47 | 73 | 21.765957 | 0.84556 | 0 | 0 | 0.2 | 0 | 0 | 0.000977 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.066667 | 0 | 0.566667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ce3b5d59730c0d6fb21fce8076ca9f2a4f217a30 | 2,506 | py | Python | hr_attendance_ex/models/sql_ser_config.py | alexhong121/odoo_model | 4eff41c672bd03084eaa6eae81c8f3d359c2fb8d | [
"MIT"
] | null | null | null | hr_attendance_ex/models/sql_ser_config.py | alexhong121/odoo_model | 4eff41c672bd03084eaa6eae81c8f3d359c2fb8d | [
"MIT"
] | null | null | null | hr_attendance_ex/models/sql_ser_config.py | alexhong121/odoo_model | 4eff41c672bd03084eaa6eae81c8f3d359c2fb8d | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import logging
import pyodbc  # required by sql_server_connection() below
from odoo import models, fields, api, _
from odoo.exceptions import UserError, AccessError, MissingError
_logger = logging.getLogger(__name__)
class SQLConfig(models.Model):
_name = 'sql.config'
_description='sql_config'
_sql_constraints = [('check_syncing', 'UNIQUE(syncing)', '資料庫同步只能設定一台')]
name=fields.Char(string='名稱',required=True)
server = fields.Char(string='伺服器')
port = fields.Char(string='連接阜')
database = fields.Char(string='資料庫')
username = fields.Char(string='使用者名稱')
password = fields.Char(string='密碼')
odbc = fields.Selection([
('{ODBC Driver 17 for SQL Server}', 'ODBC Driver 17 for SQL Server')],
string='ODBC 驅動程式',
default='{ODBC Driver 17 for SQL Server}',
required=True
)
syncing=fields.Boolean(string='同步中')
@api.multi
def test_connection(self):
sql = self.sql_server_connection(
odbc=self.odbc, server=self.server, port=self.port, database=self.database, username=self.username,
password=self.password
)
        # When the connection fails
        if not sql['sqlstate']:
            raise UserError(_(sql['msg']))
        # When the connection succeeds
        if sql['sqlstate']:
            raise UserError(_("Connection Test Succeeded!"))
def sql_server_connection(self, **kwargs):
try:
info = 'DRIVER={0}; SERVER={1},{2}; DATABASE={3}; UID={4}; PWD={5}'.format(
kwargs['odbc'], kwargs['server'], kwargs['port'], kwargs['database'], kwargs['username'],
kwargs['password']
)
sql = pyodbc.connect(info)
return {'sqlstate':True,'sql':sql,'msg':None}
except pyodbc.Error as err:
sqlmsg = err.args[1]
_logger.error(sqlmsg)
return {'sqlstate':False,'sql':None,'msg':sqlmsg}
class ResConfigSettings(models.TransientModel):
_inherit = 'res.config.settings'
@api.multi
def sql_ser_config(self):
self.ensure_one()
template_form = self.env.ref('hr_attendance_extend.SQL_ser_config_form_themes')
template_list = self.env.ref('hr_attendance_extend.SQL_ser_config_list_themes')
return {
'name': _('Choose'),
'type': 'ir.actions.act_window',
'view_mode': 'form',
'res_model': 'sql.config',
'views': [(template_list.id,'list'),(template_form.id, 'form')],
'target': 'current',
}
| 32.973684 | 111 | 0.602554 | 284 | 2,506 | 5.161972 | 0.415493 | 0.040928 | 0.065484 | 0.030696 | 0.103683 | 0.103683 | 0.05457 | 0.05457 | 0.05457 | 0 | 0 | 0.007451 | 0.2502 | 2,506 | 75 | 112 | 33.413333 | 0.772751 | 0.018356 | 0 | 0.034483 | 0 | 0.017241 | 0.226069 | 0.046843 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051724 | false | 0.051724 | 0.051724 | 0 | 0.396552 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ce40a683df91507328100c3fd2d4f4e66c206aad | 4,981 | py | Python | application/helper/connection_check.py | HarshadKavathiya/acciom | 10e4d813c897bcf0078ab350d9432117cb708d1a | [
"MIT"
] | null | null | null | application/helper/connection_check.py | HarshadKavathiya/acciom | 10e4d813c897bcf0078ab350d9432117cb708d1a | [
"MIT"
] | 9 | 2019-07-23T09:55:15.000Z | 2022-02-19T01:45:12.000Z | application/helper/connection_check.py | accionlabs/acciom | 889958c0f8ec1d74db1958d0a6473c4678eaab3f | [
"MIT"
] | 21 | 2019-07-20T04:47:23.000Z | 2020-01-07T06:55:42.000Z | import cx_Oracle
import psycopg2
import pymysql
import pyodbc
from application.common.constants import APIMessages, SupportedDBType, \
GenericStrings
def connection_check(db_type_id, db_hostname, db_username, db_password,
db_name):
"""
Helper method to check the database connectivity for the given database
details.
Args:
db_type_id(int): type of the database
db_hostname(str): database hostname
db_username(str): database username
db_password(str): database password
db_name(str): database name
Returns(str):
Returns success only if connection can be establish
"""
# cnxn is a connection object
if db_type_id == SupportedDBType().get_db_id_by_name("mysql"):
try:
cnxn = pymysql.connect(host=db_hostname, user=db_username,
password=db_password, db=db_name)
except pymysql.err.InternalError as e:
if GenericStrings.UNKNOWN_DATABASE_MYSQL in e.args[1]:
return APIMessages.UNKNOWN_DATABASE.format(db_name)
elif GenericStrings.CANNOT_CONNECT_TO_REMOTE_SERVER_MYSQL in \
e.args[1]:
return APIMessages.CANNOT_CONNECT_TO_REMOTE_SERVER_MYSQL
else:
return e.args[1]
except pymysql.err.OperationalError as e:
if GenericStrings.AUTHENTICATION_FAILED_MYSQL in e.args[1]:
return APIMessages.AUTHENTICATION_FAILED.format(db_username)
elif GenericStrings.CANNOT_CONNECT_TO_SERVER_MYSQL in e.args[1]:
return APIMessages.CANNOT_CONNECT_TO_SERVER.format(
SupportedDBType().get_db_name_by_id(db_type_id),
db_hostname)
else:
return e.args[1]
cursor = cnxn.cursor()
if cursor:
return APIMessages.SUCCESS
elif db_type_id == SupportedDBType().get_db_id_by_name("mssql"):
server = db_hostname
database = db_name
username = db_username
password = db_password
        # This code can only handle ODBC Driver 17;
        # if another driver version (e.g. 13) is given, the code will fail
# TODO: Need to implement an approach that takes driver version
# based on user input
try:
cnxn = pyodbc.connect(
'DRIVER={0}'.format(GenericStrings.ORACLE_DRIVER) +
';SERVER=' + server +
';DATABASE=' + database +
';UID=' + username + ';PWD=' + password)
except pyodbc.ProgrammingError as e:
return APIMessages.UNKNOWN_DATABASE.format(db_name)
except pyodbc.InterfaceError as e:
return APIMessages.AUTHENTICATION_FAILED.format(db_username)
except pyodbc.OperationalError as e:
return APIMessages.CANNOT_CONNECT_TO_SERVER.format(
SupportedDBType().get_db_name_by_id(db_type_id),
db_hostname)
cursor = cnxn.cursor()
if cursor:
return APIMessages.SUCCESS
elif db_type_id == SupportedDBType().get_db_id_by_name("postgresql"):
try:
cnxn = psycopg2.connect(host=db_hostname, database=db_name,
user=db_username,
password=db_password)
except psycopg2.OperationalError as e:
if GenericStrings.UNKNOWN_DATABASE_POSTGRES in str(e):
return APIMessages.UNKNOWN_DATABASE.format(db_name)
elif GenericStrings.AUTHENTICATION_FAILED_POSTGRES in str(e):
return APIMessages.AUTHENTICATION_FAILED.format(db_username)
elif GenericStrings.CANNOT_CONNECT_TO_SERVER_POSTGRES in str(e):
return APIMessages.CANNOT_CONNECT_TO_SERVER.format(
SupportedDBType().get_db_name_by_id(db_type_id),
db_hostname)
else:
return e
cursor = cnxn.cursor()
if cursor:
return APIMessages.SUCCESS
elif db_type_id == SupportedDBType().get_db_id_by_name("oracle"):
try:
cnxn = cx_Oracle.connect(
"{0}/{1}@{2}/{3}".format(db_username, db_password, db_hostname,
db_name))
except cx_Oracle.DatabaseError as e:
if GenericStrings.UNKNOWN_DB_AUTHENTICATION_FAILED_ORACLE in str(
e):
return APIMessages.UNKNOWN_DB_AUTHENTICATION_FAILED.format(
db_name, db_username)
elif GenericStrings.CANNOT_CONNECT_TO_SERVER_ORACLE in str(
e):
return APIMessages.CANNOT_CONNECT_TO_SERVER.format(
SupportedDBType().get_db_name_by_id(db_type_id),
db_hostname)
else:
return e
cursor = cnxn.cursor()
if cursor:
return APIMessages.SUCCESS
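

# Hedged usage sketch (not part of the original module): db_type_id values
# come from SupportedDBType; the connection details below are illustrative.
if __name__ == '__main__':
    result = connection_check(
        db_type_id=SupportedDBType().get_db_id_by_name("mysql"),
        db_hostname="localhost",
        db_username="root",
        db_password="secret",
        db_name="testdb")
    print(result)  # APIMessages.SUCCESS when the database is reachable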
| 43.313043 | 79 | 0.609717 | 547 | 4,981 | 5.281536 | 0.190128 | 0.09415 | 0.027691 | 0.050883 | 0.617515 | 0.535826 | 0.450675 | 0.441675 | 0.389754 | 0.334372 | 0 | 0.005337 | 0.322827 | 4,981 | 114 | 80 | 43.692982 | 0.851171 | 0.109617 | 0 | 0.473118 | 0 | 0 | 0.018041 | 0 | 0 | 0 | 0 | 0.008772 | 0 | 1 | 0.010753 | false | 0.064516 | 0.053763 | 0 | 0.27957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
cbea98388f135a070422bda42a79198d77ccf817 | 546 | py | Python | 10_Exceptions_and_Errors/internal.py | MANOJPATRA1991/Python-Beyond-the-Basics | aed7bfd35e33c2b1759b48e1c89314aa149c56d0 | [
"MIT"
] | null | null | null | 10_Exceptions_and_Errors/internal.py | MANOJPATRA1991/Python-Beyond-the-Basics | aed7bfd35e33c2b1759b48e1c89314aa149c56d0 | [
"MIT"
] | null | null | null | 10_Exceptions_and_Errors/internal.py | MANOJPATRA1991/Python-Beyond-the-Basics | aed7bfd35e33c2b1759b48e1c89314aa149c56d0 | [
"MIT"
] | null | null | null | def modulus_three(n):
r = n % 3
if r == 0:
print("Multiple of 3")
elif r == 1:
print("Remainder 1")
else:
assert r == 2, "Remainder is not 2"
print("Remainder 2")
def modulus_four(n):
r = n % 4
if r == 0:
print("Multiple of 4")
elif r == 1:
print("Remainder 1")
elif r == 2:
print("Remainder 2")
elif r == 3:
print("Remainder 3")
else:
assert False, "This should never happen"
if __name__ == '__main__':
print(modulus_four(5)) | 20.222222 | 48 | 0.507326 | 77 | 546 | 3.454545 | 0.376623 | 0.263158 | 0.022556 | 0.067669 | 0.300752 | 0.300752 | 0 | 0 | 0 | 0 | 0 | 0.051136 | 0.355311 | 546 | 27 | 49 | 20.222222 | 0.704545 | 0 | 0 | 0.434783 | 0 | 0 | 0.239488 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 1 | 0.086957 | false | 0 | 0 | 0 | 0.086957 | 0.347826 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cbeb6bdd865a57de9bfabcbd439111e0ae5d40b5 | 1,080 | py | Python | bot.py | m2Link/YouTube-Video-Search | 0512ea220af271dc1853925026f31c32990fa4ff | [
"MIT"
] | 9 | 2021-09-30T06:25:03.000Z | 2022-02-10T05:45:23.000Z | bot.py | m2Link/YouTube-Video-Search | 0512ea220af271dc1853925026f31c32990fa4ff | [
"MIT"
] | null | null | null | bot.py | m2Link/YouTube-Video-Search | 0512ea220af271dc1853925026f31c32990fa4ff | [
"MIT"
] | 7 | 2021-09-30T06:24:56.000Z | 2022-02-10T04:52:10.000Z | from pyrogram import Client ,filters
import os
from py_youtube import Data, Search
from pyrogram.types import *
TOKEN = os.environ.get("TOKEN", "")
APP_ID = int(os.environ.get("APP_ID", ""))
API_HASH = os.environ.get("API_HASH", "")
app = Client( "yt-search",
bot_token = TOKEN, api_id =API_ID , api_hash = API_HASH)
@Client.on_message(filters.private & filters.command(["start"]))
async def start(client,message):
await message.reply_text("Helo iam Youtube Video Search\nUse in inline mode")
@app.on_inline_query()
async def search_video(client,query):
search = []
result = Search(query.query.strip()).videos()
for i in result:
try:
title = i["title"]
id = i["id"]
thumb = i["thumb"][0]
data = i["simple_data"]
except:
pass
try:
search.append(
InlineQueryResultPhoto(
title=title,
description=data,
caption="https://youtu.be/"+id,
photo_url=thumb))
except:
pass
await query.answer(search)
app.run()
| 22.5 | 78 | 0.609259 | 140 | 1,080 | 4.578571 | 0.442857 | 0.043682 | 0.056162 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001245 | 0.256481 | 1,080 | 47 | 79 | 22.978723 | 0.797011 | 0 | 0 | 0.171429 | 0 | 0 | 0.112963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.057143 | 0.114286 | 0 | 0.114286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
cbf29fc594fa3d410506fc9b2b10ddf99a2f2899 | 1,569 | py | Python | test/const.py | DaniFdezAlvarez/shexerp3 | 80c3bdaac856a88d53359f5996477994774d34e2 | [
"Apache-2.0"
] | 3 | 2019-06-24T18:13:06.000Z | 2020-08-06T03:08:23.000Z | test/const.py | DaniFdezAlvarez/shexerp3 | 80c3bdaac856a88d53359f5996477994774d34e2 | [
"Apache-2.0"
] | 109 | 2019-05-22T11:53:05.000Z | 2021-03-15T11:09:18.000Z | test/const.py | DaniFdezAlvarez/shexerp3 | 80c3bdaac856a88d53359f5996477994774d34e2 | [
"Apache-2.0"
] | 2 | 2019-10-23T13:06:31.000Z | 2020-07-31T09:59:15.000Z | BASE_FILES = "C:\\Users\\Dani\\repos-git\\shexerp3\\test\\t_files\\"
BASE_FILES_GENERAL = BASE_FILES + "general\\"
G1 = BASE_FILES + "t_graph_1.ttl"
G1_NT = BASE_FILES + "t_graph_1.nt"
G1_TSVO_SPO = BASE_FILES + "t_graph_1.tsv"
G1_JSON_LD = BASE_FILES + "t_graph_1.json"
G1_XML = BASE_FILES + "t_graph_1.xml"
G1_N3 = BASE_FILES + "t_graph_1.n3"
G1_ALL_CLASSES_NO_COMMENTS = BASE_FILES_GENERAL + "g1_all_classes_no_comments.shex"
# PREFIX xml: <http://www.w3.org/XML/1998/namespace/>
# PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
# PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
# PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
# PREFIX foaf: <http://xmlns.com/foaf/0.1/>
# NAMESPACES_WITH_FOAF_AND_EX = {"http://example.org/" : "ex",
# "http://www.w3.org/XML/1998/namespace/" : "xml",
# "http://www.w3.org/1999/02/22-rdf-syntax-ns#": "rdf",
# "http://www.w3.org/2000/01/rdf-schema#" : "rdfs",
# "http://www.w3.org/2001/XMLSchema#": "xsd",
# "http://xmlns.com/foaf/0.1/": "foaf"
# }
def default_namespaces():
return {"http://example.org/": "ex",
"http://www.w3.org/XML/1998/namespace/": "xml",
"http://www.w3.org/1999/02/22-rdf-syntax-ns#": "rdf",
"http://www.w3.org/2000/01/rdf-schema#": "rdfs",
"http://www.w3.org/2001/XMLSchema#": "xsd",
"http://xmlns.com/foaf/0.1/": "foaf"
}
| 42.405405 | 86 | 0.560867 | 228 | 1,569 | 3.662281 | 0.263158 | 0.100599 | 0.129341 | 0.172455 | 0.722156 | 0.542515 | 0.491018 | 0.457485 | 0.42515 | 0.42515 | 0 | 0.084448 | 0.237731 | 1,569 | 36 | 87 | 43.583333 | 0.613712 | 0.464627 | 0 | 0 | 0 | 0 | 0.467722 | 0.102314 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0.058824 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cbf53d52cd9777aefd5d176bd11a75c4a1b54abc | 303 | py | Python | Aula 07/ex6.py | diegorafaelvieira/Programacao-1 | 657a974f1215cec4aed68603e738d9a135131545 | [
"MIT"
] | null | null | null | Aula 07/ex6.py | diegorafaelvieira/Programacao-1 | 657a974f1215cec4aed68603e738d9a135131545 | [
"MIT"
] | null | null | null | Aula 07/ex6.py | diegorafaelvieira/Programacao-1 | 657a974f1215cec4aed68603e738d9a135131545 | [
"MIT"
] | null | null | null | val = int(input("Valor:"))
soma = val
maior = val
menor = val
for i in range(0,9):
val = int(input("Valor:"))
if val>maior:
maior = val
if val<menor:
menor=val
soma+=val
print("O maior valor é:",maior)
print("O menor valor é:",menor)
print("A média é:",(soma/10))
| 16.833333 | 31 | 0.570957 | 50 | 303 | 3.46 | 0.4 | 0.069364 | 0.127168 | 0.184971 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017778 | 0.257426 | 303 | 17 | 32 | 17.823529 | 0.751111 | 0 | 0 | 0.428571 | 0 | 0 | 0.178218 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.214286 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cbf6bbc96905dc1f309f486dc863edc389cd8386 | 1,550 | py | Python | anchore/anchore-modules/queries/show-familytree.py | berez23/anchore | 594cce23f1d87d666397653054c22c2613247734 | [
"Apache-2.0"
] | 401 | 2016-06-16T15:29:48.000Z | 2022-03-24T10:05:16.000Z | anchore/anchore-modules/queries/show-familytree.py | berez23/anchore | 594cce23f1d87d666397653054c22c2613247734 | [
"Apache-2.0"
] | 63 | 2016-06-16T21:10:27.000Z | 2020-07-01T06:57:27.000Z | anchore/anchore-modules/queries/show-familytree.py | berez23/anchore | 594cce23f1d87d666397653054c22c2613247734 | [
"Apache-2.0"
] | 64 | 2016-06-16T13:05:57.000Z | 2021-07-16T10:03:45.000Z | #!/usr/bin/env python
import sys
import os
import re
import json
import traceback
import anchore.anchore_utils
# main routine
try:
config = anchore.anchore_utils.init_query_cmdline(sys.argv, "params: all\nhelp: shows dockerfile lines.")
except Exception as err:
print str(err)
sys.exit(1)
if not config:
sys.exit(0)
if len(config['params']) <= 0:
print "Query requires input: all"
warns = list()
outlist = list()
outlist.append(["Image_Id", "Repo_Tags", "Image Type"])
try:
idata = anchore.anchore_utils.load_image_report(config['imgid'])
ftree = idata['familytree']
for fid in ftree:
tags = "unknown"
itype = "unknown"
try:
fdata = anchore.anchore_utils.load_image_report(fid)
tags = ','.join(fdata['anchore_all_tags'])
if not tags:
tags = "none"
itype = fdata['meta']['usertype']
if not itype:
itype = "intermediate"
except:
warns.append("family tree id ("+str(fid)+") does not appear to have been analyzed, no data for this member of the tree")
outlist.append([fid, str(tags), str(itype)])
except Exception as err:
# handle the case where something wrong happened
import traceback
traceback.print_exc()
warns.append("query error: "+str(err))
pass
anchore.anchore_utils.write_kvfile_fromlist(config['output'], outlist)
if len(warns) > 0:
anchore.anchore_utils.write_plainfile_fromlist(config['output_warns'], warns)
sys.exit(0)
| 22.794118 | 132 | 0.645806 | 203 | 1,550 | 4.82266 | 0.463054 | 0.085802 | 0.116445 | 0.040858 | 0.069459 | 0.069459 | 0 | 0 | 0 | 0 | 0 | 0.004223 | 0.236129 | 1,550 | 67 | 133 | 23.134328 | 0.822635 | 0.051613 | 0 | 0.204545 | 0 | 0 | 0.202869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.022727 | 0.159091 | null | null | 0.068182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cbfa1107b8b7c29048f818cde663861f0e4ac256 | 761 | py | Python | tests/test_binary_tree.py | avere001/dsplot | 89948c2f1b16e00bb3a240f73d0cb100b3eac847 | [
"MIT"
] | 8 | 2021-08-08T06:06:39.000Z | 2022-02-04T18:30:38.000Z | tests/test_binary_tree.py | avere001/dsplot | 89948c2f1b16e00bb3a240f73d0cb100b3eac847 | [
"MIT"
] | 1 | 2022-01-04T02:01:36.000Z | 2022-01-04T02:01:36.000Z | tests/test_binary_tree.py | avere001/dsplot | 89948c2f1b16e00bb3a240f73d0cb100b3eac847 | [
"MIT"
] | 2 | 2021-08-18T12:28:40.000Z | 2022-01-03T23:56:41.000Z | import os
import pytest
from dsplot.errors import InputException
from dsplot.tree import BinaryTree
def test_binary_tree():
tree = BinaryTree(nodes=[5, 4, 8, 11, None, 13, 4, 7, 2, None, None, 5, 1])
assert tree.root.val == 5
assert tree.root.right.left.val == 13
assert tree.root.right.right.left.val == 5
assert tree.preorder() == [5, 4, 11, 7, 2, 8, 13, 4, 5, 1]
assert tree.inorder() == [7, 11, 2, 4, 5, 13, 8, 5, 4, 1]
assert tree.postorder() == [7, 2, 11, 4, 13, 5, 1, 4, 8, 5]
tree.plot('tests/test_data/tree.png')
assert 'tree.png' in os.listdir('tests/test_data')
with pytest.raises(InputException) as e:
BinaryTree(nodes=[])
assert str(e.value) == 'Input list must have at least 1 element.'
| 29.269231 | 79 | 0.628121 | 129 | 761 | 3.674419 | 0.379845 | 0.147679 | 0.06962 | 0.050633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088333 | 0.211564 | 761 | 25 | 80 | 30.44 | 0.701667 | 0 | 0 | 0 | 0 | 0 | 0.114323 | 0.031537 | 0 | 0 | 0 | 0 | 0.470588 | 1 | 0.058824 | false | 0 | 0.235294 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0210ff2439d9da24bc21178720c18eee48ba770a | 1,224 | py | Python | COT/tests/test_doctests.py | morneaup/cot | 3d4dc7079a33aa0c09216ec339b44f84ab69ff4b | [
"MIT"
] | 81 | 2015-01-18T22:31:42.000Z | 2022-03-14T12:34:33.000Z | COT/tests/test_doctests.py | morneaup/cot | 3d4dc7079a33aa0c09216ec339b44f84ab69ff4b | [
"MIT"
] | 67 | 2015-01-05T15:24:39.000Z | 2021-08-16T12:44:58.000Z | COT/tests/test_doctests.py | morneaup/cot | 3d4dc7079a33aa0c09216ec339b44f84ab69ff4b | [
"MIT"
] | 20 | 2015-07-09T14:20:25.000Z | 2021-09-18T17:59:57.000Z | #!/usr/bin/env python
#
# test_doctests.py - test runner for COT doctests
#
# July 2016, Glenn F. Matthews
# Copyright (c) 2016-2017 the COT project developers.
# See the COPYRIGHT.txt file at the top-level directory of this distribution
# and at https://github.com/glennmatthews/cot/blob/master/COPYRIGHT.txt.
#
# This file is part of the Common OVF Tool (COT) project.
# It is subject to the license terms in the LICENSE.txt file found in the
# top-level directory of this distribution and at
# https://github.com/glennmatthews/cot/blob/master/LICENSE.txt. No part
# of COT, including this file, may be copied, modified, propagated, or
# distributed except according to the terms contained in the LICENSE.txt file.
"""Test runner for COT doctest tests."""
import logging
from logging import NullHandler
from doctest import DocTestSuite
from unittest import TestSuite
logging.getLogger('COT').addHandler(NullHandler())
def load_tests(*_):
"""Load doctests as unittest test suite.
For the parameters, see :mod:`unittest`. The parameters are unused here.
"""
suite = TestSuite()
suite.addTests(DocTestSuite('COT.data_validation'))
suite.addTests(DocTestSuite('COT.utilities'))
return suite
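

if __name__ == '__main__':
    # Hedged usage sketch (not part of the original file): run the collected
    # doctest suite directly with the standard unittest text runner.
    from unittest import TextTestRunner
    TextTestRunner().run(load_tests())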
| 33.081081 | 78 | 0.750817 | 177 | 1,224 | 5.169492 | 0.491525 | 0.022951 | 0.028415 | 0.034973 | 0.222951 | 0.181421 | 0.181421 | 0.181421 | 0.181421 | 0.181421 | 0 | 0.011583 | 0.153595 | 1,224 | 36 | 79 | 34 | 0.871622 | 0.681373 | 0 | 0 | 0 | 0 | 0.098315 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.4 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
0211dbc40a6aa48e66ae666cbc2afb8294c1a296 | 297 | py | Python | apps/core/urls.py | tayyabRazzaq/opl-platform | 37b0efdb9327253a144c50bfd192132fac732619 | [
"MIT"
] | 2 | 2019-04-03T04:04:53.000Z | 2019-04-28T16:13:56.000Z | apps/core/urls.py | tayyabRazzaq/opl-platform | 37b0efdb9327253a144c50bfd192132fac732619 | [
"MIT"
] | 8 | 2021-06-04T21:57:30.000Z | 2022-03-11T23:48:38.000Z | apps/core/urls.py | tayyab-razzaq/opl-platform | 37b0efdb9327253a144c50bfd192132fac732619 | [
"MIT"
] | 7 | 2019-03-12T19:39:08.000Z | 2021-04-15T05:25:59.000Z | """ Here all the blog's urls routes will be mapped """
from django.urls import path
from django.conf.urls import include, url
from . import views
app_name = 'core'
urlpatterns = [
# path('', views.home, name='home-page'),
url(r'^api/', include('apps.core.api.urls', namespace='api')),
]
| 24.75 | 66 | 0.670034 | 45 | 297 | 4.4 | 0.622222 | 0.10101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161616 | 297 | 11 | 67 | 27 | 0.795181 | 0.296296 | 0 | 0 | 0 | 0 | 0.148515 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
02154f47c33721ccd238e5aa1dcf948b5ec4704f | 1,308 | py | Python | Tools/RaiseCheck.py | 17320692835RGF/buptoj | 3d1e4719d757b4f0199e4451be7c0bee28e7c3ca | [
"MIT"
] | null | null | null | Tools/RaiseCheck.py | 17320692835RGF/buptoj | 3d1e4719d757b4f0199e4451be7c0bee28e7c3ca | [
"MIT"
] | null | null | null | Tools/RaiseCheck.py | 17320692835RGF/buptoj | 3d1e4719d757b4f0199e4451be7c0bee28e7c3ca | [
"MIT"
] | null | null | null |
import MySQLdb
from queue import Queue
import socket
import json
from time import sleep
import threading
import os
queue = Queue()  # global judging queue
myjsonfile = open("./setting.json", 'r')
judgerjson = json.loads(myjsonfile.read())
if os.environ.get("DB_USER"):
judgerjson["db_ip"] = os.environ.get("DB_HOST")
judgerjson["db_pass"] = os.environ.get("DB_PASSWORD")
judgerjson["db_user"] = os.environ.get("DB_USER")
judgerjson["db_port"] = os.environ.get("DB_PORT")
try:
db = MySQLdb.connect(judgerjson["db_ip"], judgerjson["db_user"], judgerjson["db_pass"],
judgerjson["db_database"], int(judgerjson["db_port"]), charset='utf8')
except Exception as e:
print(e)
exit(1)
cursor = db.cursor()
cursor.execute("SELECT user, code from judgestatus_judgestatus")
data = cursor.fetchall()
raisenum = {}
for d in data:
    id = str(d[0])
    raisenum[id] = 0
for d in data:
id = str(d[0])
code = str(d[1])
raisenum[id] = max(raisenum[id], code.count("raise"))
li = sorted(raisenum.items(), key=lambda item: item[1], reverse=True)
file = open("raisenum.txt", "w")
for l in li:
file.write(l[0]+" "+str(l[1])+'\n')
print(l[0]+" "+str(l[1]))
| 22.169492 | 96 | 0.603211 | 183 | 1,308 | 4.229508 | 0.393443 | 0.139535 | 0.077519 | 0.090439 | 0.18863 | 0.170543 | 0.170543 | 0.093023 | 0.093023 | 0.093023 | 0 | 0.011893 | 0.228593 | 1,308 | 58 | 97 | 22.551724 | 0.755203 | 0.004587 | 0 | 0.157895 | 0 | 0 | 0.154281 | 0.018578 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.052632 | 0.184211 | 0 | 0.184211 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
021c36744a33f4725dc24d93c0aa09acf81e97bf | 2,193 | py | Python | tictac/tictac/cli.py | SteveDMurphy/tic_tac_go | 7e80dc1ec6fbeceb3c9879cee7fb32b7ecfe37a7 | [
"MIT"
] | null | null | null | tictac/tictac/cli.py | SteveDMurphy/tic_tac_go | 7e80dc1ec6fbeceb3c9879cee7fb32b7ecfe37a7 | [
"MIT"
] | null | null | null | tictac/tictac/cli.py | SteveDMurphy/tic_tac_go | 7e80dc1ec6fbeceb3c9879cee7fb32b7ecfe37a7 | [
"MIT"
] | null | null | null | import click
from random import randrange
from tictac import Tictac
@click.group()
def tictac():
pass
@tictac.command(name="games", help="Returns all started games, order by when they were created")
def view_games():
tictac_class = Tictac()
click.echo(tictac_class.view_games())
@tictac.command(name="gamemoves", help="Returns all moves in a specified game")
def view_game_moves():
game_id = click.prompt("Input a valid game ID", type=int)
tictac_class = Tictac()
game_moves = tictac_class.view_moves(game_id)
click.echo(game_moves)
@tictac.command(name="newgame", help="Creates a new game and walks moves through to completion")
def new_game():
tictac_class = Tictac()
tictac_class.create_new_game()
click.echo(f"playing game id: {tictac_class.game_id}")
game_complete = 0
while game_complete == 0:
available_moves = tictac_class.get_move_options()
if (tictac_class.number_of_moves() % 2) == 0:
# player to move here
click.echo("Possible moves:")
for move in available_moves:
click.echo(f"Position ID: {move[0]}, Position: {move[1]}")
move = click.prompt(
"Please pick a position id number for you next move", type=int
)
# TODO add some validation here
game_complete = tictac_class.take_turn(position_id=move)
else:
# selects a random position ID from the available moves
random_selection_id = randrange(len(available_moves))
computer_move = available_moves[random_selection_id][0]
game_complete = tictac_class.take_turn(position_id=computer_move, player_is_robot=1)
if game_complete == 1:
if tictac_class.winning_player_is_robot == 0:
click.echo("Congratulations! You win!")
else:
click.echo("OOF - sorry, the computer won this time...")
click.echo("Winning combination:")
click.echo(tictac_class.winning_combination)
elif game_complete == -1:
click.echo("oh dang, nobody won... try again?")
if __name__ == "__main__":
tictac()
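# Hedged usage note (not part of the original file): since the click group is
# invoked under the __main__ guard above, the commands run from a shell, e.g.
#   python cli.py games
#   python cli.py newgame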
| 33.227273 | 96 | 0.645691 | 286 | 2,193 | 4.727273 | 0.346154 | 0.105769 | 0.037722 | 0.029586 | 0.106509 | 0.060651 | 0.060651 | 0.060651 | 0 | 0 | 0 | 0.006716 | 0.253078 | 2,193 | 65 | 97 | 33.738462 | 0.818681 | 0.046968 | 0 | 0.106383 | 0 | 0 | 0.224353 | 0.010547 | 0 | 0 | 0 | 0.015385 | 0 | 1 | 0.085106 | false | 0.021277 | 0.06383 | 0 | 0.148936 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
022e5e8924eb3bc3c0fcb9bc827782f367ea128d | 565 | py | Python | homework5/app/config.py | sakost/tinkoff_fintech | 64b9d5a2a818b4db7c438b0dc53a8f31882f95ba | [
"MIT"
] | null | null | null | homework5/app/config.py | sakost/tinkoff_fintech | 64b9d5a2a818b4db7c438b0dc53a8f31882f95ba | [
"MIT"
] | null | null | null | homework5/app/config.py | sakost/tinkoff_fintech | 64b9d5a2a818b4db7c438b0dc53a8f31882f95ba | [
"MIT"
] | 2 | 2021-08-29T15:01:39.000Z | 2022-02-23T18:48:21.000Z | from typing import Any
from pydantic import BaseSettings
from .utils import singleton_cache
class Settings(BaseSettings):
TESTING: bool = False
SQLALCHEMY_DATABASE_URI: str = 'sqlite:///db.sqlite3'
FIRST_SUPERUSER: str = 'admin'
FIRST_SUPERUSER_PASSWORD: str = 'admin'
FIRST_SUPERUSER_ROLE: str = 'superuser'
USER_ROLE_NAME = 'user'
OBJECTS_PER_PAGE: int = 100
class Config:
case_sensitive = True
env_file = '.env'
@singleton_cache
def get_settings(**kwargs: Any) -> Settings:
return Settings(**kwargs)
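

def example_usage():
    """
    Hedged usage sketch (not part of the original module): singleton_cache is
    assumed to memoise the factory, so repeated calls share a single Settings
    instance.
    """
    settings = get_settings()
    assert settings is get_settings()
    return settings.SQLALCHEMY_DATABASE_URI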
| 20.178571 | 57 | 0.699115 | 68 | 565 | 5.573529 | 0.632353 | 0.110818 | 0.068602 | 0.116095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008969 | 0.210619 | 565 | 27 | 58 | 20.925926 | 0.840807 | 0 | 0 | 0 | 0 | 0 | 0.083186 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0.058824 | 0.176471 | 0.058824 | 0.823529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
0232d872e8633ddbe199a54a9b7cd036c696f627 | 458 | py | Python | user/migrations/0017_auto_20200812_2149.py | Muia23/Grammer | dcc26937d88382c1da36a5f72306e6de367e90a3 | [
"Unlicense"
] | null | null | null | user/migrations/0017_auto_20200812_2149.py | Muia23/Grammer | dcc26937d88382c1da36a5f72306e6de367e90a3 | [
"Unlicense"
] | null | null | null | user/migrations/0017_auto_20200812_2149.py | Muia23/Grammer | dcc26937d88382c1da36a5f72306e6de367e90a3 | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.29 on 2020-08-12 18:49
from __future__ import unicode_literals
from django.db import migrations
import tinymce.models
class Migration(migrations.Migration):
dependencies = [
('user', '0016_post_likes'),
]
operations = [
migrations.AlterField(
model_name='profile',
name='bio',
field=tinymce.models.HTMLField(blank=True),
),
]
| 20.818182 | 55 | 0.617904 | 51 | 458 | 5.392157 | 0.803922 | 0.094545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065282 | 0.264192 | 458 | 21 | 56 | 21.809524 | 0.750742 | 0.150655 | 0 | 0 | 1 | 0 | 0.07513 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.214286 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02339931b6a314a7b42357abbf8fe125695e6d76 | 533 | py | Python | ocr.py | PI2-Braille-printer/OCR | 25511596efbe5e408fe43a92c0d04e513d7fea39 | [
"MIT"
] | null | null | null | ocr.py | PI2-Braille-printer/OCR | 25511596efbe5e408fe43a92c0d04e513d7fea39 | [
"MIT"
] | 6 | 2021-03-18T20:56:22.000Z | 2022-03-11T23:28:10.000Z | ocr.py | PI2-Braille-printer/OCR | 25511596efbe5e408fe43a92c0d04e513d7fea39 | [
"MIT"
] | null | null | null | from PIL import Image, ImageEnhance
import pytesseract
import os
#image = Image.open('f_test.jpg')
#enhance = ImageEnhance.Contrast(image)
#new_image = enhance.enhance(1.5)
#new_image.save('f_test__c_2.jpg')
for x in range(0,3):
os.system('./textcleaner -g -s 2 -a 1 ./Images/test_crop_'+str(x)+'.jpg ./Images/test_crop_'+str(x)+'_r.jpg')
result_string = pytesseract.image_to_string(Image.open('./Images/test_crop_'+str(x)+'_r.jpg'),lang='por')
print(result_string)
#result_string = result_string.split()
#print(result_string)
| 31.352941 | 110 | 0.739212 | 88 | 533 | 4.227273 | 0.465909 | 0.16129 | 0.112903 | 0.137097 | 0.166667 | 0.11828 | 0.11828 | 0 | 0 | 0 | 0 | 0.014344 | 0.084428 | 533 | 16 | 111 | 33.3125 | 0.747951 | 0.360225 | 0 | 0 | 0 | 0 | 0.310448 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
0236d5c96173fb20b1c62f540c0341822dff9bf5 | 788 | py | Python | test/point_test.py | markupCode/computational-geometry | 9a0a63a0b0c86e0618c18f82283b41baded21c50 | [
"MIT"
] | null | null | null | test/point_test.py | markupCode/computational-geometry | 9a0a63a0b0c86e0618c18f82283b41baded21c50 | [
"MIT"
] | null | null | null | test/point_test.py | markupCode/computational-geometry | 9a0a63a0b0c86e0618c18f82283b41baded21c50 | [
"MIT"
] | null | null | null | import unittest
from geometry.point import Point
class TestPoint(unittest.TestCase):
def get_points(self):
return [
Point(0, 0),
Point(1, 1),
Point(0, 1),
Point(-1, 1),
Point(-1, 0),
Point(-1, -1),
Point(1, -1)
]
def test_get_arc(self):
points = self.get_points()
self.assertEqual(points[0].get_arc(), 0)
self.assertEqual(points[1].get_arc(), 45)
self.assertEqual(points[2].get_arc(), 90)
self.assertEqual(points[3].get_arc(), 135)
self.assertEqual(points[4].get_arc(), 180)
self.assertEqual(points[5].get_arc(), 225)
self.assertEqual(points[6].get_arc(), 315)
if __name__ == '__main__':
unittest.main()
| 23.878788 | 50 | 0.549492 | 100 | 788 | 4.14 | 0.32 | 0.115942 | 0.355072 | 0.086957 | 0.096618 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068716 | 0.298223 | 788 | 32 | 51 | 24.625 | 0.679928 | 0 | 0 | 0 | 0 | 0 | 0.010152 | 0 | 0 | 0 | 0 | 0 | 0.291667 | 1 | 0.083333 | false | 0 | 0.083333 | 0.041667 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
023b3b94e54c17d3e9f985c30a7d72a9e9d96bce | 573 | py | Python | Qcover/backends/__init__.py | BAQIS-Quantum/Qcover | ca3776ed73fefa0cfef08042143a8cf842f8dad5 | [
"Apache-2.0"
] | 38 | 2021-12-22T03:12:01.000Z | 2022-03-17T06:57:10.000Z | Qcover/backends/__init__.py | BAQIS-Quantum/Qcover | ca3776ed73fefa0cfef08042143a8cf842f8dad5 | [
"Apache-2.0"
] | null | null | null | Qcover/backends/__init__.py | BAQIS-Quantum/Qcover | ca3776ed73fefa0cfef08042143a8cf842f8dad5 | [
"Apache-2.0"
] | 13 | 2021-12-22T07:32:44.000Z | 2022-02-28T06:47:41.000Z | from .backend import Backend
from .circuitbyqiskit import CircuitByQiskit
from .circuitbyprojectq import CircuitByProjectq
from .circuitbycirq import CircuitByCirq
from .circuitbyqulacs import CircuitByQulacs
# from .circuitbytket import CircuitByTket
from .circuitbytensor import CircuitByTensor
from .circuitbyqton import CircuitByQton
import warnings
warnings.filterwarnings("ignore")
__all__ = [
'Backend',
'CircuitByCirq',
'CircuitByQiskit',
'CircuitByProjectq',
'CircuitByTensor',
'CircuitByQulacs',
'CircuitByQton'
]
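
# Hedged usage note (not part of the original file): with __all__ defined
# above, consumers import a backend directly, e.g.
#   from Qcover.backends import CircuitByQiskit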
| 27.285714 | 49 | 0.767888 | 45 | 573 | 9.688889 | 0.311111 | 0.087156 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167539 | 573 | 20 | 50 | 28.65 | 0.914046 | 0.069808 | 0 | 0 | 0 | 0 | 0.197652 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.444444 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
024430ea1d89420e6939d1c770a6a86ca49668e5 | 4,626 | py | Python | example/F3Dp/F3D_syn.py | Chunfang/defmod-swpc | 74fe7c02b24a46aa24bca7438738aa5adb72e2b6 | [
"MIT"
] | 26 | 2017-05-12T08:11:57.000Z | 2022-03-06T01:44:24.000Z | example/F3Dp/F3D_syn.py | Soengmou/defmod-swpc | 75740fca3b36107e9d18201a5623c955f6010740 | [
"MIT"
] | 4 | 2019-09-11T15:35:16.000Z | 2020-06-23T10:49:34.000Z | example/F3Dp/F3D_syn.py | Chunfang/defmod-swpc | 74fe7c02b24a46aa24bca7438738aa5adb72e2b6 | [
"MIT"
] | 8 | 2017-05-22T18:40:13.000Z | 2021-02-10T08:04:39.000Z | #!/usr/bin/env python
import numpy as np
import os,sys
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot as plt
import argparse
ap=argparse.ArgumentParser()
ap.add_argument('-vis') # 1 plot cropped point cloud
ap.add_argument('-refine') # 1 refine mesh
ap.add_argument('-clean') # 1 remove tmp files
if ap.parse_args().vis==None:
vis=0
else:
vis=int(ap.parse_args().vis)
if ap.parse_args().refine==None:
refine=0
else:
refine=int(ap.parse_args().refine)
if ap.parse_args().clean==None:
clean=0
else:
clean=int(ap.parse_args().clean)
# Synthetic fault pixels
z=np.linspace(.2, -.8, num=100)
y=np.linspace(-.625,.625, num=120)
grid=np.meshgrid(y,z)
x=np.zeros((len(z)*len(y),1),dtype=np.float)
dat_vert=np.hstack((x,grid[0].reshape(x.shape),grid[1].reshape(x.shape)))
# weak
wl=np.linspace(.12,.18,num=8); amp=.03125*np.sqrt(wl)
e=1.025; r=-.2
dip=70.; zcnt=-.35
omg=[ 0.82976173, 0.89624834, 0.03829284, -0.50016345, -1.06606012, 1.40505898, -1.24256034, 1.28623393]
#omg=(np.random.rand(wl.shape[0])-.5)*np.pi
L=dat_vert[1,:].max()-dat_vert[1,:].min()
zmax=z.max(); zmin=z.min()
for i in range(len(wl)):
phs=dat_vert[:,1]/wl[i]*np.pi+omg[i]
dat_vert[:,0]=dat_vert[:,0]+amp[i]*np.cos(phs)*(e*zmax-dat_vert[:,2])/(e*zmax-zmin)*np.exp(r*abs(phs)/np.pi)
dat_vert[:,0]=dat_vert[:,0]+(zcnt-dat_vert[:,2])*np.tan((90.-dip)/180.*np.pi)
# ridge patch
def flt_patch(dat_vert,slope1,slope2,trunc1,trunc2,hlw,hup):
b1=-slope1*trunc1-.7
b2=-slope2*trunc2-.7
in_id=np.where(np.logical_and(dat_vert[:,2]-slope1*dat_vert[:,1]<b1, dat_vert[:,2]-slope2*dat_vert[:,1]<b2))[0]
out_id=np.setdiff1d(np.array(range(len(dat_vert)),dtype=np.int32),in_id)
x_shift=dat_vert[in_id,0]
# ridge patch
k=0
zup=dat_vert[:,2].max()
zlw=dat_vert[:,2].min()
for i in in_id:
r=abs(dat_vert[i,1]-.5*(trunc1+trunc2))
R=.5*((dat_vert[i,2]-b2)/slope2-(dat_vert[i,2]-b1)/slope1)
h=hlw+(dat_vert[i,2]-zlw)/(zup-zlw)*(hup-hlw)
x_shift[k]=x_shift[k]+np.cos(r/R*np.pi/2.)*h
k+=1
dat_vert=np.vstack((dat_vert[out_id,:],
np.hstack((x_shift.reshape(len(in_id),1),
dat_vert[in_id,1].reshape(len(in_id),1),
dat_vert[in_id,2].reshape(len(in_id),1)))))
return dat_vert
slope1=10.;slope2=-10.
trunc1=.1;trunc2=.6
hup=0.;hlw=.08
#dat_vert=flt_patch(dat_vert,slope1,slope2,trunc1,trunc2,hlw,hup)
print omg
fout='F3D_syn.xyz'
f=open(fout,'w+')
np.savetxt(f,dat_vert,delimiter=' ', fmt='%.6f '*3)
f.close()
from subprocess import call
fin=fout
fout=fout.rsplit('.')[0]+'.stl'
mxl='xyz2stl.mlx'
call(['meshlabserver', '-i',fin,'-o',fout,'-s',mxl])
if clean==1: os.remove(fin)
# Mesh
fin=fout
if refine==1:
fout=fout.rsplit('.')[0]+'_dns.exo'
else:
fout=fout.rsplit('.')[0]+'.exo'
jou='F3D_tet.jou'
txt_jou=open(jou,'r')
txt_jou_tmp=open('tmp.jou','w+')
hf=0.0025 # fault grid length (0.0025 for ~100 m tet model, 0.003 for ~40 m)
hm=0.0075 # matrix grid length (0.0075 for ~100 m tet model, 0.010 for ~40 m)
for line in txt_jou:
line=line.strip('\r\n')
if 'import' in line.lower():
line='import stl "'+fin+'"'
if 'export' in line.lower():
line='export mesh "'+fout+'" dimension 3 overwrite'
if 'surface 46 94 95 97 size' in line.lower():
line='surface 46 94 95 97 size %0.6f' %(2*hf)
if 'volume all size' in line.lower():
line='volume all size %0.6f' %(2*hm)
txt_jou_tmp.write(line+'\n')
if 'mesh volume all' in line.lower() and refine==1:
txt_jou_tmp.write('refine volume all\n')
txt_jou.close();txt_jou_tmp.close()
call(['trelis','-nojournal','-nographics','tmp.jou'])
if clean==1: os.remove('tmp.jou')
# Preprocessing msh=>inp
dt_dyn=2E-5 #1E-5 for dns 100 m tet model, 8E-5 for 40 m tet, 8E-4 for ~1 m tet
import F3D_msh2inp
_=F3D_msh2inp.msh2inp(fout,dt_dyn)
# Fault plot
if vis==1:
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(dat_vert[:,0], dat_vert[:,1], dat_vert[:,2], c='b', marker='.')
# Create cubic bounding box to simulate equal aspect ratio
max_range = np.array([np.max(dat_vert[:,0])-np.min(dat_vert[:,0]),np.max(dat_vert[:,1])\
-np.min(dat_vert[:,1]), np.max(dat_vert[:,2])-np.min(dat_vert[:,2])]).max()
Xb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][0].flatten()
Yb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][1].flatten()
Zb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][2].flatten()
for xb, yb, zb in zip(Xb, Yb, Zb):
ax.plot([xb], [yb], [zb], 'w',)
plt.title('fault [km]')
plt.grid()
plt.show()
| 34.266667 | 115 | 0.635754 | 884 | 4,626 | 3.222851 | 0.270362 | 0.09828 | 0.025272 | 0.009828 | 0.147069 | 0.113373 | 0.077571 | 0.07722 | 0.07722 | 0.058968 | 0 | 0.085096 | 0.141375 | 4,626 | 134 | 116 | 34.522388 | 0.632175 | 0.115435 | 0 | 0.053097 | 0 | 0 | 0.086626 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.079646 | null | null | 0.00885 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0244e0d25129f6105b7892408951f27b584d128e | 2,850 | py | Python | fltk/util/data_loader_utils.py | tudelft-eemcs-dml/fltk-testbed-gr-5 | 72afa24a37cd1f8f5f49665c83ccbd730d76ad21 | [
"BSD-2-Clause"
] | null | null | null | fltk/util/data_loader_utils.py | tudelft-eemcs-dml/fltk-testbed-gr-5 | 72afa24a37cd1f8f5f49665c83ccbd730d76ad21 | [
"BSD-2-Clause"
] | 2 | 2021-05-11T12:48:14.000Z | 2021-05-11T12:49:24.000Z | fltk/util/data_loader_utils.py | tudelft-eemcs-dml/fltk-testbed-gr-5 | 72afa24a37cd1f8f5f49665c83ccbd730d76ad21 | [
"BSD-2-Clause"
] | 2 | 2021-05-03T17:40:18.000Z | 2021-05-11T09:34:30.000Z | import numpy
from torch.utils.data import DataLoader
import os
import pickle
import random
from ..datasets import Dataset
def generate_data_loaders_from_distributed_dataset(distributed_dataset, batch_size):
"""
Generate data loaders from a distributed dataset.
:param distributed_dataset: Distributed dataset
:type distributed_dataset: list(tuple)
:param batch_size: batch size for data loader
:type batch_size: int
"""
data_loaders = []
for worker_training_data in distributed_dataset:
data_loaders.append(Dataset.get_data_loader_from_data(batch_size, worker_training_data[0], worker_training_data[1], shuffle=True))
return data_loaders
def load_train_data_loader(logger, args):
"""
Loads the training data DataLoader object from a file if available.
:param logger: loguru.Logger
:param args: Arguments
"""
if os.path.exists(args.get_train_data_loader_pickle_path()):
dl = load_data_loader_from_file(logger, args.get_train_data_loader_pickle_path())
return dl
else:
logger.error("Couldn't find train data loader stored in file")
raise FileNotFoundError("Couldn't find train data loader stored in file")
def generate_train_loader(args, dataset):
train_dataset = dataset.get_train_dataset()
X, Y = shuffle_data(args, train_dataset)
return dataset.get_data_loader_from_data(args.get_batch_size(), X, Y)
def load_test_data_loader(logger, args):
"""
Loads the test data DataLoader object from a file if available.
:param logger: loguru.Logger
:param args: Arguments
"""
if os.path.exists(args.get_test_data_loader_pickle_path()):
return load_data_loader_from_file(logger, args.get_test_data_loader_pickle_path())
else:
logger.error("Couldn't find test data loader stored in file")
raise FileNotFoundError("Couldn't find train data loader stored in file")
def load_data_loader_from_file(logger, filename) -> DataLoader:
"""
Loads DataLoader object from a file if available.
:param logger: loguru.Logger
:param filename: string
"""
logger.info("Loading data loader from file: {}".format(filename))
with open(filename, "rb") as f:
return load_saved_data_loader(f)
def generate_test_loader(args, dataset):
test_dataset = dataset.get_test_dataset()
X, Y = shuffle_data(args, test_dataset)
return dataset.get_data_loader_from_data(args.get_test_batch_size(), X, Y)
def shuffle_data(args, dataset):
data = list(zip(dataset[0], dataset[1]))
random.shuffle(data)
X, Y = zip(*data)
X = numpy.asarray(X)
Y = numpy.asarray(Y)
return X, Y
def load_saved_data_loader(file_obj):
return pickle.load(file_obj)
def save_data_loader_to_file(data_loader, file_obj):
pickle.dump(data_loader, file_obj)
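

def example_roundtrip(logger, dataset, filename):
    """
    Hedged usage sketch (not part of the original module): build a DataLoader
    from an (X, Y) dataset tuple, persist it, and read it back. The batch
    size of 32 and the filename are illustrative.
    """
    X, Y = dataset
    loader = Dataset.get_data_loader_from_data(32, X, Y)
    with open(filename, "wb") as f:
        save_data_loader_to_file(loader, f)
    return load_data_loader_from_file(logger, filename)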
| 31.318681 | 138 | 0.729825 | 407 | 2,850 | 4.845209 | 0.194103 | 0.116633 | 0.049696 | 0.040568 | 0.486308 | 0.466024 | 0.364097 | 0.314402 | 0.278905 | 0.270791 | 0 | 0.001715 | 0.181754 | 2,850 | 90 | 139 | 31.666667 | 0.843911 | 0.19193 | 0 | 0.085106 | 1 | 0 | 0.099001 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.191489 | false | 0 | 0.12766 | 0.021277 | 0.489362 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
024c6205dd81c6aee9436b9f31977f458d63fa70 | 3,384 | py | Python | tools/test.py | EMinsight/MPh | 2b967b77352f9ce7effcd50ad4774bf5eaf731ea | [
"MIT"
] | null | null | null | tools/test.py | EMinsight/MPh | 2b967b77352f9ce7effcd50ad4774bf5eaf731ea | [
"MIT"
] | null | null | null | tools/test.py | EMinsight/MPh | 2b967b77352f9ce7effcd50ad4774bf5eaf731ea | [
"MIT"
] | null | null | null | """
Runs all tests in the intended order.
Each test script (in the `tests` folder) contains a group of tests.
These scripts must be run in separate processes as most of them start
and stop the Java virtual machine, which can only be done once per
process. This is why simply calling pyTest (with `python -m pytest`
in the root folder) will not work.
This script here runs each test group in a new subprocess. It also
imposes a logical order: from the tests covering the most most basic
functionality to the high-level abstractions.
Here, as opposed to the similar script `coverage.py`, we don't actually
run the tests through pyTest. Rather, we run the scripts directly so
that the output is less verbose. Note, however, that pyTest still needs
to be installed as some of the test fixtures require it.
The verbosity can be increased by passing `--log` as a command-line
argument. This will display the log messages produced by MPh as the
tests are running. You can also pass the name of a test group to run
only that one. For example, passing "model" will only run the tests
defined in `test_model.py`.
"""
from subprocess import run
from pathlib import Path
from timeit import default_timer as now
from argparse import ArgumentParser
from sys import executable as python
from sys import exit
from os import environ, pathsep
# Define order of test groups.
groups = ['meta', 'config', 'discovery', 'server', 'session', 'standalone',
'client', 'multi', 'node', 'model', 'exit']
# Determine path of project root folder.
here = Path(__file__).resolve().parent
root = here.parent
# Run MPh in project folder, not a possibly different installed version.
if 'PYTHONPATH' in environ:
environ['PYTHONPATH'] = str(root) + pathsep + environ['PYTHONPATH']
else:
environ['PYTHONPATH'] = str(root)
# Parse command-line arguments.
parser = ArgumentParser(prog='test.py',
description='Runs the MPh test suite.',
add_help=False,
allow_abbrev=False)
parser.add_argument('--help',
help='Show this help message.',
action='help')
parser.add_argument('--log',
help='Display log output.',
action='store_true')
parser.add_argument('--groups',
help='List all test groups.',
action='store_true')
parser.add_argument('group',
help='Run only this group of tests.',
nargs='?')
arguments = parser.parse_args()
if arguments.groups:
for group in groups:
print(group)
exit()
if arguments.group:
group = arguments.group
if group.startswith('test_'):
group = group[5:]
if group.endswith('.py'):
group = group[:-3]
groups = [group]
options = []
if arguments.log:
options.append('--log')
# Run each test group in new process.
for group in groups:
if groups.index(group) > 0:
print()
print(f'Running test group "{group}".')
t0 = now()
process = run([python, f'test_{group}.py'] + options, cwd=root/'tests')
if process.returncode == 0:
print(f'Passed in {now()-t0:.0f} s.')
else:
print(f'Failed after {now()-t0:.0f} s.')
exit(1)
| 36 | 76 | 0.636525 | 460 | 3,384 | 4.647826 | 0.41087 | 0.025257 | 0.031805 | 0.014032 | 0.029935 | 0.029935 | 0 | 0 | 0 | 0 | 0 | 0.004023 | 0.265366 | 3,384 | 93 | 77 | 36.387097 | 0.855591 | 0.060284 | 0 | 0.105263 | 0 | 0 | 0.201523 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.017544 | 0.122807 | null | null | 0.087719 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0254feaa1c998dfb2faf7f35247b0cc22066d85a | 326 | py | Python | main/migrations_old/0007_remove_profile_rated_recipes.py | ggetzie/nnr | a8b1b1d771027edee2c19062f39fa982cfd024b0 | [
"MIT"
] | null | null | null | main/migrations_old/0007_remove_profile_rated_recipes.py | ggetzie/nnr | a8b1b1d771027edee2c19062f39fa982cfd024b0 | [
"MIT"
] | 5 | 2020-07-28T12:41:50.000Z | 2022-01-21T23:27:15.000Z | main/migrations_old/0007_remove_profile_rated_recipes.py | ggetzie/nnr | a8b1b1d771027edee2c19062f39fa982cfd024b0 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.4 on 2019-09-29 13:12
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('main', '0006_recipe_tags'),
]
operations = [
migrations.RemoveField(
model_name='profile',
name='rated_recipes',
),
]
| 18.111111 | 47 | 0.588957 | 35 | 326 | 5.371429 | 0.828571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082969 | 0.297546 | 326 | 17 | 48 | 19.176471 | 0.737991 | 0.138037 | 0 | 0 | 1 | 0 | 0.143369 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02591832a76c44befd1384a4984c9e645f451a38 | 3,077 | py | Python | conference_lib/confemailrecipients.py | allankellynet/mimas | 10025d43bba9e84f502a266760786842e7158a05 | [
"MIT"
] | null | null | null | conference_lib/confemailrecipients.py | allankellynet/mimas | 10025d43bba9e84f502a266760786842e7158a05 | [
"MIT"
] | 1 | 2020-02-05T13:00:29.000Z | 2020-02-05T13:00:29.000Z | conference_lib/confemailrecipients.py | allankellynet/mimas | 10025d43bba9e84f502a266760786842e7158a05 | [
"MIT"
] | null | null | null | #-----------------------------------------------------
# Mimas: conference submission and review system
# (c) Allan Kelly 2016-2020 http://www.allankelly.net
# Licensed under MIT License, see LICENSE file
# -----------------------------------------------------
# System imports
# Google imports
from google.appengine.ext import ndb
# Local imports
import confoptions
from scaffold import sorrypage, userrightsnames
import basehandler
class ConferenceEmailsPage(basehandler.BaseHandler):
def get(self):
if not(self.request.params.has_key("conf")):
sorrypage.redirect_sorry(self, "ConfKeyMissing")
return
conf_key = ndb.Key(urlsafe=self.request.get("conf"))
conference = conf_key.get()
if not(conference.user_rights().has_permission(self.get_crrt_user().email(),
userrightsnames.CONF_CREATOR)):
sorrypage.redirect_sorry(self, "NoAccess")
return
self.write_page('conference_lib/confemailrecipients.html', {
"crrt_conf": conference,
"tracks": conference.track_options(),
"conf_key": conference.key,
"email_ack_cc": conference.ack_cc_addresses(),
"email_ack_bcc": conference.ack_bcc_addresses(),
"email_accept_cc": conference.accept_cc_addresses(),
})
# TODO Extract and unit test
def add_for_selected(self, conf_key, email):
if self.request.get("AckCC"):
confoptions.make_conference_option(confoptions.AcknowledgementEmailCCAddresses, conf_key, email)
if self.request.get("AckBCC"):
confoptions.make_conference_option(confoptions.AcknowledgementEmailBCCAddresses, conf_key, email)
if self.request.get("AcceptCC"):
confoptions.make_conference_option(confoptions.AcceptEmailCCAddress, conf_key, email)
# TODO Extract and unit test
def add_email(self):
conf_key = ndb.Key(urlsafe=self.request.get("crrt_conf_key"))
email = self.request.get("NewMail")
if len(email)>0:
self.add_for_selected(conf_key, email)
self.redirect('/confemailcopy?conf=' + self.request.get("crrt_conf_key"))
def delete_email(self, check_field, Option_Class):
conf_key = ndb.Key(urlsafe=self.request.get("crrt_conf_key"))
for opt in self.request.get_all(check_field):
confoptions.delete_option(Option_Class, conf_key, opt)
self.redirect('/confemailcopy?conf=' + conf_key.urlsafe())
def post(self):
if self.request.get("NewMail"):
self.add_email()
elif self.request.get("DeleteAckCCEmails"):
self.delete_email("selectAckCCEmail", confoptions.AcknowledgementEmailCCAddresses)
elif self.request.get("DeleteAckBCCEmails"):
self.delete_email("selectAckBCCEmail", confoptions.AcknowledgementEmailBCCAddresses)
elif self.request.get("DeleteAcceptCCEmails"):
self.delete_email("selectAcceptCCEmail", confoptions.AcceptEmailCCAddress)
| 40.486842 | 109 | 0.653559 | 325 | 3,077 | 5.990769 | 0.32 | 0.053929 | 0.093477 | 0.032871 | 0.213148 | 0.148433 | 0.135593 | 0.063688 | 0.046225 | 0.046225 | 0 | 0.003705 | 0.210595 | 3,077 | 75 | 110 | 41.026667 | 0.797859 | 0.113422 | 0 | 0.08 | 0 | 0 | 0.129139 | 0.014349 | 0 | 0 | 0 | 0.013333 | 0 | 1 | 0.1 | false | 0 | 0.08 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0259bea6f07ec94194968114adbb7688e3c79035 | 236 | py | Python | basic/Pyshop/products/models.py | IsAlbertLiu/Python-basics | 49c0c93fb7d1abb70548854b69346eb5837ba00d | [
"MIT"
] | null | null | null | basic/Pyshop/products/models.py | IsAlbertLiu/Python-basics | 49c0c93fb7d1abb70548854b69346eb5837ba00d | [
"MIT"
] | null | null | null | basic/Pyshop/products/models.py | IsAlbertLiu/Python-basics | 49c0c93fb7d1abb70548854b69346eb5837ba00d | [
"MIT"
] | null | null | null | from django.db import models
# Create your models here.
class Product(models.Model):
name = models.CharField(max_length=255)
price = models.FloatField()
    stock = models.IntegerField()
    image_url = models.CharField(max_length=2083)
| 23.6 | 43 | 0.724576 | 30 | 236 | 5.633333 | 0.766667 | 0.177515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035897 | 0.173729 | 236 | 9 | 44 | 26.222222 | 0.830769 | 0.101695 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0259fbe373b86b3d2859b384b23af03bfb7c829a | 758 | py | Python | examples/delta_setitem/001_check_setitem.py | pkicsiny/xpart | cddf3eb65ffc198c22dd37204139ce3177a9bd96 | [
"MIT"
] | null | null | null | examples/delta_setitem/001_check_setitem.py | pkicsiny/xpart | cddf3eb65ffc198c22dd37204139ce3177a9bd96 | [
"MIT"
] | null | null | null | examples/delta_setitem/001_check_setitem.py | pkicsiny/xpart | cddf3eb65ffc198c22dd37204139ce3177a9bd96 | [
"MIT"
] | null | null | null | import numpy as np
import xpart as xp
import xobjects as xo
#context = xo.ContextPyopencl()
context = xo.ContextCpu()
ctx2np = context.nparray_from_context_array
particles = xp.Particles(_context=context, p0c=26e9, delta=[1,2,3])
assert ctx2np(particles.delta[2]) == 3
assert np.isclose(ctx2np(particles.rvv[2]), 1.00061, rtol=0, atol=1e-5)
assert np.isclose(ctx2np(particles.rpp[2]), 0.25, rtol=0, atol=1e-10)
assert np.isclose(ctx2np(particles.ptau[2]), 3.001464*particles._xobject.beta0[0],
rtol=0, atol=1e-6)
particles.delta[1] = particles.delta[2]
assert particles.delta[2] == particles.delta[1]
assert particles.ptau[2] == particles.ptau[1]
assert particles.rpp[2] == particles.rpp[1]
assert particles.rvv[2] == particles.rvv[1]
| 32.956522 | 82 | 0.726913 | 121 | 758 | 4.512397 | 0.322314 | 0.128205 | 0.082418 | 0.115385 | 0.164835 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081967 | 0.114776 | 758 | 22 | 83 | 34.454545 | 0.731744 | 0.039578 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | false | 0 | 0.1875 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
025e3d2d32267b02443190a02969375302ba67a9 | 978 | py | Python | ietf/review/migrations/0020_auto_20191115_2059.py | hassanakbar4/ietfdb | cabee059092ae776015410640226064331c293b7 | [
"BSD-3-Clause"
] | 25 | 2022-03-05T08:26:52.000Z | 2022-03-30T15:45:42.000Z | ietf/review/migrations/0020_auto_20191115_2059.py | hassanakbar4/ietfdb | cabee059092ae776015410640226064331c293b7 | [
"BSD-3-Clause"
] | 219 | 2022-03-04T17:29:12.000Z | 2022-03-31T21:16:14.000Z | ietf/review/migrations/0020_auto_20191115_2059.py | hassanakbar4/ietfdb | cabee059092ae776015410640226064331c293b7 | [
"BSD-3-Clause"
] | 22 | 2022-03-04T15:34:34.000Z | 2022-03-28T13:30:59.000Z | # Copyright The IETF Trust 2019-2020, All Rights Reserved
# -*- coding: utf-8 -*-
# Generated by Django 1.11.26 on 2019-11-15 20:59
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('review', '0019_auto_20191023_0829'),
]
operations = [
migrations.AddField(
model_name='reviewsecretarysettings',
name='days_to_show_in_reviewer_list',
field=models.IntegerField(blank=True, help_text='Maximum number of days to show in reviewer list for completed items.', null=True),
),
migrations.AddField(
model_name='reviewsecretarysettings',
name='max_items_to_show_in_reviewer_list',
field=models.IntegerField(blank=True, help_text='Maximum number of completed items to show for one reviewer in the reviewer list view, the list is also filtered by the days to show in reviews list setting.', null=True),
),
]
| 36.222222 | 231 | 0.677914 | 125 | 978 | 5.16 | 0.536 | 0.046512 | 0.049612 | 0.055814 | 0.443411 | 0.443411 | 0.232558 | 0.232558 | 0.232558 | 0.232558 | 0 | 0.055851 | 0.231084 | 978 | 26 | 232 | 37.615385 | 0.801862 | 0.127812 | 0 | 0.352941 | 1 | 0.058824 | 0.426384 | 0.155477 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
025e72e9d1d41e03246451d111dab4b24c0f7bd1 | 442 | py | Python | AlgoExpert/PalindromeCheck.py | akhil-ece/160Days | 545d1c70c79c6ef2341137a88e6a09f81f330ea4 | [
"MIT"
] | null | null | null | AlgoExpert/PalindromeCheck.py | akhil-ece/160Days | 545d1c70c79c6ef2341137a88e6a09f81f330ea4 | [
"MIT"
] | null | null | null | AlgoExpert/PalindromeCheck.py | akhil-ece/160Days | 545d1c70c79c6ef2341137a88e6a09f81f330ea4 | [
"MIT"
] | null | null | null | def isPalindrome(string, i = 0):
j = len(string) - 1 -i
return True if i > j else string[i] == string[j] and isPalindrome(string, i+1)
def isPalindrome(string):
return string == string[::-1]
def isPalindromeUsingIndexes(string):
lIx = 0
rIdx = len(string) -1
while lIx < rIdx:
if(string[lIx] != string [rIdx]):
return False
else:
lIx += 1
rIdx -=1
return True
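# Added quick check (not in the original file): all three variants agree on a
# palindrome and on a non-palindrome.
assert isPalindromeRecursive("racecar") and isPalindromeSlicing("racecar") and isPalindromeUsingIndexes("racecar")
assert not isPalindromeRecursive("hello") and not isPalindromeSlicing("hello") and not isPalindromeUsingIndexes("hello")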
| 24.555556 | 82 | 0.561086 | 58 | 442 | 4.275862 | 0.310345 | 0.217742 | 0.169355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026403 | 0.31448 | 442 | 17 | 83 | 26 | 0.792079 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.066667 | 0.466667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02623225e5d363b265ee6e56ba38be5191b44c1f | 435 | py | Python | scripts/issues/issue6.py | slamer59/awesome-panel | 91c30bd6d6859eadf9c65b1e143952f7e64d5290 | [
"Apache-2.0"
] | 179 | 2019-12-04T14:54:53.000Z | 2022-03-30T09:08:38.000Z | scripts/issues/issue6.py | slamer59/awesome-panel | 91c30bd6d6859eadf9c65b1e143952f7e64d5290 | [
"Apache-2.0"
] | 62 | 2019-12-14T16:51:28.000Z | 2022-03-19T18:47:12.000Z | scripts/issues/issue6.py | slamer59/awesome-panel | 91c30bd6d6859eadf9c65b1e143952f7e64d5290 | [
"Apache-2.0"
] | 35 | 2019-12-08T13:19:53.000Z | 2022-03-25T10:33:02.000Z | import panel as pn
def main():
text_error = """
This is not formatted correctly by Markdown due to the indentation!"""
text_ok = """
This is formatted correctly by Markdown!
"""
app = pn.Column(
pn.pane.Markdown(text_error),
pn.pane.HTML(
"<hr>",
sizing_mode="stretch_width",
),
pn.pane.Markdown(text_ok),
)
app.servable()
main()
| 19.772727 | 75 | 0.542529 | 51 | 435 | 4.509804 | 0.588235 | 0.078261 | 0.173913 | 0.243478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.337931 | 435 | 21 | 76 | 20.714286 | 0.798611 | 0 | 0 | 0 | 0 | 0 | 0.316425 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
026b557b15ada072d61283c89f10a088c8637df4 | 1,416 | py | Python | webapp/app.py | aleksandergurin/news | 9e7d3c35857600445cb6df42ba18d289dc0e37a9 | [
"BSD-3-Clause"
] | 3 | 2015-08-20T11:08:28.000Z | 2018-01-28T21:22:53.000Z | webapp/app.py | aleksandergurin/news | 9e7d3c35857600445cb6df42ba18d289dc0e37a9 | [
"BSD-3-Clause"
] | null | null | null | webapp/app.py | aleksandergurin/news | 9e7d3c35857600445cb6df42ba18d289dc0e37a9 | [
"BSD-3-Clause"
] | null | null | null |
from flask import Flask, render_template
from config import configs
from .extensions import login_manager, db
from .account import account
from .frontend import frontend
from webapp.session import RedisSessionInterface
def create_app(config_name):
app = Flask(__name__)
app.config.from_object(configs[config_name])
register_session_storage(app, configs[config_name])
register_blueprints(app)
init_extensions(app)
add_error_pages(app)
return app
def register_session_storage(app, conf):
if hasattr(conf, 'REDIS'):
from redis import Redis
host = conf.REDIS['host']
port = conf.REDIS['port']
db_num = conf.REDIS['db']
app.session_interface = RedisSessionInterface(Redis(host, port, db_num))
def register_blueprints(app):
app.register_blueprint(frontend)
app.register_blueprint(account)
def init_extensions(app):
login_manager.init_app(app)
db.init_app(app)
def add_error_pages(app):
@app.errorhandler(401)
def unauthorized(e):
return render_template('errors/401.html'), 401
@app.errorhandler(403)
def forbidden(e):
return render_template('errors/403.html'), 403
@app.errorhandler(404)
def not_found(e):
return render_template('errors/404.html'), 404
@app.errorhandler(500)
def internal_server_error(e):
return render_template('errors/500.html'), 500
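# Hypothetical usage of the factory above (added sketch; the 'development'
# config name is an assumption about the keys present in `configs`):
#     app = create_app('development')
#     app.run()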
| 24.413793 | 80 | 0.711864 | 184 | 1,416 | 5.271739 | 0.282609 | 0.072165 | 0.053608 | 0.086598 | 0.11134 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031304 | 0.187853 | 1,416 | 57 | 81 | 24.842105 | 0.812174 | 0 | 0 | 0 | 0 | 0 | 0.053004 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.225 | false | 0 | 0.175 | 0.1 | 0.525 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
027194484ee86822b39b3b119ff07d71c83e4daa | 895 | py | Python | setup.py | oleks/gigalixir-cli | d1b1c303e24be548ddc895165e34652c378f4347 | [
"MIT"
] | null | null | null | setup.py | oleks/gigalixir-cli | d1b1c303e24be548ddc895165e34652c378f4347 | [
"MIT"
] | null | null | null | setup.py | oleks/gigalixir-cli | d1b1c303e24be548ddc895165e34652c378f4347 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
setup(
name='gigalixir',
url='https://github.com/gigalixir/gigalixir-cli',
author='Jesse Shieh',
author_email='jesse@gigalixir.com',
version='1.1.10',
packages=find_packages(),
include_package_data=True,
install_requires=[
'click~=6.7',
'requests~=2.20.0',
'stripe~=1.51.0',
'rollbar~=0.13.11',
'pygments~=2.2.0',
],
entry_points='''
[console_scripts]
gigalixir=gigalixir:cli
''',
setup_requires=[
'pytest-runner',
],
tests_require=[
'pytest',
'HTTPretty',
'sure',
],
extras_require={
'dev': [
'Sphinx',
'sphinx_rtd_theme',
'sphinx-tabs',
],
'test': [
'pytest',
'HTTPretty',
'sure',
],
}
)
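# Typical flow for this package definition (added note, not part of setup.py):
#     pip install .        # installs the 'gigalixir' console script declared above
#     gigalixir --help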
| 20.813953 | 53 | 0.492737 | 85 | 895 | 5.035294 | 0.658824 | 0.056075 | 0.098131 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037543 | 0.345251 | 895 | 42 | 54 | 21.309524 | 0.692833 | 0 | 0 | 0.268293 | 0 | 0 | 0.348603 | 0.025698 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.02439 | 0 | 0.02439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02723743e00e16a13861c25c7c9d6a4bb4b3f93e | 254 | py | Python | runTest.py | Amedeo91/cushypost_integration | fc7ffc9daf535ed5bcfdee4933a7a57340a583b2 | [
"MIT"
] | 1 | 2021-10-06T06:23:40.000Z | 2021-10-06T06:23:40.000Z | runTest.py | Amedeo91/cushypost_integration | fc7ffc9daf535ed5bcfdee4933a7a57340a583b2 | [
"MIT"
] | null | null | null | runTest.py | Amedeo91/cushypost_integration | fc7ffc9daf535ed5bcfdee4933a7a57340a583b2 | [
"MIT"
] | null | null | null | import os
import unittest
dir_path = os.path.dirname(os.path.realpath(__file__))
suite = unittest.TestLoader().discover(dir_path, pattern='test_*.py')
result = unittest.TextTestRunner(verbosity=3).run(suite)
print(result)
assert result.wasSuccessful()
| 25.4 | 69 | 0.787402 | 34 | 254 | 5.676471 | 0.647059 | 0.072539 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004255 | 0.074803 | 254 | 9 | 70 | 28.222222 | 0.817021 | 0 | 0 | 0 | 0 | 0 | 0.035433 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0273c9fe7bf28f09a7dc46bd636570ab46c8a8fa | 611 | py | Python | FusionIIIT/applications/gymkhana/migrations/0007_auto_20200608_2210.py | sabhishekpratap5/sonarcubeTest2 | 9bd8105e457f6feb8c38fa94b335e54783fca99e | [
"bzip2-1.0.6"
] | 2 | 2020-01-24T16:34:54.000Z | 2020-08-01T05:09:24.000Z | FusionIIIT/applications/gymkhana/migrations/0007_auto_20200608_2210.py | sabhishekpratap5/sonarcubeTest2 | 9bd8105e457f6feb8c38fa94b335e54783fca99e | [
"bzip2-1.0.6"
] | 1 | 2021-05-05T09:50:22.000Z | 2021-05-05T09:50:22.000Z | FusionIIIT/applications/gymkhana/migrations/0007_auto_20200608_2210.py | sabhishekpratap5/sonarcubeTest2 | 9bd8105e457f6feb8c38fa94b335e54783fca99e | [
"bzip2-1.0.6"
] | 4 | 2020-01-16T17:00:08.000Z | 2020-06-30T15:58:32.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.27 on 2020-06-08 22:10
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('gymkhana', '0006_form_available'),
]
operations = [
migrations.RemoveField(
model_name='form_available',
name='id',
),
migrations.AddField(
model_name='form_available',
name='roll',
field=models.CharField(default=2016001, max_length=7, primary_key=True, serialize=False),
),
]
| 24.44 | 101 | 0.610475 | 65 | 611 | 5.538462 | 0.753846 | 0.108333 | 0.072222 | 0.122222 | 0.144444 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067568 | 0.273322 | 611 | 24 | 102 | 25.458333 | 0.743243 | 0.11293 | 0 | 0.235294 | 1 | 0 | 0.113173 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
02869a45220bc3cd768ae9f192b46417fa96c690 | 4,354 | py | Python | plugin_manager/accounts/models.py | ahharu/plugin-manager | 43d5e2c6e25ed8f50eedf7fd876fbc04f75d94bb | [
"MIT"
] | null | null | null | plugin_manager/accounts/models.py | ahharu/plugin-manager | 43d5e2c6e25ed8f50eedf7fd876fbc04f75d94bb | [
"MIT"
] | null | null | null | plugin_manager/accounts/models.py | ahharu/plugin-manager | 43d5e2c6e25ed8f50eedf7fd876fbc04f75d94bb | [
"MIT"
] | null | null | null | """
Custom user model for deployments.
"""
import urllib
import hashlib
import base64
import random
from authtools.models import AbstractEmailUser
from django.db import models
from django.utils.translation import ugettext_lazy as _
from django.db.models.signals import post_save
from .managers import DeployUserManager
from plugin_manager.hosts.models import Host
from plugin_manager.accounts.model_managers import DeployUserActiveManager
from plugin_manager.core.mixins.models import TrackingFields
class DeployUser(AbstractEmailUser, TrackingFields):
"""
Custom user class for deployments. Email as username using
django-custom-user.
"""
AMELIA = 'amelia.min.css'
CERULEAN = 'cerulean.min.css'
COSMO = 'cosmo.min.css'
CYBORG = 'cyborg.min.css'
DARKLY = 'darkly.min.css'
FLATLY = 'flatly.min.css'
JOURNAL = 'journal.min.css'
LUMEN = 'lumen.min.css'
READABLE = 'readable.min.css'
SIMPLEX = 'simplex.min.css'
SLATE = 'slate.min.css'
SPACELAB = 'spacelab.min.css'
SUPERHERO = 'superhero.min.css'
UNITED = 'united.min.css'
YETI = 'yeti.min.css'
TEMPLATES = (
(AMELIA, 'Amelia'),
(CERULEAN, 'Cerulean'),
(COSMO, 'Cosmo'),
(CYBORG, 'Cyborg'),
(DARKLY, 'Darkly'),
(FLATLY, 'Flatly'),
(JOURNAL, 'Journal'),
(LUMEN, 'Lumen'),
(READABLE, 'Readable'),
(SIMPLEX, 'Simplex'),
(SLATE, 'Slate'),
(SPACELAB, 'Spacelab'),
(SUPERHERO, 'Superhero'),
(UNITED, 'United'),
(YETI, 'Yeti'),
)
active_records = DeployUserActiveManager()
first_name = models.CharField(_('first name'), max_length=30, blank=False)
last_name = models.CharField(_('last name'), max_length=30,
blank=False)
template = models.CharField(max_length=255, blank=True,
choices=TEMPLATES, default=YETI)
objects = DeployUserManager()
def __unicode__(self):
return u'{} {}'.format(self.first_name, self.last_name)
@property
def role(self):
"""
Assumes the user is only assigned to one role and return it
"""
return self.group_strigify()
def _get_groups(self):
if not hasattr(self, '_cached_groups'):
self._cached_groups = list(self.groups.values_list("name",
flat=True))
return self._cached_groups
def user_is_admin(self):
if not self.pk:
return False
return "Admin" in self._get_groups()
def user_is_deployer(self):
if not self.pk:
return False
return "Deployer" in self._get_groups()
def user_is_historian(self):
if not self.pk:
return False
return "Historian" in self._get_groups()
def group_strigify(self):
"""
Converts this user's group(s) to a string and returns it.
"""
return "/".join(self._get_groups())
def gravatar(self, size=20):
"""
Construct a gravatar image address for the user
"""
default = "mm"
gravatar_url = "http://www.gravatar.com/avatar/" + hashlib.md5(
self.email.lower()).hexdigest() + "?"
gravatar_url += urllib.urlencode({'d': default, 's': str(size)})
return gravatar_url
class APIKey(models.Model):
apikey = models.CharField(max_length=255, primary_key=True)
deployuser = models.ForeignKey(DeployUser)
class Meta:
unique_together = (("apikey", "deployuser"),)
class PermissionHost(models.Model):
user = models.ForeignKey(DeployUser)
host = models.ForeignKey(Host)
def __unicode__(self):
return u'User: {} Host: {}'.format(self.user, self.host)
def generate_APIKey(sender, instance, created, **kwargs):
if created:
apikey = APIKey()
apikey.apikey = base64.b64encode(hashlib.sha256(
str(random.getrandbits(256))).digest(),
random.choice(
['rA', 'aZ', 'gQ', 'hH', 'hG',
'aR', 'DD'])).rstrip('==')
apikey.deployuser = instance
apikey.save()
post_save.connect(generate_APIKey, sender=DeployUser)
| 29.221477 | 78 | 0.595315 | 472 | 4,354 | 5.366525 | 0.341102 | 0.035531 | 0.014212 | 0.025266 | 0.121595 | 0.076589 | 0.05685 | 0.0379 | 0 | 0 | 0 | 0.00801 | 0.283188 | 4,354 | 148 | 79 | 29.418919 | 0.803589 | 0.064079 | 0 | 0.07767 | 0 | 0 | 0.11611 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097087 | false | 0 | 0.116505 | 0.019417 | 0.61165 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
028a79224d1b3b0d7d2cc26a3b2408f89ff5f8c5 | 7,252 | py | Python | lstm_toyexample.py | dsriaditya999/LSTM-Toy-Example | 850f7923122b547c1fd25b3b1dc739e8c5db2570 | [
"MIT"
] | null | null | null | lstm_toyexample.py | dsriaditya999/LSTM-Toy-Example | 850f7923122b547c1fd25b3b1dc739e8c5db2570 | [
"MIT"
] | null | null | null | lstm_toyexample.py | dsriaditya999/LSTM-Toy-Example | 850f7923122b547c1fd25b3b1dc739e8c5db2570 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Importing Libraries
"""
# Commented out IPython magic to ensure Python compatibility.
import torch
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import torch.nn as nn
from tqdm import tqdm_notebook
from sklearn.preprocessing import MinMaxScaler
# %matplotlib inline
torch.manual_seed(0)
"""# Loading Dataset"""
sns.get_dataset_names()
flight_data = sns.load_dataset("flights")
flight_data.head()
"""# Preprocessing"""
# Changing the plot size
figsize = plt.rcParams["figure.figsize"]
figsize[0] = 15
figsize[1] = 5
plt.rcParams["figure.figsize"] = figsize
# Plotting the data
plt.title("Time Series Representation of Data")
plt.xlabel("Months")
plt.ylabel("Passengers")
plt.grid(True)
plt.autoscale(axis = "x",tight=True)
plt.plot(flight_data["passengers"])
# Note: this is univariate time series data, consisting of a single variable, passengers
data = flight_data["passengers"].values.astype(float)
print(data)
print(len(data))
# Train-Test Split
# Keep the last 12 months of data as evaluation data for testing the model's behaviour
train_window = 12
train_data = data[:-train_window]
test_data = data[-train_window:]
print(len(train_data))
print(len(test_data))
# Normalizing the train-data
scaler = MinMaxScaler(feature_range=(-1,1))
train_data_normalized = scaler.fit_transform(train_data.reshape(-1,1))
print(train_data_normalized[:10])
# Converting to Torch Tensor
train_data_normalized = torch.FloatTensor(train_data_normalized).view(-1)
print(train_data_normalized)
# The final step is creating sequences of length 12 (12 months of data) from the train data;
# the label for each sequence is the passenger count for the (12+1)th month
def create_in_sequences(input_data,tw):
in_seq = []
L = len(input_data)
for i in range(L-tw):
train_seq = input_data[i:i+tw]
train_label = input_data[i+tw:i+tw+1]
in_seq.append((train_seq,train_label))
return in_seq
# Therefore, we get 120 train sequences along with the label value
train_in_seq = create_in_sequences(train_data_normalized,train_window)
print(len(train_in_seq))
print(train_in_seq[:5])
"""# The Model
Please note that the model considered here is:
1. LSTM layer with a univariate input sequence of length 12 and LSTM's previous hidden cell consisting of previous hidden state and previous cell state of length 100 and also , the size of LSTM's output is 100
2. The second layer is a Linear layer of 100 inputs from the LSTM's output and a single output size
"""
class LSTM(nn.Module): #LSTM Class inheriting the inbuilt nn.Module class for neural networks
def __init__(self,input_size = 1,hidden_layer_size = 100, output_size = 1):
super().__init__() #Calls the init function of the nn.Module superclass for being able to access its features
self.hidden_layer_size = hidden_layer_size # Defines the size of h(t-1) [Previous hidden output] and c(t-1) [previous cell state]
        self.lstm = nn.LSTM(input_size, hidden_layer_size, dropout=0.45) # defines the LSTM with univariate input/output; note: with a single layer, PyTorch's dropout argument has no effect (dropout only applies between stacked layers)
self.linear = nn.Linear(hidden_layer_size,output_size) # Linear layer which returns the weighted sum of 100 outputs from the LSTM
self.hidden_cell = (torch.ones(1,1,self.hidden_layer_size), # This is the previous hidden state
torch.ones(1,1,self.hidden_layer_size)) # This is the previous cell state
def forward(self,input_seq):
        lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq),1,-1),self.hidden_cell) # lstm_out has shape (12, 1, 100): the 100 hidden-unit outputs for each of the 12 steps in the sequence
predictions = self.linear(lstm_out.view(len(input_seq),-1)) # Reshaped to make it compatible as an input to the linear layer
return predictions[-1] # The last element contains the prediction
model = LSTM()
print(model)
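# Added sanity check (not part of the original walkthrough): a 12-step sequence
# enters the LSTM as a (seq_len, batch=1, features=1) tensor and the model
# returns a single scalar prediction.
with torch.no_grad():
    probe_out = model(torch.randn(12))
assert probe_out.shape == torch.Size([1])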
"""# Loss Function and Learning Algorithm (Optimizer)
Please note that for this simple model ,
* Loss Function considered is *Mean Squared Error* and
* Optimization Function used is Stochastic Version of **Adam** *Optimizer*.
"""
loss_fn = nn.MSELoss() # Mean Squared Error Loss Function
optimizer = torch.optim.Adam(model.parameters(),lr = 0.0002) # Adam Learning Algorithm
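# Added worked check: MSELoss is the mean of squared differences, e.g.
# ((1-0)**2 + (3-1)**2) / 2 = 2.5 for the pair of tensors below.
assert loss_fn(torch.tensor([1.0, 3.0]), torch.tensor([0.0, 1.0])).item() == 2.5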
"""# Training"""
epochs = 450
loss_plot = []
for epoch in tqdm_notebook(range(epochs), total=epochs, unit="epoch"):
for seq,label in train_in_seq:
optimizer.zero_grad() # makes the gradients zero for each new sequence
model.hidden_cell = (torch.zeros(1,1,model.hidden_layer_size), # Initialising the previous hidden state and cell state for each new sequence
torch.zeros(1,1,model.hidden_layer_size))
y_pred = model(seq) # Automatically calls the forward pass
loss = loss_fn(y_pred,label) # Determining the loss
loss.backward() # Backpropagation of loss and gradients computation
optimizer.step() # Weights and Bias Updation
loss_plot.append(loss.item()) # Some Bookkeeping
plt.plot(loss_plot,'r-')
plt.xlabel("Epochs")
plt.ylabel("Loss : MSE")
plt.show()
print(loss_plot[-1])
"""# Making Prediction
Please note that for comparison purpose we use the training data's values and predicted data values to predict the number of passengers for the test data months and then compare them
"""
fut_pred = 12
test_inputs = train_data_normalized[-train_window: ].tolist()
print(test_inputs)
print(len(test_inputs))
model.eval() # Makes the model ready for evaluation
for i in range(fut_pred):
seq = torch.FloatTensor(test_inputs[-train_window: ]) # Converting to a tensor
    with torch.no_grad(): # stops building the computation graph; no gradients are needed at inference
model.hidden_cell = (torch.zeros(1,1,model.hidden_layer_size),
torch.zeros(1,1,model.hidden_layer_size))
test_inputs.append(model(seq).item())
predicted_outputs_normalized = []
predicted_outputs_normalized = test_inputs[-train_window: ]
print(predicted_outputs_normalized)
print(len(predicted_outputs_normalized))
"""# Postprocessing"""
predicted_outputs = scaler.inverse_transform(np.array(predicted_outputs_normalized).reshape(-1,1))
print(predicted_outputs)
x = np.arange(132, 144, 1)
print(x)
"""# Final Output"""
figsize = plt.rcParams["figure.figsize"]
figsize[0] = 15
figsize[1] = 5
plt.rcParams["figure.figsize"] = figsize
plt.title('Month vs Passenger')
plt.ylabel('Total Passengers')
plt.grid(True)
plt.autoscale(axis='x', tight=True)
plt.plot(flight_data['passengers'])
plt.plot(x,predicted_outputs)
plt.show()
figsize = plt.rcParams["figure.figsize"]
figsize[0] = 15
figsize[1] = 5
plt.rcParams["figure.figsize"] = figsize
plt.title('Month vs Passenger')
plt.ylabel('Total Passengers')
plt.grid(True)
plt.autoscale(axis='x', tight=True)
plt.plot(flight_data['passengers'][-train_window-5: ])
plt.plot(x,predicted_outputs)
plt.show()
"""**Please observe that the model is able to get the trend of the passengers but it can be further fine-tuned by adding appropriate regularization methods**"""
| 34.046948 | 212 | 0.734694 | 1,088 | 7,252 | 4.765625 | 0.264706 | 0.020829 | 0.031823 | 0.027772 | 0.199421 | 0.164899 | 0.164899 | 0.152941 | 0.140598 | 0.140598 | 0 | 0.018936 | 0.162576 | 7,252 | 212 | 213 | 34.207547 | 0.834843 | 0.280612 | 0 | 0.27027 | 0 | 0 | 0.066943 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0.081081 | 0 | 0 | 0.054054 | 0.144144 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
65f47a4c6cbf9c3cbfef8996d91a66023d1ce4f0 | 1,475 | py | Python | leetcode/minimumAreaRectangle.py | federicoemartinez/problem_solving | d0352f76bc21ed67d6851a159a00f70a892934b9 | [
"MIT"
] | null | null | null | leetcode/minimumAreaRectangle.py | federicoemartinez/problem_solving | d0352f76bc21ed67d6851a159a00f70a892934b9 | [
"MIT"
] | null | null | null | leetcode/minimumAreaRectangle.py | federicoemartinez/problem_solving | d0352f76bc21ed67d6851a159a00f70a892934b9 | [
"MIT"
] | null | null | null | # https://leetcode.com/problems/minimum-area-rectangle/description/
"""
Given a set of points in the xy-plane, determine the minimum area of a rectangle formed from these points, with sides parallel to the x and y axes.
If there isn't any rectangle, return 0.
Example 1:
Input: [[1,1],[1,3],[3,1],[3,3],[2,2]]
Output: 4
Example 2:
Input: [[1,1],[1,3],[3,1],[3,3],[4,1],[4,3]]
Output: 2
Note:
1 <= points.length <= 500
0 <= points[i][0] <= 40000
0 <= points[i][1] <= 40000
All points are distinct.
"""
from collections import defaultdict
class Solution:
def minAreaRect(self, points):
"""
:type points: List[List[int]]
:rtype: int
"""
same_x = defaultdict(lambda:set())
same_y = defaultdict(lambda:set())
for (x,y) in points:
same_x[x].add((x,y))
same_y[y].add((x,y))
best_area = None
for (x,y) in points:
for same_x_point in same_x[x]:
if same_x_point[1] < y:continue
for same_y_point in same_y[y]:
if same_y_point[0] < x: continue
if (same_y_point[0], same_x_point[1]) in same_y[same_x_point[1]]:
area = abs(same_x_point[1] - y) * abs(same_y_point[0] - x)
if area > 0 and (best_area is None or best_area > area): best_area = area
return 0 if best_area is None else best_area
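# Added driver (not part of the original submission): reproduce the two
# examples from the problem statement above.
if __name__ == '__main__':
    sol = Solution()
    assert sol.minAreaRect([[1, 1], [1, 3], [3, 1], [3, 3], [2, 2]]) == 4
    assert sol.minAreaRect([[1, 1], [1, 3], [3, 1], [3, 3], [4, 1], [4, 3]]) == 2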
| 27.314815 | 147 | 0.553898 | 232 | 1,475 | 3.387931 | 0.323276 | 0.050891 | 0.063613 | 0.05598 | 0.14631 | 0.033079 | 0.033079 | 0.033079 | 0.033079 | 0 | 0 | 0.05315 | 0.311186 | 1,475 | 53 | 148 | 27.830189 | 0.720472 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.055556 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a0258dc0630fde008fae59e8ca2f2322000aca2 | 732 | py | Python | UnitTests/FullAtomModel/PDB2Coords/test.py | dendisuhubdy/TorchProteinLibrary | 89f0f6c311658b9313484cd92804682a251b1b97 | [
"MIT"
] | null | null | null | UnitTests/FullAtomModel/PDB2Coords/test.py | dendisuhubdy/TorchProteinLibrary | 89f0f6c311658b9313484cd92804682a251b1b97 | [
"MIT"
] | null | null | null | UnitTests/FullAtomModel/PDB2Coords/test.py | dendisuhubdy/TorchProteinLibrary | 89f0f6c311658b9313484cd92804682a251b1b97 | [
"MIT"
] | null | null | null | import sys
import os
import matplotlib.pylab as plt
import numpy as np
import mpl_toolkits.mplot3d.axes3d as p3
import seaborn as sea
import torch
from TorchProteinLibrary import FullAtomModel
if __name__=='__main__':
# p2c = FullAtomModel.PDB2Coords.PDB2CoordsBiopython()
p2c = FullAtomModel.PDB2CoordsUnordered()
coords, res, anames, num_atoms = p2c(["f4TQ1_B.pdb"])
print (coords.size())
print (res.size())
print (anames.size())
print (num_atoms)
coords = coords.numpy()
coords = coords.reshape(int(coords.shape[1]/3), 3)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x = coords[:,0]
y = coords[:,1]
z = coords[:,2]
ax.scatter(x,y,z)
plt.show()
| 24.4 | 58 | 0.674863 | 99 | 732 | 4.858586 | 0.575758 | 0.056133 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035593 | 0.193989 | 732 | 29 | 59 | 25.241379 | 0.779661 | 0.071038 | 0 | 0 | 0 | 0 | 0.030973 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
5a054c6f2f48cad9dc180b59f6e0034f5b144f73 | 331 | py | Python | codes/day06/03.py | Youngfellows/HPyBaseCode | 94d11872795d85b8c4387b650e82edcd20da0667 | [
"Apache-2.0"
] | null | null | null | codes/day06/03.py | Youngfellows/HPyBaseCode | 94d11872795d85b8c4387b650e82edcd20da0667 | [
"Apache-2.0"
] | null | null | null | codes/day06/03.py | Youngfellows/HPyBaseCode | 94d11872795d85b8c4387b650e82edcd20da0667 | [
"Apache-2.0"
] | null | null | null | class Dog:
def __init__(self, newColor):
self.color = newColor
def bark(self):
print("---旺旺叫----")
def printColor(self):
print("颜色为:%s"%self.color)
def test(AAA):
AAA.printColor()
wangcai = Dog("白")
#wangcai.printColor()
xiaoqiang = Dog("黑")
#xiaoqiang.printColor()
test(wangcai)
| 15.045455 | 34 | 0.592145 | 39 | 331 | 4.923077 | 0.487179 | 0.09375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.232628 | 331 | 21 | 35 | 15.761905 | 0.755906 | 0.126888 | 0 | 0 | 0 | 0 | 0.062718 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.416667 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a0e75196f538319c5078d09117599bf367b0df0 | 1,208 | py | Python | app/api/utlis/models.py | jurekpawlikowski/flask-boilerplate | 15b7e6c4e0241a7d59dbca543e023a22b17b9903 | [
"MIT"
] | 3 | 2017-08-05T08:57:37.000Z | 2021-03-03T09:09:03.000Z | app/api/utlis/models.py | jurekpawlikowski/flask-boilerplate | 15b7e6c4e0241a7d59dbca543e023a22b17b9903 | [
"MIT"
] | null | null | null | app/api/utlis/models.py | jurekpawlikowski/flask-boilerplate | 15b7e6c4e0241a7d59dbca543e023a22b17b9903 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from datetime import datetime
from sqlalchemy.event import listen
from app.factory import db
class BaseModel(db.Model):
"""
Base model with `created_at` and `updated_at` fields
"""
__abstract__ = True
fields_to_serialize = []
created_at = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
updated_at = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
def save(self):
db.session.add(self)
db.session.commit()
def delete(self):
db.session.delete(self)
db.session.commit()
@classmethod
def all(cls):
return cls.query.all()
def serialize(self, fields=None):
serialized = {}
for field in fields or self.fields_to_serialize:
field_value = getattr(self, field)
if isinstance(field_value, datetime):
field_value = field_value.isoformat()
serialized[field] = field_value
return serialized
def set_updated_at(mapper, connection, target):
    """
    Set updated_at before an UPDATE is flushed; SQLAlchemy 'before_update'
    mapper events receive (mapper, connection, target)
    """
    target.updated_at = datetime.utcnow()
listen(BaseModel, "before_update", set_updated_at)
| 23.686275 | 80 | 0.647351 | 148 | 1,208 | 5.121622 | 0.432432 | 0.07124 | 0.068602 | 0.031662 | 0.14248 | 0.14248 | 0.14248 | 0.14248 | 0.14248 | 0.14248 | 0 | 0.001095 | 0.244205 | 1,208 | 50 | 81 | 24.16 | 0.829135 | 0.096026 | 0 | 0.071429 | 0 | 0 | 0.012264 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.178571 | false | 0 | 0.107143 | 0.035714 | 0.535714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
5a0fd6978a62253af90bdbf0d79e056e97e5921d | 1,391 | py | Python | source/tweaks/cms_plugins.py | mverleg/svsite | 5c9dbcacf81020cf0c1960e337bdd33113acd597 | [
"BSD-3-Clause"
] | null | null | null | source/tweaks/cms_plugins.py | mverleg/svsite | 5c9dbcacf81020cf0c1960e337bdd33113acd597 | [
"BSD-3-Clause"
] | 142 | 2015-06-05T07:53:09.000Z | 2020-03-31T18:37:07.000Z | source/tweaks/cms_plugins.py | mdilli/svsite | 5c9dbcacf81020cf0c1960e337bdd33113acd597 | [
"BSD-3-Clause"
] | null | null | null |
"""
Raw HTML widget.
Adapted/copied from https://github.com/makukha/cmsplugin-raw-html
"""
from cms.plugin_base import CMSPluginBase
from cms.plugin_pool import plugin_pool
from django.template import Template
from django.utils.safestring import mark_safe
from .models import RawHtmlModel, CMSMember
from django.utils.translation import ugettext as _
class RawHtmlPlugin(CMSPluginBase):
model = RawHtmlModel
name = 'HTML'
render_template = 'cms/raw_html_widget.html'
text_enabled = True
def render(self, context, instance, placeholder):
context.update({
'body': mark_safe(Template(instance.body).render(context)),
'object': instance,
'placeholder': placeholder
})
return context
plugin_pool.register_plugin(RawHtmlPlugin)
class MemberPlugin(CMSPluginBase):
"""
This needs to be defined in `tweaks` because it has to be after `cms`, whereas
`AUTH_USER_MODEL` needs to be loaded before `cms`.
"""
model = CMSMember # model where plugin data are saved
module = _('Member')
name = _('Member info') # name of the plugin in the interface
render_template = 'members_widget.html'
def render(self, context, instance, placeholder):
context.update(dict(
inst=instance,
title=instance.title,
description=instance.description,
users=instance.members.all(),
))
return context
plugin_pool.register_plugin(MemberPlugin) # register the plugin
| 24.839286 | 79 | 0.757009 | 177 | 1,391 | 5.836158 | 0.457627 | 0.038722 | 0.025169 | 0.038722 | 0.172314 | 0.172314 | 0.100678 | 0.100678 | 0 | 0 | 0 | 0 | 0.145219 | 1,391 | 55 | 80 | 25.290909 | 0.868797 | 0.217829 | 0 | 0.121212 | 0 | 0 | 0.079812 | 0.022535 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.181818 | 0 | 0.606061 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
5a1046d61cc7585c8ffb76dc65a2afa1c14d62a9 | 3,296 | py | Python | tests/test_trackings.py | EugeneLiu/aftership-sdk-python | 37184272869452734d616b31295a4ac883051f5d | [
"MIT"
] | null | null | null | tests/test_trackings.py | EugeneLiu/aftership-sdk-python | 37184272869452734d616b31295a4ac883051f5d | [
"MIT"
] | null | null | null | tests/test_trackings.py | EugeneLiu/aftership-sdk-python | 37184272869452734d616b31295a4ac883051f5d | [
"MIT"
] | null | null | null | from unittest import TestCase, mock
import pytest
from requests import Response
import aftership
class TrackingTestCase(TestCase):
def setUp(self):
self.slug = "4px"
self.tracking_number = "HH19260817"
self.tracking_id = "k5lh7dy7vvqeck71p5loe011"
@pytest.mark.vcr()
def test_create_tracking(self):
response = aftership.tracking.create_tracking(
tracking={"slug": self.slug, "tracking_number": self.tracking_number}
)
@pytest.mark.vcr()
def test_get_tracking(self):
response = aftership.tracking.get_tracking(slug=self.slug, tracking_number=self.tracking_number)
# @pytest.mark.vcr()
# def test_delete_tracking(self):
# response = aftership.tracking.delete_tracking(slug='china-ems',tracking_number='1234567890')
@pytest.mark.vcr()
def test_list_trackings(self):
response = aftership.tracking.list_trackings(slug=self.slug, limit=1)
@pytest.mark.vcr()
def test_update_tracking(self):
response = aftership.tracking.update_tracking(tracking_id=self.tracking_id, tracking={"title": "new title"})
@pytest.mark.vcr()
def test_retrack(self):
response = aftership.tracking.retrack(tracking_id=self.tracking_id)
@pytest.mark.vcr()
def test_get_last_checkpoint(self):
response = aftership.tracking.get_last_checkpoint(tracking_id=self.tracking_id)
class TrackingWithAdditionalFieldsTestCase(TestCase):
def setUp(self):
self.tracking_id = "wuuxyb7ohjx55kmpt5r7y017"
self.slug = "postnl-3s"
self.tracking_number = "3SKAAG5995399"
self.destination_country = "ESP"
self.postal_code = "46970"
@pytest.mark.vcr()
def test_create_tracking(self):
response = aftership.tracking.create_tracking(
tracking={
"slug": self.slug,
"tracking_number": self.tracking_number,
"tracking_destination_country": self.destination_country,
"tracking_postal_code": self.postal_code,
}
)
@pytest.mark.vcr()
def test_get_tracking(self):
response = aftership.tracking.get_tracking(
slug=self.slug,
tracking_number=self.tracking_number,
tracking_destination_country=self.destination_country,
tracking_postal_code=self.postal_code,
)
@pytest.mark.vcr()
def test_get_tracking_by_id(self):
response = aftership.tracking.get_tracking(tracking_id=self.tracking_id)
@pytest.mark.vcr()
def test_update_tracking(self):
response = aftership.tracking.update_tracking(tracking_id=self.tracking_id, tracking={"title": "new title"})
@pytest.mark.vcr()
def test_get_last_checkpoint(self):
response = aftership.tracking.get_last_checkpoint(tracking_id=self.tracking_id)
@pytest.mark.vcr()
def test_get_tracking_with_internal_error(self):
with self.assertRaises(aftership.exception.InternalError):
response = aftership.tracking.get_tracking(
slug=self.slug,
tracking_number=self.tracking_number,
tracking_destination_country=self.destination_country,
tracking_postal_code=self.postal_code,
)
| 34.694737 | 116 | 0.681129 | 365 | 3,296 | 5.89589 | 0.158904 | 0.083643 | 0.078532 | 0.096654 | 0.725372 | 0.67658 | 0.657993 | 0.654275 | 0.654275 | 0.654275 | 0 | 0.019806 | 0.21875 | 3,296 | 94 | 117 | 35.06383 | 0.815922 | 0.0446 | 0 | 0.527778 | 0 | 0 | 0.065183 | 0.024165 | 0 | 0 | 0 | 0 | 0.013889 | 1 | 0.194444 | false | 0 | 0.055556 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a118345944c61aa57f158d2bab247572f49c59f | 353 | py | Python | images/auth-service/settings.d/00-settings.py | ESGF/esgf-docker | 95f5b76c85be65920810795484786a13865f4ac1 | [
"Apache-2.0"
] | 3 | 2018-04-16T00:58:30.000Z | 2020-10-07T17:58:02.000Z | images/auth-service/settings.d/00-settings.py | ESGF/esgf-docker | 95f5b76c85be65920810795484786a13865f4ac1 | [
"Apache-2.0"
] | 115 | 2017-01-10T20:12:42.000Z | 2021-03-03T16:11:48.000Z | images/auth-service/settings.d/00-settings.py | ESGF/esgf-docker | 95f5b76c85be65920810795484786a13865f4ac1 | [
"Apache-2.0"
] | 21 | 2017-08-28T15:20:24.000Z | 2021-02-09T00:08:49.000Z | # Application definition
INSTALLED_APPS = [
'django.contrib.staticfiles',
'django.contrib.sessions',
'authenticate',
]
ROOT_URLCONF = 'auth_service.urls'
WSGI_APPLICATION = 'auth_service.wsgi.application'
# Use a non database session engine
SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
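# (Added note) The signed_cookies backend stores the whole session client-side,
# signed with SECRET_KEY, so no server-side session table is required.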
SESSION_COOKIE_SECURE = False
| 23.533333 | 66 | 0.776204 | 40 | 353 | 6.625 | 0.675 | 0.14717 | 0.158491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124646 | 353 | 14 | 67 | 25.214286 | 0.857605 | 0.15864 | 0 | 0 | 0 | 0 | 0.52381 | 0.42517 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a13150c841953f716f3e772e7c48bc269734ed8 | 3,701 | py | Python | rackspace/heat_store/catalog/tests.py | rohithkumar-rackspace/rcbops | fb690bc528499bbf9aebba3ab0cce0b4dffd9e35 | [
"Apache-2.0"
] | null | null | null | rackspace/heat_store/catalog/tests.py | rohithkumar-rackspace/rcbops | fb690bc528499bbf9aebba3ab0cce0b4dffd9e35 | [
"Apache-2.0"
] | null | null | null | rackspace/heat_store/catalog/tests.py | rohithkumar-rackspace/rcbops | fb690bc528499bbf9aebba3ab0cce0b4dffd9e35 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
import unittest
import mox
import six.moves.urllib.request as urlrequest
from six import StringIO
from solution import Solution
class TestSolution(unittest.TestCase):
def setUp(self):
self.mox = mox.Mox()
pass
def tearDown(self):
self.mox.VerifyAll()
self.mox.UnsetStubs()
def test_create_solution(self):
s = Solution('test_data/example/info.yaml')
self.assertEqual(len(s.id), 32)
self.assertEqual(s.title, 'the_title')
self.assertEqual(s.logo, 'test_data/example/the_logo.png')
self.assertEqual(s.release, '0.1')
self.assertEqual(s.short_description, '<p>The short description</p>')
self.assertIn('This is the <em>long</em> description',
s.long_description)
self.assertIn('src="test_data/example/diagram.png"', s.long_description)
self.assertIn('alt="here is a diagram"', s.long_description)
self.assertIn('<strong>architecture</strong>', s.architecture)
self.assertIn('Design spec #1', s.design_specs)
self.assertIn('Design spec #2', s.design_specs)
self.assertEqual(s.heat_template, 'example.yaml')
self.assertEqual(s.env_file, 'env.yaml')
def test_incomplete_solution(self):
lines = open('test_data/example/info.yaml').readlines()
# ensure the solution isn't imported if any of the items
# below are missing
missing_list = ['name', 'short_desc', 'long_desc', 'heat_template',
'release']
self.mox.StubOutWithMock(urlrequest, 'urlopen')
for missing in missing_list:
yaml = [line for line in lines if missing not in line]
urlrequest.urlopen(
'http://example.com/no-{0}.yaml'.format(missing)) \
.AndReturn(StringIO('\n'.join(yaml)))
self.mox.ReplayAll()
for missing in missing_list:
with self.assertRaises(KeyError):
Solution('http://example.com/no-{0}.yaml'.format(missing))
def test_parameter_types(self):
s = Solution('test_data/example/info.yaml')
params = s.get_parameter_types(None)
self.assertEqual(len(params), 5)
self.assertEqual(params[0]['name'], 'floating-network-id')
self.assertEqual(params[0]['type'], 'comma_delimited_list')
self.assertIn(params[0]['default'],
params[0]['constraints'][0]['allowed_values'])
self.assertIn('_mapping', params[0])
self.assertEqual(params[0]['_mapping'], {'first_network': '11',
'second_network': '22',
'third_network': '33'})
self.assertEqual(s.map_parameter(params, 'floating-network-id',
'second_network'), '22')
self.assertEqual(params[1]['name'], 'flavor')
self.assertEqual(params[1]['type'], 'comma_delimited_list')
self.assertIn(params[1]['default'], 'm1.small')
self.assertEqual(params[2]['name'], 'image')
self.assertEqual(params[2]['type'], 'comma_delimited_list')
self.assertIn(params[2]['default'],
params[2]['constraints'][0]['allowed_values'])
self.assertEqual(params[3]['name'], 'image-count')
self.assertEqual(params[3]['type'], 'number')
self.assertEqual(params[4]['name'], 'keyname')
self.assertEqual(params[4]['type'], 'comma_delimited_list')
self.assertIn(params[4]['default'],
params[4]['constraints'][0]['allowed_values'])
if __name__ == '__main__':
unittest.main()
| 39.795699 | 80 | 0.599838 | 428 | 3,701 | 5.058411 | 0.313084 | 0.138568 | 0.106697 | 0.040647 | 0.236028 | 0.138568 | 0.138568 | 0.064665 | 0 | 0 | 0 | 0.014446 | 0.251824 | 3,701 | 92 | 81 | 40.228261 | 0.767425 | 0.025128 | 0 | 0.056338 | 0 | 0 | 0.228849 | 0.048544 | 0 | 0 | 0 | 0 | 0.450704 | 1 | 0.070423 | false | 0.014085 | 0.070423 | 0 | 0.15493 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a18641e63b3fcad6914df89d4ba92c48cbaed17 | 951 | py | Python | source/odp/migrations/0003_auto_20201121_0919.py | kssvrk/BhoonidhiODP | e222087629250ea4ccd1ae8d8903d9ff400c13b4 | [
"BSD-3-Clause"
] | null | null | null | source/odp/migrations/0003_auto_20201121_0919.py | kssvrk/BhoonidhiODP | e222087629250ea4ccd1ae8d8903d9ff400c13b4 | [
"BSD-3-Clause"
] | null | null | null | source/odp/migrations/0003_auto_20201121_0919.py | kssvrk/BhoonidhiODP | e222087629250ea4ccd1ae8d8903d9ff400c13b4 | [
"BSD-3-Clause"
] | null | null | null | # Generated by Django 3.1.2 on 2020-11-21 09:19
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('odp', '0002_auto_20201121_0659'),
]
operations = [
migrations.CreateModel(
name='ProcessGroup',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('group_name', models.CharField(max_length=100, unique=True)),
('group_description', models.CharField(blank=True, max_length=5000, null=True)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
],
),
migrations.AddField(
model_name='processcatalogue',
name='group_id',
field=models.ManyToManyField(to='odp.ProcessGroup'),
),
]
| 32.793103 | 114 | 0.59306 | 97 | 951 | 5.639175 | 0.618557 | 0.054845 | 0.076782 | 0.091408 | 0.102377 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055313 | 0.277603 | 951 | 28 | 115 | 33.964286 | 0.740902 | 0.047319 | 0 | 0.090909 | 1 | 0 | 0.142699 | 0.025442 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a1bae372e9a9d499e2d0814cd4b789a6fdb51ad | 2,072 | py | Python | test/test_thirty.py | jakubtuchol/dailycodingproblem | 9f0f3193f1746e949e16febace5aa5622dc5d4dc | [
"MIT"
] | 1 | 2020-10-13T20:54:37.000Z | 2020-10-13T20:54:37.000Z | test/test_thirty.py | jakubtuchol/dailycodingproblem | 9f0f3193f1746e949e16febace5aa5622dc5d4dc | [
"MIT"
] | null | null | null | test/test_thirty.py | jakubtuchol/dailycodingproblem | 9f0f3193f1746e949e16febace5aa5622dc5d4dc | [
"MIT"
] | null | null | null | from src.thirty import edit_distance
from src.thirty import find_second_largest_node
from src.thirty import make_palindrome
from src.thirty import powerset
from src.data_structures import BinaryNode
class TestEditDistance:
"""
Problem #31
"""
def test_provided_example(self):
assert 3 == edit_distance('kitten', 'sitting')
def test_empty_examples(self):
assert 7 == edit_distance('', 'sitting')
assert 6 == edit_distance('kitten', '')
def test_equal_examples(self):
assert 0 == edit_distance('', '')
assert 0 == edit_distance('kitten', 'kitten')
def test_none_in_common(self):
assert 3 == edit_distance('abc', 'xyz')
class TestMakePalindrome:
"""
Problem #34
"""
def test_provided_example(self):
assert 'ecarace' == make_palindrome('race')
def test_another_example(self):
assert 'elgoogle' == make_palindrome('google')
class TestFindSecondLargestNode:
"""
Problem #36
"""
def test_on_left(self):
root = BinaryNode(5)
root.left = BinaryNode(3)
assert 3 == find_second_largest_node(root).val
def test_on_right(self):
root = BinaryNode(2)
root.right = BinaryNode(4)
assert 2 == find_second_largest_node(root).val
def test_balanced(self):
root = BinaryNode(2)
root.left = BinaryNode(1)
root.right = BinaryNode(3)
assert 2 == find_second_largest_node(root).val
def test_less_than_two_elements(self):
root = BinaryNode(2)
assert not find_second_largest_node(root)
class TestPowerset:
"""
Problem #37
"""
def test_three(self):
expected_set = [
[],
[1], [2], [3],
[1, 2], [1, 3], [2, 3],
[1, 2, 3],
]
calculated_set = powerset([1, 2, 3])
assert len(calculated_set) == len(expected_set)
for elt in calculated_set:
assert elt in expected_set
def test_empty(self):
assert [[]] == powerset([])
| 22.769231 | 55 | 0.605695 | 247 | 2,072 | 4.854251 | 0.295547 | 0.070058 | 0.070892 | 0.087573 | 0.241868 | 0.152627 | 0.099249 | 0.099249 | 0.070058 | 0.070058 | 0 | 0.026684 | 0.276544 | 2,072 | 90 | 56 | 23.022222 | 0.773182 | 0.022683 | 0 | 0.134615 | 0 | 0 | 0.035132 | 0 | 0 | 0 | 0 | 0 | 0.288462 | 1 | 0.230769 | false | 0 | 0.096154 | 0 | 0.403846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a1be255168c22e03a6a98004add6394315035a9 | 3,947 | py | Python | src/google_music_proto/musicmanager/utils.py | ddboline/google-music-proto | d3af3a1fe911edcd083482c9a6e8bde5a2902462 | [
"MIT"
] | null | null | null | src/google_music_proto/musicmanager/utils.py | ddboline/google-music-proto | d3af3a1fe911edcd083482c9a6e8bde5a2902462 | [
"MIT"
] | null | null | null | src/google_music_proto/musicmanager/utils.py | ddboline/google-music-proto | d3af3a1fe911edcd083482c9a6e8bde5a2902462 | [
"MIT"
] | null | null | null | __all__ = [
'generate_client_id',
'get_album_art',
'get_transcoder',
'transcode_to_mp3',
]
import os
import shutil
import subprocess
from base64 import b64encode
from binascii import unhexlify
from hashlib import md5
import audio_metadata
# The id is found by: getting md5sum of audio, base64 encode md5sum, removing trailing '='.
def generate_client_id(song):
if not isinstance(song, audio_metadata.Format):
song = audio_metadata.load(song)
md5sum = None
if isinstance(song, audio_metadata.FLAC):
md5sum = unhexlify(song.streaminfo.md5)
else:
m = md5()
audio_size = song.streaminfo._size
with open(song.filepath, 'rb') as f:
f.seek(song.streaminfo._start)
# Speed up by reading in chunks
read = 0
while True:
read_size = min(audio_size - read, 65536)
if not read_size:
break
read += read_size
data = f.read(read_size)
m.update(data)
md5sum = m.digest()
client_id = b64encode(md5sum).rstrip(b'=').decode('ascii')
return client_id
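# Worked illustration of the derivation above (added, not from the original
# module): the md5 digest of empty input base64-encodes to
# '1B2M2Y8AsgTpgAmY7PhCfg==', so the corresponding client id drops the '=='.
assert b64encode(md5(b'').digest()).rstrip(b'=') == b'1B2M2Y8AsgTpgAmY7PhCfg'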
def get_album_art(song):
if not isinstance(song, audio_metadata.Format):
song = audio_metadata.load(song)
album_art = next(
(
picture.data
for picture in song.pictures
if picture.type == 3
),
None
)
return album_art
def get_transcoder():
"""Return the path to a transcoder (ffmpeg or avconv) with MP3 support."""
transcoders = ['ffmpeg', 'avconv']
transcoder_details = {}
for transcoder in transcoders:
command_path = shutil.which(transcoder)
if command_path is None:
transcoder_details[transcoder] = 'Not installed.'
continue
stdout = subprocess.run(
[command_path, '-codecs'],
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL,
universal_newlines=True,
).stdout
mp3_encoding_support = (
'libmp3lame' in stdout
and 'disable-libmp3lame' not in stdout
)
if mp3_encoding_support:
transcoder_details[transcoder] = "MP3 encoding support."
break
else:
transcoder_details[transcoder] = "No MP3 encoding support."
else:
raise ValueError(
f"ffmpeg or avconv must be in the path and support mp3 encoding."
"\nDetails: {transcoder_details}"
)
return command_path
def _transcode(command, input_=None):
	transcode = None
	try:
		transcode = subprocess.run(
			command,
			input=input_,
			stdout=subprocess.PIPE,
			stderr=subprocess.PIPE,
		)
		transcode.check_returncode()
	except (OSError, subprocess.CalledProcessError) as e:
		error_msg = f"Transcode command '{' '.join(command)}' failed: {e}. "
		if 'No such file or directory' in str(e):
			error_msg += '\nffmpeg or avconv must be installed and in PATH.'
		# Guard against subprocess.run itself raising before 'transcode' is bound.
		if transcode is not None and transcode.stderr is not None:
			error_msg += f"\nstderr: '{transcode.stderr}'"
		e.message = error_msg
		raise
	else:
		return transcode.stdout
def transcode_to_mp3(song, *, slice_start=None, slice_duration=None, quality='320k'):
command_path = get_transcoder()
input_ = None
if isinstance(song, audio_metadata.Format):
if hasattr(song.filepath, 'read'):
raise ValueError("Audio metadata must be from a file.")
# command = [command_path, '-i', '-']
# input_ = song.filepath.read()
else:
command = [command_path, '-i', song.filepath]
elif isinstance(song, bytes):
command = [command_path, '-i', '-']
input_ = song
elif isinstance(song, str):
command = [command_path, '-i', song]
elif isinstance(song, os.PathLike):
command = [command_path, '-i', song.__fspath__()]
else:
raise ValueError(
"'song' must be os.PathLike, filepath string, a file/bytes-like object, or binary data."
)
if slice_duration is not None:
command.extend(['-t', str(slice_duration)])
if slice_start is not None:
command.extend(['-ss', str(slice_start)])
if isinstance(quality, int):
command.extend(['-q:a', str(quality)])
elif isinstance(quality, str):
command.extend(['-b:a', str(quality)])
# Use 's16le' to not output id3 headers.
command.extend(['-f', 's16le', '-c', 'libmp3lame', '-'])
return _transcode(command, input_=input_)
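# Hypothetical usage (added sketch; 'song.flac' is a placeholder filename):
#     mp3_bytes = transcode_to_mp3('song.flac', quality='320k')
#     with open('song.mp3', 'wb') as out:
#         out.write(mp3_bytes)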
| 23.777108 | 91 | 0.701292 | 527 | 3,947 | 5.096774 | 0.294118 | 0.040953 | 0.037975 | 0.035369 | 0.166418 | 0.095309 | 0.049888 | 0.049888 | 0.049888 | 0.049888 | 0 | 0.013198 | 0.174563 | 3,947 | 165 | 92 | 23.921212 | 0.811234 | 0.074487 | 0 | 0.129032 | 1 | 0.008065 | 0.16168 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040323 | false | 0 | 0.056452 | 0 | 0.137097 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a1c4a115bd07e61146d8a14b7bb3639da60f1ea | 8,731 | py | Python | Other/SocialNetwork/Solver.py | lesyk/Evolife | 8e3dd1aab84061f7ce082f3a4b1bac0b2e31bc4a | [
"MIT"
] | null | null | null | Other/SocialNetwork/Solver.py | lesyk/Evolife | 8e3dd1aab84061f7ce082f3a4b1bac0b2e31bc4a | [
"MIT"
] | null | null | null | Other/SocialNetwork/Solver.py | lesyk/Evolife | 8e3dd1aab84061f7ce082f3a4b1bac0b2e31bc4a | [
"MIT"
] | null | null | null | ## {{{ http://code.activestate.com/recipes/303396/ (r1)
'''equation solver using attributes and introspection'''
from __future__ import division
class Solver(object):
'''takes a function, named arg value (opt.) and returns a Solver object'''
def __init__(self,f,**args):
self._f=f
self._args={}
# see important note on order of operations in __setattr__ below.
for arg in f.func_code.co_varnames[0:f.func_code.co_argcount]:
self._args[arg]=None
self._setargs(**args)
def __repr__(self):
argstring=','.join(['%s=%s' % (arg,str(value)) for (arg,value) in
self._args.items()])
if argstring:
return 'Solver(%s,%s)' % (self._f.func_code.co_name, argstring)
else:
return 'Solver(%s)' % self._f.func_code.co_name
def __getattr__(self,name):
'''used to extract function argument values'''
self._args[name]
return self._solve_for(name)
def __setattr__(self,name,value):
'''sets function argument values'''
# Note - once self._args is created, no new attributes can
# be added to self.__dict__. This is a good thing as it throws
# an exception if you try to assign to an arg which is inappropriate
# for the function in the solver.
if self.__dict__.has_key('_args'):
if name in self._args:
self._args[name]=value
else:
raise KeyError, name
else:
object.__setattr__(self,name,value)
def _setargs(self,**args):
'''sets values of function arguments'''
for arg in args:
self._args[arg] # raise exception if arg not in _args
setattr(self,arg,args[arg])
def _solve_for(self,arg):
'''Newton's method solver'''
TOL=0.0000001 # tolerance
ITERLIMIT=1000 # iteration limit
CLOSE_RUNS=10 # after getting close, do more passes
args=self._args
if self._args[arg]:
x0=self._args[arg]
else:
x0=1
if x0==0:
x1=1
else:
x1=x0*1.1
def f(x):
'''function to solve'''
args[arg]=x
return self._f(**args)
fx0=f(x0)
n=0
while 1: # Newton's method loop here
fx1 = f(x1)
if fx1==0 or x1==x0: # managed to nail it exactly
break
if abs(fx1-fx0)<TOL: # very close
close_flag=True
if CLOSE_RUNS==0: # been close several times
break
else:
CLOSE_RUNS-=1 # try some more
else:
close_flag=False
if n>ITERLIMIT:
print "Failed to converge; exceeded iteration limit"
break
slope=(fx1-fx0)/(x1-x0)
if slope==0:
if close_flag: # we're close but have zero slope, finish
break
else:
print 'Zero slope and not close enough to solution'
break
x2=x0-fx0/slope # New 'x1'
fx0 = fx1
x0=x1
x1=x2
n+=1
self._args[arg]=x1
return x1
## end of http://code.activestate.com/recipes/303396/ }}}
######### Example ############
##from math import cos
##
##def toto(x,A):
## return A-cos(x)
##
##T = Solver(toto)
##T.A = 0
##print 2 * T.x
######### Fin Example ############
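# Added note (not in the recipe above): each pass of the solver loop applies the
# secant form of Newton's update, x2 = x0 - f(x0)/slope with
# slope = (f(x1)-f(x0))/(x1-x0); e.g. for f(x) = x**2 - 2 started from
# x0=1, x1=1.1 the iterates converge to sqrt(2) ~ 1.41421356.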
def competence(BottomCompetence, Quality):
return BottomCompetence + (1-BottomCompetence)*Quality
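# (Added reading aid) competence() interpolates linearly between
# BottomCompetence at Quality=0 and full competence 1 at Quality=1.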
def Profit(b, K, friendQuality, r, NFriends):
Risk = 1
for f in range(NFriends):
Risk *= (1 - K * r**f * competence(b, friendQuality))
return 1 - Risk
def IntegralProfit(b, K, friendQuality, r, NFriends):
Sum = 0
for FQ in range(int(friendQuality * 100.01)):
Sum += Profit(b, K, FQ/100.1, r, NFriends)
return Sum / 100.01
def CompetitiveSignal(b, K, q, r, NFriends, cost):
profit = Profit(b, K, q, r, NFriends)
integralProfit = IntegralProfit(b, K, q, r, NFriends)
    return (competence(b, q) * profit - (1-b) * integralProfit) / cost  # fixed: 'ralProfit' was a truncated 'integralProfit'
def CompetitiveSignal1FriendsB0(K, q, cost):
# special case 1 friend and no bottom competence
return K*q**2/(2*cost)
def CompetitiveSignal2FriendsB0(K, q, r, cost):
# special case 2 friends and no bottom competence
return (-2*K**2*r*q**3/(3*cost)+K*q**2*(1+r)/(2*cost))
def CompetitiveSignal3Friends(b, q, r, cost):
# special case 3 friends
return (1-b)*((1+r+r**2)*(b*q+(1-b)*q**2/2) - 2*r*(1+r+r**2)*(b**2*q + (1-b)**2*q**3/3 +2*b*(1-b)*q**2/2) + 3*r**3 *(b**3*q + 3*b**2*(1-b)*q**2/2 + 3*b*(1-b)**2*q**3/3 + (1-b)**3*q**4/4) )/cost
def CompetitiveSignal3FriendsB0(q, r, cost):
# special case 3 friends and no bottom competence
return (1+r+r**2) *q**2/(2*cost) - 2*r*(1+r+r**2) *q**3/(3*cost) + 3*r**3* q**4/(4*cost)
def Equil2Friends(b, K, C, eta, r, deltag):
# for two friends
ro = 0
S_eta = competence(b, eta)
S_tau = competence(b,(1+eta)/2)
bh = K * S_tau *(1+r) - K**2 * r * S_tau**2 - C * deltag
bl = (1 - (1 - (1-ro)*K*S_tau - ro * K * S_eta)*(1 - K*r*(1-ro)*S_tau - ro*K*r*S_eta)) + C * deltag
# return bh-bl
#return (1-S_eta)*(1 + r/2 - 3*r*S_eta/2) - 4 * C * deltag
return K*(1-b)*(1-eta)*(1+r-K*r*(2*b + (1-b)*(3*eta+1)/2))/4 - C*deltag
def EquilManyFriends(b, K, C, eta, r, deltag, NF):
S_eta = competence(b, eta)
S_tau = competence(b,(1+eta)/2)
return Profit(b,K,S_tau,r,NF) - Profit(b,K,S_eta,r,NF) - 2*C*deltag
def SubEquilManyFriends(b, K, C, eta, tau, theta, r, deltag, NF):
S_eta = competence(b, eta)
S_tau = competence(b, tau)
SubMiddle = competence(b, (theta+eta)/2)
## return Profit(b,K,S_tau,r,NF) - Profit(b,K,SubMiddle,r,NF) - 2*C*deltag / S_eta
return Profit(b,K,S_tau,r,NF) - Profit(b,K,SubMiddle,r,NF) - 2*C*deltag
def UniformSignalB0(K, C, eta, r, deltag, NF):
b = 0 # otherwise false
S_eta = competence(b, eta)
S_tau = competence(b,(1+eta)/2)
Pu = (Profit(b,K,S_tau,r,NF) + Profit(b,K,S_eta,r,NF)) /2
Pc = IntegralProfit(b, K, eta, r, NF)
return ( S_eta * Pu - Pc ) / C
def UniformBenefitB0(K, C, eta, theta, r, deltag, NF, sm, ro):
b = 0 # otherwise false
S_eta = competence(b, eta)
S_theta = competence(b, theta)
S_tau = competence(b,(1+eta)/2)
Ptau = (1 + ro + ro*ro/4)/2
#Ptau = (1 + ro)/2
return Ptau * Profit(b,K,S_tau,r,NF) + (1-Ptau) * Profit(b,K,theta,r,NF) - (C*sm+ro*deltag)/S_theta
def DiffBenefitB0(K, C, eta, theta, r, deltag, NF, sm, ro):
    b = 0 # otherwise false; was missing, leaving b undefined (matches the other *B0 functions)
    S_theta = competence(b, theta)
    return UniformBenefitB0(K, C, eta, theta, r, deltag, NF, sm, ro) \
           - IntegralProfit(b, K, theta, r, NF)/S_theta
Equil = Solver(EquilManyFriends)
#Equil = Solver(Equil2Friends)
Equil.deltag = deltag = 1.2 * 0.11 # Learning noise
Equil.b = b = 0.0 # BottomCompetence
Equil.K = K = 1.0
Equil.r = r = 0.6 # RankEffect
Equil.NF = NF = 2 # Number of friends
Equil.C = C = 0.6 # Cost
#Equil.ro = 0.0 # shift from uniform signal
Equil.eta = 0.1 # threshold for uniform signal (initialization)
ETA = Equil.eta
OffSet = 0
print Equil
#print '%d\t%d' % (int(round(100* ETA)),(100*CompetitiveSignal(b,K,ETA, r, NF, C)))
#print 'Eta: %d\t competitive signal in Eta: %d' % (int(round(100* ETA)),(100*CompetitiveSignal3FriendsB0(ETA, r, C))),
print 'Eta: %d\t competitive signal in Eta: %d' % (int(round(100* ETA)),(100*CompetitiveSignal2FriendsB0(1,ETA, r, C))),
# sm = CompetitiveSignal3FriendsB0(ETA, r, C)
sm = UniformSignalB0(K, C, ETA, r, deltag, NF)
print 'sm: %d' % (100 * sm),
SubEquil = Solver(SubEquilManyFriends)
SubEquil.deltag = deltag # Learning noise
SubEquil.b = b # BottomCompetence
SubEquil.K = K
SubEquil.r = r # RankEffect
SubEquil.NF = NF # Number of friends
SubEquil.C = C # Cost
SubEquil.eta = ETA
SubEquil.tau = ETA
SubEquil.theta = 0.1 # initialization
THETA = SubEquil.theta
print 'Theta: %d' % (100 * THETA),
### 2nd iteration
##SubEquil.eta = THETA
##SubEquil.theta = 0.1 # initialization
##THTHETA = SubEquil.theta
##print 'ThTheta: %d' % (100 * THTHETA),
print
#print SubEquil
"""
for q in range(1,int(round(100* THETA)),1):
# print CompetitiveSignal3Friends(0,q/100.0,0.6,0.6),
print "%01.1f\t" % (100 * CompetitiveSignal1FriendsB0(K,q/100.0,C)),
#print "%01.1f\t" % (10000 * CompetitiveSignal3FriendsB0(q/100.0,r,C)/q),
for q in range(int(round(100* THETA)),101,1):
print "%01.1f\t" % (100 * sm),
#print "%01.1f\t" % (10000 * sm/q),
print
"""
##for q in range(1,int(round(100* ETA)),1):
## print "%01.2f\t" % (100 * IntegralProfit(b,K,q/100.0,r,NF)),
##print
##for ro in range(-190, 190, 1):
## Equil.ro = ro/100.0
## Equil.eta = 0.01
## ETA = Equil.eta
## print '%d\t' % (int(round(100* ETA))),
##print
##for dtheta in range(-5, 10):
## theta = ETA - dtheta / 100.0
## print theta
## for ro in range(-5,5,1):
## print "%01.1f\t" % (100 * DiffBenefitB0(K, C, ETA, theta, r, deltag, NF, sm, ro/100.0)),
## print
#print "%01.1f\t" % (100 * DiffBenefitB0(K, C, ETA, ETA, r, deltag, NF, sm, 0.0))
__author__ = 'Dessalles'
| 31.293907 | 195 | 0.611499 | 1,451 | 8,731 | 3.603722 | 0.166782 | 0.008797 | 0.019889 | 0.012048 | 0.284949 | 0.222796 | 0.174221 | 0.139224 | 0.130426 | 0.122012 | 0 | 0.057446 | 0.212461 | 8,731 | 278 | 196 | 31.406475 | 0.703025 | 0.268125 | 0 | 0.159509 | 0 | 0 | 0.034809 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.006135 | null | null | 0.042945 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a1cb185553a265ef90a6017854334865e3cc339 | 304 | py | Python | python_docs/05Functions/01Definition.py | Matheus-IT/lang-python-related | dd2e5d9b9f16d3838ba1670fdfcba1fa3fe305e9 | [
"MIT"
] | null | null | null | python_docs/05Functions/01Definition.py | Matheus-IT/lang-python-related | dd2e5d9b9f16d3838ba1670fdfcba1fa3fe305e9 | [
"MIT"
] | null | null | null | python_docs/05Functions/01Definition.py | Matheus-IT/lang-python-related | dd2e5d9b9f16d3838ba1670fdfcba1fa3fe305e9 | [
"MIT"
] | null | null | null | # Function definition
def soma(a2, b2):  # The parameters here need names different from the caller's variables
    print(f'A = {a2} and B = {b2}')
    s = a2 + b2
    print(f'The sum A + B = {s}')
# Main program
a = int(input('Enter a value for A: '))
b = int(input('Enter a value for B: '))
soma(a, b)
| 25.333333 | 63 | 0.582237 | 53 | 304 | 3.339623 | 0.54717 | 0.033898 | 0.079096 | 0.180791 | 0.282486 | 0.282486 | 0 | 0 | 0 | 0 | 0 | 0.026906 | 0.266447 | 304 | 11 | 64 | 27.636364 | 0.766816 | 0.266447 | 0 | 0 | 0 | 0 | 0.432692 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.142857 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a1d0c02ec27af98a78a24a6f4e896b2268b6a0f | 852 | py | Python | python/number.py | Dahercode/datalumni-test | 9587400bddafd1c32e97655727c5d3dbbfd17574 | [
"MIT"
] | 1 | 2020-02-18T16:56:38.000Z | 2020-02-18T16:56:38.000Z | python/number.py | Dahercode/datalumni-test | 9587400bddafd1c32e97655727c5d3dbbfd17574 | [
"MIT"
] | null | null | null | python/number.py | Dahercode/datalumni-test | 9587400bddafd1c32e97655727c5d3dbbfd17574 | [
"MIT"
] | null | null | null | # Your code goes here
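# tab: candidate integers 0..999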
tab=[]
for i in range(1000) :
tab.append(i)
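# tab2: keep numbers whose digits sum to at most 10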
tab2=[]
for i in range(len(tab)):
if sum([ int(c) for c in str(tab[i]) ]) <= 10:
tab2.append(tab[i])
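# tab3: keep numbers whose second-to-last (tens) digit is 4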
tab3=[]
for i in range(len(tab2)):
a=str(tab2[i])
if a[len(a)-2] == '4':
tab3.append(tab2[i])
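# tab4: keep numbers with at least two digits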
tab4=[]
for i in range(len(tab3)):
if len(str(tab3[i]))>=2 :
tab4.append(tab3[i])
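# tab5: drop numbers containing the digit 7 or the digit 1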
tab5=[]
for i in range(len(tab4)):
a=str(tab4[i])
if a.find('7')==-1 and a.find('1')==-1:
tab5.append(tab4[i])
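# tab6: keep numbers whose first two digits sum to an odd number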
tab6=[]
for i in range(len(tab5)):
a=str(tab5[i])
if ((int(a[0])+int(a[1]))%2) != 0:
tab6.append(tab5[i])
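# tab7: keep numbers whose digit count equals their last digit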
tab7=[]
for i in range(len(tab6)):
a=str(tab6[i])
if str(len(a)) == a[len(a)-1]:
tab7.append(tab6[i])
print(tab7)
mystery_number=tab7[0]
print(f'The mystery number is: {mystery_number}')
| 18.933333 | 53 | 0.543427 | 160 | 852 | 2.88125 | 0.25 | 0.060738 | 0.091106 | 0.167028 | 0.182213 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072617 | 0.224178 | 852 | 44 | 54 | 19.363636 | 0.624811 | 0.0223 | 0 | 0 | 0 | 0 | 0.055355 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a2032ec76e617ec97b415d06f3a42408d534a65 | 635 | py | Python | restbed/core/api.py | mr-tenders/restbed | 68d36536286203048ce01f1467d3db7ee108bebb | [
"MIT"
] | null | null | null | restbed/core/api.py | mr-tenders/restbed | 68d36536286203048ce01f1467d3db7ee108bebb | [
"MIT"
] | null | null | null | restbed/core/api.py | mr-tenders/restbed | 68d36536286203048ce01f1467d3db7ee108bebb | [
"MIT"
] | null | null | null | """
restbed core api
"""
import pyinsane2
from typing import List
class CoreApi(object):
scanner: pyinsane2.Scanner = None
scan_session: pyinsane2.ScanSession = None
@staticmethod
def initialize():
"""
initialize SANE, don't call if unit testing
(well that's the hope...)
"""
pyinsane2.init()
def select_device(self):
devs: List[pyinsane2.Scanner] = pyinsane2.get_devices()
self.scanner = devs[0]
return self.scanner
def start_scanning(self):
pyinsane2.maximize_scan_area(self.scanner)
self.scan_session = self.scanner.scan()
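
# Typical (assumed) calling sequence:
#   CoreApi.initialize()
#   api = CoreApi()
#   api.select_device()
#   api.start_scanning()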
| 22.678571 | 63 | 0.63937 | 73 | 635 | 5.465753 | 0.60274 | 0.110276 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016985 | 0.258268 | 635 | 27 | 64 | 23.518519 | 0.830149 | 0.135433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.133333 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
5a20458f16a895f14563ad81b494f0d3c1292dbf | 736 | py | Python | secure_notes_client/thread_pool.py | rlee287/secure-notes-client | 56d5fcce1d2eeb46de22aac63131fe7214b6f185 | [
"MIT"
] | null | null | null | secure_notes_client/thread_pool.py | rlee287/secure-notes-client | 56d5fcce1d2eeb46de22aac63131fe7214b6f185 | [
"MIT"
] | 4 | 2019-07-10T01:34:12.000Z | 2019-08-20T01:52:31.000Z | secure_notes_client/thread_pool.py | rlee287/secure-notes-client | 56d5fcce1d2eeb46de22aac63131fe7214b6f185 | [
"MIT"
] | null | null | null | from concurrent.futures import ThreadPoolExecutor
import time
from PySide2.QtCore import QCoreApplication
thread_pool=None
def init_thread_pool():
global thread_pool
thread_pool=ThreadPoolExecutor()
def deinit_thread_pool():
global thread_pool
thread_pool.shutdown()
def submit_task(function,*args,**kwargs):
global thread_pool
return thread_pool.submit(function,*args,**kwargs)
#TODO: find a less hacky way to keep events processed
def async_run_await_result(function,*args,**kwargs):
future_obj=thread_pool.submit(function,*args,**kwargs)
while not future_obj.done():
QCoreApplication.processEvents()
time.sleep(0.1)
future_result=future_obj.result()
return future_result
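
# Typical (assumed) usage: call init_thread_pool() once at startup, submit work
# with submit_task(), and use async_run_await_result() from the Qt main thread
# when a blocking wait must keep the event loop responsive.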
| 26.285714 | 58 | 0.762228 | 96 | 736 | 5.625 | 0.479167 | 0.185185 | 0.133333 | 0.081481 | 0.259259 | 0.259259 | 0.133333 | 0 | 0 | 0 | 0 | 0.004792 | 0.149457 | 736 | 27 | 59 | 27.259259 | 0.857827 | 0.070652 | 0 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0 | 1 | 0.2 | false | 0 | 0.15 | 0 | 0.45 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a24938eab3876854f2631917fd72abe26cefe64 | 1,518 | py | Python | quandl_data_retriever/server.py | fabiomolinar/quandl-data-retriever | d9359922cb222ac519f7d9e4dd892bbcf6b1b2d0 | [
"MIT"
] | null | null | null | quandl_data_retriever/server.py | fabiomolinar/quandl-data-retriever | d9359922cb222ac519f7d9e4dd892bbcf6b1b2d0 | [
"MIT"
] | null | null | null | quandl_data_retriever/server.py | fabiomolinar/quandl-data-retriever | d9359922cb222ac519f7d9e4dd892bbcf6b1b2d0 | [
"MIT"
] | null | null | null | """ Server module
Quandl API limits:
Authenticated users have a limit of 300 calls per 10 seconds,
2,000 calls per 10 minutes and a limit of 50,000 calls per day.
"""
import urllib
import logging
from twisted.internet import reactor
from twisted.web.client import Agent, readBody
from . import settings
from . import resources
logger = logging.getLogger(settings.LOG_NAME + ".server")
def main():
from .data import data_list
try:
resource_name = data_list[0]["resource"]
resource = resources.__dict__[resource_name]
key = data_list[0]["api_key"]
key = settings.__dict__[key]
resource = resource(key)
except KeyError as e:
logger.warning("KeyError while trying to instantiate Resource class with : " + str(e))
else:
# TODO: go to next item
pass
url = resource.get_url(data_list[0]["url"])
agent = Agent(reactor)
d = agent.request(
str.encode(data_list[0]["method"], "utf-8"),
str.encode(url, "ascii")
)
def cbResponse(response):
if response.code == 200:
def cbBody(body):
resource.save(body)
pass
d = readBody(response)
d.addCallback(cbBody)
return d
d.addCallback(cbResponse)
def cbShutdown(ignored):
if not ignored:
logger.warning("request failed.")
reactor.stop()
d.addBoth(cbShutdown)
reactor.run()
if __name__ == "__main__":
main() | 26.172414 | 94 | 0.614625 | 187 | 1,518 | 4.850267 | 0.491979 | 0.044101 | 0.039691 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022079 | 0.283926 | 1,518 | 58 | 95 | 26.172414 | 0.812328 | 0.120553 | 0 | 0.047619 | 0 | 0 | 0.09262 | 0 | 0 | 0 | 0 | 0.017241 | 0 | 1 | 0.095238 | false | 0.047619 | 0.166667 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a2a01adbfb1b632775069e902a5a1facd9c2f69 | 3,308 | py | Python | birdsong_recognition/dataset.py | YingyingF/birdsong_recognition | 4f8a2ccb900898a02d4454a5f1c206125f23fa44 | [
"Apache-2.0"
] | null | null | null | birdsong_recognition/dataset.py | YingyingF/birdsong_recognition | 4f8a2ccb900898a02d4454a5f1c206125f23fa44 | [
"Apache-2.0"
] | null | null | null | birdsong_recognition/dataset.py | YingyingF/birdsong_recognition | 4f8a2ccb900898a02d4454a5f1c206125f23fa44 | [
"Apache-2.0"
] | null | null | null | # AUTOGENERATED! DO NOT EDIT! File to edit: dataset.ipynb (unless otherwise specified).
__all__ = ['load_mp3', 'get_sample_label', 'preprocess_file', 'pad_by_zeros', 'split_file_by_window_size',
'wrapper_split_file_by_window_size', 'create_dataset_fixed_size', 'get_spectrogram', 'add_channel_dim']
# Cell
# Assumed imports: the exporting notebook provides these, together with the
# module-level globals `ebirds` (species list) and `min_file_size` used below.
import numpy as np
import tensorflow as tf
import tensorflow_io as tfio

# Cell
def load_mp3(file):
sample = tf.io.read_file(file)
sample_audio = tfio.audio.decode_mp3(sample)
return sample_audio
# Cell
def get_sample_label(file):
sample = load_mp3(file)
label = tf.argmax(tf.strings.split(file, '/')[1] == np.array(ebirds))
return sample, label
# Cell
def preprocess_file(sample_audio, label):
# Only look at the first channel
sample_audio = sample_audio[:,0]
sample_audio_scaled = (sample_audio - tf.math.reduce_min(sample_audio))/(tf.math.reduce_max(sample_audio) - tf.math.reduce_min(sample_audio))
sample_audio_scaled = 2*(sample_audio_scaled - 0.5)
return sample_audio_scaled, label
# Cell
def pad_by_zeros(sample, min_file_size, last_sample_size):
padding_size = min_file_size - last_sample_size
sample_padded = tf.pad(sample, paddings=[[tf.constant(0), padding_size]])
return sample_padded
# Cell
def split_file_by_window_size(sample, label):
# number of subsamples given none overlapping window size.
subsample_count = int(np.round(sample.shape[0]/min_file_size))
# ignore extremely long files for now
subsample_limit = 75
if subsample_count <= subsample_limit:
# if the last sample is at least half the window size, then pad it, if not, clip it.
last_sample_size = sample.shape[0]%min_file_size
if last_sample_size/min_file_size > 0.5:
sample = pad_by_zeros(sample, min_file_size, last_sample_size)
else:
sample = sample[:subsample_count*min_file_size]
sample = tf.reshape(sample, shape=[subsample_count, min_file_size])
label = tf.pad(tf.expand_dims(label, axis=0), paddings=[[0, subsample_count-1]], constant_values=label.numpy())
else:
sample = tf.reshape(sample[:subsample_limit*min_file_size], shape=[subsample_limit, min_file_size])
label = tf.pad(tf.expand_dims(label, axis=0), paddings=[[0, 74]], constant_values=label.numpy())
return sample, label
# Cell
def wrapper_split_file_by_window_size(sample, label):
sample, label = tf.py_function(split_file_by_window_size, inp=(sample, label),
Tout=(sample.dtype, label.dtype))
return sample, label
# Cell
def create_dataset_fixed_size(ds):
iterator = iter(ds)
sample, label = iterator.next()
samples_all = tf.unstack(sample)
labels_all = tf.unstack(label)
while True:
try:
sample, label = iterator.next()
sample = tf.unstack(sample)
label = tf.unstack(label)
samples_all = tf.concat([samples_all, sample], axis=0)
labels_all = tf.concat([labels_all, label], axis=0)
except:
break
return samples_all, labels_all
# Cell
def get_spectrogram(sample, label):
spectrogram = tfio.experimental.audio.spectrogram(sample, nfft=512, window=512, stride=256)
return spectrogram, label
# Cell
def add_channel_dim(sample, label):
sample = tf.expand_dims(sample, axis=-1)
return sample, label | 39.380952 | 145 | 0.703144 | 470 | 3,308 | 4.668085 | 0.257447 | 0.075205 | 0.050137 | 0.038742 | 0.28897 | 0.195989 | 0.147675 | 0.118505 | 0.084777 | 0.084777 | 0 | 0.013026 | 0.187727 | 3,308 | 84 | 146 | 39.380952 | 0.803498 | 0.101874 | 0 | 0.135593 | 1 | 0 | 0.0558 | 0.028069 | 0 | 0 | 0 | 0 | 0 | 1 | 0.152542 | false | 0 | 0 | 0 | 0.305085 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a2aef76ad354c4dafd74c644c7cdf56a923d14d | 749 | py | Python | test/test_api_data_utils.py | onap/optf-osdf | 2b9e7f4fca3d510a201283a8561f6ff3424f5fd6 | [
"Apache-2.0"
] | 3 | 2019-04-15T13:33:57.000Z | 2019-10-21T17:19:19.000Z | test/test_api_data_utils.py | onap/optf-osdf | 2b9e7f4fca3d510a201283a8561f6ff3424f5fd6 | [
"Apache-2.0"
] | null | null | null | test/test_api_data_utils.py | onap/optf-osdf | 2b9e7f4fca3d510a201283a8561f6ff3424f5fd6 | [
"Apache-2.0"
] | null | null | null | import json
import os
from osdf.utils import api_data_utils
from collections import defaultdict
BASE_DIR = os.path.dirname(__file__)
with open(os.path.join(BASE_DIR, "placement-tests/request.json")) as json_data:
req_json = json.load(json_data)
class TestVersioninfo():
#
# Tests for api_data_utils.py
#
def test_retrieve_version_info(self):
request_id = 'test12345'
test_dict = {'placementVersioningEnabled': False, 'placementMajorVersion': '1', 'placementPatchVersion': '0', 'placementMinorVersion': '0'}
        test_version_info_dict = defaultdict(dict, test_dict)
        #version_info_dict = api_data_utils.retrieve_version_info(req_json, request_id)
        #assert version_info_dict == test_version_info_dict
| 34.045455 | 147 | 0.750334 | 97 | 749 | 5.43299 | 0.474227 | 0.083491 | 0.113852 | 0.072106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012598 | 0.152203 | 749 | 21 | 148 | 35.666667 | 0.817323 | 0.210948 | 0 | 0 | 0 | 0 | 0.220513 | 0.2 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
5a2e4a10cc2ee782907da20e988df75437125628 | 751 | py | Python | duplicate_csv.py | AronFreyr/de1-project | 9e95346db9a6955ee017d59c73c83251d529d8ff | [
"Apache-2.0"
] | null | null | null | duplicate_csv.py | AronFreyr/de1-project | 9e95346db9a6955ee017d59c73c83251d529d8ff | [
"Apache-2.0"
] | null | null | null | duplicate_csv.py | AronFreyr/de1-project | 9e95346db9a6955ee017d59c73c83251d529d8ff | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
# In[7]:
import os
write_to_csv_file = 'million_song_subset.csv'
csv_file_read = open(write_to_csv_file,'r')
csv_file_write = open(write_to_csv_file,'a')
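# Note: the script reads from and appends to the same file, so rows appended
# earlier are read again on later passes; the file keeps growing until the 5 GB cap.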
while True:
next_line = csv_file_read.readline()
if not next_line:
break
csv_file_size = os.path.getsize(write_to_csv_file)
print("file size: {}".format(str(csv_file_size/1048576)))
    # if the csv file is larger than or equal to 5 GB, exit the loop
if csv_file_size >= 5368709120:
break
if next_line.startswith("song_id"):
continue
csv_file_write.write(next_line)
print("appended: {}".format(next_line))
csv_file_read.close()
csv_file_write.close()
# In[ ]:
| 17.465116 | 64 | 0.660453 | 115 | 751 | 3.982609 | 0.452174 | 0.213974 | 0.087336 | 0.122271 | 0.161572 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034423 | 0.226365 | 751 | 42 | 65 | 17.880952 | 0.753873 | 0.142477 | 0 | 0.111111 | 0 | 0 | 0.089764 | 0.03622 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.055556 | 0.111111 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a3220a6933b741f74449b702618162293bca339 | 1,944 | py | Python | tests/settings.py | matrixorz/firefly | fb8082ccc525bf7b266960ae49fc0b15e522fd92 | [
"MIT"
] | 247 | 2015-04-13T05:58:10.000Z | 2021-01-21T07:31:58.000Z | tests/settings.py | qiluosheng/firefly | fb8082ccc525bf7b266960ae49fc0b15e522fd92 | [
"MIT"
] | 57 | 2015-04-13T15:10:50.000Z | 2016-04-08T09:15:27.000Z | tests/settings.py | qiluosheng/firefly | fb8082ccc525bf7b266960ae49fc0b15e522fd92 | [
"MIT"
] | 94 | 2015-04-12T06:03:30.000Z | 2020-05-11T14:26:56.000Z | # coding=utf-8
DEBUG = True
TESTING = True
SECRET_KEY = 'secret_key for test'
# mongodb
MONGODB_SETTINGS = {
'db': 'firefly_test',
'username': '',
'password': '',
'host': '127.0.0.1',
'port': 27017
}
# redis cache
CACHE_TYPE = 'redis'
CACHE_REDIS_HOST = '127.0.0.1'
CACHE_REDIS_PORT = 6379
CACHE_REDIS_DB = 9
CACHE_REDIS_PASSWORD = ''
# mail sender
MAIL_SERVER = 'smtp.googlemail.com'
MAIL_PORT = 587
MAIL_USE_TLS = True
MAIL_USERNAME = 'MAIL_USERNAME'
MAIL_PASSWORD = 'MAIL_PASSWORD'
MAIL_DEFAULT_SENDER = 'admin@python-cn.org'
SECURITY_PASSWORD_SALT = "abc"
SECURITY_PASSWORD_HASH = "bcrypt"
# SECURITY_PASSWORD_HASH = "pbkdf2_sha512"
SECURITY_EMAIL_SENDER = "support@python-cn.org"
SECURITY_CONFIRM_SALT = "570be5f24e690ce5af208244f3e539a93b6e4f05"
SECURITY_REMEMBER_SALT = "de154140385c591ea771dcb3b33f374383e6ea47"
# Set secret keys for CSRF protection
CSRF_ENABLED = False
WTF_CSRF_ENABLED = False
SERVER_EMAIL = 'Python-China <support@python-cn.org>'
# Flask-SocialBlueprint
SOCIAL_BLUEPRINT = {
# https://developers.facebook.com/apps/
"flask_social_blueprint.providers.Facebook": {
# App ID
'consumer_key': '197…',
# App Secret
'consumer_secret': 'c956c1…'
},
# https://apps.twitter.com/app/new
"flask_social_blueprint.providers.Twitter": {
# Your access token from API Keys tab
'consumer_key': 'bkp…',
# access token secret
'consumer_secret': 'pHUx…'
},
# https://console.developers.google.com/project
"flask_social_blueprint.providers.Google": {
# Client ID
'consumer_key': '797….apps.googleusercontent.com',
# Client secret
'consumer_secret': 'bDG…'
},
# https://github.com/settings/applications/new
"flask_social_blueprint.providers.Github": {
# Client ID
'consumer_key': '6f6…',
# Client Secret
'consumer_secret': '1a9…'
},
}
| 25.578947 | 67 | 0.679012 | 233 | 1,944 | 5.523605 | 0.420601 | 0.058275 | 0.06216 | 0.090132 | 0.065268 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06206 | 0.195988 | 1,944 | 75 | 68 | 25.92 | 0.746001 | 0.21965 | 0 | 0 | 0 | 0 | 0.414162 | 0.209753 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.106383 | 0 | 0 | 0 | 0.085106 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
5a36eda2f990b0b613ca5b9070e7a670400461bc | 1,806 | py | Python | mbed_connector_api/tests/mock_data.py | ARMmbed/mbed-connector-python | a5024a01dc67cc192c8bf7a70b251fcf0a3f279b | [
"Apache-2.0"
] | 2 | 2017-01-05T07:16:03.000Z | 2018-09-04T02:26:19.000Z | mbed_connector_api/tests/mock_data.py | ARMmbed/mbed-connector-python | a5024a01dc67cc192c8bf7a70b251fcf0a3f279b | [
"Apache-2.0"
] | 13 | 2016-02-29T17:31:56.000Z | 2017-02-07T22:46:17.000Z | mbed_connector_api/tests/mock_data.py | ARMmbed/mbed-connector-python | a5024a01dc67cc192c8bf7a70b251fcf0a3f279b | [
"Apache-2.0"
] | 2 | 2017-02-07T22:10:41.000Z | 2017-03-06T06:38:58.000Z | # Copyright 2014-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0
# See LICENSE file for details.
class mockData:
"""dictionary of mocking data for the mocking tests"""
# dictionary to hold the mock data
_data={}
# function to add mock data to the _mock_data dictionary
def _add(self,uri,status,payload):
self._data[uri] = {"status":status,
"payload":payload
}
return
def getPayload(self,input):
return self._data[input]['payload']
def getStatusCode(self,input):
return self._data[input]['status']
# initialize the _mock_data dictionary with all the appropriate mocking data
def __init__(self):
self._add( uri="limits", status=200,
payload='{"transaction-quota":10000,"transaction-count":259,"endpoint-quota":100,"endpoint-count":1}')
self._add( uri="connectorVersion", status=200,
payload='DeviceServer v3.0.0-520\nREST version = v2')
self._add( uri="apiVersion", status=200,
payload='["v1","v2"]')
self._add( uri="endpoints", status=200,
payload='[{"name":"51f540a2-3113-46e2-aef4-96e94a637b31","type":"test","status":"ACTIVE"}]')
self._add( uri="resources", status=200,
payload='[{"uri":"/Test/0/S","rt":"Static","obs":false,"type":""},{"uri":"/Test/0/D","rt":"Dynamic","obs":true,"type":""},{"uri":"/3/0/2","obs":false,"type":""},{"uri":"/3/0/1","obs":false,"type":""},{"uri":"/3/0/17","obs":false,"type":""},{"uri":"/3/0/0","obs":false,"type":""},{"uri":"/3/0/16","obs":false,"type":""},{"uri":"/3/0/11","obs":false,"type":""},{"uri":"/3/0/11/0","obs":false,"type":""},{"uri":"/3/0/4","obs":false,"type":""}]')
#self._add( uri="", status=200,
# payload="")
#self._add( uri="", status=200,
# payload="")
#self._add( uri="", status=200,
# payload="")
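
# Example (hypothetical) use from a test:
#   mock = mockData()
#   assert mock.getStatusCode("limits") == 200
#   assert "transaction-quota" in mock.getPayload("limits")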
| 42 | 448 | 0.613511 | 251 | 1,806 | 4.330677 | 0.338645 | 0.066237 | 0.099356 | 0.110396 | 0.23827 | 0.23827 | 0.139834 | 0.071757 | 0.071757 | 0.071757 | 0 | 0.070006 | 0.137874 | 1,806 | 42 | 449 | 43 | 0.628131 | 0.250831 | 0 | 0 | 0 | 0.136364 | 0.566125 | 0.466357 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0 | 0.090909 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a37802b395a4a964c1285e03e992f8b1712b575 | 2,134 | py | Python | examples/demo/eager_demo/src/demo_1_pybullet.py | eager-dev/eager | f10ccbd7452acb3a29881ecd95c759f632c91da9 | [
"Apache-2.0"
] | 16 | 2021-07-02T14:48:53.000Z | 2022-02-23T02:53:01.000Z | examples/demo/eager_demo/src/demo_1_pybullet.py | eager-dev/eager | f10ccbd7452acb3a29881ecd95c759f632c91da9 | [
"Apache-2.0"
] | 37 | 2021-06-30T12:10:29.000Z | 2022-02-02T09:46:34.000Z | examples/demo/eager_demo/src/demo_1_pybullet.py | eager-dev/eager | f10ccbd7452acb3a29881ecd95c759f632c91da9 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
import rospy
# Import eager packages
from eager_core.utils.file_utils import launch_roscore, load_yaml
from eager_core.eager_env import EagerEnv
from eager_core.objects import Object
from eager_core.wrappers.flatten import Flatten
from eager_bridge_pybullet.pybullet_engine import PyBulletEngine # noqa: F401
# Required for action processor
from eager_process_safe_actions.safe_actions_processor import SafeActionsProcessor
if __name__ == '__main__':
roscore = launch_roscore() # First launch roscore
rospy.init_node('eager_demo', anonymous=True, log_level=rospy.WARN)
rate = rospy.Rate(1/0.08)
# Define the engine
engine = PyBulletEngine(gui=True)
# Create robot
robot = Object.create('robot', 'eager_robot_vx300s', 'vx300s')
# Add action preprocessing
processor = SafeActionsProcessor(robot_type='vx300s',
vel_limit=0.25,
collision_height=0.15,
)
robot.actuators['joints'].add_preprocess(
processor=processor,
observations_from_objects=[robot],
)
# Add a camera for rendering
calibration = load_yaml('eager_demo', 'calibration')
cam = Object.create('cam', 'eager_sensor_realsense', 'd435',
position=calibration['position'],
orientation=calibration['orientation'],
)
# Create environment
env = EagerEnv(name='demo_env',
engine=engine,
objects=[robot, cam],
render_sensor=cam.sensors['camera_rgb'],
)
env = Flatten(env)
env.render()
obs = env.reset() # TODO: if code does not close properly, render seems to keep a thread open....
for i in range(200):
action = env.action_space.sample()
obs, reward, done, info = env.step(action)
if done:
obs = env.reset()
rate.sleep()
    # todo: create an env.close() that closes the render screen, and an env.shutdown() to shut down the environment cleanly.
env.close()
| 33.873016 | 110 | 0.627929 | 241 | 2,134 | 5.377593 | 0.46473 | 0.041667 | 0.040123 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018843 | 0.278819 | 2,134 | 62 | 111 | 34.419355 | 0.823262 | 0.182755 | 0 | 0.04878 | 0 | 0 | 0.084296 | 0.012702 | 0 | 0 | 0 | 0.016129 | 0 | 1 | 0 | false | 0 | 0.170732 | 0 | 0.170732 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a48f16367b8db551ede0ba75c39ecf9f879f676 | 646 | py | Python | setup.py | jhakonen/wotdisttools | 2194761baaf1f6ade5fa740d134553b77300211b | [
"MIT"
] | 9 | 2019-08-15T14:59:39.000Z | 2021-06-24T22:03:31.000Z | setup.py | jhakonen/wotdisttools | 2194761baaf1f6ade5fa740d134553b77300211b | [
"MIT"
] | 1 | 2019-08-06T19:22:44.000Z | 2019-08-11T09:23:31.000Z | setup.py | jhakonen/setuptools-wotmod | 2194761baaf1f6ade5fa740d134553b77300211b | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from setuptools import setup, find_packages
setup(
name='setuptools-wotmod',
version='0.2',
packages=find_packages(),
description='setuptools integration for creating World of Tanks mods',
long_description=open('README.md').read(),
author='jhakonen',
url='https://github.com/jhakonen/setuptools-wotmod/',
license='MIT License',
setup_requires=['pytest-runner'],
tests_require=[
'mock',
'nose',
'pytest<5',
],
entry_points={
"distutils.commands": [
"bdist_wotmod = setuptools_wotmod.bdist_wotmod:bdist_wotmod",
],
},
)
| 24.846154 | 74 | 0.630031 | 69 | 646 | 5.753623 | 0.710145 | 0.120907 | 0.085642 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005988 | 0.224458 | 646 | 25 | 75 | 25.84 | 0.786427 | 0.03096 | 0 | 0.090909 | 0 | 0 | 0.4064 | 0.0688 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.045455 | 0 | 0.045455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a4bcf1b59efc03b155e47a1a800ec05299ddea9 | 258 | py | Python | lab1/lab1/views/home.py | ZerocksX/Service-Oriented-Computing-2019 | eac6b0e9a40eed76b452f6524fd899e7107b0f69 | [
"Apache-2.0"
] | null | null | null | lab1/lab1/views/home.py | ZerocksX/Service-Oriented-Computing-2019 | eac6b0e9a40eed76b452f6524fd899e7107b0f69 | [
"Apache-2.0"
] | null | null | null | lab1/lab1/views/home.py | ZerocksX/Service-Oriented-Computing-2019 | eac6b0e9a40eed76b452f6524fd899e7107b0f69 | [
"Apache-2.0"
] | null | null | null | from django.http import HttpResponse
from django.shortcuts import render, redirect
from lab1.views import login
def docs(request):
if not request.user.is_authenticated:
return redirect(login.login_view)
return render(request, 'docs.html')
| 23.454545 | 45 | 0.763566 | 35 | 258 | 5.571429 | 0.628571 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00463 | 0.162791 | 258 | 10 | 46 | 25.8 | 0.898148 | 0 | 0 | 0 | 0 | 0 | 0.034884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
5a4ed98e41bcfbfb4f87bc36a45fc26e1aa68177 | 1,015 | py | Python | client_code/utils/__init__.py | daviesian/anvil-extras | 84fd5ca5144808d4ce2b333995e801a4ddff60e6 | [
"MIT"
] | null | null | null | client_code/utils/__init__.py | daviesian/anvil-extras | 84fd5ca5144808d4ce2b333995e801a4ddff60e6 | [
"MIT"
] | null | null | null | client_code/utils/__init__.py | daviesian/anvil-extras | 84fd5ca5144808d4ce2b333995e801a4ddff60e6 | [
"MIT"
] | null | null | null | # SPDX-License-Identifier: MIT
#
# Copyright (c) 2021 The Anvil Extras project team members listed at
# https://github.com/anvilistas/anvil-extras/graphs/contributors
#
# This software is published at https://github.com/anvilistas/anvil-extras
from functools import cache
__version__ = "1.4.0"
def __dir__():
return ["auto_refreshing", "wait_for_writeback", "timed", "BindingRefreshDict"]
@cache
def __getattr__(name):
    # todo use dynamic imports, but __import__ is not yet supported in Skulpt
if name == "auto_refreshing":
from ._auto_refreshing import auto_refreshing
return auto_refreshing
elif name == "timed":
from ._timed import timed
return timed
elif name == "wait_for_writeback":
from ._writeback_waiter import wait_for_writeback
return wait_for_writeback
elif name == "BindingRefreshDict":
from ._auto_refreshing import BindingRefreshDict
return BindingRefreshDict
else:
raise AttributeError(name)
| 26.710526 | 83 | 0.715271 | 120 | 1,015 | 5.758333 | 0.508333 | 0.121563 | 0.092619 | 0.04631 | 0.107091 | 0.107091 | 0.107091 | 0 | 0 | 0 | 0 | 0.008685 | 0.205911 | 1,015 | 37 | 84 | 27.432432 | 0.848635 | 0.296552 | 0 | 0 | 0 | 0 | 0.165722 | 0 | 0 | 0 | 0 | 0.027027 | 0 | 1 | 0.1 | false | 0 | 0.25 | 0.05 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a50e3662524ec61048e74d97bc09d7305717136 | 7,018 | py | Python | tests/test_utils.py | h4ck3rm1k3/requests | 46184236dc177fb68c7863445609149d0ac243ea | [
"Apache-2.0"
] | null | null | null | tests/test_utils.py | h4ck3rm1k3/requests | 46184236dc177fb68c7863445609149d0ac243ea | [
"Apache-2.0"
] | null | null | null | tests/test_utils.py | h4ck3rm1k3/requests | 46184236dc177fb68c7863445609149d0ac243ea | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
import os
from io import BytesIO
import pytest
from requests import compat
from requests.utils import (
address_in_network, dotted_netmask,
get_auth_from_url, get_encodings_from_content,
get_environ_proxies, guess_filename,
is_ipv4_address, is_valid_cidr, requote_uri,
select_proxy, super_len)
from .compat import StringIO, cStringIO
class TestSuperLen:
@pytest.mark.parametrize(
'stream, value', (
(StringIO.StringIO, 'Test'),
(BytesIO, b'Test'),
pytest.mark.skipif('cStringIO is None')((cStringIO, 'Test')),
))
def test_io_streams(self, stream, value):
"""Ensures that we properly deal with different kinds of IO streams."""
assert super_len(stream()) == 0
assert super_len(stream(value)) == 4
def test_super_len_correctly_calculates_len_of_partially_read_file(self):
"""Ensure that we handle partially consumed file like objects."""
s = StringIO.StringIO()
s.write('foobarbogus')
assert super_len(s) == 0
class TestGetEnvironProxies:
"""Ensures that IP addresses are correctly matches with ranges
in no_proxy variable."""
@pytest.yield_fixture(scope='class', autouse=True, params=['no_proxy', 'NO_PROXY'])
def no_proxy(self, request):
os.environ[request.param] = '192.168.0.0/24,127.0.0.1,localhost.localdomain,172.16.1.1'
yield
del os.environ[request.param]
@pytest.mark.parametrize(
'url', (
'http://192.168.0.1:5000/',
'http://192.168.0.1/',
'http://172.16.1.1/',
'http://172.16.1.1:5000/',
'http://localhost.localdomain:5000/v1.0/',
))
def test_bypass(self, url):
assert get_environ_proxies(url) == {}
@pytest.mark.parametrize(
'url', (
'http://192.168.1.1:5000/',
'http://192.168.1.1/',
'http://www.requests.com/',
))
def test_not_bypass(self, url):
assert get_environ_proxies(url) != {}
class TestIsIPv4Address:
def test_valid(self):
assert is_ipv4_address('8.8.8.8')
@pytest.mark.parametrize('value', ('8.8.8.8.8', 'localhost.localdomain'))
def test_invalid(self, value):
assert not is_ipv4_address(value)
class TestIsValidCIDR:
def test_valid(self):
assert is_valid_cidr('192.168.1.0/24')
@pytest.mark.parametrize(
'value', (
'8.8.8.8',
'192.168.1.0/a',
'192.168.1.0/128',
'192.168.1.0/-1',
'192.168.1.999/24',
))
def test_invalid(self, value):
assert not is_valid_cidr(value)
class TestAddressInNetwork:
def test_valid(self):
assert address_in_network('192.168.1.1', '192.168.1.0/24')
def test_invalid(self):
assert not address_in_network('172.16.0.1', '192.168.1.0/24')
class TestGuessFilename:
@pytest.mark.parametrize(
'value', (1, type('Fake', (object,), {'name': 1})()),
)
def test_guess_filename_invalid(self, value):
assert guess_filename(value) is None
@pytest.mark.parametrize(
'value, expected_type', (
(b'value', compat.bytes),
(b'value'.decode('utf-8'), compat.str)
))
def test_guess_filename_valid(self, value, expected_type):
obj = type('Fake', (object,), {'name': value})()
result = guess_filename(obj)
assert result == value
assert isinstance(result, expected_type)
class TestContentEncodingDetection:
def test_none(self):
encodings = get_encodings_from_content('')
assert not len(encodings)
@pytest.mark.parametrize(
'content', (
# HTML5 meta charset attribute
'<meta charset="UTF-8">',
# HTML4 pragma directive
'<meta http-equiv="Content-type" content="text/html;charset=UTF-8">',
# XHTML 1.x served with text/html MIME type
'<meta http-equiv="Content-type" content="text/html;charset=UTF-8" />',
# XHTML 1.x served as XML
'<?xml version="1.0" encoding="UTF-8"?>',
))
def test_pragmas(self, content):
encodings = get_encodings_from_content(content)
assert len(encodings) == 1
assert encodings[0] == 'UTF-8'
def test_precedence(self):
content = '''
<?xml version="1.0" encoding="XML"?>
<meta charset="HTML5">
<meta http-equiv="Content-type" content="text/html;charset=HTML4" />
'''.strip()
assert get_encodings_from_content(content) == ['HTML5', 'HTML4', 'XML']
USER = PASSWORD = "%!*'();:@&=+$,/?#[] "
ENCODED_USER = compat.quote(USER, '')
ENCODED_PASSWORD = compat.quote(PASSWORD, '')
@pytest.mark.parametrize(
'url, auth', (
(
'http://' + ENCODED_USER + ':' + ENCODED_PASSWORD + '@' +
'request.com/url.html#test',
(USER, PASSWORD)
),
(
'http://user:pass@complex.url.com/path?query=yes',
('user', 'pass')
),
(
'http://user:pass%20pass@complex.url.com/path?query=yes',
('user', 'pass pass')
),
(
'http://user:pass pass@complex.url.com/path?query=yes',
('user', 'pass pass')
),
(
'http://user%25user:pass@complex.url.com/path?query=yes',
('user%user', 'pass')
),
(
'http://user:pass%23pass@complex.url.com/path?query=yes',
('user', 'pass#pass')
),
))
def test_get_auth_from_url(url, auth):
assert get_auth_from_url(url) == auth
@pytest.mark.parametrize(
'uri, expected', (
(
# Ensure requoting doesn't break expectations
'http://example.com/fiz?buz=%25ppicture',
'http://example.com/fiz?buz=%25ppicture',
),
(
# Ensure we handle unquoted percent signs in redirects
'http://example.com/fiz?buz=%ppicture',
'http://example.com/fiz?buz=%25ppicture',
),
))
def test_requote_uri_with_unquoted_percents(uri, expected):
"""See: https://github.com/kennethreitz/requests/issues/2356
"""
assert requote_uri(uri) == expected
@pytest.mark.parametrize(
'mask, expected', (
(8, '255.0.0.0'),
(24, '255.255.255.0'),
(25, '255.255.255.128'),
))
def test_dotted_netmask(mask, expected):
assert dotted_netmask(mask) == expected
@pytest.mark.parametrize(
'url, expected', (
('hTTp://u:p@Some.Host/path', 'http://some.host.proxy'),
('hTTp://u:p@Other.Host/path', 'http://http.proxy'),
('hTTps://Other.Host', None),
))
def test_select_proxies(url, expected):
"""Make sure we can select per-host proxies correctly."""
proxies = {'http': 'http://http.proxy',
'http://some.host': 'http://some.host.proxy'}
assert select_proxy(url, proxies) == expected
| 30.25 | 95 | 0.58008 | 853 | 7,018 | 4.644783 | 0.228605 | 0.033569 | 0.063604 | 0.012115 | 0.293286 | 0.220091 | 0.163301 | 0.146138 | 0.085815 | 0.054518 | 0 | 0.050896 | 0.260901 | 7,018 | 231 | 96 | 30.380952 | 0.712936 | 0.078655 | 0 | 0.234286 | 0 | 0.022857 | 0.26703 | 0.05661 | 0 | 0 | 0 | 0 | 0.125714 | 1 | 0.114286 | false | 0.091429 | 0.034286 | 0 | 0.188571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
5a54ab45f8f150e828680b7baff870b193da03be | 6,448 | py | Python | ggpy/cruft/grammar.py | hobson/ggpy | 4e6e6e876c3a4294cd711647051da2d9c1836b60 | [
"MIT"
] | 1 | 2015-01-26T19:07:45.000Z | 2015-01-26T19:07:45.000Z | ggpy/cruft/grammar.py | hobson/ggpy | 4e6e6e876c3a4294cd711647051da2d9c1836b60 | [
"MIT"
] | null | null | null | ggpy/cruft/grammar.py | hobson/ggpy | 4e6e6e876c3a4294cd711647051da2d9c1836b60 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# package: org.ggp.base.util.symbol.grammar
import re
import threading
from collections import deque
class SymbolFormatException(Exception):
source = ''
def __init__(self, message, source):
super(SymbolFormatException, self).__init__(message)
self.source = source
def getSource(self):
return self.source
def __str__(self):
return "Improperly formatted symbolic expression: " + self.source
class Symbol(object):
    def __str__(self):
        raise NotImplementedError  # subclasses must render themselves
class SymbolAtom(Symbol):
def __init__(self, value=None):
super(SymbolAtom, self).__init__()
        self.value = intern(value) if value is not None else ''  # Python 2 builtin; str has no .intern() method
def getValue(self):
return self.value
def __str__(self):
return self.value
class SymbolList(Symbol): # odd that this is a derived class and not a container for Symbol objects
'''List container for Symbol objects (self.contents = [Symbol(), Symbol(), ...])
Java -> Python
size -> __len__
toString -> __str__
'''
def __init__(self, contents):
super(SymbolList, self).__init__()
self.contents = contents
def get(self, index):
""" generated source for method get """
        return self.contents[index]  # plain indexing; java2python had left Java's List.get()
def __len__(self):
return len(self.contents)
def __str__(self):
if self.contents:
return '( ' + ' '.join([str(sym) for sym in self.contents]) + ' )'
else:
return '( )'
class SymbolPool(object):
'''A pair of dicts/pools with a thread-safe add_key_value_if_absent() operations
Python dicts and lists are already single-operation (atomic) thread-safe
nonatomic operations for lists (L) and dicts (D) include:
i = i+1
L.append(L[-1])
L[i] = L[j]
D[x] = D[x] + 1
SymbolPool uses a lock (GIL?) to perform multiple operations on a dict/list thread-safely
Here's how you'd do the same for the D[x] operation above:
import threading
lock = threading.Lock()
lock.acquire()
D[x] = D[x] + 1
lock.release()
'''
# WARNING: mutable class attributes will be shared across instances!
atomPool = {}
    listPool = {}
    # a single shared lock; creating a fresh Lock per call (as the original
    # code did) would not synchronize anything across threads
    pool_lock = threading.Lock()
# `classmethod`s can be overridden by any classes that inherit SymbolPool
# and are shared among instances. otherwise they are the same as instance
# methods. `staticmethod`s are just non-global functions and don't need to access
# the class
@staticmethod
def addToPool(key, value, pool):
""" Add key-value to `dict` `pool` if `pool` does not yet have one for that key
value :: a list of Symbol objects (SymbolList)
pool :: a dictionary of atoms or symbol lists stored in this SymbolPool class
Sam says, "Even if you've checked to make sure that the pool doesn't contain the key,
you still shouldn't assume that this method actually inserts the given value, since
this class is accessed by multiple threads simultaneously."
@return the value associated with the key in the pool
"""
        # added by HL to avoid the unthreadsafe behavior described by Sam above
        with SymbolPool.pool_lock:
            prev_value = pool.get(key)
            if prev_value is None:
                pool[key] = value
                return value
            return prev_value
@classmethod
def getAtom(cls, value):
'''Add an atom to the atomPool if it isn't already there, return the value if there'''
ret = cls.atomPool.get(value)
if ret is None:
ret = cls.addToPool(value, SymbolAtom(value), cls.atomPool)
return ret
@classmethod
    def getList(cls, contents):
        """contents is a sequence of symbols; stored as a tuple so it can key the pool"""
        contents = tuple(contents)  # lists are unhashable and could not serve as dict keys
        ret = cls.listPool.get(contents)
        if ret is None:
            ret = cls.addToPool(contents, SymbolList(contents), cls.listPool)
        return ret
# no need to overload in python just treat the Array like a List and it should just work!
# @classmethod
# @getList.register(object, Symbol)
# def getList_0(cls, contents):
# """ generated source for method getList_0 """
# return cls.getList(Arrays.asList(contents))
@classmethod
def drainPool(cls):
'''Drains the contents of the SymbolPool. Useful to control memory usage.
Sam says, "Once you've finished playing a large game, this should be safe to call
any time during gameplay. But my experiments indicate that SymbolPool
has a 97% cache hit rate during a game, so you likely only want to call
this between games, because symbols from previous game are unlikely to
reappear in subsequent, unrelated games.""
'''
cls.atomPool = dict()
cls.listPool = dict()
class SymbolFactory(object):
@classmethod
    def create(cls, string):
        """ parse a string into a Symbol (atom or nested list) """
        try:
            tokens = deque(cls.lex(cls.preprocess(string)))
            return cls.convert(tokens)
        except Exception as e:
            raise SymbolFormatException(str(e), string)
# Private, implementation-specific methods below here
@classmethod
    def convert(cls, tokens):
        """ dispatch on the first token: '(' starts a list, anything else is an atom """
        if tokens[0] == "(":
            return cls.convertList(tokens)
        else:
            return cls.convertAtom(tokens)
@classmethod
    def convertAtom(cls, tokens):
        """ consume one token and pool it as an atom """
        return SymbolPool.getAtom(tokens.popleft())
@classmethod
    def convertList(cls, tokens):
        """ consume a balanced '( ... )' token run and pool it as a list """
        contents = []
        tokens.popleft()         # drop the opening '('
        while tokens[0] != ")":  # java2python had mangled this test
            contents.append(cls.convert(tokens))
        tokens.popleft()         # drop the closing ')'
        return SymbolPool.getList(contents)
@classmethod
    def lex(cls, string):
        """ split a preprocessed string into whitespace-free tokens """
        return [token for token in string.split(" ") if token]
@classmethod
    def preprocess(cls, string):
        """ pad parentheses with spaces and collapse runs of whitespace """
        string = re.sub(r"\(", " ( ", string)
        string = re.sub(r"\)", " ) ", string)
        string = re.sub(r"\s+", " ", string)
        return string.strip()
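
# Example (assumed) round trip, given the fixes above:
#   symbol = SymbolFactory.create("( legal white ( move 1 2 ) )")
#   print symbol    # -> ( legal white ( move 1 2 ) )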
| 32.079602 | 100 | 0.622519 | 781 | 6,448 | 5.06146 | 0.341869 | 0.031875 | 0.031875 | 0.042499 | 0.060966 | 0.058437 | 0 | 0 | 0 | 0 | 0 | 0.001947 | 0.283189 | 6,448 | 200 | 101 | 32.24 | 0.85331 | 0.432072 | 0 | 0.227723 | 0 | 0 | 0.020384 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.207921 | false | 0.009901 | 0.009901 | 0.049505 | 0.49505 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a57e614d9b55b36163878bad041ba8ed0614d30 | 948 | py | Python | cortical/models/context.py | npd15393/ResumeMiner | 9644ae97aaad869c3739b2b7b92e4e5a6f857206 | [
"BSD-2-Clause"
] | null | null | null | cortical/models/context.py | npd15393/ResumeMiner | 9644ae97aaad869c3739b2b7b92e4e5a6f857206 | [
"BSD-2-Clause"
] | null | null | null | cortical/models/context.py | npd15393/ResumeMiner | 9644ae97aaad869c3739b2b7b92e4e5a6f857206 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
"""
/*******************************************************************************
* Copyright (c) cortical.io GmbH. All rights reserved.
*
* This software is confidential and proprietary information.
* You shall use it only in accordance with the terms of the
* license agreement you entered into with cortical.io GmbH.
******************************************************************************/
"""
from cortical.models.fingerprint import Fingerprint
class Context(object):
def __init__(self, fingerprint=None, context_label=None, context_id=None):
#The semantic fingerprint representation of a context
self.fingerprint = Fingerprint(**fingerprint) if isinstance(fingerprint, dict) else fingerprint # Fingerprint
#The descriptive label of a context.
self.context_label = context_label # str
#The id of a context.
self.context_id = context_id # int
| 43.090909 | 117 | 0.597046 | 101 | 948 | 5.504951 | 0.564356 | 0.064748 | 0.053957 | 0.07554 | 0.07554 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172996 | 948 | 21 | 118 | 45.142857 | 0.709184 | 0.584388 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.5 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
5a5cd7e8aa4acb388f0ef7bcdc817349add0a810 | 1,212 | py | Python | web/hottubapi.py | pwschuurman/hottub_controller | be9faeabcaf9f5bb7aba3ec03eba60276b27cf80 | [
"MIT"
] | 1 | 2020-06-03T18:32:50.000Z | 2020-06-03T18:32:50.000Z | web/hottubapi.py | pwschuurman/hottub_controller | be9faeabcaf9f5bb7aba3ec03eba60276b27cf80 | [
"MIT"
] | null | null | null | web/hottubapi.py | pwschuurman/hottub_controller | be9faeabcaf9f5bb7aba3ec03eba60276b27cf80 | [
"MIT"
] | null | null | null | from gpioapi import GpioAPI
import rx
from rx import operators as op
MAX_TEMP = 38
COOL_TEMP = 30
class HotTubAPI:
def __init__(self):
self.gpioapi = GpioAPI(None)
def transmissions(self):
return self.gpioapi.transmission_subject
    def heat_up(self):
        reached_max_temp = self.gpioapi.transmission_subject.pipe(
            op.filter(lambda x: x.set_point() is not None and x.set_point() >= MAX_TEMP)
        )
        # Press the temp-up button once a second until max temp is reached.
        # (Subscribe with a callback; the original called on_next() with the
        # result of a single, immediate button press.)
        rx.interval(1.0).pipe(
            op.timeout(15.0),
            op.take_until(reached_max_temp)
        ).subscribe(lambda _: self.press_temp_up_button())
    def cool_down(self):
        reached_cool_temp = self.gpioapi.transmission_subject.pipe(
            op.filter(lambda x: x.set_point() is not None and x.set_point() <= COOL_TEMP)
        )
        # Press the temp-down button once a second until cool temp is reached
        rx.interval(1.0).pipe(
            op.timeout(15.0),
            op.take_until(reached_cool_temp)
        ).subscribe(lambda _: self.press_temp_down_button())
def press_light_button(self):
self.gpioapi.light_button.press()
def press_pump_button(self):
self.gpioapi.pump_button.press()
def press_temp_down_button(self):
self.gpioapi.temp_down_button.press()
def press_temp_up_button(self):
self.gpioapi.temp_up_button.press()
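
# Minimal (hypothetical) usage sketch:
#   api = HotTubAPI()
#   api.press_light_button()
#   api.heat_up()   # presses temp-up once a second until MAX_TEMP is reported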
| 24.734694 | 82 | 0.710396 | 184 | 1,212 | 4.407609 | 0.255435 | 0.108508 | 0.092478 | 0.103576 | 0.540074 | 0.421702 | 0.364982 | 0.364982 | 0.276202 | 0.276202 | 0 | 0.014099 | 0.180693 | 1,212 | 48 | 83 | 25.25 | 0.802618 | 0.079208 | 0 | 0.121212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.242424 | false | 0 | 0.060606 | 0.030303 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a5e3e187f9834c9b5e31410232316fcaa6ec9f3 | 7,711 | py | Python | src/biocluster_pipeline.py | zocean/Norma | 4c45c1540f7d7d13f9b71a6772044d3772a451f8 | [
"MIT"
] | 1 | 2020-02-17T22:59:46.000Z | 2020-02-17T22:59:46.000Z | src/biocluster_pipeline.py | zocean/Norma | 4c45c1540f7d7d13f9b71a6772044d3772a451f8 | [
"MIT"
] | null | null | null | src/biocluster_pipeline.py | zocean/Norma | 4c45c1540f7d7d13f9b71a6772044d3772a451f8 | [
"MIT"
] | 2 | 2020-02-24T02:54:04.000Z | 2020-07-07T22:16:35.000Z | #!/usr/bin/python
# Programmer : Yang Zhang
# Contact: zocean636@gmail.com
# Last-modified: 24 Jan 2019 15:20:08
import os,sys,argparse
def parse_arg():
    ''' Parse the command-line arguments '''
    p=argparse.ArgumentParser( description = 'Example: %(prog)s -h', epilog='Library dependency :')
    p.add_argument('-v','--version',action='version',version='%(prog)s 0.1')
    p.add_argument('--conf',type=str,dest="conf",help="configure file")
    p.add_argument('--dry_run',dest="dry_run",action="store_true",help="set this parameter if you just want to test the environment. No real script will be processed")
    if len(sys.argv) < 2:
        p.print_help()
exit(1)
return p.parse_args()
class Run(object):
def __init__(self):
# global parameter
self.genome_size = None
self.bowtie2_index = None
self.wig2bigwig = None
self.norma = None
# tool specific parameter
self.bowtie_opt = None
self.norma_opt = None
# experiment specific parameter
self.exp_name = None
self.fastq_pulldown = None
self.fastq_input = None
self.label_pulldown = None
self.label_input = None
# output file
self.output_folder = None
self.out_bam_pulldown = None
self.out_bam_input = None
self.out_bowtie_log_pulldown = None
self.out_bowtie_log_input = None
self.out_bam_pulldown_rmdup = None
self.out_bam_input_rmdup = None
self.out_norma_output = None
self.out_norma_log = None
def build(self, conf):
# check required parameters
for parameter in ['genome_size', 'bowtie2_index', 'wig2bigwig', 'norma', 'exp_name', 'fastq_pulldown', 'fastq_input', 'label_pulldown', 'label_input', 'output_folder']:
if conf.get(parameter, None) is None:
print >>sys.stderr, "%s parameter not found" % (parameter)
exit(1)
        # run initialization
self.genome_size = conf['genome_size']
self.bowtie2_index = conf['bowtie2_index']
self.wig2bigwig = conf['wig2bigwig']
self.norma = conf['norma']
if conf.get('bowtie_opt', None) is not None and conf['bowtie_opt'] != "":
self.bowtie_opt = conf['bowtie_opt']
if conf.get('norma_opt', None) is not None and conf['norma_opt'] != "":
self.norma_opt = conf['norma_opt']
self.exp_name = conf['exp_name']
self.fastq_pulldown = conf['fastq_pulldown'].split(',')
self.fastq_input = conf['fastq_input'].split(',')
self.label_pulldown = conf['label_pulldown']
self.label_input = conf['label_input']
# output
self.output_folder = conf['output_folder']
if not os.path.isdir(self.output_folder):
os.makedirs(self.output_folder)
self.out_bam_pulldown = os.path.join(self.output_folder, '%s.bam' % (self.label_pulldown))
self.out_bam_input = os.path.join(self.output_folder, '%s.bam' % (self.label_input))
self.out_log_bowtie_pulldown = os.path.join(self.output_folder, 'log_bowtie_%s.txt' % (self.label_pulldown))
self.out_log_bowtie_input = os.path.join(self.output_folder, 'log_bowtie_%s.txt' % (self.label_input))
self.out_bam_pulldown_rmdup = os.path.join(self.output_folder, '%s.rmdup.bam' % (self.label_pulldown))
self.out_bam_input_rmdup = os.path.join(self.output_folder, '%s.rmdup.bam' % (self.label_input))
self.out_norma_output = os.path.join(self.output_folder, self.exp_name)
self.out_log_norma = os.path.join(self.output_folder, 'log_norma_%s' % (self.exp_name))
def pipeline(self, dry_run = False):
#
print >>sys.stderr, "# Start Norma pipeline for experiment: %s" % (self.exp_name)
print >>sys.stderr, "# Step 1: Align the pulldown fastq to the reference genome"
cmd = self.__run_bowtie(self.bowtie2_index, self.fastq_pulldown, self.bowtie_opt, self.out_bam_pulldown, self.out_log_bowtie_pulldown)
if dry_run:
print >>sys.stderr, cmd
else:
os.system(cmd)
print >>sys.stderr, "# Step 1: Alignment done: check %s for running log" % (self.out_log_bowtie_pulldown)
print >>sys.stderr, ""
#
print >>sys.stderr, "# Step 2: PCR duplicates removal for pulldown"
cmd = self.__run_rmdup(self.out_bam_pulldown, self.out_bam_pulldown_rmdup)
if dry_run:
print >>sys.stderr, cmd
else:
os.system(cmd)
print >>sys.stderr, ""
#
print >>sys.stderr, "# Step 3: Align the input fastq to the reference genome"
cmd = self.__run_bowtie(self.bowtie2_index, self.fastq_input, self.bowtie_opt, self.out_bam_input, self.out_log_bowtie_input)
if dry_run:
print >>sys.stderr, cmd
else:
os.system(cmd)
print >>sys.stderr, "# Step 3: Aligment done: check %s for running log" % (self.out_log_bowtie_input)
print >>sys.stderr, ""
#
print >>sys.stderr, "# Step 4: PCR duplicates removal for input"
cmd = self.__run_rmdup(self.out_bam_input, self.out_bam_input_rmdup)
if dry_run:
print >>sys.stderr, cmd
else:
os.system(cmd)
print >>sys.stderr, ""
#
print >>sys.stderr, "# Step 5: Run Norma to get the TSA-seq signal"
cmd = self.__run_norma(self.norma, self.out_bam_pulldown_rmdup, self.out_bam_input_rmdup, self.out_norma_output, self.out_log_norma, self.norma_opt, self.wig2bigwig, self.genome_size)
if dry_run:
print >>sys.stderr, cmd
else:
os.system(cmd)
print >>sys.stderr, "# Step 5: Norma done: check %s for running log" % (self.out_log_norma)
def __run_bowtie(self, genome_index, fastq_list, other_opt, output_file, log_file):
if other_opt is not None:
cmd = "bowtie2 %s -x %s -U %s 2>%s | samtools view -S -bh - | samtools sort -o %s" % (other_opt, genome_index, ' '.join(fastq_list), log_file, output_file)
else:
cmd = "bowtie2 -x %s -U %s 2>%s | samtools view -bS " % (genome_index, ' '.join(fastq_list), log_file, output_file)
cmd += '\n' + "samtools index %s" % (output_file)
return cmd
def __run_rmdup(self, input_bam, output_bam):
# samtools rmdup -s removes PCR duplicates, treating the reads as single-end
cmd = "samtools rmdup -s %s %s" % (input_bam, output_bam)
cmd += '\n' + "samtools index %s" % (output_bam)
return cmd
def __run_norma(self, norma_script, pulldown_bam, input_bam, output, log, other_opt, wig2bigwig, genome_size):
# capture both stdout and stderr of the Norma run in the log file
if other_opt is not None:
cmd = "%s %s -g %s -e %s -c %s --wig2bw %s -o %s >%s 2>&1" % (norma_script, other_opt, genome_size, pulldown_bam, input_bam, wig2bigwig, output, log)
else:
cmd = "%s -g %s -e %s -c %s --wig2bw %s -o %s >%s 2>&1" % (norma_script, genome_size, pulldown_bam, input_bam, wig2bigwig, output, log)
return cmd
def parse_conf(filename):
fin = open(filename, 'r')
table = {}
for line in fin:
if line.strip().startswith('#') or line.strip() == '':
continue
# split on the first '=' only, so option values may themselves contain '='
row = line.strip().split('=', 1)
table[row[0].strip()] = row[1].strip()
fin.close()
return table
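# A minimal sketch of the configure table parse_conf expects: 'key = value'
# lines, '#' comments, and blank lines. The key names come from build() above;
# every value below is hypothetical:
#
#   genome_size = hg38.chrom.sizes
#   bowtie2_index = /ref/hg38/hg38
#   wig2bigwig = /usr/local/bin/wigToBigWig
#   norma = /opt/norma/norma.py
#   bowtie_opt = -p 8
#   exp_name = SON_TSA_seq
#   fastq_pulldown = pulldown_rep1.fastq.gz,pulldown_rep2.fastq.gz
#   fastq_input = input_rep1.fastq.gz
#   label_pulldown = pulldown
#   label_input = input
#   output_folder = ./results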
def main():
global args
args = parse_arg()
# parse the configure table
conf = parse_conf(args.conf)
print >>sys.stderr, "# parse parameters done"
# build Run
TSA_seq_run = Run()
TSA_seq_run.build(conf)
print >>sys.stderr, "# build run done"
# run
print >>sys.stderr, "# run pipeline"
TSA_seq_run.pipeline(args.dry_run)
if __name__ == "__main__":
main()
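# Usage sketch -- parse_arg is defined earlier in this file; the script name
# and exact flag spellings are assumptions:
#   python tsa_seq_pipeline.py --conf run.conf             # run the pipeline
#   python tsa_seq_pipeline.py --conf run.conf --dry-run   # only print commands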
| 46.451807 | 191 | 0.625989 | 1,074 | 7,711 | 4.260708 | 0.163873 | 0.047421 | 0.067308 | 0.031469 | 0.434004 | 0.356425 | 0.313156 | 0.257649 | 0.239729 | 0.214598 | 0 | 0.009296 | 0.246661 | 7,711 | 165 | 192 | 46.733333 | 0.778447 | 0.036182 | 0 | 0.23741 | 0 | 0.021583 | 0.184592 | 0 | 0.014388 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.007194 | null | null | 0.165468 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a5f41145e46fd5342cd880863fcd045e36493b6 | 268 | py | Python | inmembrane/plugins/__init__.py | pansapiens/inmembrane | 382eee3b2bacc9c567f65d7c48f1ddf9a86c253c | [
"BSD-2-Clause"
] | 4 | 2015-03-09T02:08:34.000Z | 2021-02-06T13:52:21.000Z | inmembrane/plugins/__init__.py | pansapiens/inmembrane | 382eee3b2bacc9c567f65d7c48f1ddf9a86c253c | [
"BSD-2-Clause"
] | 5 | 2015-01-29T03:36:04.000Z | 2021-12-08T07:20:42.000Z | inmembrane/plugins/__init__.py | pansapiens/inmembrane | 382eee3b2bacc9c567f65d7c48f1ddf9a86c253c | [
"BSD-2-Clause"
] | 6 | 2015-03-09T02:08:43.000Z | 2021-06-07T17:33:16.000Z | # This little bit of magic fills the __all__ list
# with every plugin name, and means that calling:
# from plugins import *
# within inmembrane.py will import every plugin
import pkgutil
__all__ = []
for _, name, _ in pkgutil.iter_modules(__path__):
__all__.append(name)
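# For illustration: if this package directory held plugin modules foo.py and
# bar.py (hypothetical names), the loop above would fill __all__ with
# ['bar', 'foo'] without any explicit import statements.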
| 26.8 | 50 | 0.75 | 41 | 268 | 4.487805 | 0.804878 | 0.119565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004525 | 0.175373 | 268 | 9 | 51 | 29.777778 | 0.828054 | 0.61194 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a65af496e71e8ad9c61c888ed0b5d903da6928e | 343 | py | Python | company_logo.py | DomirScire/HackerRank_answers | 0432185a472aeae7062cf4e406d0e7a5ed2cc979 | [
"MIT"
] | 1 | 2021-03-19T13:05:16.000Z | 2021-03-19T13:05:16.000Z | company_logo.py | DomirScire/HackerRank_answers | 0432185a472aeae7062cf4e406d0e7a5ed2cc979 | [
"MIT"
] | null | null | null | company_logo.py | DomirScire/HackerRank_answers | 0432185a472aeae7062cf4e406d0e7a5ed2cc979 | [
"MIT"
] | null | null | null | # DomirScire
import math
import os
import random
import re
import sys
import collections
if __name__ == '__main__':
# count character frequencies of the input string
s = sorted(input().strip())
s_counter = collections.Counter(s).most_common()
# order by descending count, breaking ties alphabetically
s_counter = sorted(s_counter, key=lambda x: (x[1] * -1, x[0]))
for i in range(0, 3):
print(s_counter[i][0], s_counter[i][1])
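# Worked example (hypothetical run): the input "aabbbccde" prints
#   b 3
#   a 2
#   c 2
# since 'b' occurs three times, and 'a' beats 'c' only on the alphabetical
# tie-break (both occur twice).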
| 22.866667 | 66 | 0.661808 | 55 | 343 | 3.872727 | 0.527273 | 0.187793 | 0.084507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02518 | 0.189504 | 343 | 14 | 67 | 24.5 | 0.741007 | 0.029155 | 0 | 0 | 0 | 0 | 0.024169 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
5a6ebd896d0065716f83ceee55fedb02e43d2b47 | 17,814 | py | Python | cosmic-core/systemvm/patches/centos7/opt/cosmic/router/bin/cs/firewall.py | sanderv32/cosmic | 9a9d86500b67255a1c743a9438a05c0d969fd210 | [
"Apache-2.0"
] | 64 | 2016-01-30T13:31:00.000Z | 2022-02-21T02:13:25.000Z | cosmic-core/systemvm/patches/centos7/opt/cosmic/router/bin/cs/firewall.py | sanderv32/cosmic | 9a9d86500b67255a1c743a9438a05c0d969fd210 | [
"Apache-2.0"
] | 525 | 2016-01-22T10:46:31.000Z | 2022-02-23T11:08:01.000Z | cosmic-core/systemvm/patches/centos7/opt/cosmic/router/bin/cs/firewall.py | sanderv32/cosmic | 9a9d86500b67255a1c743a9438a05c0d969fd210 | [
"Apache-2.0"
] | 25 | 2016-01-13T16:46:46.000Z | 2021-07-23T15:22:27.000Z | import logging
from jinja2 import Environment, FileSystemLoader
import utils
class Firewall:
def __init__(self, config):
self.config = config
self.jinja_env = Environment(
loader=FileSystemLoader('/opt/cosmic/router/bin/cs/templates'),
trim_blocks=True,
lstrip_blocks=True
)
self.fw = self.config.fw
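# Note: each rule queued on self.fw below is a three-element list --
# [table, position, rule] -- where table is 'filter'/'mangle'/'nat'
# ('' for the default), position is 'front' or '', and rule is the iptables
# rule text. This format is inferred from usage in this class; the consumer
# of self.fw lives elsewhere.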
def sync(self):
logging.info("Running firewall sync")
public_device = None
public_ip = None
self.add_default_vpc_rules()
if "interfaces" not in self.config.dbag_network_overview:
logging.info("Skipping firewall sync, as we have no 'interfaces' object in network_overview.")
return
for interface in self.config.dbag_network_overview['interfaces']:
device = utils.get_interface_name_from_mac_address(interface['mac_address'])
if interface['metadata']['type'] == 'sync':
self.add_sync_vpc_rules(device)
elif interface['metadata']['type'] == 'other':
pass
elif interface['metadata']['type'] == 'public':
self.add_public_vpc_rules(device)
public_device = device
public_ip = interface['ipv4_addresses'][0]['cidr']
elif interface['metadata']['type'] == 'guesttier':
self.add_tier_vpc_rules(device, interface['ipv4_addresses'][0]['cidr'])
elif interface['metadata']['type'] == 'private':
self.add_private_vpc_rules(device, interface['ipv4_addresses'][0]['cidr'])
vpn_open = False
if public_device is not None and 'vpn' in self.config.dbag_network_overview:
if 'site2site' in self.config.dbag_network_overview['vpn']:
for site2site in self.config.dbag_network_overview['vpn']['site2site']:
self.add_site2site_vpn_rules(public_device, site2site)
vpn_open = True
if 'remote_access' in self.config.dbag_network_overview['vpn']:
if public_ip is not None:
self.add_remote_access_vpn_rules(
public_device, public_ip, self.config.dbag_network_overview['vpn']['remote_access']
)
vpn_open = True
# default block VPN ports
logging.info("VPN_open is %s" % (vpn_open))
if not vpn_open:
self.block_vpn_rules(public_device)
if public_device is not None and 'loadbalancer' in self.config.dbag_network_overview:
if len(self.config.dbag_network_overview['loadbalancer']) > 0:
self.add_loadbalancer_rules(public_device, public_ip, self.config.dbag_network_overview['loadbalancer'])
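# A minimal sketch of one network_overview interface entry as consumed by
# sync() above (values hypothetical, structure inferred from the lookups):
#   {'mac_address': '06:7e:c0:00:00:01',
#    'metadata': {'type': 'public'},
#    'ipv4_addresses': [{'cidr': '203.0.113.10/24'}]}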
def add_default_vpc_rules(self):
logging.info("Configuring default VPC rules")
self.fw.append(["filter", "", "-P INPUT DROP"])
self.fw.append(["filter", "", "-P FORWARD DROP"])
self.fw.append(["filter", "", "-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT"])
self.fw.append(["mangle", "front", "-A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill"])
self.fw.append(["filter", "", "-A INPUT -i lo -j ACCEPT"])
self.fw.append(["filter", "", "-A INPUT -p icmp -j ACCEPT"])
if self.config.get_advert_method() == "MULTICAST":
self.fw.append(["filter", "", "-A INPUT -d 224.0.0.18/32 -j ACCEPT"])
self.fw.append(["filter", "", "-A INPUT -d 224.0.0.22/32 -j ACCEPT"])
self.fw.append(["filter", "", "-A INPUT -d 224.0.0.252/32 -j ACCEPT"])
self.fw.append(["filter", "", "-A INPUT -d 225.0.0.50/32 -j ACCEPT"])
self.fw.append(["filter", "",
"-A INPUT -i eth0 -p tcp -m tcp -s 169.254.0.1/32 --dport 3922 -m "
"state --state NEW,ESTABLISHED -j ACCEPT"])
self.fw.append(["filter", "", "-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT"])
self.fw.append(["filter", "", "-A FORWARD -s %s ! -d %s -j ACCEPT" % (
self.config.dbag_cmdline['config']['vpccidr'], self.config.dbag_cmdline['config']['vpccidr']
)])
def add_tier_vpc_rules(self, device, cidr):
logging.info("Configuring VPC tier rules for device %s" % device)
self.fw.append(["filter", "", "-A INPUT -i %s -m state --state RELATED,ESTABLISHED -j ACCEPT" % device])
self.fw.append(["filter", "", "-A FORWARD -m state --state NEW -o %s -j ACL_INBOUND_%s" % (device, device)])
self.fw.append(["filter", "", "-A OUTPUT -m state --state NEW -o %s -j ACL_INBOUND_%s" % (device, device)])
self.fw.append(["filter", "front", "-A ACL_INBOUND_%s -d 224.0.0.18/32 -j ACCEPT" % device])
self.fw.append(["filter", "front", "-A ACL_INBOUND_%s -d 224.0.0.22/32 -j ACCEPT" % device])
self.fw.append(["filter", "front", "-A ACL_INBOUND_%s -d 224.0.0.252/32 -j ACCEPT" % device])
self.fw.append(["filter", "front", "-A ACL_INBOUND_%s -d 225.0.0.50/32 -j ACCEPT" % device])
self.fw.append(["filter", "front", "-A ACL_INBOUND_%s -d %s -p udp -m udp --dport 68 -j ACCEPT" % (
device, cidr
)])
self.fw.append(["filter", "", "-A INPUT -i %s -p udp -m udp --dport 67 -j ACCEPT" % device])
self.fw.append(["filter", "", "-A INPUT -i %s -p udp -m udp --dport 53 -s %s -j ACCEPT" % (device, cidr)])
self.fw.append(["filter", "", "-A INPUT -i %s -p tcp -m tcp --dport 53 -s %s -j ACCEPT" % (device, cidr)])
self.fw.append(["filter", "", "-A INPUT -i %s -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT" %
device
])
self.fw.append(["filter", "", "-A INPUT -i %s -p tcp -m tcp --dport 8080 -m state --state NEW -j ACCEPT" %
device])
self.fw.append(["mangle", "", "-A PREROUTING -m state --state NEW -i %s ! -d %s -j ACL_OUTBOUND_%s" % (
device, cidr, device
)])
self.fw.append(["mangle", "front", "-A ACL_OUTBOUND_%s -d 224.0.0.18/32 -j ACCEPT" % device])
self.fw.append(["mangle", "front", "-A ACL_OUTBOUND_%s -d 224.0.0.22/32 -j ACCEPT" % device])
self.fw.append(["mangle", "front", "-A ACL_OUTBOUND_%s -d 224.0.0.252/32 -j ACCEPT" % device])
self.fw.append(["mangle", "front", "-A ACL_OUTBOUND_%s -d 225.0.0.50/32 -j ACCEPT" % device])
self.fw.append(["mangle", "front", "-A ACL_OUTBOUND_%s -d 255.255.255.255/32 -j ACCEPT" % device])
self.fw.append(["nat", "front", "-A POSTROUTING -s %s -o %s -j SNAT --to-source %s" % (
cidr, device, cidr.split('/')[0]
)])
self.fw.append(["", "front", "-A INPUT -i %s -d %s -p tcp -m tcp -m state --state NEW --dport 80 -j ACCEPT" % (
device, cidr
)])
self.fw.append(["", "front", "-A INPUT -i %s -d %s -p tcp -m tcp -m state --state NEW --dport 443 -j ACCEPT" % (
device, cidr
)])
def add_sync_vpc_rules(self, device):
logging.info("Configuring Sync VPC rules")
if self.config.get_advert_method() == "UNICAST":
self.fw.append(["filter", "", "-A INPUT -i %s -p vrrp -j ACCEPT" % device])
self.fw.append(["filter", "", "-A OUTPUT -o %s -p vrrp -j ACCEPT" % device])
self.fw.append(["filter", "", "-A INPUT -i %s -p tcp --dport 3780 -j ACCEPT" % device])
self.fw.append(["filter", "", "-A OUTPUT -o %s -p tcp --dport 3780 -j ACCEPT" % device])
def add_public_vpc_rules(self, device):
logging.info("Configuring Public VPC rules")
# create ingress chain mangle (port forwarding / source nat)
self.fw.append(["mangle", "", "-N ACL_PUBLIC_IP_%s" % device])
self.fw.append(["mangle", "", "-A PREROUTING -m state --state NEW -i %s -j ACL_PUBLIC_IP_%s" % (
device, device
)])
self.fw.append(["filter", "", "-A INPUT -i %s -m state --state RELATED,ESTABLISHED -j ACCEPT" % device])
# create ingress chain filter (load balancing)
self.fw.append(["filter", "", "-N ACL_PUBLIC_IP_%s" % device])
self.fw.append(["filter", "", "-A INPUT -m state --state NEW -j ACL_PUBLIC_IP_%s" % device])
# create egress chain
self.fw.append(["mangle", "front", "-N ACL_OUTBOUND_%s" % device])
# jump to egress chain
self.fw.append(["mangle", "front", "-A PREROUTING -m state --state NEW -i %s -j ACL_OUTBOUND_%s" % (
device, device
)])
# create source nat list chain
self.fw.append(["filter", "", "-N SOURCE_NAT_LIST"])
self.fw.append(["filter", "", "-A FORWARD -j SOURCE_NAT_LIST"])
if 'source_nat' in self.config.dbag_network_overview['services'] and \
self.config.dbag_network_overview['services']['source_nat']:
logging.info("Adding SourceNAT for interface %s to %s" % (
device, self.config.dbag_network_overview['services']['source_nat'][0]['to']
))
self.fw.append(["nat", "", "-A POSTROUTING -o %s -d 10.0.0.0/8 -j RETURN" % device])
self.fw.append(["nat", "", "-A POSTROUTING -o %s -d 172.16.0.0/12 -j RETURN" % device])
self.fw.append(["nat", "", "-A POSTROUTING -o %s -d 192.168.0.0/16 -j RETURN" % device])
self.fw.append(["nat", "", "-A POSTROUTING -j SNAT -o %s --to-source %s" % (
device, self.config.dbag_network_overview['services']['source_nat'][0]['to']
)])
def add_private_vpc_rules(self, device, cidr):
logging.info("Configuring Private VPC rules")
self.fw.append(["filter", "", "-A INPUT -i %s -m state --state RELATED,ESTABLISHED -j ACCEPT" % device])
# create egress chain
self.fw.append(["mangle", "", "-N ACL_OUTBOUND_%s" % device])
# jump to egress chain
self.fw.append(["mangle", "", "-A PREROUTING -m state --state NEW -i %s ! -d %s -j ACL_OUTBOUND_%s" % (
device, cidr, device
)])
# create ingress chain
self.fw.append(["filter", "", "-N ACL_INBOUND_%s" % device])
# jump to ingress chain
self.fw.append(["filter", "", "-A FORWARD -m state --state NEW -o %s -j ACL_INBOUND_%s" % (device, device)])
def add_site2site_vpn_rules(self, device, site2site):
logging.info("Configuring Site2Site VPN rules")
self.config.fw.append(["", "front", "-A INPUT -i %s -p udp -m udp --dport 500 -s %s -d %s -j ACCEPT" % (
device, site2site['right'], site2site['left'])])
self.config.fw.append(["", "front", "-A INPUT -i %s -p udp -m udp --dport 4500 -s %s -d %s -j ACCEPT" % (
device, site2site['right'], site2site['left'])])
self.config.fw.append(["", "front", "-A INPUT -i %s -p esp -s %s -d %s -j ACCEPT" % (
device, site2site['right'], site2site['left'])])
self.config.fw.append(["nat", "front", "-A POSTROUTING -o %s -m mark --mark 0x525 -j ACCEPT" % device])
# Make it possible to tcpdump on ipsec tunnels
# https://wiki.strongswan.org/projects/strongswan/wiki/CorrectTrafficDump
# ingress IPsec and IKE Traffic rule
self.config.fw.append(["filter", "front", "-I INPUT -p esp -j NFLOG --nflog-group 5"])
self.config.fw.append(["filter", "front", "-I INPUT -p ah -j NFLOG --nflog-group 5"])
self.config.fw.append(["filter", "front",
"-I INPUT -p udp -m multiport --dports 500,4500 -j NFLOG --nflog-group 5"])
# egress IPsec and IKE traffic
self.config.fw.append(["filter", "front", "-I OUTPUT -p esp -j NFLOG --nflog-group 5"])
self.config.fw.append(["filter", "front", "-I OUTPUT -p ah -j NFLOG --nflog-group 5"])
self.config.fw.append(["filter", "front",
"-I OUTPUT -p udp -m multiport --dports 500,4500 -j NFLOG --nflog-group 5"])
# decapsulated IPsec traffic
self.config.fw.append(["mangle", "front",
"-I PREROUTING -m policy --pol ipsec --dir in -j NFLOG --nflog-group 5"])
self.config.fw.append(["mangle", "front",
"-I POSTROUTING -m policy --pol ipsec --dir out -j NFLOG --nflog-group 5"])
# IPsec traffic that is destinated for the local host (iptables INPUT chain)
self.config.fw.append(["filter", "front",
"-I INPUT -m addrtype --dst-type LOCAL -m policy --pol ipsec --dir in"
" -j NFLOG --nflog-group 5"])
# IPsec traffic that is destinated for a remote host (iptables FORWARD chain)
self.config.fw.append(["filter", "front",
"-I INPUT -m addrtype ! --dst-type LOCAL -m policy --pol ipsec --dir in"
" -j NFLOG --nflog-group 5"])
# IPsec traffic that is outgoing (iptables OUTPUT chain)
self.config.fw.append(["filter", "front", "-I OUTPUT -m policy --pol ipsec --dir out -j NFLOG --nflog-group 5"])
for net in site2site['peer_list'].strip().split(','):
self.config.fw.append(["mangle", "front",
"-A FORWARD -s %s -d %s -j MARK --set-xmark 0x525/0xffffffff" % (
site2site['left_subnet'], net)])
self.config.fw.append(["mangle", "",
"-A OUTPUT -s %s -d %s -j MARK --set-xmark 0x525/0xffffffff" % (
site2site['left_subnet'], net)])
self.config.fw.append(["mangle", "front",
"-A FORWARD -s %s -d %s -j MARK --set-xmark 0x524/0xffffffff" % (
net, site2site['left_subnet'])])
self.config.fw.append(["mangle", "",
"-A INPUT -s %s -d %s -j MARK --set-xmark 0x524/0xffffffff" % (
net, site2site['left_subnet'])])
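# The 0x525 (outbound) and 0x524 (inbound) marks presumably feed policy
# routing and the NAT rule above, which matches on --mark 0x525 to steer
# traffic for the peer subnets into the IPsec tunnel.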
# Block anything else
self.block_vpn_rules(device)
def add_remote_access_vpn_rules(self, device, publicip, remote_access):
logging.info("Configuring RemoteAccess VPN rules")
localcidr = remote_access['local_cidr']
local_ip = remote_access['local_ip']
self.config.fw.append(["", "", "-I INPUT -i %s --dst %s -p udp -m udp --dport 500 -j ACCEPT" % (device, publicip.split("/")[0])])
self.config.fw.append(["", "", "-I INPUT -i %s --dst %s -p udp -m udp --dport 4500 -j ACCEPT" % (device, publicip.split("/")[0])])
self.config.fw.append(["", "", "-I INPUT -i %s --dst %s -p udp -m udp --dport 1701 -j ACCEPT" % (device, publicip.split("/")[0])])
self.config.fw.append(["", "", "-I INPUT -i %s ! --dst %s -p udp -m udp --dport 500 -j REJECT" % (device, publicip.split("/")[0])])
self.config.fw.append(["", "", "-I INPUT -i %s ! --dst %s -p udp -m udp --dport 4500 -j REJECT" % (device, publicip.split("/")[0])])
self.config.fw.append(["", "", "-I INPUT -i %s ! --dst %s -p udp -m udp --dport 1701 -j REJECT" % (device, publicip.split("/")[0])])
self.config.fw.append(["", "", "-I INPUT -i %s -p ah -j ACCEPT" % device])
self.config.fw.append(["", "", "-I INPUT -i %s -p esp -j ACCEPT" % device])
self.config.fw.append(["", "", " -N VPN_FORWARD"])
self.config.fw.append(["", "", "-I FORWARD -i ppp+ -j VPN_FORWARD"])
self.config.fw.append(["", "", "-I FORWARD -o ppp+ -j VPN_FORWARD"])
self.config.fw.append(["", "", "-I FORWARD -o ppp+ -j VPN_FORWARD"])
self.config.fw.append(["", "", "-A VPN_FORWARD -s %s -j RETURN" % localcidr])
self.config.fw.append(["", "", "-A VPN_FORWARD -i ppp+ -d %s -j RETURN" % localcidr])
self.config.fw.append(["", "", "-A VPN_FORWARD -i ppp+ -o ppp+ -j RETURN"])
self.config.fw.append(["", "", "-I INPUT -i ppp+ -m udp -p udp --dport 53 -j ACCEPT"])
self.config.fw.append(["", "", "-I INPUT -i ppp+ -m tcp -p tcp --dport 53 -j ACCEPT"])
self.config.fw.append(["nat", "front", "-A PREROUTING -i ppp+ -m tcp -p tcp --dport 53 -j DNAT --to-destination %s" % local_ip])
def block_vpn_rules(self, device):
logging.info("Dropping VPN rules")
self.config.fw.append(["", "", "-A INPUT -i %s -p udp -m udp --dport 500 -j REJECT" % device])
self.config.fw.append(["", "", "-A INPUT -i %s -p udp -m udp --dport 4500 -j REJECT" % device])
self.config.fw.append(["", "", "-A INPUT -i %s -p udp -m udp --dport 1701 -j REJECT" % device])
self.config.fw.append(["", "", "-A INPUT -i %s -p ah -j REJECT" % device])
self.config.fw.append(["", "", "-A INPUT -i %s -p esp -j REJECT" % device])
self.config.fw.append(["", "", "-A INPUT -i ppp+ -m udp -p udp --dport 53 -j REJECT"])
self.config.fw.append(["", "", "-A INPUT -i ppp+ -m tcp -p tcp --dport 53 -j REJECT"])
def add_loadbalancer_rules(self, device, publicip, loadbalancer):
logging.info("Configuring Loadbalancer rules")
for lb in loadbalancer.get('load_balancers', {}):
self.config.fw.append(["", "", "-I INPUT -i %s --dst %s -p %s -m %s --dport %s -j ACCEPT" % (device,
publicip.split("/")[0],
lb['protocol'],
lb['protocol'],
lb['src_port'])])
| 57.650485 | 140 | 0.544179 | 2,374 | 17,814 | 3.997051 | 0.098989 | 0.085994 | 0.072083 | 0.085362 | 0.745811 | 0.734008 | 0.669407 | 0.619243 | 0.559701 | 0.502477 | 0 | 0.025791 | 0.281745 | 17,814 | 308 | 141 | 57.837662 | 0.715826 | 0.040137 | 0 | 0.219828 | 0 | 0.133621 | 0.382926 | 0.002049 | 0 | 0 | 0.003806 | 0 | 0 | 1 | 0.047414 | false | 0.00431 | 0.012931 | 0 | 0.068966 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a6f7399d0e46958326d190fed0176f8bf1bbfef | 468 | py | Python | core/migrations/0012_alter_preco_categoria.py | thiagofreitascarneiro/Projeto_Fusion | 4bf9d1c69ddf83fbc957e9ccdc41112d71bbffa9 | [
"MIT"
] | null | null | null | core/migrations/0012_alter_preco_categoria.py | thiagofreitascarneiro/Projeto_Fusion | 4bf9d1c69ddf83fbc957e9ccdc41112d71bbffa9 | [
"MIT"
] | null | null | null | core/migrations/0012_alter_preco_categoria.py | thiagofreitascarneiro/Projeto_Fusion | 4bf9d1c69ddf83fbc957e9ccdc41112d71bbffa9 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.6 on 2021-09-05 19:39
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('core', '0011_auto_20210905_1619'),
]
operations = [
migrations.AlterField(
model_name='preco',
name='categoria',
field=models.CharField(choices=[('Premium', 'C'), ('Pro', 'B'), ('Plus', 'A')], max_length=15, verbose_name='categoria'),
),
]
| 24.631579 | 133 | 0.587607 | 52 | 468 | 5.173077 | 0.846154 | 0.096654 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094286 | 0.252137 | 468 | 18 | 134 | 26 | 0.674286 | 0.096154 | 0 | 0 | 1 | 0 | 0.159145 | 0.054632 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a78040379a605d417a65ff4123fa8c2e73e5ad9 | 3,393 | py | Python | src/financial_statements/old/balance_sheet.py | LeanderLXZ/intelligent-analysis-of-financial-statements | 38bab5bea3c2f22f71020020c8325f6b6b014853 | [
"Apache-2.0"
] | null | null | null | src/financial_statements/old/balance_sheet.py | LeanderLXZ/intelligent-analysis-of-financial-statements | 38bab5bea3c2f22f71020020c8325f6b6b014853 | [
"Apache-2.0"
] | null | null | null | src/financial_statements/old/balance_sheet.py | LeanderLXZ/intelligent-analysis-of-financial-statements | 38bab5bea3c2f22f71020020c8325f6b6b014853 | [
"Apache-2.0"
] | 1 | 2021-12-15T02:09:16.000Z | 2021-12-15T02:09:16.000Z | import tushare as ts
import pandas as pd
from tqdm import tqdm
from utils import *
with open('../../tushare_token.txt', 'r') as f:
token = f.readline()
ts.set_token(token)
tushare_api = ts.pro_api()
# Stock list: collect the codes for every listing status
# (tushare list_status: 'L' = listed, 'D' = delisted, 'P' = suspended)
df_list = []
for list_status in ['L', 'D', 'P']:
df_i = tushare_api.stock_basic(
exchange='',
list_status=list_status,
fields='ts_code')
df_list.append(df_i)
df_all = pd.concat(df_list)
# Balance sheet
df = pd.DataFrame()
for ts_code in tqdm(df_all['ts_code'].values):
# safe_get (from utils) presumably retries the API call on transient failures
df_i = safe_get(
tushare_api.balancesheet,
ts_code=ts_code,
fields=
'ts_code, ann_date, f_ann_date, end_date, report_type, comp_type,'
'total_share, cap_rese, undistr_porfit, surplus_rese, special_rese,'
'money_cap, trad_asset, notes_receiv, accounts_receiv, oth_receiv,'
'prepayment, div_receiv, int_receiv, inventories, amor_exp,'
'nca_within_1y, sett_rsrv, loanto_oth_bank_fi, premium_receiv,'
'reinsur_receiv, reinsur_res_receiv, pur_resale_fa, oth_cur_assets,'
'total_cur_assets, fa_avail_for_sale, htm_invest, lt_eqt_invest,'
'invest_real_estate, time_deposits, oth_assets, lt_rec, fix_assets,'
'cip, const_materials, fixed_assets_disp, produc_bio_assets,'
'oil_and_gas_assets, intan_assets, r_and_d, goodwill, lt_amor_exp,'
'defer_tax_assets, decr_in_disbur, oth_nca, total_nca, cash_reser_cb,'
'depos_in_oth_bfi, prec_metals, deriv_assets, rr_reins_une_prem,'
'rr_reins_outstd_cla, rr_reins_lins_liab, rr_reins_lthins_liab,'
'refund_depos, ph_pledge_loans, refund_cap_depos, indep_acct_assets,'
'client_depos, client_prov, transac_seat_fee, invest_as_receiv,'
'total_assets, lt_borr, st_borr, cb_borr, depos_ib_deposits,'
'loan_oth_bank, trading_fl, notes_payable, acct_payable, adv_receipts,'
'sold_for_repur_fa, comm_payable, payroll_payable, taxes_payable,'
'int_payable, div_payable, oth_payable, acc_exp, deferred_inc,'
'st_bonds_payable, payable_to_reinsurer, rsrv_insur_cont,'
'acting_trading_sec, acting_uw_sec, non_cur_liab_due_1y, oth_cur_liab,'
'total_cur_liab, bond_payable, lt_payable, specific_payables,'
'estimated_liab, defer_tax_liab, defer_inc_non_cur_liab, oth_ncl,'
'total_ncl, depos_oth_bfi, deriv_liab, depos, agency_bus_liab,'
'oth_liab, prem_receiv_adva, depos_received, ph_invest, reser_une_prem,'
'reser_outstd_claims, reser_lins_liab, reser_lthins_liab,'
'indept_acc_liab, pledge_borr, indem_payable, policy_div_payable,'
'total_liab, treasury_share, ordin_risk_reser, forex_differ,'
'invest_loss_unconf, minority_int, total_hldr_eqy_exc_min_int,'
'total_hldr_eqy_inc_min_int, total_liab_hldr_eqy, lt_payroll_payable,'
'oth_comp_income, oth_eqt_tools, oth_eqt_tools_p_shr, lending_funds,'
'acc_receivable, st_fin_payable, payables, hfs_assets, hfs_sales,'
'update_flag'
)
df_i = df_i.drop_duplicates()
df_i = df_i.reindex(index=df_i.index[::-1])
df_i.insert(0, 'code', [c[:6] for c in df_i['ts_code']])
df = df.append(df_i)
df = df.reset_index(drop=True)
df.to_csv('../../data/financial_statements/balance_sheet.csv', index=False) | 44.644737 | 80 | 0.72178 | 516 | 3,393 | 4.257752 | 0.443798 | 0.01502 | 0.009103 | 0.010014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001791 | 0.177424 | 3,393 | 76 | 81 | 44.644737 | 0.785382 | 0.002947 | 0 | 0 | 0 | 0 | 0.632653 | 0.056492 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.134328 | 0 | 0.134328 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5a7f6cebc7d1d5a0a12a5527001bd5fbb8d22d54 | 568 | py | Python | DiplomaProject/office/admin.py | iamgo100/diploma | fc7314468631bf43774b4678890d2a315658713c | [
"MIT"
] | null | null | null | DiplomaProject/office/admin.py | iamgo100/diploma | fc7314468631bf43774b4678890d2a315658713c | [
"MIT"
] | null | null | null | DiplomaProject/office/admin.py | iamgo100/diploma | fc7314468631bf43774b4678890d2a315658713c | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Shift, Service, Appointment
class ShiftAdmin(admin.ModelAdmin):
fields = ['date', 'master', 'room']
list_display = ('date', 'master', 'status')
class ServicetAdmin(admin.ModelAdmin):
list_display = ('service_name', 'cost', 'duration', 'room')
class AppointmentAdmin(admin.ModelAdmin):
list_display = ('service', 'client', 'date', 'time', 'shift')
admin.site.register(Shift, ShiftAdmin)
admin.site.register(Service, ServicetAdmin)
admin.site.register(Appointment, AppointmentAdmin) | 35.5 | 66 | 0.713028 | 61 | 568 | 6.57377 | 0.459016 | 0.112219 | 0.127182 | 0.129676 | 0.164589 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139085 | 568 | 16 | 67 | 35.5 | 0.820041 | 0 | 0 | 0 | 0 | 0 | 0.151625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |