docstring stringlengths 52 499 | function stringlengths 67 35.2k | __index_level_0__ int64 52.6k 1.16M |
|---|---|---|
Set x limits for plot.
This will set the limits for the x axis
for the specific plot.
Args:
xlims (len-2 list of floats): The limits for the axis.
dx (float): Amount to increment by between the limits.
xscale (str): Scale of the axis. Either `log` or `lin`.
... | def set_xlim(self, xlims, dx, xscale, reverse=False):
self._set_axis_limits('x', xlims, dx, xscale, reverse)
return | 1,002,960 |
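The internals of `_set_axis_limits` are not shown; a hedged sketch of what such a helper might compute from `(lims, d, scale, reverse)` — tick positions stepped by the increment, taken in data space for `lin` and in log10 space for `log`. The function name and the log10 convention are assumptions:

```python
def axis_ticks(lims, d, scale, reverse=False):
    """Tick values from lims[0] to lims[1] inclusive, stepped by d.

    For scale == 'log', lims and d are interpreted in log10 space.
    """
    low, high = lims
    n = int(round((high - low) / d)) + 1
    ticks = [low + i * d for i in range(n)]
    if scale == 'log':
        ticks = [10.0 ** t for t in ticks]
    if reverse:
        ticks = ticks[::-1]
    return ticks

print(axis_ticks([0, 1], 0.5, 'lin'))  # [0.0, 0.5, 1.0]
```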
Set y limits for plot.
This will set the limits for the y axis
for the specific plot.
Args:
ylims (len-2 list of floats): The limits for the axis.
dy (float): Amount to increment by between the limits.
yscale (str): Scale of the axis. Either `log` or `lin`.
... | def set_ylim(self, ylims, dy, yscale, reverse=False):
self._set_axis_limits('y', ylims, dy, yscale, reverse)
return | 1,002,961 |
Set the figure size in inches.
Sets the figure size with a call to fig.set_size_inches.
Default in code is 8 inches for each.
Args:
width (float): Dimensions for figure width in inches.
height (float, optional): Dimensions for figure height in inches. Default is None. | def set_fig_size(self, width, height=None):
self.figure.figure_width = width
self.figure.figure_height = height
return | 1,002,967 |
Set the figure spacing.
Sets whether in general there is space between subplots.
If all axes are shared, this can be `tight`. Default in code is `wide`.
The main difference is the tick labels extend to the ends if space==`wide`.
If space==`tight`, the edge tick labels are cut off for c... | def set_spacing(self, space):
self.figure.spacing = space
if 'subplots_adjust_kwargs' not in self.figure.__dict__:
self.figure.subplots_adjust_kwargs = {}
if space == 'wide':
self.figure.subplots_adjust_kwargs['hspace'] = 0.3
self.figure.subplots_adju... | 1,002,968 |
Indicate general x,y column labels.
This sets the general x and y column labels into data files for all plots.
It can be overridden for specific plots.
Args:
xlabel/ylabel (str, optional): String indicating column label for x,y values
into the data files. Default is... | def set_all_file_column_labels(self, xlabel=None, ylabel=None):
if xlabel is not None:
self.general.x_column_label = xlabel
if ylabel is not None:
self.general.y_column_label = ylabel
if xlabel is None and ylabel is None:
warnings.warn("is not specify... | 1,002,975 |
Reverse an axis in all figure plots.
This will reverse the tick marks on an axis for each plot in the figure.
It can be overridden in SinglePlot class.
Args:
axis_to_reverse (str): Axis to reverse. Supports `x` and `y`.
Raises:
ValueError: The string representi... | def reverse_axis(self, axis_to_reverse):
if axis_to_reverse.lower() == 'x':
self.general.reverse_x_axis = True
if axis_to_reverse.lower() == 'y':
self.general.reverse_y_axis = True
if axis_to_reverse.lower() not in ('x', 'y'):
r... | 1,002,979 |
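Validating the axis name with a membership test sidesteps the classic pitfall of chaining `!=` checks with `or` (such a condition is always true, so even valid input would raise). A minimal standalone sketch, with a plain dict standing in for the `self.general` settings object:

```python
def reverse_axis(settings, axis_to_reverse):
    """Mark an axis for reversal; raise unless the axis is 'x' or 'y'."""
    axis = axis_to_reverse.lower()
    if axis not in ('x', 'y'):
        raise ValueError(
            "axis_to_reverse must be 'x' or 'y', got %r" % axis_to_reverse)
    settings['reverse_%s_axis' % axis] = True
    return settings

print(reverse_axis({}, 'X'))  # {'reverse_x_axis': True}
```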
Prepare the parallel calculations
Prepares the arguments to be run in parallel.
It will divide up arrays according to num_splits.
Args:
binary_args (list): List of binary arguments for input into the SNR function.
other_args (tuple of obj): tuple of other args for input... | def prep_parallel(self, binary_args, other_args):
if self.length < 100:
raise Exception("Run this across 1 processor by setting num_processors kwarg to None.")
if self.num_processors == -1:
self.num_processors = mp.cpu_count()
split_val = int(np.ceil(self.length... | 1,003,080 |
Run the parallel calculation
This will run the parallel calculation on self.num_processors.
Args:
para_func (obj): Function object to be used in parallel.
Returns:
(dict): Dictionary with parallel results. | def run_parallel(self, para_func):
if self.timer:
start_timer = time.time()
# for testing
# check = parallel_snr_func(*self.args[10])
# import pdb
# pdb.set_trace()
with mp.Pool(self.num_processors) as pool:
print('start pool with {} pro... | 1,003,081 |
Only `key` is required
Arguments:
operator (str) -- "?" optional, "!" for complete arrays; defaults to None (i.e. required)
required (boolean) -- whether the key is required in the output (defaults to True)
scope (`Selector`) -- restrict extraction to elements matching this selecto... | def __init__(self, key, operator=None, required=True, scope=None, iterate=False):
self.key = key
self.operator = operator
self.required = required
self.scope = scope
self.iterate = iterate | 1,003,394 |
Build part of the abstract Parsley extraction tree
Arguments:
parselet_node (dict) -- part of the Parsley tree to compile
(can be the root dict/node)
level (int) -- current recursion depth (used for debug) | def _compile(self, parselet_node, level=0):
if self.DEBUG:
debug_offset = " " * level
if self.DEBUG:
print(debug_offset, "%s::compile(%s)" % (
self.__class__.__name__, parselet_node))
if isinstance(parselet_node, dic... | 1,003,405 |
Main function for this program.
This will read in sensitivity_curves and binary parameters; calculate snrs
with a matched filtering approach; and then read the contour data out to a file.
Args:
pid (obj or dict): GenInput class or dictionary containing all of the input information for
... | def generate_contour_data(pid):
# check if pid is a dictionary or GenInput class
# if GenInput, change to dictionary
if isinstance(pid, GenInput):
pid = pid.return_dict()
begin_time = time.time()
WORKING_DIRECTORY = '.'
if 'WORKING_DIRECTORY' not in pid['general'].keys():
pi... | 1,003,503 |
Set the grid values for y.
Create information for the grid of y values.
Args:
num_y (int): Number of points on axis.
y_low/y_high (float): Lowest/highest value for the axis.
yscale (str): Scale of the axis. Choices are 'log' or 'lin'.
yval_name (str): Na... | def set_y_grid_info(self, y_low, y_high, num_y, yscale, yval_name):
self._set_grid_info('y', y_low, y_high, num_y, yscale, yval_name)
return | 1,003,723 |
Set the grid values for x.
Create information for the grid of x values.
Args:
num_x (int): Number of points on axis.
x_low/x_high (float): Lowest/highest value for the axis.
xscale (str): Scale of the axis. Choices are 'log' or 'lin'.
xval_name (str): Na... | def set_x_grid_info(self, x_low, x_high, num_x, xscale, xval_name):
self._set_grid_info('x', x_low, x_high, num_x, xscale, xval_name)
return | 1,003,724 |
Set the signal type of interest.
Sets the signal type for which the SNR is calculated.
This means inspiral, merger, and/or ringdown.
Args:
sig_type (str or list of str): Signal type desired by user.
Choices are `ins`, `mrg`, `rd`, `all` for circular waveforms create... | def set_signal_type(self, sig_type):
if isinstance(sig_type, str):
sig_type = [sig_type]
self.snr_input.signal_type = sig_type
return | 1,003,728 |
Raise an appropriate error for a given response.
Arguments:
response (:py:class:`aiohttp.ClientResponse`): The API response.
Raises:
:py:class:`aiohttp.web_exceptions.HTTPException`: The appropriate
error for the response's status. | def raise_for_status(response):
for err_name in web_exceptions.__all__:
err = getattr(web_exceptions, err_name)
if err.status_code == response.status:
payload = dict(
headers=response.headers,
reason=response.reason,
)
if issub... | 1,004,071 |
Truncate the supplied text for display.
Arguments:
text (:py:class:`str`): The text to truncate.
max_len (:py:class:`int`, optional): The maximum length of the
text before truncation (defaults to 350 characters).
end (:py:class:`str`, optional): The ending to use to show that
the ... | def truncate(text, max_len=350, end='...'):
if len(text) <= max_len:
return text
return text[:max_len].rsplit(' ', maxsplit=1)[0] + end | 1,004,072 |
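`truncate` above is self-contained: it cuts at the last space before `max_len` so words are never split mid-way, then appends the ending. A runnable copy with example usage:

```python
def truncate(text, max_len=350, end='...'):
    """Truncate text at the last space before max_len, then append end."""
    if len(text) <= max_len:
        return text
    return text[:max_len].rsplit(' ', maxsplit=1)[0] + end

print(truncate('hello world example', max_len=12))  # hello world...
```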
Input binary parameters and calculate the SNR
Binary parameters are read in and adjusted based on shapes. They are then
fed into ``run`` for calculation of the snr.
Args:
*args: Arguments for binary parameters (see `:meth:gwsnrcalc.utils.pyphenomd.__call__`)
Returns:
... | def __call__(self, *binary_args):
# if self.num_processors is None, run on single processor
if self.num_processors is None:
return self.snr_function(0, binary_args, self.wavegen,
self.signal_type, self.noise_interpolants,
... | 1,004,195 |
Main function for creating these plots.
Reads in plot info dict from json file or dictionary in script.
Args:
return_fig_ax (bool, optional): Return figure and axes objects.
Returns:
2-element tuple containing
- **fig** (*obj*): Figure object for customization outside of those... | def plot_main(pid, return_fig_ax=False):
global WORKING_DIRECTORY, SNR_CUT
if isinstance(pid, PlotInput):
pid = pid.return_dict()
WORKING_DIRECTORY = '.'
if 'WORKING_DIRECTORY' not in pid['general'].keys():
pid['general']['WORKING_DIRECTORY'] = '.'
SNR_CUT = 5.0
if 'SNR_... | 1,004,249 |
Initialize an `ExpCM` object.
Args:
`prefs` (list)
List of dicts giving amino-acid preferences for
each site. Each dict keyed by amino acid letter
codes, value is pref > 0 and < 1. Must sum to 1
at each site.
`kappa`, `omeg... | def __init__(self, prefs, kappa=2.0, omega=0.5, beta=1.0, mu=1.0,
phi=scipy.ones(N_NT) / N_NT,
freeparams=['kappa', 'omega', 'beta', 'mu', 'eta']):
self._nsites = len(prefs)
assert self.nsites > 0, "No preferences specified"
assert all(map(lambda x: x in self.AL... | 1,004,599 |
Initialize an `ExpCM_empirical_phi` object.
Args:
`prefs`, `kappa`, `omega`, `beta`, `mu`, `freeparams`
Same meaning as for an `ExpCM`
`g`
Has the meaning described in the main class doc string. | def __init__(self, prefs, g, kappa=2.0, omega=0.5, beta=1.0, mu=1.0,
freeparams=['kappa', 'omega', 'beta', 'mu']):
_checkParam('g', g, self.PARAMLIMITS, self.PARAMTYPES)
assert abs(1 - g.sum()) <= ALMOST_ZERO, "g doesn't sum to 1"
self.g = g.copy()
self.g /= self.g.... | 1,004,625 |
Initialize an `ExpCM_empirical_phi_divpressure` object.
Args:
`prefs`, `kappa`, `omega`, `beta`, `mu`, `g`, `freeparams`
Same meaning as for an `ExpCM_empirical_phi`
`divPressureValues`, `omega2`
Meaning described in the main class doc string. | def __init__(self, prefs, g, divPressureValues, kappa=2.0, omega=0.5,
beta=1.0, mu=1.0, omega2=0.0,
freeparams=['kappa', 'omega', 'beta', 'mu', 'omega2']):
_checkParam('omega2', omega2, self.PARAMLIMITS, self.PARAMTYPES)
self.omega2 = omega2
self.deltar = scipy.arr... | 1,004,630 |
Initialize an `YNGKP_M0` object.
Args:
`kappa`, `omega`, `mu`,
Model params described in main class doc string.
`freeparams` (list of strings)
Specifies free parameters.
`e_pw`, `nsites`
Meaning described in the main class doc ... | def __init__(self, e_pw, nsites, kappa=2.0, omega=0.5, mu=1.0,
freeparams=['kappa', 'omega', 'mu']):
_checkParam('e_pw', e_pw, self.PARAMLIMITS, self.PARAMTYPES)
self.e_pw = e_pw.copy()
self.phi = self._calculate_correctedF3X4()
assert scipy.allclose(self.phi.sum(axi... | 1,004,633 |
Initialize an `GammaDistributedModel` object.
The `lambda_param` is set to "omega".
Args:
`model` `ncats`,`alpha_lambda`, `beta_lambda`, `freeparams`
Meaning described in main class doc string for
`GammaDistributedModel`. | def __init__(self, model, ncats, alpha_lambda=1.0, beta_lambda=2.0,
freeparams=['alpha_lambda', 'beta_lambda']):
super(GammaDistributedOmegaModel, self).__init__(model, "omega",
ncats, alpha_lambda=alpha_lambda, beta_lambda=beta_lambda,
freeparams=freeparams) | 1,004,657 |
Initialize an `GammaDistributedModel` object.
The `lambda_param` is set to "beta".
Args:
`model` `ncats`,`alpha_lambda`, `beta_lambda`, `freeparams`
Meaning described in main class doc string for
`GammaDistributedModel`. | def __init__(self, model, ncats, alpha_lambda=1.0, beta_lambda=2.0,
freeparams=['alpha_lambda', 'beta_lambda']):
# set new limits so the maximum value of `beta` is equal to or
# greater than the maximum `beta` inferred from the gamma distribution
# with the constrained `alpha_b... | 1,004,658 |
Setup colorbars for each type of plot.
Takes the options prepared during the ``__init__`` method and makes the colorbar.
Args:
plot_call_sign (obj): Plot instance of ax.contourf with colormapping to
add as a colorbar. | def setup_colorbars(self, plot_call_sign):
self.fig.colorbar(plot_call_sign, cax=self.cbar_ax,
ticks=self.cbar_ticks, orientation=self.cbar_orientation)
# setup colorbar ticks
(getattr(self.cbar_ax, 'set_' + self.cbar_var + 'ticklabels')
(self.cbar_... | 1,004,716 |
Return a specific record.
Args:
session (requests.sessions.Session): Authenticated session.
record_id (int): The ID of the record to get.
endpoint_override (str, optional): Override the default
endpoint using this.
Returns:
helpscout.Base... | def get(cls, session, record_id, endpoint_override=None):
cls._check_implements('get')
try:
return cls(
endpoint_override or '/%s/%d.json' % (
cls.__endpoint__, record_id,
),
singleton=True,
session=... | 1,005,162 |
Return records in a mailbox.
Args:
session (requests.sessions.Session): Authenticated session.
endpoint_override (str, optional): Override the default
endpoint using this.
data (dict, optional): Data to provide as request parameters.
Returns:
... | def list(cls, session, endpoint_override=None, data=None):
cls._check_implements('list')
return cls(
endpoint_override or '/%s.json' % cls.__endpoint__,
data=data,
session=session,
) | 1,005,163 |
Update a record.
Args:
session (requests.sessions.Session): Authenticated session.
record (helpscout.BaseModel): The record to
be updated.
Returns:
helpscout.BaseModel: Freshly updated record. | def update(cls, session, record):
cls._check_implements('update')
data = record.to_api()
del data['id']
data['reload'] = True
return cls(
'/%s/%s.json' % (cls.__endpoint__, record.id),
data=data,
request_type=RequestPaginator.PUT,
... | 1,005,165 |
Initialize a new HelpScout client.
Args:
api_key (str): The API key to use for this session. | def __init__(self, api_key):
self.session = Session()
self.session.auth = HTTPBasicAuth(api_key, 'NoPassBecauseKey!')
self._load_apis() | 1,005,282 |
Get the EPSG code associated with a geometry attribute.
Arguments:
geom_attr
the key of the geometry property as defined in the SQLAlchemy
mapper. If you use ``declarative_base`` this is the name of
the geometry attribute as defined in the mapped class. | def _get_col_epsg(mapped_class, geom_attr):
col = class_mapper(mapped_class).get_property(geom_attr).columns[0]
return col.type.srid | 1,005,452 |
Create an ``and_`` SQLAlchemy filter (a ClauseList object) based
on the request params (``queryable``, ``eq``, ``ne``, ...).
Arguments:
request
the request.
mapped_class
the SQLAlchemy mapped class. | def create_attr_filter(request, mapped_class):
mapping = {
'eq': '__eq__',
'ne': '__ne__',
'lt': '__lt__',
'lte': '__le__',
'gt': '__gt__',
'gte': '__ge__',
'like': 'like',
'ilike': 'ilike'
}
filters = []
if 'queryable' in request.par... | 1,005,454 |
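The `mapping` dict in `create_attr_filter` translates query-string operator names into comparison method names that are then fetched with `getattr` off the SQLAlchemy column. The same dispatch works on plain Python objects; a minimal sketch (the string operators `like`/`ilike` are omitted, being SQLAlchemy-specific):

```python
MAPPING = {'eq': '__eq__', 'ne': '__ne__', 'lt': '__lt__',
           'lte': '__le__', 'gt': '__gt__', 'gte': '__ge__'}

def apply_op(left, op_name, right):
    """Dispatch a comparison by operator name, as the filter builder does."""
    return getattr(left, MAPPING[op_name])(right)

print(apply_op(3, 'lt', 5))  # True
```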
Create MapFish default filter based on the request params.
Arguments:
request
the request.
mapped_class
the SQLAlchemy mapped class.
geom_attr
the key of the geometry property as defined in the SQLAlchemy
mapper. If you use ``declarative_base`` this is the name of
... | def create_filter(request, mapped_class, geom_attr, **kwargs):
attr_filter = create_attr_filter(request, mapped_class)
geom_filter = create_geom_filter(request, mapped_class, geom_attr,
**kwargs)
if geom_filter is None and attr_filter is None:
return None
... | 1,005,455 |
Return a specific team.
Args:
session (requests.sessions.Session): Authenticated session.
team_id (int): The ID of the team to get.
Returns:
helpscout.models.Person: A person singleton representing the team,
if existing. Otherwise ``None``. | def get(cls, session, team_id):
return cls(
'/teams/%d.json' % team_id,
singleton=True,
session=session,
) | 1,005,550 |
List the members for the team.
Args:
team_or_id (helpscout.models.Person or int): Team or the ID of
the team to get the folders for.
Returns:
RequestPaginator(output_type=helpscout.models.Users): Users
iterator. | def get_members(cls, session, team_or_id):
if isinstance(team_or_id, Person):
team_or_id = team_or_id.id
return cls(
'/teams/%d/members.json' % team_or_id,
session=session,
out_type=User,
) | 1,005,551 |
Parse a property received from the API into an internal object.
Args:
name (str): Name of the property on the object.
value (mixed): The unparsed API value.
Raises:
HelpScoutValidationException: In the event that the property name
is not found.
... | def _parse_property(cls, name, value):
prop = cls._props.get(name)
return_value = value
if not prop:
logger.debug(
'"%s" with value "%s" is not a valid property for "%s".' % (
name, value, cls,
),
)
... | 1,005,883 |
Return a snake cased version of the input string.
Args:
string (str): A camel cased string.
Returns:
str: A snake cased string. | def _to_snake_case(string):
sub_string = r'\1_\2'
string = REGEX_CAMEL_FIRST.sub(sub_string, string)
return REGEX_CAMEL_SECOND.sub(sub_string, string).lower() | 1,005,885 |
Return a camel cased version of the input string.
Args:
string (str): A snake cased string.
Returns:
str: A camel cased string. | def _to_camel_case(string):
components = string.split('_')
return '%s%s' % (
components[0],
''.join(c.title() for c in components[1:]),
) | 1,005,886 |
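The two case-conversion helpers above rely on module-level regexes that are not shown; a self-contained sketch using the common two-pass camel-case patterns (the exact `REGEX_CAMEL_FIRST`/`REGEX_CAMEL_SECOND` definitions are assumptions):

```python
import re

# Assumed patterns: first pass splits e.g. 'HTTPResponse' -> 'HTTP_Response',
# second pass splits a lowercase/digit followed by an uppercase letter.
REGEX_CAMEL_FIRST = re.compile(r'(.)([A-Z][a-z]+)')
REGEX_CAMEL_SECOND = re.compile(r'([a-z0-9])([A-Z])')

def to_snake_case(string):
    sub_string = r'\1_\2'
    string = REGEX_CAMEL_FIRST.sub(sub_string, string)
    return REGEX_CAMEL_SECOND.sub(sub_string, string).lower()

def to_camel_case(string):
    components = string.split('_')
    return components[0] + ''.join(c.title() for c in components[1:])

print(to_snake_case('firstName'))   # first_name
print(to_camel_case('first_name'))  # firstName
```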
Formats a given value
Args:
value: value to format
Returns:
str: formatted value | def __call__(self, value):
fmt = self.fmt(value)
if len(fmt) > self.col_width:
fmt = fmt[:self.col_width - 3] + '...'
fmt = self.just(fmt, self.col_width)
return fmt | 1,006,036 |
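The cell formatter above depends on `self.fmt`, `self.col_width`, and `self.just` configured elsewhere; a standalone sketch with those dependencies as parameters (the defaults are illustrative assumptions):

```python
def format_cell(value, col_width=10, fmt=str, just=str.ljust):
    """Format, truncate to col_width with a trailing '...', then justify."""
    out = fmt(value)
    if len(out) > col_width:
        # Reserve three characters for the ellipsis.
        out = out[:col_width - 3] + '...'
    return just(out, col_width)

print(repr(format_cell('abcdefghijkl')))  # 'abcdefg...'
```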
A helper method that adds routes to view callables that, together,
implement the MapFish HTTP interface.
Example::
import papyrus
config.include(papyrus)
config.add_papyrus_routes('spots', '/spots')
config.scan()
Arguments:
``route_name_prefix`` The prefix used for the... | def add_papyrus_routes(self, route_name_prefix, base_url):
route_name = route_name_prefix + '_read_many'
self.add_route(route_name, base_url, request_method='GET')
route_name = route_name_prefix + '_read_one'
self.add_route(route_name, base_url + '/{id}', request_method='GET')
route_name = rout... | 1,006,062 |
Send a DELETE request and return the JSON decoded result.
Args:
json (dict, optional): Object to encode and send in request.
Returns:
mixed: JSON decoded response data. | def delete(self, json=None):
return self._call('delete', url=self.endpoint, json=json) | 1,006,216 |
Send a GET request and return the JSON decoded result.
Args:
params (dict, optional): Mapping of parameters to send in request.
Returns:
mixed: JSON decoded response data. | def get(self, params=None):
return self._call('get', url=self.endpoint, params=params) | 1,006,217 |
Send a POST request and return the JSON decoded result.
Args:
json (dict, optional): Object to encode and send in request.
Returns:
mixed: JSON decoded response data. | def post(self, json=None):
return self._call('post', url=self.endpoint, json=json) | 1,006,218 |
Send a PUT request and return the JSON decoded result.
Args:
json (dict, optional): Object to encode and send in request.
Returns:
mixed: JSON decoded response data. | def put(self, json=None):
return self._call('put', url=self.endpoint, json=json) | 1,006,219 |
Instantiate an API Authentication Proxy.
Args:
auth (requests.Session): Authenticated requests Session.
proxy_class (type): A class implementing the ``BaseApi``
interface. | def __init__(self, session, proxy_class):
assert isinstance(proxy_class, type)
self.session = session
self.proxy_class = proxy_class | 1,006,231 |
Override attribute getter to act as a proxy for ``proxy_class``.
If ``item`` is contained in ``METHOD_NO_PROXY``, it will not be
proxied to the ``proxy_class`` and will instead return the attribute
on this object.
Args:
item (str): Name of attribute to get. | def __getattr__(self, item):
if item in self.METHOD_NO_PROXY:
return super(AuthProxy, self).__getattr__(item)
attr = getattr(self.proxy_class, item)
if callable(attr):
return self.auth_proxy(attr) | 1,006,232 |
Authentication proxy for API requests.
This is required because the API objects are naive of ``HelpScout``,
so they would otherwise be unauthenticated.
Args:
method (callable): A method call that should be authenticated. It
should accept a ``requests.Session`` as its f... | def auth_proxy(self, method):
def _proxy(*args, **kwargs):
return method(self.session, *args, **kwargs)
return _proxy | 1,006,233 |
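Taken together, `__init__`, `__getattr__`, and `auth_proxy` implement an authentication proxy: every callable fetched from `proxy_class` is wrapped so the stored session is injected as the first argument. A condensed, runnable sketch of the pattern (the `DummyApi` class is hypothetical):

```python
class AuthProxy:
    def __init__(self, session, proxy_class):
        self.session = session
        self.proxy_class = proxy_class

    def __getattr__(self, item):
        attr = getattr(self.proxy_class, item)
        if callable(attr):
            # Close over the session so callers never pass it explicitly.
            def _proxy(*args, **kwargs):
                return attr(self.session, *args, **kwargs)
            return _proxy
        return attr

class DummyApi:  # hypothetical API class following the BaseApi convention
    @staticmethod
    def get(session, record_id):
        return (session, record_id)

proxy = AuthProxy('my-session', DummyApi)
print(proxy.get(42))  # ('my-session', 42)
```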
Get the users that are associated to a Mailbox.
Args:
session (requests.sessions.Session): Authenticated session.
mailbox_or_id (MailboxRef or int): Mailbox of the ID of the
mailbox to get the folders for.
Returns:
RequestPaginator(output_type=helpsc... | def find_in_mailbox(cls, session, mailbox_or_id):
if hasattr(mailbox_or_id, 'id'):
mailbox_or_id = mailbox_or_id.id
return cls(
'/mailboxes/%d/users.json' % mailbox_or_id,
session=session,
) | 1,006,234 |
Delete an attachment.
Args:
session (requests.sessions.Session): Authenticated session.
attachment (helpscout.models.Attachment): The attachment to
be deleted.
Returns:
NoneType: Nothing. | def delete_attachment(cls, session, attachment):
return super(Conversations, cls).delete(
session,
attachment,
endpoint_override='/attachments/%s.json' % attachment.id,
out_type=Attachment,
) | 1,006,283 |
Return conversations for a specific customer in a mailbox.
Args:
session (requests.sessions.Session): Authenticated session.
mailbox (helpscout.models.Mailbox): Mailbox to search.
customer (helpscout.models.Customer): Customer to search for.
Returns:
Req... | def find_customer(cls, session, mailbox, customer):
return cls(
'/mailboxes/%d/customers/%s/conversations.json' % (
mailbox.id, customer.id,
),
session=session,
) | 1,006,284 |
Return conversations for a specific user in a mailbox.
Args:
session (requests.sessions.Session): Authenticated session.
mailbox (helpscout.models.Mailbox): Mailbox to search.
user (helpscout.models.User): User to search for.
Returns:
RequestPaginator(ou... | def find_user(cls, session, mailbox, user):
return cls(
'/mailboxes/%d/users/%s/conversations.json' % (
mailbox.id, user.id,
),
session=session,
) | 1,006,285 |
Return a specific attachment's data.
Args:
session (requests.sessions.Session): Authenticated session.
attachment_id (int): The ID of the attachment from which to get
data.
Returns:
helpscout.models.AttachmentData: An attachment data singleton, if
... | def get_attachment_data(cls, session, attachment_id):
return cls(
'/attachments/%d/data.json' % attachment_id,
singleton=True,
session=session,
out_type=AttachmentData,
) | 1,006,286 |
Return conversations in a mailbox.
Args:
session (requests.sessions.Session): Authenticated session.
mailbox (helpscout.models.Mailbox): Mailbox to list.
Returns:
RequestPaginator(output_type=helpscout.models.Conversation):
Conversations iterator. | def list(cls, session, mailbox):
endpoint = '/mailboxes/%d/conversations.json' % mailbox.id
return super(Conversations, cls).list(session, endpoint) | 1,006,287 |
Return conversations in a specific folder of a mailbox.
Args:
session (requests.sessions.Session): Authenticated session.
mailbox (helpscout.models.Mailbox): Mailbox that folder is in.
folder (helpscout.models.Folder): Folder to list.
Returns:
RequestPag... | def list_folder(cls, session, mailbox, folder):
return cls(
'/mailboxes/%d/folders/%s/conversations.json' % (
mailbox.id, folder.id,
),
session=session,
) | 1,006,288 |
Update a thread.
Args:
session (requests.sessions.Session): Authenticated session.
conversation (helpscout.models.Conversation): The conversation
that the thread belongs to.
thread (helpscout.models.Thread): The thread to be updated.
Returns:
... | def update_thread(cls, session, conversation, thread):
data = thread.to_api()
data['reload'] = True
return cls(
'/conversations/%s/threads/%d.json' % (
conversation.id, thread.id,
),
data=data,
request_type=RequestPaginator... | 1,006,290 |
Called by the protocol on object creation.
Arguments:
* ``feature`` The GeoJSON feature as received from the client. | def __init__(self, feature=None):
if feature:
for p in class_mapper(self.__class__).iterate_properties:
if not isinstance(p, ColumnProperty):
continue
if p.columns[0].primary_key:
primary_key = p.key
if hasa... | 1,006,369 |
Called by the protocol on object update.
Arguments:
* ``feature`` The GeoJSON feature as received from the client. | def __update__(self, feature):
for p in class_mapper(self.__class__).iterate_properties:
if not isinstance(p, ColumnProperty):
continue
col = p.columns[0]
if isinstance(col.type, Geometry):
geom = feature.geometry
if ge... | 1,006,370 |
Prints a formatted row
Args:
args: row cells | def __call__(self, *args):
if len(self.formatters) == 0:
self.setup(*args)
row_cells = []
if self.rownum:
row_cells.append(0)
if self.timestamp:
row_cells.append(datetime.datetime.now())
if self.time_diff:
row_cells.appen... | 1,006,444 |
Setup formatters by observing the first row.
Args:
*args: row cells | def setup_formatters(self, *args):
formatters = []
col_offset = 0
# initialize formatters for row-id, timestamp and time-diff columns
if self.rownum:
formatters.append(fmt.RowNumberFormatter.setup(0))
col_offset += 1
if self.timestamp:
... | 1,006,446 |
Do preparations before printing the first row
Args:
*args: first row cells | def setup(self, *args):
self.setup_formatters(*args)
if self.columns:
self.print_header()
elif self.border and not self.csv:
self.print_line(self.make_horizontal_border()) | 1,006,447 |
Converts row values into a csv line
Args:
row: a list of row cells as unicode
Returns:
csv_line (unicode) | def csv_format(self, row):
if PY2:
buf = io.BytesIO()
csvwriter = csv.writer(buf)
csvwriter.writerow([c.strip().encode(self.encoding) for c in row])
csv_line = buf.getvalue().decode(self.encoding).rstrip()
else:
buf = io.StringIO()
... | 1,006,452 |
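On Python 3 the bytes round-trip in `csv_format` collapses to a few lines with `io.StringIO`; a minimal sketch of the same one-row-to-CSV-line conversion:

```python
import csv
import io

def csv_format(row):
    """Serialize one row of cells to a single CSV line (Python 3)."""
    buf = io.StringIO()
    csv.writer(buf).writerow([c.strip() for c in row])
    return buf.getvalue().rstrip('\r\n')

print(csv_format(['a ', 'b,c', 'd']))  # a,"b,c",d
```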
Join a new query to existing queries on the stack.
Args:
query (tuple or list or DomainCondition): The condition for the
query. If a ``DomainCondition`` object is not provided, the
input should conform to the interface defined in
:func:`~.domain.Domai... | def add_query(self, query, join_with=AND):
if not isinstance(query, DomainCondition):
query = DomainCondition.from_tuple(query)
if len(self.query):
self.query.append(join_with)
self.query.append(query) | 1,006,743 |
Initialize a new generic query condition.
Args:
field (str): Field name to search on. This should be the
Pythonified name as in the internal models, not the
name as provided in the API e.g. ``first_name`` for
the Customer's first name instead of ``fir... | def __init__(self, field, value, **kwargs):
return super(DomainCondition, self).__init__(
field=field, value=value, **kwargs
) | 1,006,745 |
List the folders for the mailbox.
Args:
mailbox_or_id (helpscout.models.Mailbox or int): Mailbox or the ID
of the mailbox to get the folders for.
Returns:
RequestPaginator(output_type=helpscout.models.Folder): Folders
iterator. | def get_folders(cls, session, mailbox_or_id):
if isinstance(mailbox_or_id, Mailbox):
mailbox_or_id = mailbox_or_id.id
return cls(
'/mailboxes/%d/folders.json' % mailbox_or_id,
session=session,
out_type=Folder,
) | 1,006,812 |
Parse raw record data if required.
Args:
record (dict or BaseModel): The record data that was received for
the request. If it is a ``dict``, the data will be parsed
using the proper model's ``from_api`` method. | def __init__(self, *args, **kwargs):
if isinstance(kwargs.get('record'), dict):
prefix, _ = kwargs['event_type'].split('.', 1)
model = self.EVENT_PREFIX_TO_MODEL[prefix]
kwargs['record'] = model.from_api(**kwargs['record'])
super(WebHookEvent, self).__init__(... | 1,007,234 |
Defines a flag of type 'string'.
Args:
flag_name: The name of the flag as a string.
default_value: The default value the flag should take as a string.
docstring: A helpful message explaining the use of the flag. | def DEFINE_string(flag_name, default_value, docstring, required=False): # pylint: disable=invalid-name
_define_helper(flag_name, default_value, docstring, str, required) | 1,007,327 |
Defines a flag of type 'int'.
Args:
flag_name: The name of the flag as a string.
default_value: The default value the flag should take as an int.
docstring: A helpful message explaining the use of the flag. | def DEFINE_integer(flag_name, default_value, docstring, required=False): # pylint: disable=invalid-name
_define_helper(flag_name, default_value, docstring, int, required) | 1,007,328 |
Defines a flag of type 'boolean'.
Args:
flag_name: The name of the flag as a string.
default_value: The default value the flag should take as a boolean.
docstring: A helpful message explaining the use of the flag. | def DEFINE_boolean(flag_name, default_value, docstring): # pylint: disable=invalid-name
# Register a custom function for 'bool' so --flag=True works.
def str2bool(bool_str):
return bool_str.lower() in ('true', 't', '1')
get_context_parser().add_argument(
'--' + flag_name,
... | 1,007,329 |
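`DEFINE_boolean` registers a custom `type` callable because `bool('False')` is truthy, so `--flag=False` would otherwise parse as `True`. A hedged sketch with a local `argparse` parser standing in for `get_context_parser()`:

```python
import argparse

def str2bool(bool_str):
    # bool('False') is True, so parse the string explicitly.
    return bool_str.lower() in ('true', 't', '1')

parser = argparse.ArgumentParser()
parser.add_argument('--verbose', type=str2bool, default=False,
                    help='A helpful message explaining the flag.')

args = parser.parse_args(['--verbose=False'])
print(args.verbose)  # False
```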
Defines a flag of type 'float'.
Args:
flag_name: The name of the flag as a string.
default_value: The default value the flag should take as a float.
docstring: A helpful message explaining the use of the flag. | def DEFINE_float(flag_name, default_value, docstring, required=False): # pylint: disable=invalid-name
_define_helper(flag_name, default_value, docstring, float, required) | 1,007,330 |
Return a value associated with a key from the session dictionary.
Args:
key (str): The dictionary key.
Returns:
str: The value associate with that key or None if the key is
not in the dictionary. | def __getitem__(self,key):
self.rdb.expire(self.session_hash,self.ttl)
encoded_result = self.rdb.hget(self.session_hash,key)
if encoded_result is None:
return None
else:
return encoded_result.decode('utf-8') | 1,007,396 |
Set an existing or new key, value association.
Args:
key (str): The dictionary key.
value (str): The dictionary value | def __setitem__(self,key,value):
self.rdb.hset(self.session_hash,key,value)
self.rdb.expire(self.session_hash,self.ttl) | 1,007,397 |
Get a value from the dictionary.
Args:
key (str): The dictionary key.
default (any): The default to return if the key is not in the
dictionary. Defaults to None.
Returns:
str or any: The dictionary value or the default if the key is not
... | def get(self,key,default=None):
retval = self.__getitem__(key)
if retval is None:
retval = default
return retval | 1,007,398 |
Compute a hash code for a string using a simple weighted-sum method.
Parameters:
-----------
value: string
the string to hash
Returns:
--------
result
hash code for value | def hash(self, value):
result = 0
for i in range(len(value)):
result += self.seed * result + ord(value[i])
return result % (self.capacity - 1) | 1,007,476 |
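A standalone sketch of the weighted-sum (polynomial rolling) hash, written with the conventional accumulator update `result = seed * result + ord(ch)` and the bucket index taken as `result % capacity`; the `seed` and `capacity` defaults are illustrative:

```python
def weighted_sum_hash(value, seed=31, capacity=1024):
    """Polynomial rolling hash: fold each character into the accumulator."""
    result = 0
    for ch in value:
        result = seed * result + ord(ch)
    return result % capacity  # bucket index in [0, capacity)

print(weighted_sum_hash('abc'))
```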
Tokenizes documents, using a lemmatizer.
Args:
| docs (list of str) -- the text documents to process.
Returns:
| list -- the lists of tokens. | def tokenize(self, docs):
if self.n_jobs == 1:
return [self._tokenize(doc) for doc in docs]
else:
return parallel(self._tokenize, docs, self.n_jobs) | 1,007,936 |
Encrypt a 16-byte block of data.
NOTE: This function was formerly called `encrypt`, but was changed when
support for encrypting arbitrary-length strings was added.
Args:
plainText (str): 16-byte data.
Returns:
16-byte str.
Raises:
TypeError if CamCrypt object has not been... | def encrypt_block(self, plainText):
if not self.initialized:
raise TypeError("CamCrypt object has not been initialized")
if len(plainText) != BLOCK_SIZE:
raise ValueError("plainText must be %d bytes long (received %d bytes)" %
(BLOCK_SIZE, len(plainText)))
cipher = ct... | 1,008,001 |
Decrypt a 16-byte block of data.
NOTE: This function was formerly called `decrypt`, but was changed when
support for decrypting arbitrary-length strings was added.
Args:
cipherText (str): 16-byte data.
Returns:
16-byte str.
Raises:
TypeError if CamCrypt object has not bee... | def decrypt_block(self, cipherText):
if not self.initialized:
raise TypeError("CamCrypt object has not been initialized")
if len(cipherText) != BLOCK_SIZE:
raise ValueError("cipherText must be %d bytes long (received %d bytes)" %
(BLOCK_SIZE, len(cipherText)))
plain =... | 1,008,002 |
Returns the feature vectors for a set of docs. If the model is not already
trained, then self.train() is called.
Args:
docs (dict or list of tuples): asset_id, body_text of documents
you wish to featurize. | def vectorize(self, docs):
if type(docs) == dict:
docs = docs.items()
if self.model is None:
self.train(docs)
asset_id2vector = {}
unfound = []
for item in docs:
## iterate through the items in docs and check if any are already i... | 1,008,197 |
Train Doc2Vec on a series of docs. Train from scratch or update.
Args:
docs: list of tuples (assetid, body_text) or dictionary {assetid : body_text}
retrain: boolean, retrain from scratch or update model
saves model in class to self.model
Returns: 0 if successful | def train(self, docs, retrain=False):
if type(docs) == dict:
docs = docs.items()
train_sentences = [self._gen_sentence(item) for item in docs]
if self.is_trained and not retrain:
## online training
self.update_model(train_sentences, updat... | 1,008,198 |
Takes in html-mixed body text as a string and returns a list of lower-case
tokens, with punctuation padded by spaces so it splits into separate tokens.
Called by self._gen_sentence()
Args:
input (string): body text | def _process(self, input):
input = re.sub("<[^>]*>", " ", input)
punct = list(string.punctuation)
for symbol in punct:
input = input.replace(symbol, " %s " % symbol)
input = filter(lambda x: x != u'', input.lower().split(' '))
return input | 1,008,200 |
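A Python 3 sketch of the same punctuation-padding tokenization (the standalone `process` name is illustrative; the original's Python 2 `filter` call is replaced with a list comprehension, since `filter` is lazy in Python 3):

```python
import re
import string

def process(text):
    # Strip HTML-ish tags, then pad each punctuation mark with spaces so it
    # survives splitting as its own token (mirrors the loop above).
    text = re.sub(r"<[^>]*>", " ", text)
    for symbol in string.punctuation:
        text = text.replace(symbol, " %s " % symbol)
    # Materialize the token list, dropping empty strings from repeated spaces.
    return [tok for tok in text.lower().split(" ") if tok]
```

For example, `process("<b>Hello, world!</b>")` yields the comma and exclamation mark as their own tokens alongside the lower-cased words.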
Takes an assetid_body_tuple and returns a Doc2Vec LabeledSentence
Args:
assetid_body_tuple (tuple): (assetid, bodytext) pair | def _gen_sentence(self, assetid_body_tuple):
asset_id, body = assetid_body_tuple
text = self._process(body)
sentence = LabeledSentence(text, labels=['DOC_%s' % str(asset_id)])
return sentence | 1,008,201 |
Set the resource attributes from the kwargs.
Only sets items in the `self.Meta.attributes` white list.
Subclass this method to customise attributes.
Args:
kwargs: Keyword arguments passed into the init of this class | def set_attributes(self, **kwargs):
if self._subresource_map:
self.set_subresources(**kwargs)
for key in self._subresource_map.keys():
# Don't let these attributes be overridden later
kwargs.pop(key, None)
for field, value in kwargs.items(... | 1,008,631 |
Construct the URL for talking to this resource.
i.e.:
http://myapi.com/api/resource
Note that this is NOT the method for calling individual instances i.e.
http://myapi.com/api/resource/1
Args:
resource: The resource class instance
base_url: The Base U... | def get_resource_url(cls, resource, base_url):
if resource.Meta.resource_name:
url = '{}/{}'.format(base_url, resource.Meta.resource_name)
else:
p = inflect.engine()
plural_name = p.plural(resource.Meta.name.lower())
url = '{}/{}'.format(base_url,... | 1,008,632 |
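The fallback pluralization in the row above relies on the third-party `inflect` package. A stdlib-only sketch with a deliberately naive pluralizer (an assumption for illustration; it will not handle irregular nouns the way `inflect.engine().plural` does):

```python
def get_resource_url(base_url, name, resource_name=None):
    # Prefer an explicit resource_name; otherwise fall back to a naive
    # pluralization of the lower-cased class name.
    if resource_name is None:
        name = name.lower()
        suffix = "es" if name.endswith(("s", "x", "ch", "sh")) else "s"
        resource_name = name + suffix
    return "{}/{}".format(base_url, resource_name)
```

An explicit `resource_name` always wins, which is how the original lets a resource override the derived URL segment.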
Construct the URL for talking to an individual resource.
http://myapi.com/api/resource/1
Args:
url: The url for this resource
uid: The unique identifier for an individual resource
kwargs: Additional keyword arguments
returns:
final_url: The URL f... | def get_url(cls, url, uid, **kwargs):
if uid:
url = '{}/{}'.format(url, uid)
return cls._parse_url_and_validate(url) | 1,008,633 |
Receives a URL string and validates it using urlparse.
Args:
url: A URL string
Returns:
parsed_url: A validated URL
Raises:
BadURLException | def _parse_url_and_validate(cls, url):
parsed_url = urlparse(url)
if parsed_url.scheme and parsed_url.netloc:
final_url = parsed_url.geturl()
else:
raise BadURLException
return final_url | 1,008,635 |
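The validation in the row above can be sketched with the stdlib: `urllib.parse.urlparse` never raises on malformed input, so the scheme and netloc checks do the real work (the `BadURLException` here is a stand-in for the original's exception class):

```python
from urllib.parse import urlparse

class BadURLException(Exception):
    """Raised when a URL lacks a scheme or network location."""

def validate_url(url):
    # A usable absolute URL needs both a scheme (e.g. "http") and a
    # netloc (the host); anything else is rejected.
    parsed = urlparse(url)
    if parsed.scheme and parsed.netloc:
        return parsed.geturl()
    raise BadURLException(url)
```

A bare string like `"not-a-url"` parses without error but has neither scheme nor netloc, so it raises rather than slipping through.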
For the list of valid URLs, try to match them up
to resources in the related_resources attribute.
Args:
url_values: A dictionary of keys and URL strings that
could be related resources.
Returns:
valid_values: The values that are valid | def match_urls_to_resources(self, url_values):
valid_values = {}
for resource in self.Meta.related_resources:
for k, v in url_values.items():
resource_url = resource.get_resource_url(
resource, resource.Meta.base_url)
if isinstance... | 1,008,638 |
Set the resource attributes from the kwargs.
Only sets items in the `self.Meta.attributes` white list.
Args:
kwargs: Keyword arguments passed into the init of this class | def set_attributes(self, **kwargs):
for field, value in kwargs.items():
if field in self.Meta.attributes:
setattr(self, field, value) | 1,008,641 |
Read data from file(s) or STDIN.
Args:
filenames (list): List of files to read to get data. If empty or
None, read from STDIN. | def _get_data(filenames):
if filenames:
data = ""
for filename in filenames:
with open(filename, "rb") as f:
data += f.read()
else:
data = sys.stdin.read()
return data | 1,009,304 |
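A Python 3 sketch of the same files-or-STDIN pattern from the row above; because the files are opened in binary mode, this version consistently returns `bytes` instead of mixing types:

```python
import sys

def get_data(filenames):
    # Concatenate the contents of each named file; with no filenames,
    # fall back to reading everything from standard input.
    if not filenames:
        return sys.stdin.read()
    chunks = []
    for filename in filenames:
        with open(filename, "rb") as f:
            chunks.append(f.read())
    return b"".join(chunks)
```

Collecting chunks and joining once also avoids the quadratic cost of repeated `+=` on large inputs.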
Print data to a file or STDOUT.
Args:
filename (str or None): If None, print to STDOUT; otherwise, print
to the file with this name.
data (str): Data to print. | def _print_results(filename, data):
if filename:
with open(filename, 'wb') as f:
f.write(data)
else:
print data | 1,009,305 |
Prepares the HTTP REQUEST and returns it.
Args:
method_type: The HTTP method type
params: Additional parameters for the HTTP request.
kwargs: Any extra keyword arguments passed into a client method.
returns:
prepared_request: An HTTP request object. | def prepare_http_request(self, method_type, params, **kwargs):
prepared_request = self.session.prepare_request(
requests.Request(method=method_type, **params)
)
return prepared_request | 1,009,452 |
Handles Response objects
Args:
response: An HTTP response object
valid_status_codes: A tuple list of valid status codes
resource: The resource class to build from this response
returns:
resources: A list of Resource instances | def _handle_response(self, response, valid_status_codes, resource):
if response.status_code not in valid_status_codes:
raise InvalidStatusCodeError(
status_code=response.status_code,
expected_status_codes=valid_status_codes
)
if respon... | 1,009,454 |
Given a resource_class and its Meta.methods tuple,
assign methods for communicating with that resource.
Args:
resource_class: A single resource class | def assign_methods(self, resource_class):
assert all([
x.upper() in VALID_METHODS for x in resource_class.Meta.methods])
for method in resource_class.Meta.methods:
self._assign_method(
resource_class,
method.upper()
) | 1,009,458 |
Using reflection, assigns a new method to this class.
Args:
resource_class: A resource class
method_type: The HTTP method type | def _assign_method(self, resource_class, method_type):
method_name = resource_class.get_method_name(
resource_class, method_type)
valid_status_codes = getattr(
resource_class.Meta,
'valid_status_codes',
DEFAULT_VALID_STATUS_CODES
... | 1,009,459 |
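The `setattr`-based reflection described in the rows above can be sketched as follows; the `_resource` method-name suffix and the stub closure body are assumptions for illustration, standing in for the real HTTP call:

```python
VALID_METHODS = ("GET", "POST", "PUT", "DELETE")  # assumed whitelist

def assign_method(cls, method_type):
    # Build a closure for one HTTP verb and attach it to the class under a
    # conventional name such as `get_resource`.
    assert method_type in VALID_METHODS
    def call(self, *args, **kwargs):
        # Stand-in for preparing and sending the actual HTTP request.
        return (method_type, args, kwargs)
    setattr(cls, "%s_resource" % method_type.lower(), call)

class Resource:
    pass

for verb in ("GET", "POST"):
    assign_method(Resource, verb)
```

Each call to `assign_method` creates a fresh closure, so every generated method remembers its own verb rather than sharing a loop variable.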
Ensures that the data within cdata has double sphere symmetry.
Example::
>>> spherepy.doublesphere(cdata, 1)
Args:
sym (int): is 1 for scalar data and -1 for vector data
Returns:
numpy.array([*,*], dtype=np.complex128) containing array with
doublesphere symmet... | def double_sphere(cdata, sym):
nrows = cdata.shape[0]
ncols = cdata.shape[1]
ddata = np.zeros([nrows, ncols], dtype=np.complex128)
for n in xrange(0, nrows):
for m in xrange(0, ncols):
s = sym * cdata[np.mod(nrows - n, nrows),
np.mod(i... | 1,009,487 |
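The `np.mod(nrows - n, nrows)` indexing in the row above maps each index to its wrapped counterpart on a periodic grid; a pure-Python sketch of that mapping:

```python
def wrap_index(i, n):
    # (n - i) % n maps index i to its reflected counterpart on a periodic
    # grid of size n: 0 stays at 0, every other index mirrors around it.
    return (n - i) % n
```

On an 8-point grid, index 1 maps to 7 and index 7 maps back to 1, while 0 (and the midpoint 4) are their own images, which is exactly the symmetry the double-sphere construction needs.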
Calculates virtual barcode for IBAN account number and ISO reference
Arguments:
iban {string} -- IBAN formed account number
reference {string} -- ISO 11649 creditor reference
amount {decimal.Decimal} -- Amount in euros, 0.01 - 999999.99
due {datetime.date} -- due date | def barcode(iban, reference, amount, due=None):
iban = iban.replace(' ', '')
reference = reference.replace(' ', '')
if reference.startswith('RF'):
version = 5
else:
version = 4
if version == 5:
reference = reference[2:] # test RF and add 00 where needed
if le... | 1,009,666 |
This endpoint doesn't return a JSON object; instead it returns
a series of rows, each its own object. Given this setup, it makes
sense to treat it the way we handle our Bulk Export requests.
Arguments:
path: the directory on your computer you wish the file to be downloaded into.
return_response_object... | def get_experiment_metrics(self, path, return_response_object=None,
experiment_id=None, campaign_id=None,
start_date_time=None, end_date_time=None
):
call="/api/experiments/metrics"
if not isinstance(return_response_object, bool):
raise ValueError("'return_iterator_object... | 1,009,747 |
Groups together Params for adding under the 'What' section.
Args:
params(list of :func:`Param`): Parameter elements to go in this group.
name(str): Group name. NB ``None`` is valid, since the group may be
best identified by its type.
type(str): Type of group, e.g. 'complex' (for... | def Group(params, name=None, type=None):
atts = {}
if name:
atts['name'] = name
if type:
atts['type'] = type
g = objectify.Element('Group', attrib=atts)
for p in params:
g.append(p)
return g | 1,009,974 |
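A stdlib sketch of the same `Group` builder using `xml.etree.ElementTree` in place of `lxml.objectify` (an assumed substitution; the only-set-attributes-that-were-given logic mirrors the original):

```python
import xml.etree.ElementTree as ET

def group(params, name=None, type=None):
    # Build a <Group> element, attaching only the attributes that were
    # actually supplied, then append each child <Param> element.
    atts = {}
    if name:
        atts["name"] = name
    if type:
        atts["type"] = type
    g = ET.Element("Group", attrib=atts)
    for p in params:
        g.append(p)
    return g
```

Leaving `name` and `type` out of the attribute dict when unset keeps the serialized XML free of empty attributes, matching the conditional `atts` construction above.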
Represents external information, typically original obs data and metadata.
Args:
uri(str): Uniform resource identifier for external data, e.g. FITS file.
meaning(str): The nature of the document referenced, e.g. what
instrument and filter was used to create the data? | def Reference(uri, meaning=None):
attrib = {'uri': uri}
if meaning is not None:
attrib['meaning'] = meaning
return objectify.Element('Reference', attrib) | 1,009,975 |
Represents a probable cause / relation between this event and some prior.
Args:
probability(float): Value 0.0 to 1.0.
relation(str): e.g. 'associated' or 'identified' (see Voevent spec)
name(str): e.g. name of identified progenitor.
concept(str): One of a 'formal UCD-like vocabulary... | def Inference(probability=None, relation=None, name=None, concept=None):
atts = {}
if probability is not None:
atts['probability'] = str(probability)
if relation is not None:
atts['relation'] = relation
inf = objectify.Element('Inference', attrib=atts)
if name is not None:
... | 1,009,976 |
Used to cite earlier VOEvents.
Use in conjunction with :func:`.add_citations`
Args:
ivorn(str): It is assumed this will be copied verbatim from elsewhere,
and so these should have any prefix (e.g. 'ivo://','http://')
already in place - the function will not alter the value.
... | def EventIvorn(ivorn, cite_type):
# This is an ugly hack around the limitations of the lxml.objectify API:
c = objectify.StringElement(cite=cite_type)
c._setText(ivorn)
c.tag = "EventIVORN"
return c | 1,009,977 |
Initialize ndrive instance
Using the given user information, log in to the ndrive server and create a session
Args:
NID_AUT: Naver account authentication info
NID_SES: Naver account session info
Returns: | def __init__(self, NID_AUT = None, NID_SES= None):
self.session.headers["User-Agent"] = \
"Mozilla/5.0 (Windows NT 6.2; WOW64) Chrome/32.0.1700.76 Safari/537.36"
self.session.cookies.set('NID_AUT', NID_AUT)
self.session.cookies.set('NID_SES', NID_SES) | 1,010,110 |
Get registerUserInfo
Args:
svctype: Platform information
auth: ???
Returns:
True: Success
False: Failed | def getRegisterUserInfo(self, svctype = "Android NDrive App ver", auth = 0):
data = {'userid': self.user_id, 'svctype': svctype, 'auth': auth}
r = self.session.get(nurls['getRegisterUserInfo'], params = data)
j = json.loads(r.text)
if j['message'] != 'success':
pri... | 1,010,112 |