| _id | title | partition | text | language | meta_information |
|---|---|---|---|---|---|
q10300 | Hooks.getTriggerToken | train | async def getTriggerToken(self, *args, **kwargs):
"""
Get a trigger token
Retrieve a unique secret token for triggering the specified hook. This
token can be deactivated with `resetTriggerToken`.
| python | {
"resource": ""
} |
q10301 | Hooks.resetTriggerToken | train | async def resetTriggerToken(self, *args, **kwargs):
"""
Reset a trigger token
Reset the token for triggering a given hook. This invalidates any token
that may have been issued via `getTriggerToken`, replacing it with a new token.
| python | {
"resource": ""
} |
q10302 | Hooks.triggerHookWithToken | train | async def triggerHookWithToken(self, *args, **kwargs):
"""
Trigger a hook with a token
This endpoint triggers a defined hook with a valid token.
The HTTP payload must match the hooks `triggerSchema`. If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.
| python | {
"resource": ""
} |
q10303 | AwsProvisioner.createWorkerType | train | async def createWorkerType(self, *args, **kwargs):
"""
Create new Worker Type
Create a worker type. A worker type contains all the configuration
needed for the provisioner to manage the instances. Each worker type
knows which regions and which instance types are allowed for that
worker type. Remember that Capacity is the number of concurrent tasks
that can be run on a given EC2 resource and that Utility is the relative
performance rate between different instance types. There is no way to
configure different regions to have different sets of instance types
so ensure that all instance types are available in all regions.
This function is idempotent.
Once a worker type is in the provisioner, a background process will | python | {
"resource": ""
} |
q10304 | AwsProvisioner.updateWorkerType | train | async def updateWorkerType(self, *args, **kwargs):
"""
Update Worker Type
Provide a new copy of a worker type to replace the existing one.
This will overwrite the existing worker type definition if there
is already a worker type of that name. This method will return a
200 response along with a copy of the worker type definition created.
Note that if you are using the result of a GET on the worker-type
endpoint, you will need to delete the lastModified and workerType
keys from the returned object, since those fields are not allowed in
the request body for this method.
Otherwise, all input requirements and actions are the same as the | python | {
"resource": ""
} |
q10305 | AwsProvisioner.workerType | train | async def workerType(self, *args, **kwargs):
"""
Get Worker Type
Retrieve a copy of the requested worker type definition.
This copy contains a lastModified field as well as the worker
type name. As such, it will require manipulation to be able to
use the results of this method to submit data to the update
method.
| python | {
"resource": ""
} |
q10306 | AwsProvisioner.createSecret | train | async def createSecret(self, *args, **kwargs):
"""
Create new Secret
Insert a secret into the secret storage. The supplied secrets will
be provided verbatim via `getSecret`, while the supplied scopes will
be converted into credentials by `getSecret`.
This method is not ordinarily used in production; instead, the provisioner
creates a new secret directly for | python | {
"resource": ""
} |
q10307 | AwsProvisioner.removeSecret | train | async def removeSecret(self, *args, **kwargs):
"""
Remove a Secret
Remove a secret. After this call, a call to `getSecret` with the given
token will return no information.
It is very important that the consumer of a
secret delete the secret from storage before handing over control
to untrusted | python | {
"resource": ""
} |
q10308 | AwsProvisioner.getLaunchSpecs | train | async def getLaunchSpecs(self, *args, **kwargs):
"""
Get All Launch Specifications for WorkerType
This method returns a preview of all possible launch specifications
that this worker type definition could submit to EC2. It is used to
test worker types, nothing more.
**This API end-point is experimental and may be subject to change without warning.**
| python | {
"resource": ""
} |
q10309 | resolve_font | train | def resolve_font(name):
"""Turns font names into absolute filenames
This is case sensitive. The extension should be omitted.
For example::
>>> path = resolve_font('NotoSans-Bold')
>>> fontdir = os.path.join(os.path.dirname(__file__), 'fonts')
>>> noto_path = os.path.join(fontdir, 'NotoSans-Bold.ttf')
>>> noto_path = os.path.abspath(noto_path)
>>> assert path == noto_path
Absolute paths are allowed::
>>> resolve_font(noto_path) == noto_path
True
Raises :exc:`FontNotFound` on failure::
>>> try:
... | python | {
"resource": ""
} |
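The `resolve_font` record above is truncated before its implementation. A minimal sketch of what such a resolver could look like, assuming a `fonts` directory and a `FontNotFound` exception (both names are taken from the docstring, not from visible code):

```python
import os

class FontNotFound(Exception):
    """Raised when a font name cannot be resolved to a file."""

def resolve_font(name, fontdir='fonts'):
    """Turn a font name into an absolute .ttf filename; absolute paths pass through."""
    if os.path.isabs(name):
        return name
    path = os.path.abspath(os.path.join(fontdir, name + '.ttf'))
    if os.path.isfile(path):
        return path
    raise FontNotFound(name)
```

The pass-through of absolute paths matches the docstring's second doctest.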
q10310 | get_font_files | train | def get_font_files():
"""Returns a list of all font files we could find
Returned as a list of dir/files tuples::
get_font_files() -> {'FontName': '/abs/FontName.ttf', ...}
For example::
>>> fonts = get_font_files()
>>> 'NotoSans-Bold' in fonts
True
>>> fonts['NotoSans-Bold'].endswith('/NotoSans-Bold.ttf')
True
"""
roots = [
'/usr/share/fonts/truetype', # where ubuntu puts fonts
'/usr/share/fonts', # where fedora puts fonts
os.path.expanduser('~/.fonts'), # custom user fonts | python | {
"resource": ""
} |
q10311 | PurgeCache.purgeCache | train | def purgeCache(self, *args, **kwargs):
"""
Purge Worker Cache
Publish a purge-cache message to purge caches named `cacheName` with
`provisionerId` and `workerType` in the routing-key. Workers should
be listening for this message and purge caches when they see it.
This method takes | python | {
"resource": ""
} |
q10312 | is_asdf | train | def is_asdf(raw):
"""If the password is in the order on keyboard."""
reverse = raw[::-1]
| python | {
"resource": ""
} |
q10313 | is_by_step | train | def is_by_step(raw):
"""If the password is alphabet step by step."""
# make sure it is unicode
delta = ord(raw[1]) - ord(raw[0])
for i in range(2, len(raw)):
| python | {
"resource": ""
} |
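The `is_by_step` snippet is cut off mid-loop; a plausible completion of the constant-step check (a sketch, not necessarily the library's exact code, with a guard added for short inputs):

```python
def is_by_step(raw):
    """Return True if consecutive characters differ by a constant step (e.g. 'abcd', 'aceg')."""
    if len(raw) < 2:
        return False
    delta = ord(raw[1]) - ord(raw[0])
    for i in range(2, len(raw)):
        if ord(raw[i]) - ord(raw[i - 1]) != delta:
            return False
    return True
```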
q10314 | is_common_password | train | def is_common_password(raw, freq=0):
"""If the password is common used.
10k top passwords: https://xato.net/passwords/more-top-worst-passwords/
"""
| python | {
"resource": ""
} |
q10315 | check | train | def check(raw, length=8, freq=0, min_types=3, level=STRONG):
"""Check the safety level of the password.
:param raw: raw text password.
:param length: minimal length of the password.
:param freq: minimum frequency.
:param min_types: minimum character family.
:param level: minimum level to validate a password.
"""
raw = to_unicode(raw)
if level > STRONG:
level = STRONG
if len(raw) < length:
return Strength(False, 'terrible', 'password is too short')
if is_asdf(raw) or is_by_step(raw):
return Strength(False, 'simple', 'password has a pattern')
if is_common_password(raw, freq=freq):
return Strength(False, 'simple', 'password is too common')
types = 0
if LOWER.search(raw):
| python | {
"resource": ""
} |
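The character-family counting at the end of `check` is truncated after the `LOWER` branch. A hedged sketch of how the remaining branches likely continue; the regex names and the `Strength` tuple are modeled on the visible code, but the exact patterns are assumptions:

```python
import re
from collections import namedtuple

LOWER = re.compile(r'[a-z]')
UPPER = re.compile(r'[A-Z]')
NUMBER = re.compile(r'[0-9]')
MARKS = re.compile(r'[^0-9a-zA-Z]')

Strength = namedtuple('Strength', ['valid', 'level', 'message'])

def count_char_types(raw):
    """Count how many character families (lower, upper, digits, marks) appear in raw."""
    types = 0
    for pattern in (LOWER, UPPER, NUMBER, MARKS):
        if pattern.search(raw):
            types += 1
    return types
```

`check` would then compare this count against its `min_types` parameter to grade the password.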
q10316 | InfobloxNetMRI._make_request | train | def _make_request(self, url, method="get", data=None, extra_headers=None):
"""Prepares the request, checks for authentication and retries in case of issues
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict
"""
attempts = 0
while attempts < 1:
| python | {
"resource": ""
} |
q10317 | InfobloxNetMRI._send_request | train | def _send_request(self, url, method="get", data=None, extra_headers=None):
"""Performs a given request and returns a json object
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict
"""
headers = {'Content-type': 'application/json'}
if isinstance(extra_headers, dict):
headers.update(extra_headers) | python | {
"resource": ""
} |
q10318 | InfobloxNetMRI._get_api_version | train | def _get_api_version(self):
"""Fetches the most recent API version
Returns:
str
"""
| python | {
"resource": ""
} |
q10319 | InfobloxNetMRI._authenticate | train | def _authenticate(self):
""" Perform an authentication against NetMRI"""
url = "{base_url}/api/authenticate".format(base_url=self._base_url())
data = json.dumps({'username': self.username, "password": self.password})
# Bypass authentication check in make_request by | python | {
"resource": ""
} |
q10320 | InfobloxNetMRI._controller_name | train | def _controller_name(self, objtype):
"""Determines the controller name for the object's type
Args:
objtype (str): The object type
Returns:
A string with the controller name
"""
# would be better to use inflect.pluralize here, but would add a dependency
if objtype.endswith('y'):
| python | {
"resource": ""
} |
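The pluralization hinted at by `_controller_name`'s comment can be sketched as below; the real NetMRI inflection rules may differ, and the vowel-plus-y exception is an assumption:

```python
def controller_name(objtype):
    """Naive pluralization: 'device' -> 'devices', 'entry' -> 'entries'."""
    if objtype.endswith('y') and not objtype.endswith(('ay', 'ey', 'oy', 'uy')):
        return objtype[:-1] + 'ies'
    return objtype + 's'
```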
q10321 | InfobloxNetMRI._object_url | train | def _object_url(self, objtype, objid):
"""Generate the URL for the specified object
Args:
objtype (str): The object's type
objid (int): The object's ID
Returns:
A string containing the URL of the object
"""
| python | {
"resource": ""
} |
q10322 | InfobloxNetMRI._method_url | train | def _method_url(self, method_name):
"""Generate the URL for the requested method
Args:
method_name (str): Name of the method
Returns:
A string containing the URL of the method
"""
| python | {
"resource": ""
} |
q10323 | InfobloxNetMRI.api_request | train | def api_request(self, method_name, params):
"""Execute an arbitrary method.
Args:
method_name (str): include the controller name: 'devices/search'
params (dict): the method parameters
Returns:
A dict with the response
| python | {
"resource": ""
} |
q10324 | InfobloxNetMRI.show | train | def show(self, objtype, objid):
"""Query for a specific resource by ID
Args:
objtype (str): object type, e.g. 'device', 'interface'
objid (int): object ID (DeviceID, etc.)
Returns:
A dict with that object
| python | {
"resource": ""
} |
q10325 | Notify.irc | train | def irc(self, *args, **kwargs):
"""
Post IRC Message
Post a message on IRC to a specific channel or user, or a specific user
on a specific channel.
Success of this API method does not imply the message was successfully
posted. This API method merely inserts the IRC message into a queue
that will be processed | python | {
"resource": ""
} |
q10326 | Notify.addDenylistAddress | train | def addDenylistAddress(self, *args, **kwargs):
"""
Denylist Given Address
Add the given address to the notification denylist. The address
can be any of the three supported address types, namely pulse, email,
or IRC (user or channel). Addresses in the denylist will be ignored
by the notification service.
| python | {
"resource": ""
} |
q10327 | Notify.deleteDenylistAddress | train | def deleteDenylistAddress(self, *args, **kwargs):
"""
Delete Denylisted Address
Delete the specified address from the notification denylist.
This method takes input: ``v1/notification-address.json#``
| python | {
"resource": ""
} |
q10328 | Notify.list | train | def list(self, *args, **kwargs):
"""
List Denylisted Notifications
Lists all the denylisted addresses.
By default this end-point will try to return up to 1000 addresses in one
request. But it **may return fewer**, even if more addresses are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `list` with the last `continuationToken` until you
get a result without a `continuationToken`.
If you are not interested in listing all the members at | python | {
"resource": ""
} |
q10329 | AuthEvents.clientCreated | train | def clientCreated(self, *args, **kwargs):
"""
Client Created Messages
Message that a new client has been created.
This exchange outputs: ``v1/client-message.json#``. This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-created',
'name': 'clientCreated',
| python | {
"resource": ""
} |
q10330 | AuthEvents.clientUpdated | train | def clientUpdated(self, *args, **kwargs):
"""
Client Updated Messages
Message that a client has been updated.
This exchange outputs: ``v1/client-message.json#``. This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-updated',
'name': 'clientUpdated',
| python | {
"resource": ""
} |
q10331 | AuthEvents.clientDeleted | train | def clientDeleted(self, *args, **kwargs):
"""
Client Deleted Messages
Message that a client has been deleted.
This exchange outputs: ``v1/client-message.json#``. This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-deleted',
'name': 'clientDeleted',
| python | {
"resource": ""
} |
q10332 | AuthEvents.roleCreated | train | def roleCreated(self, *args, **kwargs):
"""
Role Created Messages
Message that a new role has been created.
This exchange outputs: ``v1/role-message.json#``. This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-created',
'name': 'roleCreated',
| python | {
"resource": ""
} |
q10333 | AuthEvents.roleUpdated | train | def roleUpdated(self, *args, **kwargs):
"""
Role Updated Messages
Message that a role has been updated.
This exchange outputs: ``v1/role-message.json#``. This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-updated',
'name': 'roleUpdated',
| python | {
"resource": ""
} |
q10334 | AuthEvents.roleDeleted | train | def roleDeleted(self, *args, **kwargs):
"""
Role Deleted Messages
Message that a role has been deleted.
This exchange outputs: ``v1/role-message.json#``. This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-deleted',
'name': 'roleDeleted',
| python | {
"resource": ""
} |
q10335 | memoize | train | def memoize(function):
"""A very simple memoize decorator to optimize pure-ish functions
Don't use this unless you've examined the code and see the
potential risks.
"""
cache = {}
@functools.wraps(function)
def _memoize(*args):
if args in cache: | python | {
"resource": ""
} |
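The `memoize` decorator body is cut off right after the cache-hit test; a straightforward completion (a sketch under the same caveat the docstring gives: only safe for pure-ish functions with hashable positional args):

```python
import functools

def memoize(function):
    """A very simple memoize decorator for pure-ish functions."""
    cache = {}

    @functools.wraps(function)
    def _memoize(*args):
        if args in cache:
            return cache[args]
        result = function(*args)
        cache[args] = result
        return result

    return _memoize
```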
q10336 | TerminalInfo.dimensions | train | def dimensions(self):
"""Returns terminal dimensions
Don't save this information for long periods of time because
the user might resize their terminal.
:return: Returns ``(width, height)``. If there's no terminal
to be found, we'll just return ``(79, 40)``.
""" | python | {
"resource": ""
} |
q10337 | sp_msg | train | def sp_msg(cmd, pipe=None, data=None):
"""Produces skypipe protocol multipart message"""
msg = [SP_HEADER, cmd]
if pipe is not None:
| python | {
"resource": ""
} |
q10338 | stream_skypipe_output | train | def stream_skypipe_output(endpoint, name=None):
"""Generator for reading skypipe data"""
name = name or ''
socket = ctx.socket(zmq.DEALER)
socket.connect(endpoint)
try:
socket.send_multipart(sp_msg(SP_CMD_LISTEN, name))
while True:
msg = socket.recv_multipart()
try:
data = parse_skypipe_data_stream(msg, name)
| python | {
"resource": ""
} |
q10339 | parse_skypipe_data_stream | train | def parse_skypipe_data_stream(msg, for_pipe):
"""May return data from skypipe message or raises EOFError"""
header = str(msg.pop(0))
command = str(msg.pop(0))
| python | {
"resource": ""
} |
q10340 | skypipe_input_stream | train | def skypipe_input_stream(endpoint, name=None):
"""Returns a context manager for streaming data into skypipe"""
name = name or ''
class context_manager(object):
def __enter__(self):
self.socket = ctx.socket(zmq.DEALER)
self.socket.connect(endpoint)
| python | {
"resource": ""
} |
q10341 | stream_stdin_lines | train | def stream_stdin_lines():
"""Generator for unbuffered line reading from STDIN"""
stdin = os.fdopen(sys.stdin.fileno(), 'r', 0)
while True:
line = | python | {
"resource": ""
} |
q10342 | run | train | def run(endpoint, name=None):
"""Runs the skypipe client"""
try:
if os.isatty(0):
# output mode
for data in stream_skypipe_output(endpoint, name):
sys.stdout.write(data)
sys.stdout.flush()
else:
| python | {
"resource": ""
} |
q10343 | DateTimeRange.validate_time_inversion | train | def validate_time_inversion(self):
"""
Check time inversion of the time range.
:raises ValueError:
If |attr_start_datetime| is
bigger than |attr_end_datetime|.
:raises TypeError:
Any one of |attr_start_datetime| and |attr_end_datetime|,
or both is inappropriate datetime value.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
time_range = DateTimeRange("2015-03-22T10:10:00+0900", "2015-03-22T10:00:00+0900")
try:
time_range.validate_time_inversion()
except ValueError:
print "time inversion"
:Output:
.. parsed-literal::
time inversion
| python | {
"resource": ""
} |
q10344 | DateTimeRange.set_start_datetime | train | def set_start_datetime(self, value, timezone=None):
"""
Set the start time of the time range.
:param value: |param_start_datetime|
:type value: |datetime|/|str|
:raises ValueError: If the value is invalid as a |datetime| value.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
time_range = DateTimeRange()
| python | {
"resource": ""
} |
q10345 | DateTimeRange.set_end_datetime | train | def set_end_datetime(self, value, timezone=None):
"""
Set the end time of the time range.
:param datetime.datetime/str value: |param_end_datetime|
:raises ValueError: If the value is invalid as a |datetime| value.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
time_range = DateTimeRange()
| python | {
"resource": ""
} |
q10346 | DateTimeRange.intersection | train | def intersection(self, x):
"""
Newly set a time range that overlaps
the input and the current time range.
:param DateTimeRange x:
Value to compute intersection with the current time range.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
dtr0 = DateTimeRange("2015-03-22T10:00:00+0900", "2015-03-22T10:10:00+0900")
dtr1 = DateTimeRange("2015-03-22T10:05:00+0900", "2015-03-22T10:15:00+0900")
dtr0.intersection(dtr1)
:Output:
.. parsed-literal::
2015-03-22T10:05:00+0900 - 2015-03-22T10:10:00+0900
"""
| python | {
"resource": ""
} |
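The `intersection` implementation is elided, but conceptually the overlap of two ranges runs from the later start to the earlier end. A standalone sketch on plain datetimes (not the `DateTimeRange` class itself):

```python
from datetime import datetime

def intersection(start1, end1, start2, end2):
    """Return (start, end) of the overlap of two time ranges, or None if disjoint."""
    start = max(start1, start2)
    end = min(end1, end2)
    if start > end:
        return None
    return (start, end)
```

Applied to the docstring's sample ranges, this yields 10:05:00 to 10:10:00, matching the shown output.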
q10347 | DateTimeRange.encompass | train | def encompass(self, x):
"""
Newly set a time range that encompasses
the input and the current time range.
:param DateTimeRange x:
Value to compute encompass with the current time range.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
dtr0 = DateTimeRange("2015-03-22T10:00:00+0900", "2015-03-22T10:10:00+0900")
dtr1 = DateTimeRange("2015-03-22T10:05:00+0900", "2015-03-22T10:15:00+0900")
dtr0.encompass(dtr1)
:Output:
.. parsed-literal::
| python | {
"resource": ""
} |
q10348 | wait_for | train | def wait_for(text, finish=None, io=None):
"""Displays dots until returned event is set"""
if finish:
finish.set()
time.sleep(0.1) # threads, sigh
if not io:
io = sys.stdout
finish = threading.Event()
io.write(text)
def _wait():
while not finish.is_set():
| python | {
"resource": ""
} |
q10349 | lookup_endpoint | train | def lookup_endpoint(cli):
"""Looks up the application endpoint from dotcloud"""
url = '/applications/{0}/environment'.format(APPNAME)
environ = cli.user.get(url).item
port = environ['DOTCLOUD_SATELLITE_ZMQ_PORT']
| python | {
"resource": ""
} |
q10350 | setup | train | def setup(cli):
"""Everything to make skypipe ready to use"""
if not cli.global_config.loaded:
setup_dotcloud_account(cli) | python | {
"resource": ""
} |
q10351 | discover_satellite | train | def discover_satellite(cli, deploy=True, timeout=5):
"""Looks to make sure a satellite exists, returns endpoint
First makes sure we have dotcloud account credentials. Then it looks
up the environment for the satellite app. This will contain host and
port to construct an endpoint. However, if app doesn't exist, or
endpoint does not check out, we call `launch_satellite` to deploy,
which calls `discover_satellite` again when finished. Ultimately we
return a working endpoint. If deploy is False it will not try to
deploy.
"""
if not cli.global_config.loaded:
cli.die("Please setup skypipe by running `skypipe --setup`")
try:
| python | {
"resource": ""
} |
q10352 | launch_satellite | train | def launch_satellite(cli):
"""Deploys a new satellite app over any existing app"""
cli.info("Launching skypipe satellite:")
finish = wait_for(" Pushing to dotCloud")
# destroy any existing satellite
destroy_satellite(cli)
# create new satellite app
url = '/applications'
try:
cli.user.post(url, {
'name': APPNAME,
'flavor': 'sandbox'
})
except RESTAPIError as e:
if e.code == 409:
cli.die('Application "{0}" already exists.'.format(APPNAME))
else:
cli.die('Creating application "{0}" failed: {1}'.format(APPNAME, e))
class args: application = APPNAME
#cli._connect(args)
# push satellite code
protocol = 'rsync'
url = '/applications/{0}/push-endpoints{1}'.format(APPNAME, '')
endpoint = cli._select_endpoint(cli.user.get(url).items, protocol)
class args: path = satellite_path
cli.push_with_rsync(args, endpoint)
# tell dotcloud to deploy, then wait for it to finish
revision = None
clean = False
url = '/applications/{0}/deployments'.format(APPNAME)
response = cli.user.post(url, {'revision': revision, 'clean': clean})
deploy_trace_id = response.trace_id
deploy_id = response.item['deploy_id']
original_stdout = sys.stdout
finish = wait_for(" Waiting for deployment", finish, original_stdout)
try:
sys.stdout = StringIO()
res = cli._stream_deploy_logs(APPNAME, deploy_id,
deploy_trace_id=deploy_trace_id, follow=True)
if res != 0:
return res
except KeyboardInterrupt:
| python | {
"resource": ""
} |
q10353 | Dumper.pg_backup | train | def pg_backup(self, pg_dump_exe='pg_dump', exclude_schema=None):
"""Call the pg_dump command to create a db backup
Parameters
----------
pg_dump_exe: str
the pg_dump command path
exclude_schema: str[]
list of schemas to be skipped
"""
command = [
pg_dump_exe, '-Fc', '-f', self.file,
'service={}'.format(self.pg_service)
| python | {
"resource": ""
} |
q10354 | Dumper.pg_restore | train | def pg_restore(self, pg_restore_exe='pg_restore', exclude_schema=None):
"""Call the pg_restore command to restore a db backup
Parameters
----------
pg_restore_exe: str
the pg_restore command path
"""
command = [
pg_restore_exe, '-d',
'service={}'.format(self.pg_service),
'--no-owner'
]
if exclude_schema:
exclude_schema_available = False
try:
pg_version = subprocess.check_output(['pg_restore','--version'])
pg_version = str(pg_version).replace('\\n', '').replace("'", '').split(' ')[-1]
exclude_schema_available = LooseVersion(pg_version) >= LooseVersion("10.0")
except subprocess.CalledProcessError as e:
| python | {
"resource": ""
} |
q10355 | Upgrader.__get_delta_files | train | def __get_delta_files(self):
"""Search for delta files and return a dict of Delta objects, keyed by directory names."""
files = [(d, f) for d in self.dirs for f in listdir(d) if isfile(join(d, f))]
deltas = OrderedDict()
for d, f in files:
file_ = join(d, f)
if not Delta.is_valid_delta_name(file_):
continue
delta = Delta(file_)
if d not | python | {
"resource": ""
} |
q10356 | Upgrader.__run_delta_sql | train | def __run_delta_sql(self, delta):
"""Execute the delta sql file on the database"""
| python | {
"resource": ""
} |
q10357 | Upgrader.__run_delta_py | train | def __run_delta_py(self, delta):
"""Execute the delta py file"""
| python | {
"resource": ""
} |
q10358 | Upgrader.__run_pre_all | train | def __run_pre_all(self):
"""Execute the pre-all.py and pre-all.sql files if they exist"""
# if the list of delta dirs is [delta1, delta2] the pre scripts of delta2 are
# executed before the pre scripts of delta1
for d in reversed(self.dirs):
pre_all_py_path = os.path.join(d, 'pre-all.py')
if os.path.isfile(pre_all_py_path):
print(' Applying pre-all.py...', end=' ')
self.__run_py_file(pre_all_py_path, 'pre-all')
| python | {
"resource": ""
} |
q10359 | Upgrader.__run_post_all | train | def __run_post_all(self):
"""Execute the post-all.py and post-all.sql files if they exist"""
# if the list of delta dirs is [delta1, delta2] the post scripts of delta1 are
# executed before the post scripts of delta2
for d in self.dirs:
post_all_py_path = os.path.join(d, 'post-all.py')
if os.path.isfile(post_all_py_path):
print(' Applying post-all.py...', end=' ')
self.__run_py_file(post_all_py_path, 'post-all')
| python | {
"resource": ""
} |
q10360 | Upgrader.__run_sql_file | train | def __run_sql_file(self, filepath):
"""Execute the sql file at the passed path
Parameters
----------
filepath: str
the path of the file to execute"""
with open(filepath, 'r') as delta_file:
sql = delta_file.read()
| python | {
"resource": ""
} |
q10361 | Upgrader.__run_py_file | train | def __run_py_file(self, filepath, module_name):
"""Execute the python file at the passed path
Parameters
----------
filepath: str
the path of the file to execute
module_name: str
the name of the python module
"""
# Import the module
spec = importlib.util.spec_from_file_location(module_name, filepath)
delta_py = importlib.util.module_from_spec(spec)
spec.loader.exec_module(delta_py)
# Get the python file's directory path
# Note: we add a separator for backward compatibility, as existing DeltaPy subclasses
# may assume that delta_dir ends with a separator
dir_ = dirname(filepath) + os.sep
| python | {
"resource": ""
} |
q10362 | Delta.is_valid_delta_name | train | def is_valid_delta_name(file):
"""Return if a file has a valid name
A delta file name can be:
- pre-all.py
- pre-all.sql
- delta_x.x.x_ddmmyyyy.pre.py
- delta_x.x.x_ddmmyyyy.pre.sql
- delta_x.x.x_ddmmyyyy.py
- delta_x.x.x_ddmmyyyy.sql
| python | {
"resource": ""
} |
q10363 | Delta.get_checksum | train | def get_checksum(self):
"""Return the md5 checksum of the delta file."""
with open(self.file, 'rb') as f:
| python | {
"resource": ""
} |
q10364 | Delta.get_type | train | def get_type(self):
"""Return the type of the delta file.
Returns
-------
type: int
"""
ext = self.match.group(5)
if ext == 'pre.py':
return DeltaType.PRE_PYTHON
| python | {
"resource": ""
} |
q10365 | DeltaPy.variable | train | def variable(self, name: str, default_value=None):
"""
Safely returns the value of the variable given in PUM
Parameters
----------
name
the name of the variable
| python | {
"resource": ""
} |
q10366 | Checker.run_checks | train | def run_checks(self):
"""Run all the checks functions.
Returns
-------
bool
True if all the checks are true
False otherwise
dict
Dictionary of lists of differences
"""
result = True
differences_dict = {}
if 'tables' not in self.ignore_list:
tmp_result, differences_dict['tables'] = self.check_tables()
result = False if not tmp_result else result
if 'columns' not in self.ignore_list:
tmp_result, differences_dict['columns'] = self.check_columns(
'views' not in self.ignore_list)
result = False if not tmp_result else result
if 'constraints' not in self.ignore_list:
tmp_result, differences_dict['constraints'] = \
self.check_constraints()
result = False if not tmp_result else result
if 'views' not in self.ignore_list:
tmp_result, differences_dict['views'] = self.check_views()
result = False if not tmp_result else result
if 'sequences' not in self.ignore_list:
tmp_result, differences_dict['sequences'] = self.check_sequences()
result = False if not tmp_result else result
if 'indexes' not in self.ignore_list:
| python | {
"resource": ""
} |
q10367 | Checker.__check_equals | train | def __check_equals(self, query):
"""Check if the query results on the two databases are equals.
Returns
-------
bool
True if the results are the same
False otherwise
list
A list with the differences
"""
self.cur1.execute(query)
records1 = self.cur1.fetchall()
self.cur2.execute(query)
records2 = self.cur2.fetchall()
result = True
differences = []
d = difflib.Differ()
records1 = [str(x) for x in records1]
| python | {
"resource": ""
} |
q10368 | ask_for_confirmation | train | def ask_for_confirmation(prompt=None, resp=False):
"""Prompt for a yes or no response from the user.
Parameters
----------
prompt: basestring
The question to be prompted to the user.
resp: bool
The default value assumed by the caller when user simply
types ENTER.
Returns
-------
bool
True if the user response is 'y' or 'Y'
False if the user response is 'n' or 'N'
| python | {
"resource": ""
} |
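The prompt loop of `ask_for_confirmation` is truncated; a common implementation of this y/n pattern is sketched below, with the input function made injectable for testing (that parameter is an addition, not in the original signature):

```python
def ask_for_confirmation(prompt=None, resp=False, input_func=input):
    """Prompt for a yes/no answer; plain ENTER returns the default resp."""
    if prompt is None:
        prompt = 'Confirm'
    default = 'Y' if resp else 'N'
    prompt = '{} [{}]: '.format(prompt, default)
    while True:
        answer = input_func(prompt).strip().lower()
        if not answer:
            return resp
        if answer in ('y', 'yes'):
            return True
        if answer in ('n', 'no'):
            return False
```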
q10369 | AuthDecorator.handle_target | train | def handle_target(self, request, controller_args, controller_kwargs):
"""Only here to set self.request and get rid of it after
this will set self.request so the target method can access request using
self.request, just like in the controller.
"""
| python | {
"resource": ""
} |
q10370 | HTTPClient.get | train | def get(self, uri, query=None, **kwargs):
"""make a GET request"""
| python | {
"resource": ""
} |
q10371 | HTTPClient.post | train | def post(self, uri, body=None, **kwargs):
"""make a POST request"""
| python | {
"resource": ""
} |
q10372 | HTTPClient.post_file | train | def post_file(self, uri, body, files, **kwargs):
"""POST a file"""
# requests doesn't actually need us to open the files but we do anyway because
# if we don't then the filename isn't preserved, so we assume each string
# value is a filepath
for key in files.keys():
if isinstance(files[key], basestring):
| python | {
"resource": ""
} |
q10373 | HTTPClient.delete | train | def delete(self, uri, query=None, **kwargs):
"""make a DELETE request"""
| python | {
"resource": ""
} |
q10374 | HTTPClient.get_fetch_headers | train | def get_fetch_headers(self, method, headers):
"""merge class headers with passed in headers
:param method: string, (eg, GET or POST), this is passed in so you can customize
headers based on the method that you are calling
:param headers: dict, all the headers passed into the fetch method
:returns: passed in | python | {
"resource": ""
} |
q10375 | HTTPClient.get_fetch_request | train | def get_fetch_request(self, method, fetch_url, *args, **kwargs):
"""This is handy if you want to modify the request right before passing it
to requests, or you want to do something extra special customized
:param method: string, the http method (eg, GET, POST)
:param fetch_url: string, the full url with query params
:param *args: any other positional | python | {
"resource": ""
} |
q10376 | HTTPClient.get_fetch_response | train | def get_fetch_response(self, res):
"""the goal of this method is to make the requests object more endpoints like
res -- requests Response -- the native requests response instance, we manipulate
it a bit to make it look a bit more like the internal endpoints.Response object
"""
res.code = res.status_code
res.headers = Headers(res.headers)
res._body = None
res.body = ''
body = res.content
| python | {
"resource": ""
} |
q10377 | HTTPClient.is_json | train | def is_json(self, headers):
"""return true if content_type is a json content type"""
ret = False
ct = headers.get("content-type", "").lower()
| python | {
"resource": ""
} |
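The comparison inside `is_json` is cut off; checking the content type typically means stripping parameters and matching JSON media types, tolerating suffix types like `application/problem+json`. A sketch (the suffix handling is an assumption beyond the visible code):

```python
def is_json(headers):
    """Return True if the content-type header denotes a JSON body."""
    ct = headers.get('content-type', '').lower()
    # strip parameters like '; charset=utf-8'
    ct = ct.split(';', 1)[0].strip()
    return ct == 'application/json' or ct.endswith('+json')
```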
q10378 | ReflectMethod.params | train | def params(self):
"""return information about the params that the given http option takes"""
ret = {}
for rd in self.decorators:
args = rd.args
kwargs = rd.kwargs
if param in rd:
| python | {
"resource": ""
} |
q10379 | BaseServer.create_call | train | def create_call(self, raw_request, **kwargs):
"""create a call object that has endpoints understandable request and response
instances"""
req = self.create_request(raw_request, **kwargs)
| python | {
"resource": ""
} |
q10380 | RateLimitDecorator.decorate | train | def decorate(self, func, limit=0, ttl=0, *anoop, **kwnoop):
"""see target for an explanation of limit and ttl""" | python | {
"resource": ""
} |
q10381 | ratelimit.decorate | train | def decorate(self, func, limit, ttl, *anoop, **kwnoop):
"""make limit and ttl required"""
| python | {
"resource": ""
} |
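The two decorator records make `limit` (max calls) and `ttl` (window in seconds) the required knobs; a minimal in-memory sketch of that idea, assuming nothing about the library's internals:

```python
import time
from functools import wraps

def ratelimit(limit, ttl):
    """Allow at most `limit` calls per `ttl` seconds (in-memory sketch)."""
    calls = []  # timestamps of recent calls through the wrapped function
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            # drop timestamps that have aged out of the window
            while calls and now - calls[0] > ttl:
                calls.pop(0)
            if len(calls) >= limit:
                raise RuntimeError("rate limit exceeded")
            calls.append(now)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@ratelimit(limit=2, ttl=60)
def ping():
    return "pong"

print(ping(), ping())  # pong pong
# a third call inside the window would raise RuntimeError
```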
q10382 | Base64.encode | train | def encode(cls, s):
"""converts a plain text string to base64 encoding
:param s: unicode str|bytes, the base64 | python | {
"resource": ""
} |
q10383 | Base64.decode | train | def decode(cls, s):
"""decodes a base64 string to plain text
:param s: unicode str|bytes, the base64 | python | {
"resource": ""
} |
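The Base64 encode/decode pair above wraps the stdlib so callers can pass either unicode str or bytes; a rough standalone equivalent (the str-or-bytes handling is an assumption about the class's intent):

```python
import base64

def b64_encode(s):
    # accept unicode str or bytes, always return a str
    if isinstance(s, str):
        s = s.encode("utf-8")
    return base64.b64encode(s).decode("ascii")

def b64_decode(s):
    # accept unicode str or bytes, return the decoded plain text
    if isinstance(s, str):
        s = s.encode("ascii")
    return base64.b64decode(s).decode("utf-8")

print(b64_encode("hello"))      # aGVsbG8=
print(b64_decode("aGVsbG8="))   # hello
```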
q10384 | MimeType.find_type | train | def find_type(cls, val):
"""return the mimetype from the given string value
if value is a path, then the extension will be found, if val is an extension then
that will be used to find the mimetype
| python | {
"resource": ""
} |
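Per the `find_type` docstring, a path and a bare extension both resolve to a mimetype; the stdlib `mimetypes` module can sketch both cases if a bare extension is first turned into a dummy filename (the fallback to `""` is an assumption):

```python
import mimetypes

def find_type(val):
    """Return the mimetype for a path ('foo.json') or bare extension ('json')."""
    if "." not in val:
        # bare extension: fake a filename so guess_type can resolve it
        val = "f." + val
    mimetype, _ = mimetypes.guess_type(val)
    return mimetype or ""

print(find_type("foo.json"))  # application/json
print(find_type("html"))      # text/html
```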
q10385 | AcceptHeader.filter | train | def filter(self, media_type, **params):
"""
iterate all the accept media types that match media_type
media_type -- string -- the media type to filter by
**params -- dict -- further filter by key: val
return -- generator -- yields all matching media type info things
"""
mtype, msubtype = self._split_media_type(media_type)
for x in self.__iter__():
# all the params have to match to make the media type valid
matched = True
for k, v in params.items():
if x[2].get(k, None) != v:
matched = False
break
if matched:
if x[0][0] == '*':
if x[0][1] == '*':
| python | {
"resource": ""
} |
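The `filter` record walks parsed Accept entries and honors `*` wildcards on both the type and subtype; a minimal standalone sketch of that wildcard match (names are illustrative, not the library's API):

```python
def split_media_type(media_type):
    """'text/html' -> ('text', 'html'); a bare type gets an empty subtype."""
    parts = media_type.split("/", 1)
    return parts[0], parts[1] if len(parts) > 1 else ""

def matches(accept_entry, media_type):
    # accept_entry is a (type, subtype) pair where '*' is a wildcard,
    # so ('text', '*') matches 'text/html' and ('*', '*') matches anything
    atype, asub = accept_entry
    mtype, msub = split_media_type(media_type)
    if atype != "*" and atype != mtype:
        return False
    return asub == "*" or asub == msub

print(matches(("text", "*"), "text/html"))         # True
print(matches(("text", "*"), "application/json"))  # False
```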
q10386 | Application.create_request | train | def create_request(self, raw_request, **kwargs):
"""
create instance of request
raw_request -- the raw request object retrieved from a WSGI server
"""
r = self.request_class()
for k, v in raw_request.items():
if k.startswith('HTTP_'):
r.set_header(k[5:], v)
else:
r.environ[k] = v
r.method = raw_request['REQUEST_METHOD']
r.path = raw_request['PATH_INFO']
r.query = raw_request['QUERY_STRING']
# handle headers not prefixed with http
for k, t in {'CONTENT_TYPE': None, 'CONTENT_LENGTH': int}.items():
v = r.environ.pop(k, None)
if v:
| python | {
"resource": ""
} |
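The `create_request` record splits a raw WSGI environ into `HTTP_` prefixed headers and the two unprefixed special cases; a self-contained sketch of that split (helper name is hypothetical):

```python
def headers_from_environ(environ):
    """Pull request headers out of a raw WSGI environ dict."""
    headers = {}
    for k, v in environ.items():
        if k.startswith("HTTP_"):
            # HTTP_USER_AGENT -> User-Agent
            name = k[5:].replace("_", "-").title()
            headers[name] = v
    # per the WSGI spec these two arrive without the HTTP_ prefix
    for k in ("CONTENT_TYPE", "CONTENT_LENGTH"):
        if k in environ:
            headers[k.replace("_", "-").title()] = environ[k]
    return headers

env = {"HTTP_USER_AGENT": "curl", "CONTENT_TYPE": "application/json",
       "REQUEST_METHOD": "GET"}
print(headers_from_environ(env))
# {'User-Agent': 'curl', 'Content-Type': 'application/json'}
```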
q10387 | WebsocketApplication.create_environ | train | def create_environ(self, req, payload):
"""This will take the original request and the new websocket payload and
merge them into a new request instance"""
ws_req = req.copy()
del ws_req.controller_info
ws_req.environ.pop("wsgi.input", None)
ws_req.body_kwargs = payload.body
ws_req.environ["REQUEST_METHOD"] = payload.method
ws_req.method = payload.method
| python | {
"resource": ""
} |
q10388 | find_module_path | train | def find_module_path():
"""find where the master module is located"""
master_modname = __name__.split(".", 1)[0]
master_module = sys.modules[master_modname]
| python | {
"resource": ""
} |
q10389 | Headers._convert_string_name | train | def _convert_string_name(self, k):
"""converts things like FOO_BAR to Foo-Bar which is the normal form"""
k = String(k, "iso-8859-1")
| python | {
"resource": ""
} |
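The conversion `_convert_string_name` describes (FOO_BAR to the normal Foo-Bar form) can be sketched in a couple of lines (function name assumed):

```python
def normalize_header_name(k):
    # "FOO_BAR", "foo-bar" and "Foo-Bar" all normalize to "Foo-Bar"
    return "-".join(part.title() for part in k.replace("_", "-").split("-"))

print(normalize_header_name("FOO_BAR"))       # Foo-Bar
print(normalize_header_name("content-type"))  # Content-Type
```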
q10390 | Url._normalize_params | train | def _normalize_params(self, *paths, **query_kwargs):
"""a lot of the helper methods are very similar, this handles their arguments"""
kwargs = {}
if paths:
fragment = paths[-1]
if fragment:
if fragment.startswith("#"):
kwargs["fragment"] = fragment
| python | {
"resource": ""
} |
q10391 | Url.controller | train | def controller(self, *paths, **query_kwargs):
"""create a new url object using the controller path as a base
if you have a controller `foo.BarController` then this would create a new
Url instance with `host/foo/bar` as the base path, so any *paths will be
appended to `/foo/bar`
:example:
# controller foo.BarController
print url # http://host.com/foo/bar/some_random_path
print url.controller() # http://host.com/foo/bar
print url.controller("che", boom="bam") # http://host/foo/bar/che?boom=bam
:param *paths: list, the paths to append to the controller path
:param **query_kwargs: dict, any query string params to add
"""
| python | {
"resource": ""
} |
q10392 | Url.base | train | def base(self, *paths, **query_kwargs):
"""create a new url object using the current base path as a base
if you had requested /foo/bar, then this would append *paths and **query_kwargs
to /foo/bar
:example:
# current path: /foo/bar
print url # http://host.com/foo/bar
print url.base() # http://host.com/foo/bar
print url.base("che", boom="bam") | python | {
"resource": ""
} |
q10393 | Url.host | train | def host(self, *paths, **query_kwargs):
"""create a new url object using the host as a base
if you had requested http://host/foo/bar, then this would append *paths and **query_kwargs
to http://host
:example:
# current url: http://host/foo/bar
print url # http://host.com/foo/bar
print url.host() # http://host.com/
print url.host("che", | python | {
"resource": ""
} |
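The three `Url` helper records (`controller`, `base`, `host`) all build a new url from some base plus extra path segments and query kwargs; the core of that can be sketched with the stdlib (function name and exact joining rules are assumptions):

```python
from urllib.parse import urlencode, urlunsplit

def build_url(scheme, host, *paths, **query_kwargs):
    """Join path segments onto a host and append query kwargs."""
    path = "/" + "/".join(str(p).strip("/") for p in paths)
    query = urlencode(query_kwargs)
    return urlunsplit((scheme, host, path, query, ""))

print(build_url("http", "host.com", "foo", "bar", boom="bam"))
# http://host.com/foo/bar?boom=bam
```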
q10394 | Request.accept_encoding | train | def accept_encoding(self):
"""The encoding the client requested the response to use"""
# https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Charset
ret = ""
accept_encoding = self.get_header("Accept-Charset", "")
if accept_encoding:
| python | {
"resource": ""
} |
q10395 | Request.encoding | train | def encoding(self):
"""the character encoding of the request, usually only set in POST type requests"""
encoding = None
ct = self.get_header('content-type')
if ct:
ah = AcceptHeader(ct)
| python | {
"resource": ""
} |
q10396 | Request.access_token | train | def access_token(self):
"""return an Oauth 2.0 Bearer access token if it can be found"""
access_token = self.get_auth_bearer()
if | python | {
"resource": ""
} |
q10397 | Request.client_tokens | train | def client_tokens(self):
"""try and get Oauth 2.0 client id and secret first from basic auth header,
then from GET or POST parameters
return -- tuple -- client_id, client_secret
"""
client_id, client_secret = self.get_auth_basic()
if not client_id and not client_secret:
client_id = self.query_kwargs.get('client_id', '')
| python | {
"resource": ""
} |
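`client_tokens` falls back from the Basic auth header to query params; the header half can be sketched with the stdlib (header shape per the Basic scheme, `base64(client_id:client_secret)`; function name is an assumption):

```python
import base64

def get_auth_basic(headers):
    """Decode 'Authorization: Basic ...' into (client_id, client_secret)."""
    auth = headers.get("Authorization", "")
    if not auth.lower().startswith("basic "):
        return "", ""
    raw = base64.b64decode(auth.split(" ", 1)[1]).decode("utf-8")
    # credentials are joined with the first colon
    client_id, _, client_secret = raw.partition(":")
    return client_id, client_secret

hdrs = {"Authorization": "Basic " + base64.b64encode(b"my-id:my-secret").decode("ascii")}
print(get_auth_basic(hdrs))  # ('my-id', 'my-secret')
```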
q10398 | Request.ips | train | def ips(self):
"""return all the possible ips of this request, this will include public and private ips"""
r = []
names = ['X_FORWARDED_FOR', 'CLIENT_IP', 'X_REAL_IP', 'X_FORWARDED',
'X_CLUSTER_CLIENT_IP', 'FORWARDED_FOR', 'FORWARDED', 'VIA',
'REMOTE_ADDR']
for name in names:
vs = self.get_header(name, '')
if vs:
| python | {
"resource": ""
} |
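The `ips` record collects candidate addresses from a list of forwarding headers; note that X-Forwarded-For can carry a comma separated chain, hence the split. A sketch over a subset of those headers (names taken from the record, dict-style header access assumed):

```python
def ips_from_headers(headers):
    """Collect every candidate client ip from forwarding headers, in order."""
    names = ["X-Forwarded-For", "Client-Ip", "X-Real-Ip", "Remote-Addr"]
    found = []
    for name in names:
        vs = headers.get(name, "")
        if vs:
            # X-Forwarded-For may hold a comma separated chain of ips
            found.extend(part.strip() for part in vs.split(",") if part.strip())
    return found

print(ips_from_headers({"X-Forwarded-For": "10.0.0.5, 8.8.8.8",
                        "Remote-Addr": "127.0.0.1"}))
# ['10.0.0.5', '8.8.8.8', '127.0.0.1']
```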
q10399 | Request.ip | train | def ip(self):
"""return the public ip address"""
r = ''
# this was compiled from here:
# https://github.com/un33k/django-ipware
# http://www.ietf.org/rfc/rfc3330.txt (IPv4)
# http://www.ietf.org/rfc/rfc5156.txt (IPv6)
# https://en.wikipedia.org/wiki/Reserved_IP_addresses
format_regex = re.compile(r'\s')
ip_regex = re.compile(r'^(?:{})'.format(r'|'.join([
r'0\.', # reserved for 'self-identification'
r'10\.', # class A
r'169\.254', # link local block
r'172\.(?:1[6-9]|2[0-9]|3[0-1])\.', # class B
r'192\.0\.2\.', # documentation/examples
| python | {
"resource": ""
} |
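The `ip` record hand-rolls regexes for the reserved ranges it wants to skip (class A/B private blocks, link-local, documentation nets); the stdlib `ipaddress` module encodes the same reserved-range knowledge, so an alternative sketch can lean on `is_global` instead of regexes:

```python
import ipaddress

def first_public_ip(candidates):
    """Return the first globally routable address from a list of candidates."""
    for raw in candidates:
        try:
            addr = ipaddress.ip_address(raw.strip())
        except ValueError:
            continue  # not a parseable ip, skip it
        if addr.is_global:
            return str(addr)
    return ""

# 10.x and 192.168.x are private, 203.0.113.x is a documentation net
print(first_public_ip(["10.0.0.5", "192.168.1.2", "203.0.113.7", "8.8.8.8"]))
# 8.8.8.8
```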