id int32 0 252k | repo stringlengths 7 55 | path stringlengths 4 127 | func_name stringlengths 1 88 | original_string stringlengths 75 19.8k | language stringclasses 1 value | code stringlengths 51 19.8k | code_tokens list | docstring stringlengths 3 17.3k | docstring_tokens list | sha stringlengths 40 40 | url stringlengths 87 242 |
|---|---|---|---|---|---|---|---|---|---|---|---|
10,300 | taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.getTriggerToken | async def getTriggerToken(self, *args, **kwargs):
"""
Get a trigger token
Retrieve a unique secret token for triggering the specified hook. This
token can be deactivated with `resetTriggerToken`.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["getTriggerToken"], *args, **kwargs) | python | async def getTriggerToken(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["getTriggerToken"], *args, **kwargs) | [code tokens elided] | Get a trigger token
Retrieve a unique secret token for triggering the specified hook. This
token can be deactivated with `resetTriggerToken`.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable`` | [docstring tokens elided] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L176-L188 |
10,301 | taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.resetTriggerToken | async def resetTriggerToken(self, *args, **kwargs):
"""
Reset a trigger token
Reset the token for triggering a given hook, replacing any token that
may have been issued via `getTriggerToken` with a new token.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["resetTriggerToken"], *args, **kwargs) | python | async def resetTriggerToken(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["resetTriggerToken"], *args, **kwargs) | [code tokens elided] | Reset a trigger token
Reset the token for triggering a given hook, replacing any token that
may have been issued via `getTriggerToken` with a new token.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable`` | [docstring tokens elided] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L190-L202 |
10,302 | taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.triggerHookWithToken | async def triggerHookWithToken(self, *args, **kwargs):
"""
Trigger a hook with a token
This endpoint triggers a defined hook with a valid token.
The HTTP payload must match the hook's `triggerSchema`. If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.
This method takes input: ``v1/trigger-hook.json#``
This method gives output: ``v1/trigger-hook-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["triggerHookWithToken"], *args, **kwargs) | python | async def triggerHookWithToken(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["triggerHookWithToken"], *args, **kwargs) | [code tokens elided] | Trigger a hook with a token
This endpoint triggers a defined hook with a valid token.
The HTTP payload must match the hook's `triggerSchema`. If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.
This method takes input: ``v1/trigger-hook.json#``
This method gives output: ``v1/trigger-hook-response.json#``
This method is ``stable`` | [docstring tokens elided] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L204-L221 |
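The three hook methods above share one shape: the generated client keeps per-endpoint metadata in a `funcinfo` table and forwards `*args`/`**kwargs` to a single `_makeApiCall`. A minimal sketch of that dispatch pattern follows; `FakeHooks` and its route strings are hypothetical stand-ins, not the real taskcluster client.

```python
import asyncio

class FakeHooks:
    """Minimal stand-in illustrating the funcinfo dispatch pattern."""
    funcinfo = {
        "getTriggerToken": {"route": "/hooks/<hookGroupId>/<hookId>/token", "method": "get"},
        "resetTriggerToken": {"route": "/hooks/<hookGroupId>/<hookId>/token", "method": "post"},
    }

    async def _makeApiCall(self, entry, *args, **kwargs):
        # A real client would substitute args into the route and issue an
        # HTTP request; here we just echo what would be called.
        return {"method": entry["method"], "route": entry["route"], "args": args}

    async def getTriggerToken(self, *args, **kwargs):
        return await self._makeApiCall(self.funcinfo["getTriggerToken"], *args, **kwargs)

result = asyncio.run(FakeHooks().getTriggerToken("my-group", "my-hook"))
print(result["method"], result["args"])  # get ('my-group', 'my-hook')
```

Every generated method is the same two lines; only the `funcinfo` key changes, which is why the dataset rows look so repetitive.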
10,303 | taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.createWorkerType | async def createWorkerType(self, *args, **kwargs):
"""
Create new Worker Type
Create a worker type. A worker type contains all the configuration
needed for the provisioner to manage the instances. Each worker type
knows which regions and which instance types are allowed for that
worker type. Remember that Capacity is the number of concurrent tasks
that can be run on a given EC2 resource and that Utility is the relative
performance rate between different instance types. There is no way to
configure different regions to have different sets of instance types
so ensure that all instance types are available in all regions.
This function is idempotent.
Once a worker type is in the provisioner, a background process will
begin creating instances for it based on its capacity bounds and its
pending task count from the Queue. It is the worker's responsibility
to shut itself down. The provisioner has a limit (currently 96 hours)
for all instances to prevent zombie instances from running indefinitely.
The provisioner will ensure that all instances created are tagged with
AWS resource tags containing the provisioner id and the worker type.
If provided, the secrets in the global, region and instance type sections
are available using the secrets api. If specified, the scopes provided
will be used to generate a set of temporary credentials available with
the other secrets.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["createWorkerType"], *args, **kwargs) | python | async def createWorkerType(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["createWorkerType"], *args, **kwargs) | [code tokens elided] | Create new Worker Type
Create a worker type. A worker type contains all the configuration
needed for the provisioner to manage the instances. Each worker type
knows which regions and which instance types are allowed for that
worker type. Remember that Capacity is the number of concurrent tasks
that can be run on a given EC2 resource and that Utility is the relative
performance rate between different instance types. There is no way to
configure different regions to have different sets of instance types
so ensure that all instance types are available in all regions.
This function is idempotent.
Once a worker type is in the provisioner, a background process will
begin creating instances for it based on its capacity bounds and its
pending task count from the Queue. It is the worker's responsibility
to shut itself down. The provisioner has a limit (currently 96 hours)
for all instances to prevent zombie instances from running indefinitely.
The provisioner will ensure that all instances created are tagged with
AWS resource tags containing the provisioner id and the worker type.
If provided, the secrets in the global, region and instance type sections
are available using the secrets api. If specified, the scopes provided
will be used to generate a set of temporary credentials available with
the other secrets.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable`` | [docstring tokens elided] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L67-L102 |
10,304 | taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.updateWorkerType | async def updateWorkerType(self, *args, **kwargs):
"""
Update Worker Type
Provide a new copy of a worker type to replace the existing one.
This will overwrite the existing worker type definition if there
is already a worker type of that name. This method will return a
200 response along with a copy of the worker type definition created
Note that if you are using the result of a GET on the worker-type
endpoint, you will need to delete the lastModified and workerType
keys from the object returned, since those fields are not allowed in
the request body for this method.
Otherwise, all input requirements and actions are the same as the
create method.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["updateWorkerType"], *args, **kwargs) | python | async def updateWorkerType(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["updateWorkerType"], *args, **kwargs) | [code tokens elided] | Update Worker Type
Provide a new copy of a worker type to replace the existing one.
This will overwrite the existing worker type definition if there
is already a worker type of that name. This method will return a
200 response along with a copy of the worker type definition created
Note that if you are using the result of a GET on the worker-type
endpoint, you will need to delete the lastModified and workerType
keys from the object returned, since those fields are not allowed in
the request body for this method.
Otherwise, all input requirements and actions are the same as the
create method.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable`` | [docstring tokens elided] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L104-L127 |
10,305 | taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.workerType | async def workerType(self, *args, **kwargs):
"""
Get Worker Type
Retrieve a copy of the requested worker type definition.
This copy contains a lastModified field as well as the worker
type name. As such, it will require manipulation to be able to
use the results of this method to submit data to the update
method.
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["workerType"], *args, **kwargs) | python | async def workerType(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["workerType"], *args, **kwargs) | [code tokens elided] | Get Worker Type
Retrieve a copy of the requested worker type definition.
This copy contains a lastModified field as well as the worker
type name. As such, it will require manipulation to be able to
use the results of this method to submit data to the update
method.
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable`` | [docstring tokens elided] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L146-L161 |
10,306 | taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.createSecret | async def createSecret(self, *args, **kwargs):
"""
Create new Secret
Insert a secret into the secret storage. The supplied secrets will
be provided verbatim via `getSecret`, while the supplied scopes will
be converted into credentials by `getSecret`.
This method is not ordinarily used in production; instead, the provisioner
creates a new secret directly for each spot bid.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-secret-request.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["createSecret"], *args, **kwargs) | python | async def createSecret(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["createSecret"], *args, **kwargs) | [code tokens elided] | Create new Secret
Insert a secret into the secret storage. The supplied secrets will
be provided verbatim via `getSecret`, while the supplied scopes will
be converted into credentials by `getSecret`.
This method is not ordinarily used in production; instead, the provisioner
creates a new secret directly for each spot bid.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-secret-request.json#``
This method is ``stable`` | [docstring tokens elided] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L199-L215 |
10,307 | taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.removeSecret | async def removeSecret(self, *args, **kwargs):
"""
Remove a Secret
Remove a secret. After this call, a call to `getSecret` with the given
token will return no information.
It is very important that the consumer of a
secret delete the secret from storage before handing over control
to untrusted processes to prevent credential and/or secret leakage.
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["removeSecret"], *args, **kwargs) | python | async def removeSecret(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["removeSecret"], *args, **kwargs) | [code tokens elided] | Remove a Secret
Remove a secret. After this call, a call to `getSecret` with the given
token will return no information.
It is very important that the consumer of a
secret delete the secret from storage before handing over control
to untrusted processes to prevent credential and/or secret leakage.
This method is ``stable`` | [docstring tokens elided] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L251-L265 |
10,308 | taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.getLaunchSpecs | async def getLaunchSpecs(self, *args, **kwargs):
"""
Get All Launch Specifications for WorkerType
This method returns a preview of all possible launch specifications
that this worker type definition could submit to EC2. It is used to
test worker types, nothing more.
**This API end-point is experimental and may be subject to change without warning.**
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-launch-specs-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["getLaunchSpecs"], *args, **kwargs) | python | async def getLaunchSpecs(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["getLaunchSpecs"], *args, **kwargs) | [code tokens elided] | Get All Launch Specifications for WorkerType
This method returns a preview of all possible launch specifications
that this worker type definition could submit to EC2. It is used to
test worker types, nothing more.
**This API end-point is experimental and may be subject to change without warning.**
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-launch-specs-response.json#``
This method is ``experimental`` | [docstring tokens elided] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L267-L282 |
10,309 | jart/fabulous | fabulous/text.py | resolve_font | def resolve_font(name):
"""Turns font names into absolute filenames
This is case sensitive. The extension should be omitted.
For example::
>>> path = resolve_font('NotoSans-Bold')
>>> fontdir = os.path.join(os.path.dirname(__file__), 'fonts')
>>> noto_path = os.path.join(fontdir, 'NotoSans-Bold.ttf')
>>> noto_path = os.path.abspath(noto_path)
>>> assert path == noto_path
Absolute paths are allowed::
>>> resolve_font(noto_path) == noto_path
True
Raises :exc:`FontNotFound` on failure::
>>> try:
... resolve_font('blahahaha')
... assert False
... except FontNotFound:
... pass
"""
if os.path.exists(name):
return os.path.abspath(name)
fonts = get_font_files()
if name in fonts:
return fonts[name]
raise FontNotFound("Can't find %r :'( Try adding it to ~/.fonts" % name) | python | def resolve_font(name):
if os.path.exists(name):
return os.path.abspath(name)
fonts = get_font_files()
if name in fonts:
return fonts[name]
    raise FontNotFound("Can't find %r :'( Try adding it to ~/.fonts" % name) | [code tokens elided] | Turns font names into absolute filenames
This is case sensitive. The extension should be omitted.
For example::
>>> path = resolve_font('NotoSans-Bold')
>>> fontdir = os.path.join(os.path.dirname(__file__), 'fonts')
>>> noto_path = os.path.join(fontdir, 'NotoSans-Bold.ttf')
>>> noto_path = os.path.abspath(noto_path)
>>> assert path == noto_path
Absolute paths are allowed::
>>> resolve_font(noto_path) == noto_path
True
Raises :exc:`FontNotFound` on failure::
>>> try:
... resolve_font('blahahaha')
... assert False
... except FontNotFound:
...         pass | [docstring tokens elided] | 19903cf0a980b82f5928c3bec1f28b6bdd3785bd | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/text.py#L143-L176 |
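`resolve_font`'s lookup order — an existing filesystem path wins, then the font index built by `get_font_files`, and otherwise `FontNotFound` is raised. That order can be sketched independently of the real font directories; the `resolve` helper and the `DemoSans-Bold` entry below are illustrative, not part of fabulous.

```python
import os
import tempfile

class FontNotFound(Exception):
    """Raised when a font name cannot be resolved."""

def resolve(name, fonts):
    # Absolute/existing paths win; otherwise fall back to the index.
    if os.path.exists(name):
        return os.path.abspath(name)
    if name in fonts:
        return fonts[name]
    raise FontNotFound("Can't find %r" % name)

index = {"DemoSans-Bold": "/fonts/DemoSans-Bold.ttf"}

# Exercise all three branches.
with tempfile.NamedTemporaryFile(suffix=".ttf") as tmp:
    assert resolve(tmp.name, index) == os.path.abspath(tmp.name)  # real path
assert resolve("DemoSans-Bold", index) == "/fonts/DemoSans-Bold.ttf"  # indexed name
try:
    resolve("definitely-missing", index)
except FontNotFound:
    pass  # unknown names raise
```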
10,310 | jart/fabulous | fabulous/text.py | get_font_files | def get_font_files():
"""Returns a list of all font files we could find
Returned as a list of dir/files tuples::
get_font_files() -> {'FontName': '/abs/FontName.ttf', ...]
For example::
>>> fonts = get_font_files()
>>> 'NotoSans-Bold' in fonts
True
>>> fonts['NotoSans-Bold'].endswith('/NotoSans-Bold.ttf')
True
"""
roots = [
'/usr/share/fonts/truetype', # where ubuntu puts fonts
'/usr/share/fonts', # where fedora puts fonts
os.path.expanduser('~/.fonts'), # custom user fonts
os.path.abspath(os.path.join(os.path.dirname(__file__), 'fonts')),
]
result = {}
for root in roots:
for path, dirs, names in os.walk(root):
for name in names:
if name.endswith(('.ttf', '.otf')):
result[name[:-4]] = os.path.join(path, name)
return result | python | def get_font_files():
roots = [
'/usr/share/fonts/truetype', # where ubuntu puts fonts
'/usr/share/fonts', # where fedora puts fonts
os.path.expanduser('~/.fonts'), # custom user fonts
os.path.abspath(os.path.join(os.path.dirname(__file__), 'fonts')),
]
result = {}
for root in roots:
for path, dirs, names in os.walk(root):
for name in names:
if name.endswith(('.ttf', '.otf')):
result[name[:-4]] = os.path.join(path, name)
    return result | [code tokens elided] | Returns a dict of all font files we could find
Returned as a mapping of font names to absolute filenames::
get_font_files() -> {'FontName': '/abs/FontName.ttf', ...}
For example::
>>> fonts = get_font_files()
>>> 'NotoSans-Bold' in fonts
True
>>> fonts['NotoSans-Bold'].endswith('/NotoSans-Bold.ttf')
True | [docstring tokens elided] | 19903cf0a980b82f5928c3bec1f28b6bdd3785bd | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/text.py#L180-L208 |
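The directory walk in `get_font_files` indexes every `.ttf`/`.otf` file under a list of roots, silently skipping roots that do not exist (`os.walk` simply yields nothing for them). A self-contained sketch of the same indexing step, using a temporary directory in place of the real font paths:

```python
import os
import tempfile

def index_fonts(roots):
    """Walk each root and map font names (extension stripped) to paths."""
    result = {}
    for root in roots:
        for path, dirs, names in os.walk(root):
            for name in names:
                if name.endswith((".ttf", ".otf")):
                    result[name[:-4]] = os.path.join(path, name)
    return result

with tempfile.TemporaryDirectory() as root:
    open(os.path.join(root, "DemoSans-Bold.ttf"), "w").close()
    open(os.path.join(root, "README.txt"), "w").close()  # ignored: wrong extension
    fonts = index_fonts([root, "/nonexistent/dir"])      # missing roots walk to nothing
print(sorted(fonts))  # ['DemoSans-Bold']
```

Because later roots overwrite earlier ones in the dict, a user font in `~/.fonts` can shadow a system font of the same name — the order of `roots` matters.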
10,311 | taskcluster/taskcluster-client.py | taskcluster/purgecache.py | PurgeCache.purgeCache | def purgeCache(self, *args, **kwargs):
"""
Purge Worker Cache
Publish a purge-cache message to purge caches named `cacheName` with
`provisionerId` and `workerType` in the routing-key. Workers should
be listening for this message and purge caches when they see it.
This method takes input: ``v1/purge-cache-request.json#``
This method is ``stable``
"""
return self._makeApiCall(self.funcinfo["purgeCache"], *args, **kwargs) | python | def purgeCache(self, *args, **kwargs):
    return self._makeApiCall(self.funcinfo["purgeCache"], *args, **kwargs) | [code tokens elided] | Purge Worker Cache
Publish a purge-cache message to purge caches named `cacheName` with
`provisionerId` and `workerType` in the routing-key. Workers should
be listening for this message and purge caches when they see it.
This method takes input: ``v1/purge-cache-request.json#``
This method is ``stable`` | [docstring tokens elided] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/purgecache.py#L40-L53 |
10,312 | lepture/safe | safe/__init__.py | is_asdf | def is_asdf(raw):
"""If the password is in the order on keyboard."""
reverse = raw[::-1]
asdf = ''.join(ASDF)
return raw in asdf or reverse in asdf | python | def is_asdf(raw):
reverse = raw[::-1]
asdf = ''.join(ASDF)
    return raw in asdf or reverse in asdf | [code tokens elided] | If the password is in the order on keyboard. | [docstring tokens elided] | 038a72e59557caf97c1b93f66124a8f014eb032b | https://github.com/lepture/safe/blob/038a72e59557caf97c1b93f66124a8f014eb032b/safe/__init__.py#L72-L78 |
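`is_asdf` catches passwords typed straight along the keyboard by checking whether the password (or its reverse) is a substring of the joined keyboard rows. The `ASDF` constant is not shown in this excerpt; the sketch below assumes it is a sequence of row strings, here just the home row for illustration.

```python
# Assumption: ASDF holds keyboard-row fragments; only the home row is used here.
ASDF = ['asdfghjkl']

def is_asdf(raw):
    """True if raw (or its reverse) is a substring of the joined rows."""
    reverse = raw[::-1]
    asdf = ''.join(ASDF)
    return raw in asdf or reverse in asdf

print(is_asdf('asdf'), is_asdf('fdsa'), is_asdf('azsx'))  # True True False
```

Checking the reverse as well means `lkjh` is rejected just like `hjkl`, with no extra table.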
10,313 | lepture/safe | safe/__init__.py | is_by_step | def is_by_step(raw):
"""If the password is alphabet step by step."""
# make sure it is unicode
delta = ord(raw[1]) - ord(raw[0])
for i in range(2, len(raw)):
if ord(raw[i]) - ord(raw[i-1]) != delta:
return False
return True | python | def is_by_step(raw):
# make sure it is unicode
delta = ord(raw[1]) - ord(raw[0])
for i in range(2, len(raw)):
if ord(raw[i]) - ord(raw[i-1]) != delta:
return False
    return True | [code tokens elided] | If the password is alphabet step by step. | [docstring tokens elided] | 038a72e59557caf97c1b93f66124a8f014eb032b | https://github.com/lepture/safe/blob/038a72e59557caf97c1b93f66124a8f014eb032b/safe/__init__.py#L81-L90 |
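`is_by_step` rejects passwords whose characters march with a constant code-point delta, which covers ascending runs (`abcd`), skips (`aceg`), and descending runs (`dcba`) in one test. Note it reads `raw[1]`, so it assumes at least two characters — in `safe` the earlier minimum-length check guarantees that. A standalone copy:

```python
def is_by_step(raw):
    """True if consecutive characters share a constant code-point delta."""
    delta = ord(raw[1]) - ord(raw[0])
    for i in range(2, len(raw)):
        if ord(raw[i]) - ord(raw[i - 1]) != delta:
            return False
    return True

print(is_by_step('abcd'), is_by_step('aceg'), is_by_step('dcba'), is_by_step('abce'))
# True True True False
```

A repeated character like `aaaa` also passes (delta 0), so this one predicate covers repetition patterns too.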
10,314 | lepture/safe | safe/__init__.py | is_common_password | def is_common_password(raw, freq=0):
"""If the password is common used.
10k top passwords: https://xato.net/passwords/more-top-worst-passwords/
"""
frequent = WORDS.get(raw, 0)
if freq:
return frequent > freq
return bool(frequent) | python | def is_common_password(raw, freq=0):
frequent = WORDS.get(raw, 0)
if freq:
return frequent > freq
    return bool(frequent) | [code tokens elided] | If the password is commonly used.
10k top passwords: https://xato.net/passwords/more-top-worst-passwords/ | [docstring tokens elided] | 038a72e59557caf97c1b93f66124a8f014eb032b | https://github.com/lepture/safe/blob/038a72e59557caf97c1b93f66124a8f014eb032b/safe/__init__.py#L93-L101 |
10,315 | lepture/safe | safe/__init__.py | check | def check(raw, length=8, freq=0, min_types=3, level=STRONG):
"""Check the safety level of the password.
:param raw: raw text password.
:param length: minimal length of the password.
:param freq: minimum frequency.
:param min_types: minimum character family.
:param level: minimum level to validate a password.
"""
raw = to_unicode(raw)
if level > STRONG:
level = STRONG
if len(raw) < length:
return Strength(False, 'terrible', 'password is too short')
if is_asdf(raw) or is_by_step(raw):
return Strength(False, 'simple', 'password has a pattern')
if is_common_password(raw, freq=freq):
return Strength(False, 'simple', 'password is too common')
types = 0
if LOWER.search(raw):
types += 1
if UPPER.search(raw):
types += 1
if NUMBER.search(raw):
types += 1
if MARKS.search(raw):
types += 1
if types < 2:
return Strength(level <= SIMPLE, 'simple', 'password is too simple')
if types < min_types:
return Strength(level <= MEDIUM, 'medium',
'password is good enough, but not strong')
return Strength(True, 'strong', 'password is perfect') | python | def check(raw, length=8, freq=0, min_types=3, level=STRONG):
raw = to_unicode(raw)
if level > STRONG:
level = STRONG
if len(raw) < length:
return Strength(False, 'terrible', 'password is too short')
if is_asdf(raw) or is_by_step(raw):
return Strength(False, 'simple', 'password has a pattern')
if is_common_password(raw, freq=freq):
return Strength(False, 'simple', 'password is too common')
types = 0
if LOWER.search(raw):
types += 1
if UPPER.search(raw):
types += 1
if NUMBER.search(raw):
types += 1
if MARKS.search(raw):
types += 1
if types < 2:
return Strength(level <= SIMPLE, 'simple', 'password is too simple')
if types < min_types:
return Strength(level <= MEDIUM, 'medium',
'password is good enough, but not strong')
    return Strength(True, 'strong', 'password is perfect') | [code tokens elided] | Check the safety level of the password.
:param raw: raw text password.
:param length: minimal length of the password.
:param freq: minimum frequency.
:param min_types: minimum character family.
:param level: minimum level to validate a password. | [docstring tokens elided] | 038a72e59557caf97c1b93f66124a8f014eb032b | https://github.com/lepture/safe/blob/038a72e59557caf97c1b93f66124a8f014eb032b/safe/__init__.py#L142-L185 |
"""Prepares the request, checks for authentication and retries in case of issues
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict
"""
attempts = 0
while attempts < 1:
# Authenticate first if not authenticated already
if not self._is_authenticated:
self._authenticate()
# Make the request and check for authentication errors
# This allows us to catch session timeouts for long standing connections
try:
return self._send_request(url, method, data, extra_headers)
except HTTPError as e:
if e.response.status_code == 403:
logger.info("Authenticated session against NetMRI timed out. Retrying.")
self._is_authenticated = False
attempts += 1
else:
# re-raise other HTTP errors
raise | python | def _make_request(self, url, method="get", data=None, extra_headers=None):
attempts = 0
while attempts < 1:
# Authenticate first if not authenticated already
if not self._is_authenticated:
self._authenticate()
# Make the request and check for authentication errors
# This allows us to catch session timeouts for long standing connections
try:
return self._send_request(url, method, data, extra_headers)
except HTTPError as e:
if e.response.status_code == 403:
logger.info("Authenticated session against NetMRI timed out. Retrying.")
self._is_authenticated = False
attempts += 1
else:
# re-raise other HTTP errors
                    raise | [code tokens elided] | Prepares the request, checks for authentication and retries in case of issues
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict | [docstring tokens elided] | e633bef1584101228b6d403df1de0d3d8e88d5c0 | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L82-L110 |
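`_make_request` re-authenticates and retries when a 403 signals that a long-lived session timed out, and re-raises every other HTTP error. A self-contained sketch of that control flow with stub classes (`FakeHTTPError` and `Client` are hypothetical stand-ins for `requests`' `HTTPError` and the real NetMRI client; note the original's `while attempts < 1` permits only a single attempt, whereas this sketch parameterizes the retry count):

```python
class FakeHTTPError(Exception):
    """Stand-in for requests.HTTPError carrying only a status code."""
    def __init__(self, status_code):
        self.status_code = status_code

class Client:
    def __init__(self):
        self._is_authenticated = False
        self.calls = []

    def _authenticate(self):
        self.calls.append("auth")
        self._is_authenticated = True

    def _send_request(self):
        self.calls.append("send")
        if self.calls.count("send") == 1:
            raise FakeHTTPError(403)  # first send hits a stale session
        return {"ok": True}

    def make_request(self, retries=1):
        attempts = 0
        while attempts <= retries:
            if not self._is_authenticated:
                self._authenticate()
            try:
                return self._send_request()
            except FakeHTTPError as e:
                if e.status_code == 403:
                    # Session timed out: drop auth state and go around again.
                    self._is_authenticated = False
                    attempts += 1
                else:
                    raise  # non-auth errors propagate unchanged

c = Client()
result = c.make_request()
print(result, c.calls)  # {'ok': True} ['auth', 'send', 'auth', 'send']
```

The key design point is that authentication state lives outside the retry loop, so one 403 costs exactly one extra authenticate-and-send round trip.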
10,317 | infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._send_request | def _send_request(self, url, method="get", data=None, extra_headers=None):
"""Performs a given request and returns a json object
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict
"""
headers = {'Content-type': 'application/json'}
if isinstance(extra_headers, dict):
headers.update(extra_headers)
if not data or "password" not in data:
logger.debug("Sending {method} request to {url} with data {data}".format(
method=method.upper(), url=url, data=data)
)
r = self.session.request(method, url, headers=headers, data=data)
r.raise_for_status()
return r.json() | python | def _send_request(self, url, method="get", data=None, extra_headers=None):
headers = {'Content-type': 'application/json'}
if isinstance(extra_headers, dict):
headers.update(extra_headers)
if not data or "password" not in data:
logger.debug("Sending {method} request to {url} with data {data}".format(
method=method.upper(), url=url, data=data)
)
r = self.session.request(method, url, headers=headers, data=data)
r.raise_for_status()
return r.json() | [
"def",
"_send_request",
"(",
"self",
",",
"url",
",",
"method",
"=",
"\"get\"",
",",
"data",
"=",
"None",
",",
"extra_headers",
"=",
"None",
")",
":",
"headers",
"=",
"{",
"'Content-type'",
":",
"'application/json'",
"}",
"if",
"isinstance",
"(",
"extra_he... | Performs a given request and returns a json object
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict | [
"Performs",
"a",
"given",
"request",
"and",
"returns",
"a",
"json",
"object"
] | e633bef1584101228b6d403df1de0d3d8e88d5c0 | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L112-L133 |
10,318 | infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._get_api_version | def _get_api_version(self):
"""Fetches the most recent API version
Returns:
str
"""
url = "{base_url}/api/server_info".format(base_url=self._base_url())
server_info = self._make_request(url=url, method="get")
return server_info["latest_api_version"] | python | def _get_api_version(self):
url = "{base_url}/api/server_info".format(base_url=self._base_url())
server_info = self._make_request(url=url, method="get")
return server_info["latest_api_version"] | [
"def",
"_get_api_version",
"(",
"self",
")",
":",
"url",
"=",
"\"{base_url}/api/server_info\"",
".",
"format",
"(",
"base_url",
"=",
"self",
".",
"_base_url",
"(",
")",
")",
"server_info",
"=",
"self",
".",
"_make_request",
"(",
"url",
"=",
"url",
",",
"me... | Fetches the most recent API version
Returns:
str | [
"Fetches",
"the",
"most",
"recent",
"API",
"version"
] | e633bef1584101228b6d403df1de0d3d8e88d5c0 | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L135-L143 |
10,319 | infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._authenticate | def _authenticate(self):
""" Perform an authentication against NetMRI"""
url = "{base_url}/api/authenticate".format(base_url=self._base_url())
data = json.dumps({'username': self.username, "password": self.password})
# Bypass authentication check in make_request by using _send_request
logger.debug("Authenticating against NetMRI")
self._send_request(url, method="post", data=data)
self._is_authenticated = True | python | def _authenticate(self):
url = "{base_url}/api/authenticate".format(base_url=self._base_url())
data = json.dumps({'username': self.username, "password": self.password})
# Bypass authentication check in make_request by using _send_request
logger.debug("Authenticating against NetMRI")
self._send_request(url, method="post", data=data)
self._is_authenticated = True | [
"def",
"_authenticate",
"(",
"self",
")",
":",
"url",
"=",
"\"{base_url}/api/authenticate\"",
".",
"format",
"(",
"base_url",
"=",
"self",
".",
"_base_url",
"(",
")",
")",
"data",
"=",
"json",
".",
"dumps",
"(",
"{",
"'username'",
":",
"self",
".",
"user... | Perform an authentication against NetMRI | [
"Perform",
"an",
"authentication",
"against",
"NetMRI"
] | e633bef1584101228b6d403df1de0d3d8e88d5c0 | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L145-L152 |
10,320 | infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._controller_name | def _controller_name(self, objtype):
"""Determines the controller name for the object's type
Args:
objtype (str): The object type
Returns:
A string with the controller name
"""
# would be better to use inflect.pluralize here, but would add a dependency
if objtype.endswith('y'):
return objtype[:-1] + 'ies'
if objtype[-1] in 'sx' or objtype[-2:] in ['sh', 'ch']:
return objtype + 'es'
if objtype.endswith('an'):
return objtype[:-2] + 'en'
return objtype + 's' | python | def _controller_name(self, objtype):
# would be better to use inflect.pluralize here, but would add a dependency
if objtype.endswith('y'):
return objtype[:-1] + 'ies'
if objtype[-1] in 'sx' or objtype[-2:] in ['sh', 'ch']:
return objtype + 'es'
if objtype.endswith('an'):
return objtype[:-2] + 'en'
return objtype + 's' | [
"def",
"_controller_name",
"(",
"self",
",",
"objtype",
")",
":",
"# would be better to use inflect.pluralize here, but would add a dependency",
"if",
"objtype",
".",
"endswith",
"(",
"'y'",
")",
":",
"return",
"objtype",
"[",
":",
"-",
"1",
"]",
"+",
"'ies'",
"if... | Determines the controller name for the object's type
Args:
objtype (str): The object type
Returns:
A string with the controller name | [
"Determines",
"the",
"controller",
"name",
"for",
"the",
"object",
"s",
"type"
] | e633bef1584101228b6d403df1de0d3d8e88d5c0 | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L154-L173 |
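The naive English pluralizer in the `_controller_name` row above is easy to exercise standalone. The sketch below mirrors its rules (`-y` → `-ies`, sibilant endings → `-es`, `-an` → `-en`, else append `s`) as a free function:

```python
def controller_name(objtype):
    """Naive English pluralizer mirroring _controller_name above."""
    if objtype.endswith('y'):
        return objtype[:-1] + 'ies'
    if objtype[-1] in 'sx' or objtype[-2:] in ('sh', 'ch'):
        return objtype + 'es'
    if objtype.endswith('an'):
        return objtype[:-2] + 'en'
    return objtype + 's'
```

As the in-code comment notes, a library such as `inflect` would handle irregular nouns properly; these hand-rolled rules only need to cover the controller names the API actually exposes.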
10,321 | infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._object_url | def _object_url(self, objtype, objid):
"""Generate the URL for the specified object
Args:
objtype (str): The object's type
objid (int): The objects ID
Returns:
A string containing the URL of the object
"""
return "{base_url}/api/{api_version}/{controller}/{obj_id}".format(
base_url=self._base_url(),
api_version=self.api_version,
controller=self._controller_name(objtype),
obj_id=objid
) | python | def _object_url(self, objtype, objid):
return "{base_url}/api/{api_version}/{controller}/{obj_id}".format(
base_url=self._base_url(),
api_version=self.api_version,
controller=self._controller_name(objtype),
obj_id=objid
) | [
"def",
"_object_url",
"(",
"self",
",",
"objtype",
",",
"objid",
")",
":",
"return",
"\"{base_url}/api/{api_version}/{controller}/{obj_id}\"",
".",
"format",
"(",
"base_url",
"=",
"self",
".",
"_base_url",
"(",
")",
",",
"api_version",
"=",
"self",
".",
"api_ver... | Generate the URL for the specified object
Args:
objtype (str): The object's type
objid (int): The objects ID
Returns:
A string containing the URL of the object | [
"Generate",
"the",
"URL",
"for",
"the",
"specified",
"object"
] | e633bef1584101228b6d403df1de0d3d8e88d5c0 | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L186-L201 |
10,322 | infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._method_url | def _method_url(self, method_name):
"""Generate the URL for the requested method
Args:
method_name (str): Name of the method
Returns:
A string containing the URL of the method
"""
return "{base_url}/api/{api}/{method}".format(
base_url=self._base_url(),
api=self.api_version,
method=method_name
) | python | def _method_url(self, method_name):
return "{base_url}/api/{api}/{method}".format(
base_url=self._base_url(),
api=self.api_version,
method=method_name
) | [
"def",
"_method_url",
"(",
"self",
",",
"method_name",
")",
":",
"return",
"\"{base_url}/api/{api}/{method}\"",
".",
"format",
"(",
"base_url",
"=",
"self",
".",
"_base_url",
"(",
")",
",",
"api",
"=",
"self",
".",
"api_version",
",",
"method",
"=",
"method_... | Generate the URL for the requested method
Args:
method_name (str): Name of the method
Returns:
A string containing the URL of the method | [
"Generate",
"the",
"URL",
"for",
"the",
"requested",
"method"
] | e633bef1584101228b6d403df1de0d3d8e88d5c0 | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L203-L216 |
10,323 | infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI.api_request | def api_request(self, method_name, params):
"""Execute an arbitrary method.
Args:
method_name (str): include the controller name: 'devices/search'
params (dict): the method parameters
Returns:
A dict with the response
Raises:
requests.exceptions.HTTPError
"""
url = self._method_url(method_name)
data = json.dumps(params)
return self._make_request(url=url, method="post", data=data) | python | def api_request(self, method_name, params):
url = self._method_url(method_name)
data = json.dumps(params)
return self._make_request(url=url, method="post", data=data) | [
"def",
"api_request",
"(",
"self",
",",
"method_name",
",",
"params",
")",
":",
"url",
"=",
"self",
".",
"_method_url",
"(",
"method_name",
")",
"data",
"=",
"json",
".",
"dumps",
"(",
"params",
")",
"return",
"self",
".",
"_make_request",
"(",
"url",
... | Execute an arbitrary method.
Args:
method_name (str): include the controller name: 'devices/search'
params (dict): the method parameters
Returns:
A dict with the response
Raises:
requests.exceptions.HTTPError | [
"Execute",
"an",
"arbitrary",
"method",
"."
] | e633bef1584101228b6d403df1de0d3d8e88d5c0 | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L218-L231 |
10,324 | infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI.show | def show(self, objtype, objid):
"""Query for a specific resource by ID
Args:
objtype (str): object type, e.g. 'device', 'interface'
objid (int): object ID (DeviceID, etc.)
Returns:
A dict with that object
Raises:
requests.exceptions.HTTPError
"""
url = self._object_url(objtype, int(objid))
return self._make_request(url, method="get") | python | def show(self, objtype, objid):
url = self._object_url(objtype, int(objid))
return self._make_request(url, method="get") | [
"def",
"show",
"(",
"self",
",",
"objtype",
",",
"objid",
")",
":",
"url",
"=",
"self",
".",
"_object_url",
"(",
"objtype",
",",
"int",
"(",
"objid",
")",
")",
"return",
"self",
".",
"_make_request",
"(",
"url",
",",
"method",
"=",
"\"get\"",
")"
] | Query for a specific resource by ID
Args:
objtype (str): object type, e.g. 'device', 'interface'
objid (int): object ID (DeviceID, etc.)
Returns:
A dict with that object
Raises:
requests.exceptions.HTTPError | [
"Query",
"for",
"a",
"specific",
"resource",
"by",
"ID"
] | e633bef1584101228b6d403df1de0d3d8e88d5c0 | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L233-L245 |
10,325 | taskcluster/taskcluster-client.py | taskcluster/notify.py | Notify.irc | def irc(self, *args, **kwargs):
"""
Post IRC Message
Post a message on IRC to a specific channel or user, or a specific user
on a specific channel.
Success of this API method does not imply the message was successfully
posted. This API method merely inserts the IRC message into a queue
that will be processed by a background process.
This allows us to re-send the message in face of connection issues.
However, if the user isn't online the message will be dropped without
error. We maybe improve this behavior in the future. For now just keep
in mind that IRC is a best-effort service.
This method takes input: ``v1/irc-request.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["irc"], *args, **kwargs) | python | def irc(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["irc"], *args, **kwargs) | [
"def",
"irc",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"self",
".",
"_makeApiCall",
"(",
"self",
".",
"funcinfo",
"[",
"\"irc\"",
"]",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | Post IRC Message
Post a message on IRC to a specific channel or user, or a specific user
on a specific channel.
Success of this API method does not imply the message was successfully
posted. This API method merely inserts the IRC message into a queue
that will be processed by a background process.
This allows us to re-send the message in face of connection issues.
However, if the user isn't online the message will be dropped without
error. We maybe improve this behavior in the future. For now just keep
in mind that IRC is a best-effort service.
This method takes input: ``v1/irc-request.json#``
This method is ``experimental`` | [
"Post",
"IRC",
"Message"
] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/notify.py#L66-L87 |
10,326 | taskcluster/taskcluster-client.py | taskcluster/notify.py | Notify.addDenylistAddress | def addDenylistAddress(self, *args, **kwargs):
"""
Denylist Given Address
Add the given address to the notification denylist. The address
can be of either of the three supported address type namely pulse, email
or IRC(user or channel). Addresses in the denylist will be ignored
by the notification service.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["addDenylistAddress"], *args, **kwargs) | python | def addDenylistAddress(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["addDenylistAddress"], *args, **kwargs) | [
"def",
"addDenylistAddress",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"self",
".",
"_makeApiCall",
"(",
"self",
".",
"funcinfo",
"[",
"\"addDenylistAddress\"",
"]",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | Denylist Given Address
Add the given address to the notification denylist. The address
can be of either of the three supported address type namely pulse, email
or IRC(user or channel). Addresses in the denylist will be ignored
by the notification service.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental`` | [
"Denylist",
"Given",
"Address"
] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/notify.py#L89-L103 |
10,327 | taskcluster/taskcluster-client.py | taskcluster/notify.py | Notify.deleteDenylistAddress | def deleteDenylistAddress(self, *args, **kwargs):
"""
Delete Denylisted Address
Delete the specified address from the notification denylist.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["deleteDenylistAddress"], *args, **kwargs) | python | def deleteDenylistAddress(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["deleteDenylistAddress"], *args, **kwargs) | [
"def",
"deleteDenylistAddress",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"self",
".",
"_makeApiCall",
"(",
"self",
".",
"funcinfo",
"[",
"\"deleteDenylistAddress\"",
"]",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | Delete Denylisted Address
Delete the specified address from the notification denylist.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental`` | [
"Delete",
"Denylisted",
"Address"
] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/notify.py#L105-L116 |
10,328 | taskcluster/taskcluster-client.py | taskcluster/notify.py | Notify.list | def list(self, *args, **kwargs):
"""
List Denylisted Notifications
Lists all the denylisted addresses.
By default this end-point will try to return up to 1000 addresses in one
request. But it **may return less**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `list` with the last `continuationToken` until you
get a result without a `continuationToken`.
If you are not interested in listing all the members at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/notification-address-list.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["list"], *args, **kwargs) | python | def list(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["list"], *args, **kwargs) | [
"def",
"list",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"self",
".",
"_makeApiCall",
"(",
"self",
".",
"funcinfo",
"[",
"\"list\"",
"]",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | List Denylisted Notifications
Lists all the denylisted addresses.
By default this end-point will try to return up to 1000 addresses in one
request. But it **may return less**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `list` with the last `continuationToken` until you
get a result without a `continuationToken`.
If you are not interested in listing all the members at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/notification-address-list.json#``
This method is ``experimental`` | [
"List",
"Denylisted",
"Notifications"
] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/notify.py#L118-L139 |
10,329 | taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.clientCreated | def clientCreated(self, *args, **kwargs):
"""
Client Created Messages
Message that a new client has been created.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-created',
'name': 'clientCreated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def clientCreated(self, *args, **kwargs):
ref = {
'exchange': 'client-created',
'name': 'clientCreated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | [
"def",
"clientCreated",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"ref",
"=",
"{",
"'exchange'",
":",
"'client-created'",
",",
"'name'",
":",
"'clientCreated'",
",",
"'routingKey'",
":",
"[",
"{",
"'multipleWords'",
":",
"True",
"... | Client Created Messages
Message that a new client has been created.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | [
"Client",
"Created",
"Messages"
] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L32-L54 |
10,330 | taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.clientUpdated | def clientUpdated(self, *args, **kwargs):
"""
Client Updated Messages
Message that a new client has been updated.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-updated',
'name': 'clientUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def clientUpdated(self, *args, **kwargs):
ref = {
'exchange': 'client-updated',
'name': 'clientUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | [
"def",
"clientUpdated",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"ref",
"=",
"{",
"'exchange'",
":",
"'client-updated'",
",",
"'name'",
":",
"'clientUpdated'",
",",
"'routingKey'",
":",
"[",
"{",
"'multipleWords'",
":",
"True",
"... | Client Updated Messages
Message that a new client has been updated.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | [
"Client",
"Updated",
"Messages"
] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L56-L78 |
10,331 | taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.clientDeleted | def clientDeleted(self, *args, **kwargs):
"""
Client Deleted Messages
Message that a new client has been deleted.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-deleted',
'name': 'clientDeleted',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def clientDeleted(self, *args, **kwargs):
ref = {
'exchange': 'client-deleted',
'name': 'clientDeleted',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | [
"def",
"clientDeleted",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"ref",
"=",
"{",
"'exchange'",
":",
"'client-deleted'",
",",
"'name'",
":",
"'clientDeleted'",
",",
"'routingKey'",
":",
"[",
"{",
"'multipleWords'",
":",
"True",
"... | Client Deleted Messages
Message that a new client has been deleted.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | [
"Client",
"Deleted",
"Messages"
] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L80-L102 |
10,332 | taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.roleCreated | def roleCreated(self, *args, **kwargs):
"""
Role Created Messages
Message that a new role has been created.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-created',
'name': 'roleCreated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def roleCreated(self, *args, **kwargs):
ref = {
'exchange': 'role-created',
'name': 'roleCreated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | [
"def",
"roleCreated",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"ref",
"=",
"{",
"'exchange'",
":",
"'role-created'",
",",
"'name'",
":",
"'roleCreated'",
",",
"'routingKey'",
":",
"[",
"{",
"'multipleWords'",
":",
"True",
",",
... | Role Created Messages
Message that a new role has been created.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | [
"Role",
"Created",
"Messages"
] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L104-L126 |
10,333 | taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.roleUpdated | def roleUpdated(self, *args, **kwargs):
"""
Role Updated Messages
Message that a new role has been updated.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-updated',
'name': 'roleUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def roleUpdated(self, *args, **kwargs):
ref = {
'exchange': 'role-updated',
'name': 'roleUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | [
"def",
"roleUpdated",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"ref",
"=",
"{",
"'exchange'",
":",
"'role-updated'",
",",
"'name'",
":",
"'roleUpdated'",
",",
"'routingKey'",
":",
"[",
"{",
"'multipleWords'",
":",
"True",
",",
... | Role Updated Messages
Message that a new role has been updated.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | [
"Role",
"Updated",
"Messages"
] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L128-L150 |
10,334 | taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.roleDeleted | def roleDeleted(self, *args, **kwargs):
"""
Role Deleted Messages
Message that a new role has been deleted.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-deleted',
'name': 'roleDeleted',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def roleDeleted(self, *args, **kwargs):
ref = {
'exchange': 'role-deleted',
'name': 'roleDeleted',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | [
"def",
"roleDeleted",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"ref",
"=",
"{",
"'exchange'",
":",
"'role-deleted'",
",",
"'name'",
":",
"'roleDeleted'",
",",
"'routingKey'",
":",
"[",
"{",
"'multipleWords'",
":",
"True",
",",
... | Role Deleted Messages
Message that a new role has been deleted.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | [
"Role",
"Deleted",
"Messages"
] | bcc95217f8bf80bed2ae5885a19fa0035da7ebc9 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L152-L174 |
10,335 | jart/fabulous | fabulous/utils.py | memoize | def memoize(function):
"""A very simple memoize decorator to optimize pure-ish functions
Don't use this unless you've examined the code and see the
potential risks.
"""
cache = {}
@functools.wraps(function)
def _memoize(*args):
if args in cache:
return cache[args]
result = function(*args)
cache[args] = result
return result
return function | python | def memoize(function):
cache = {}
@functools.wraps(function)
def _memoize(*args):
if args in cache:
return cache[args]
result = function(*args)
cache[args] = result
return result
return function | [
"def",
"memoize",
"(",
"function",
")",
":",
"cache",
"=",
"{",
"}",
"@",
"functools",
".",
"wraps",
"(",
"function",
")",
"def",
"_memoize",
"(",
"*",
"args",
")",
":",
"if",
"args",
"in",
"cache",
":",
"return",
"cache",
"[",
"args",
"]",
"result... | A very simple memoize decorator to optimize pure-ish functions
Don't use this unless you've examined the code and see the
potential risks. | [
"A",
"very",
"simple",
"memoize",
"decorator",
"to",
"optimize",
"pure",
"-",
"ish",
"functions"
] | 19903cf0a980b82f5928c3bec1f28b6bdd3785bd | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/utils.py#L34-L48 |
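Note that as quoted, `memoize` builds a `_memoize` wrapper but then returns the undecorated `function`, so the cache is never consulted. A working variant — a sketch, not the library's own fix — returns the wrapper instead:

```python
import functools


def memoize(function):
    """Cache results of a pure function, keyed on its positional args."""
    cache = {}

    @functools.wraps(function)
    def _memoize(*args):
        if args not in cache:
            cache[args] = function(*args)
        return cache[args]

    return _memoize  # return the wrapper, not the undecorated function
```

On Python 3.2+, `functools.lru_cache` provides the same behavior (plus keyword-argument support and a size bound) without hand-rolling the cache.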
10,336 | jart/fabulous | fabulous/utils.py | TerminalInfo.dimensions | def dimensions(self):
"""Returns terminal dimensions
Don't save this information for long periods of time because
the user might resize their terminal.
:return: Returns ``(width, height)``. If there's no terminal
to be found, we'll just return ``(79, 40)``.
"""
try:
call = fcntl.ioctl(self.termfd, termios.TIOCGWINSZ, "\000" * 8)
except IOError:
return (79, 40)
else:
height, width = struct.unpack("hhhh", call)[:2]
return (width, height) | python | def dimensions(self):
try:
call = fcntl.ioctl(self.termfd, termios.TIOCGWINSZ, "\000" * 8)
except IOError:
return (79, 40)
else:
height, width = struct.unpack("hhhh", call)[:2]
return (width, height) | [
"def",
"dimensions",
"(",
"self",
")",
":",
"try",
":",
"call",
"=",
"fcntl",
".",
"ioctl",
"(",
"self",
".",
"termfd",
",",
"termios",
".",
"TIOCGWINSZ",
",",
"\"\\000\"",
"*",
"8",
")",
"except",
"IOError",
":",
"return",
"(",
"79",
",",
"40",
")... | Returns terminal dimensions
Don't save this information for long periods of time because
the user might resize their terminal.
:return: Returns ``(width, height)``. If there's no terminal
to be found, we'll just return ``(79, 40)``. | [
"Returns",
"terminal",
"dimensions"
] | 19903cf0a980b82f5928c3bec1f28b6bdd3785bd | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/utils.py#L100-L115 |
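The `dimensions` row above queries the terminal via a raw `TIOCGWINSZ` ioctl (note the unpack order: the kernel struct gives rows first, then columns). A portable stand-in — a sketch that swaps the ioctl for the standard library's `shutil.get_terminal_size`, not the library's code — keeps the same `(width, height)` return shape and `(79, 40)` fallback:

```python
import shutil


def dimensions(fallback=(79, 40)):
    """Portable stand-in for TerminalInfo.dimensions.

    shutil.get_terminal_size consults the COLUMNS/LINES environment
    variables and the controlling terminal, using `fallback`
    (columns, lines) when neither is available -- mirroring the
    (79, 40) default in the quoted method.
    """
    size = shutil.get_terminal_size(fallback=fallback)
    return (size.columns, size.lines)
```

As the original docstring warns, the result should not be cached for long: the user may resize the terminal at any time.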
10,337 | progrium/skypipe | skypipe/client.py | sp_msg | def sp_msg(cmd, pipe=None, data=None):
"""Produces skypipe protocol multipart message"""
msg = [SP_HEADER, cmd]
if pipe is not None:
msg.append(pipe)
if data is not None:
msg.append(data)
return msg | python | def sp_msg(cmd, pipe=None, data=None):
msg = [SP_HEADER, cmd]
if pipe is not None:
msg.append(pipe)
if data is not None:
msg.append(data)
return msg | [
"def",
"sp_msg",
"(",
"cmd",
",",
"pipe",
"=",
"None",
",",
"data",
"=",
"None",
")",
":",
"msg",
"=",
"[",
"SP_HEADER",
",",
"cmd",
"]",
"if",
"pipe",
"is",
"not",
"None",
":",
"msg",
".",
"append",
"(",
"pipe",
")",
"if",
"data",
"is",
"not",... | Produces skypipe protocol multipart message | [
"Produces",
"skypipe",
"protocol",
"multipart",
"message"
] | 6162610a1876282ff1cc8eeca6c8669b8f605482 | https://github.com/progrium/skypipe/blob/6162610a1876282ff1cc8eeca6c8669b8f605482/skypipe/client.py#L42-L49 |
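The multipart framing built by `sp_msg` above can be exercised without a ZeroMQ socket. In this sketch the `SP_HEADER` and `SP_CMD_DATA` values are assumptions standing in for the real module-level constants in `skypipe.client`:

```python
SP_HEADER = b"SKYPIPE/0.1"  # assumed value; the real constant lives in skypipe.client
SP_CMD_DATA = b"DATA"       # assumed value


def sp_msg(cmd, pipe=None, data=None):
    """Build a skypipe multipart message: header, command, then
    optional pipe name and payload frames."""
    msg = [SP_HEADER, cmd]
    if pipe is not None:
        msg.append(pipe)
    if data is not None:
        msg.append(data)
    return msg
```

Each list element becomes one ZeroMQ frame when passed to `socket.send_multipart`, which is why the header and command travel as separate frames rather than one concatenated string.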
10,338 | progrium/skypipe | skypipe/client.py | stream_skypipe_output | def stream_skypipe_output(endpoint, name=None):
"""Generator for reading skypipe data"""
name = name or ''
socket = ctx.socket(zmq.DEALER)
socket.connect(endpoint)
try:
socket.send_multipart(sp_msg(SP_CMD_LISTEN, name))
while True:
msg = socket.recv_multipart()
try:
data = parse_skypipe_data_stream(msg, name)
if data:
yield data
except EOFError:
raise StopIteration()
finally:
socket.send_multipart(sp_msg(SP_CMD_UNLISTEN, name))
socket.close() | python | def stream_skypipe_output(endpoint, name=None):
name = name or ''
socket = ctx.socket(zmq.DEALER)
socket.connect(endpoint)
try:
socket.send_multipart(sp_msg(SP_CMD_LISTEN, name))
while True:
msg = socket.recv_multipart()
try:
data = parse_skypipe_data_stream(msg, name)
if data:
yield data
except EOFError:
raise StopIteration()
finally:
socket.send_multipart(sp_msg(SP_CMD_UNLISTEN, name))
socket.close() | [
"def",
"stream_skypipe_output",
"(",
"endpoint",
",",
"name",
"=",
"None",
")",
":",
"name",
"=",
"name",
"or",
"''",
"socket",
"=",
"ctx",
".",
"socket",
"(",
"zmq",
".",
"DEALER",
")",
"socket",
".",
"connect",
"(",
"endpoint",
")",
"try",
":",
"so... | Generator for reading skypipe data | [
"Generator",
"for",
"reading",
"skypipe",
"data"
] | 6162610a1876282ff1cc8eeca6c8669b8f605482 | https://github.com/progrium/skypipe/blob/6162610a1876282ff1cc8eeca6c8669b8f605482/skypipe/client.py#L75-L94 |
10,339 | progrium/skypipe | skypipe/client.py | parse_skypipe_data_stream | def parse_skypipe_data_stream(msg, for_pipe):
"""May return data from skypipe message or raises EOFError"""
header = str(msg.pop(0))
command = str(msg.pop(0))
pipe_name = str(msg.pop(0))
data = str(msg.pop(0))
if header != SP_HEADER: return
if pipe_name != for_pipe: return
if command != SP_CMD_DATA: return
if data == SP_DATA_EOF:
raise EOFError()
else:
return data | python | def parse_skypipe_data_stream(msg, for_pipe):
header = str(msg.pop(0))
command = str(msg.pop(0))
pipe_name = str(msg.pop(0))
data = str(msg.pop(0))
if header != SP_HEADER: return
if pipe_name != for_pipe: return
if command != SP_CMD_DATA: return
if data == SP_DATA_EOF:
raise EOFError()
else:
return data | [
"def",
"parse_skypipe_data_stream",
"(",
"msg",
",",
"for_pipe",
")",
":",
"header",
"=",
"str",
"(",
"msg",
".",
"pop",
"(",
"0",
")",
")",
"command",
"=",
"str",
"(",
"msg",
".",
"pop",
"(",
"0",
")",
")",
"pipe_name",
"=",
"str",
"(",
"msg",
"... | May return data from skypipe message or raises EOFError | [
"May",
"return",
"data",
"from",
"skypipe",
"message",
"or",
"raises",
"EOFError"
] | 6162610a1876282ff1cc8eeca6c8669b8f605482 | https://github.com/progrium/skypipe/blob/6162610a1876282ff1cc8eeca6c8669b8f605482/skypipe/client.py#L96-L108 |
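The `parse_skypipe_data_stream` record above validates three frames before returning data, and treats a sentinel data frame as end-of-stream. A sketch of that check with stub constants (the header and sentinel values are assumptions):

```python
SP_HEADER = "SKYPIPE/0.1"  # assumed value for illustration
SP_CMD_DATA = "DATA"       # assumed value for illustration
SP_DATA_EOF = ""           # assumed sentinel; the record only shows the name

# Mirror of the record's logic: wrong header, wrong pipe, or wrong command
# means the frame is ignored; the EOF sentinel raises EOFError.
def parse_data_frame(msg, for_pipe):
    header, command, pipe_name, data = msg[0], msg[1], msg[2], msg[3]
    if header != SP_HEADER or pipe_name != for_pipe or command != SP_CMD_DATA:
        return None
    if data == SP_DATA_EOF:
        raise EOFError()
    return data
```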
10,340 | progrium/skypipe | skypipe/client.py | skypipe_input_stream | def skypipe_input_stream(endpoint, name=None):
"""Returns a context manager for streaming data into skypipe"""
name = name or ''
class context_manager(object):
def __enter__(self):
self.socket = ctx.socket(zmq.DEALER)
self.socket.connect(endpoint)
return self
def send(self, data):
data_msg = sp_msg(SP_CMD_DATA, name, data)
self.socket.send_multipart(data_msg)
def __exit__(self, *args, **kwargs):
eof_msg = sp_msg(SP_CMD_DATA, name, SP_DATA_EOF)
self.socket.send_multipart(eof_msg)
self.socket.close()
return context_manager() | python | def skypipe_input_stream(endpoint, name=None):
name = name or ''
class context_manager(object):
def __enter__(self):
self.socket = ctx.socket(zmq.DEALER)
self.socket.connect(endpoint)
return self
def send(self, data):
data_msg = sp_msg(SP_CMD_DATA, name, data)
self.socket.send_multipart(data_msg)
def __exit__(self, *args, **kwargs):
eof_msg = sp_msg(SP_CMD_DATA, name, SP_DATA_EOF)
self.socket.send_multipart(eof_msg)
self.socket.close()
return context_manager() | [
"def",
"skypipe_input_stream",
"(",
"endpoint",
",",
"name",
"=",
"None",
")",
":",
"name",
"=",
"name",
"or",
"''",
"class",
"context_manager",
"(",
"object",
")",
":",
"def",
"__enter__",
"(",
"self",
")",
":",
"self",
".",
"socket",
"=",
"ctx",
".",... | Returns a context manager for streaming data into skypipe | [
"Returns",
"a",
"context",
"manager",
"for",
"streaming",
"data",
"into",
"skypipe"
] | 6162610a1876282ff1cc8eeca6c8669b8f605482 | https://github.com/progrium/skypipe/blob/6162610a1876282ff1cc8eeca6c8669b8f605482/skypipe/client.py#L110-L128 |
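The `skypipe_input_stream` record above wraps a socket in a context manager that sends an EOF frame and closes on exit. A sketch of that pattern with a stub socket so the behavior is visible without ZeroMQ (frame values here are illustrative stand-ins):

```python
class _StubSocket:
    """Records sent frames instead of talking to a real ZMQ endpoint."""
    def __init__(self):
        self.sent = []
        self.closed = False
    def send_multipart(self, msg):
        self.sent.append(msg)
    def close(self):
        self.closed = True

class input_stream:
    # Same contract as the record: send() emits data frames, and leaving
    # the with-block emits an EOF sentinel frame before closing.
    def __init__(self, socket, name=''):
        self.socket, self.name = socket, name
    def __enter__(self):
        return self
    def send(self, data):
        self.socket.send_multipart(['HDR', 'DATA', self.name, data])
    def __exit__(self, *exc):
        self.socket.send_multipart(['HDR', 'DATA', self.name, ''])  # EOF
        self.socket.close()

sock = _StubSocket()
with input_stream(sock, 'jobs') as s:
    s.send('hello\n')
```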
10,341 | progrium/skypipe | skypipe/client.py | stream_stdin_lines | def stream_stdin_lines():
"""Generator for unbuffered line reading from STDIN"""
stdin = os.fdopen(sys.stdin.fileno(), 'r', 0)
while True:
line = stdin.readline()
if line:
yield line
else:
break | python | def stream_stdin_lines():
stdin = os.fdopen(sys.stdin.fileno(), 'r', 0)
while True:
line = stdin.readline()
if line:
yield line
else:
break | [
"def",
"stream_stdin_lines",
"(",
")",
":",
"stdin",
"=",
"os",
".",
"fdopen",
"(",
"sys",
".",
"stdin",
".",
"fileno",
"(",
")",
",",
"'r'",
",",
"0",
")",
"while",
"True",
":",
"line",
"=",
"stdin",
".",
"readline",
"(",
")",
"if",
"line",
":",... | Generator for unbuffered line reading from STDIN | [
"Generator",
"for",
"unbuffered",
"line",
"reading",
"from",
"STDIN"
] | 6162610a1876282ff1cc8eeca6c8669b8f605482 | https://github.com/progrium/skypipe/blob/6162610a1876282ff1cc8eeca6c8669b8f605482/skypipe/client.py#L130-L138 |
10,342 | progrium/skypipe | skypipe/client.py | run | def run(endpoint, name=None):
"""Runs the skypipe client"""
try:
if os.isatty(0):
# output mode
for data in stream_skypipe_output(endpoint, name):
sys.stdout.write(data)
sys.stdout.flush()
else:
# input mode
with skypipe_input_stream(endpoint, name) as stream:
for line in stream_stdin_lines():
stream.send(line)
except KeyboardInterrupt:
pass | python | def run(endpoint, name=None):
try:
if os.isatty(0):
# output mode
for data in stream_skypipe_output(endpoint, name):
sys.stdout.write(data)
sys.stdout.flush()
else:
# input mode
with skypipe_input_stream(endpoint, name) as stream:
for line in stream_stdin_lines():
stream.send(line)
except KeyboardInterrupt:
pass | [
"def",
"run",
"(",
"endpoint",
",",
"name",
"=",
"None",
")",
":",
"try",
":",
"if",
"os",
".",
"isatty",
"(",
"0",
")",
":",
"# output mode",
"for",
"data",
"in",
"stream_skypipe_output",
"(",
"endpoint",
",",
"name",
")",
":",
"sys",
".",
"stdout",... | Runs the skypipe client | [
"Runs",
"the",
"skypipe",
"client"
] | 6162610a1876282ff1cc8eeca6c8669b8f605482 | https://github.com/progrium/skypipe/blob/6162610a1876282ff1cc8eeca6c8669b8f605482/skypipe/client.py#L140-L156 |
10,343 | thombashi/DateTimeRange | datetimerange/__init__.py | DateTimeRange.validate_time_inversion | def validate_time_inversion(self):
"""
Check time inversion of the time range.
:raises ValueError:
If |attr_start_datetime| is
bigger than |attr_end_datetime|.
:raises TypeError:
Any one of |attr_start_datetime| and |attr_end_datetime|,
or both is inappropriate datetime value.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
time_range = DateTimeRange("2015-03-22T10:10:00+0900", "2015-03-22T10:00:00+0900")
try:
time_range.validate_time_inversion()
except ValueError:
print "time inversion"
:Output:
.. parsed-literal::
time inversion
"""
if not self.is_set():
# for python2/3 compatibility
raise TypeError
if self.start_datetime > self.end_datetime:
raise ValueError(
"time inversion found: {:s} > {:s}".format(
str(self.start_datetime), str(self.end_datetime)
)
) | python | def validate_time_inversion(self):
if not self.is_set():
# for python2/3 compatibility
raise TypeError
if self.start_datetime > self.end_datetime:
raise ValueError(
"time inversion found: {:s} > {:s}".format(
str(self.start_datetime), str(self.end_datetime)
)
) | [
"def",
"validate_time_inversion",
"(",
"self",
")",
":",
"if",
"not",
"self",
".",
"is_set",
"(",
")",
":",
"# for python2/3 compatibility",
"raise",
"TypeError",
"if",
"self",
".",
"start_datetime",
">",
"self",
".",
"end_datetime",
":",
"raise",
"ValueError",
... | Check time inversion of the time range.
:raises ValueError:
If |attr_start_datetime| is
bigger than |attr_end_datetime|.
:raises TypeError:
Any one of |attr_start_datetime| and |attr_end_datetime|,
or both is inappropriate datetime value.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
time_range = DateTimeRange("2015-03-22T10:10:00+0900", "2015-03-22T10:00:00+0900")
try:
time_range.validate_time_inversion()
except ValueError:
print "time inversion"
:Output:
.. parsed-literal::
time inversion | [
"Check",
"time",
"inversion",
"of",
"the",
"time",
"range",
"."
] | 542a3b69ec256d28cc5d5469fd68207c1b509c9c | https://github.com/thombashi/DateTimeRange/blob/542a3b69ec256d28cc5d5469fd68207c1b509c9c/datetimerange/__init__.py#L239-L274 |
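The `validate_time_inversion` record above raises `TypeError` for unset endpoints and `ValueError` when start exceeds end. A sketch of that check as a plain function, independent of the `DateTimeRange` class:

```python
from datetime import datetime

# None stands in for the record's is_set() check on either endpoint.
def validate_time_inversion(start, end):
    if start is None or end is None:
        raise TypeError
    if start > end:
        raise ValueError(
            "time inversion found: {} > {}".format(start, end))
```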
10,344 | thombashi/DateTimeRange | datetimerange/__init__.py | DateTimeRange.set_start_datetime | def set_start_datetime(self, value, timezone=None):
"""
Set the start time of the time range.
:param value: |param_start_datetime|
:type value: |datetime|/|str|
:raises ValueError: If the value is invalid as a |datetime| value.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
time_range = DateTimeRange()
print(time_range)
time_range.set_start_datetime("2015-03-22T10:00:00+0900")
print(time_range)
:Output:
.. parsed-literal::
NaT - NaT
2015-03-22T10:00:00+0900 - NaT
"""
if value is None:
self.__start_datetime = None
return
try:
self.__start_datetime = typepy.type.DateTime(
value, strict_level=typepy.StrictLevel.MIN, timezone=timezone
).convert()
except typepy.TypeConversionError as e:
raise ValueError(e) | python | def set_start_datetime(self, value, timezone=None):
if value is None:
self.__start_datetime = None
return
try:
self.__start_datetime = typepy.type.DateTime(
value, strict_level=typepy.StrictLevel.MIN, timezone=timezone
).convert()
except typepy.TypeConversionError as e:
raise ValueError(e) | [
"def",
"set_start_datetime",
"(",
"self",
",",
"value",
",",
"timezone",
"=",
"None",
")",
":",
"if",
"value",
"is",
"None",
":",
"self",
".",
"__start_datetime",
"=",
"None",
"return",
"try",
":",
"self",
".",
"__start_datetime",
"=",
"typepy",
".",
"ty... | Set the start time of the time range.
:param value: |param_start_datetime|
:type value: |datetime|/|str|
:raises ValueError: If the value is invalid as a |datetime| value.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
time_range = DateTimeRange()
print(time_range)
time_range.set_start_datetime("2015-03-22T10:00:00+0900")
print(time_range)
:Output:
.. parsed-literal::
NaT - NaT
2015-03-22T10:00:00+0900 - NaT | [
"Set",
"the",
"start",
"time",
"of",
"the",
"time",
"range",
"."
] | 542a3b69ec256d28cc5d5469fd68207c1b509c9c | https://github.com/thombashi/DateTimeRange/blob/542a3b69ec256d28cc5d5469fd68207c1b509c9c/datetimerange/__init__.py#L408-L440 |
10,345 | thombashi/DateTimeRange | datetimerange/__init__.py | DateTimeRange.set_end_datetime | def set_end_datetime(self, value, timezone=None):
"""
Set the end time of the time range.
:param datetime.datetime/str value: |param_end_datetime|
:raises ValueError: If the value is invalid as a |datetime| value.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
time_range = DateTimeRange()
print(time_range)
time_range.set_end_datetime("2015-03-22T10:10:00+0900")
print(time_range)
:Output:
.. parsed-literal::
NaT - NaT
NaT - 2015-03-22T10:10:00+0900
"""
if value is None:
self.__end_datetime = None
return
try:
self.__end_datetime = typepy.type.DateTime(
value, strict_level=typepy.StrictLevel.MIN, timezone=timezone
).convert()
except typepy.TypeConversionError as e:
raise ValueError(e) | python | def set_end_datetime(self, value, timezone=None):
if value is None:
self.__end_datetime = None
return
try:
self.__end_datetime = typepy.type.DateTime(
value, strict_level=typepy.StrictLevel.MIN, timezone=timezone
).convert()
except typepy.TypeConversionError as e:
raise ValueError(e) | [
"def",
"set_end_datetime",
"(",
"self",
",",
"value",
",",
"timezone",
"=",
"None",
")",
":",
"if",
"value",
"is",
"None",
":",
"self",
".",
"__end_datetime",
"=",
"None",
"return",
"try",
":",
"self",
".",
"__end_datetime",
"=",
"typepy",
".",
"type",
... | Set the end time of the time range.
:param datetime.datetime/str value: |param_end_datetime|
:raises ValueError: If the value is invalid as a |datetime| value.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
time_range = DateTimeRange()
print(time_range)
time_range.set_end_datetime("2015-03-22T10:10:00+0900")
print(time_range)
:Output:
.. parsed-literal::
NaT - NaT
NaT - 2015-03-22T10:10:00+0900 | [
"Set",
"the",
"end",
"time",
"of",
"the",
"time",
"range",
"."
] | 542a3b69ec256d28cc5d5469fd68207c1b509c9c | https://github.com/thombashi/DateTimeRange/blob/542a3b69ec256d28cc5d5469fd68207c1b509c9c/datetimerange/__init__.py#L442-L473 |
10,346 | thombashi/DateTimeRange | datetimerange/__init__.py | DateTimeRange.intersection | def intersection(self, x):
"""
Newly set a time range that overlaps
the input and the current time range.
:param DateTimeRange x:
Value to compute intersection with the current time range.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
dtr0 = DateTimeRange("2015-03-22T10:00:00+0900", "2015-03-22T10:10:00+0900")
dtr1 = DateTimeRange("2015-03-22T10:05:00+0900", "2015-03-22T10:15:00+0900")
dtr0.intersection(dtr1)
:Output:
.. parsed-literal::
2015-03-22T10:05:00+0900 - 2015-03-22T10:10:00+0900
"""
self.validate_time_inversion()
x.validate_time_inversion()
if any([x.start_datetime in self, self.start_datetime in x]):
start_datetime = max(self.start_datetime, x.start_datetime)
end_datetime = min(self.end_datetime, x.end_datetime)
else:
start_datetime = None
end_datetime = None
return DateTimeRange(
start_datetime=start_datetime,
end_datetime=end_datetime,
start_time_format=self.start_time_format,
end_time_format=self.end_time_format,
) | python | def intersection(self, x):
self.validate_time_inversion()
x.validate_time_inversion()
if any([x.start_datetime in self, self.start_datetime in x]):
start_datetime = max(self.start_datetime, x.start_datetime)
end_datetime = min(self.end_datetime, x.end_datetime)
else:
start_datetime = None
end_datetime = None
return DateTimeRange(
start_datetime=start_datetime,
end_datetime=end_datetime,
start_time_format=self.start_time_format,
end_time_format=self.end_time_format,
) | [
"def",
"intersection",
"(",
"self",
",",
"x",
")",
":",
"self",
".",
"validate_time_inversion",
"(",
")",
"x",
".",
"validate_time_inversion",
"(",
")",
"if",
"any",
"(",
"[",
"x",
".",
"start_datetime",
"in",
"self",
",",
"self",
".",
"start_datetime",
... | Newly set a time range that overlaps
the input and the current time range.
:param DateTimeRange x:
Value to compute intersection with the current time range.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
dtr0 = DateTimeRange("2015-03-22T10:00:00+0900", "2015-03-22T10:10:00+0900")
dtr1 = DateTimeRange("2015-03-22T10:05:00+0900", "2015-03-22T10:15:00+0900")
dtr0.intersection(dtr1)
:Output:
.. parsed-literal::
2015-03-22T10:05:00+0900 - 2015-03-22T10:10:00+0900 | [
"Newly",
"set",
"a",
"time",
"range",
"that",
"overlaps",
"the",
"input",
"and",
"the",
"current",
"time",
"range",
"."
] | 542a3b69ec256d28cc5d5469fd68207c1b509c9c | https://github.com/thombashi/DateTimeRange/blob/542a3b69ec256d28cc5d5469fd68207c1b509c9c/datetimerange/__init__.py#L600-L636 |
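The `intersection` record above reduces to a classic interval rule: if either range contains the other's start, the overlap runs from the later start to the earlier end. A sketch stripped of the `DateTimeRange` class:

```python
from datetime import datetime

def intersect(a_start, a_end, b_start, b_end):
    # "in" on a DateTimeRange means start <= t <= end, spelled out here.
    if a_start <= b_start <= a_end or b_start <= a_start <= b_end:
        return max(a_start, b_start), min(a_end, b_end)
    return None  # disjoint ranges

# The docstring's sample: 10:00-10:10 vs 10:05-10:15 overlaps 10:05-10:10.
lo = intersect(
    datetime(2015, 3, 22, 10, 0), datetime(2015, 3, 22, 10, 10),
    datetime(2015, 3, 22, 10, 5), datetime(2015, 3, 22, 10, 15),
)
```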
10,347 | thombashi/DateTimeRange | datetimerange/__init__.py | DateTimeRange.encompass | def encompass(self, x):
"""
Newly set a time range that encompasses
the input and the current time range.
:param DateTimeRange x:
Value to compute encompass with the current time range.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
dtr0 = DateTimeRange("2015-03-22T10:00:00+0900", "2015-03-22T10:10:00+0900")
dtr1 = DateTimeRange("2015-03-22T10:05:00+0900", "2015-03-22T10:15:00+0900")
dtr0.encompass(dtr1)
:Output:
.. parsed-literal::
2015-03-22T10:00:00+0900 - 2015-03-22T10:15:00+0900
"""
self.validate_time_inversion()
x.validate_time_inversion()
return DateTimeRange(
start_datetime=min(self.start_datetime, x.start_datetime),
end_datetime=max(self.end_datetime, x.end_datetime),
start_time_format=self.start_time_format,
end_time_format=self.end_time_format,
) | python | def encompass(self, x):
self.validate_time_inversion()
x.validate_time_inversion()
return DateTimeRange(
start_datetime=min(self.start_datetime, x.start_datetime),
end_datetime=max(self.end_datetime, x.end_datetime),
start_time_format=self.start_time_format,
end_time_format=self.end_time_format,
) | [
"def",
"encompass",
"(",
"self",
",",
"x",
")",
":",
"self",
".",
"validate_time_inversion",
"(",
")",
"x",
".",
"validate_time_inversion",
"(",
")",
"return",
"DateTimeRange",
"(",
"start_datetime",
"=",
"min",
"(",
"self",
".",
"start_datetime",
",",
"x",
... | Newly set a time range that encompasses
the input and the current time range.
:param DateTimeRange x:
Value to compute encompass with the current time range.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
dtr0 = DateTimeRange("2015-03-22T10:00:00+0900", "2015-03-22T10:10:00+0900")
dtr1 = DateTimeRange("2015-03-22T10:05:00+0900", "2015-03-22T10:15:00+0900")
dtr0.encompass(dtr1)
:Output:
.. parsed-literal::
2015-03-22T10:00:00+0900 - 2015-03-22T10:15:00+0900 | [
"Newly",
"set",
"a",
"time",
"range",
"that",
"encompasses",
"the",
"input",
"and",
"the",
"current",
"time",
"range",
"."
] | 542a3b69ec256d28cc5d5469fd68207c1b509c9c | https://github.com/thombashi/DateTimeRange/blob/542a3b69ec256d28cc5d5469fd68207c1b509c9c/datetimerange/__init__.py#L638-L667 |
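The `encompass` record above is the dual of intersection: the smallest range covering both inputs is simply (earlier start, later end), and no overlap is required. A minimal sketch:

```python
from datetime import datetime

def encompass(a_start, a_end, b_start, b_end):
    return min(a_start, b_start), max(a_end, b_end)

# The docstring's sample: 10:00-10:10 and 10:05-10:15 encompass 10:00-10:15.
span = encompass(
    datetime(2015, 3, 22, 10, 0), datetime(2015, 3, 22, 10, 10),
    datetime(2015, 3, 22, 10, 5), datetime(2015, 3, 22, 10, 15),
)
```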
10,348 | progrium/skypipe | skypipe/cloud.py | wait_for | def wait_for(text, finish=None, io=None):
"""Displays dots until returned event is set"""
if finish:
finish.set()
time.sleep(0.1) # threads, sigh
if not io:
io = sys.stdout
finish = threading.Event()
io.write(text)
def _wait():
while not finish.is_set():
io.write('.')
io.flush()
finish.wait(timeout=1)
io.write('\n')
threading.Thread(target=_wait).start()
return finish | python | def wait_for(text, finish=None, io=None):
if finish:
finish.set()
time.sleep(0.1) # threads, sigh
if not io:
io = sys.stdout
finish = threading.Event()
io.write(text)
def _wait():
while not finish.is_set():
io.write('.')
io.flush()
finish.wait(timeout=1)
io.write('\n')
threading.Thread(target=_wait).start()
return finish | [
"def",
"wait_for",
"(",
"text",
",",
"finish",
"=",
"None",
",",
"io",
"=",
"None",
")",
":",
"if",
"finish",
":",
"finish",
".",
"set",
"(",
")",
"time",
".",
"sleep",
"(",
"0.1",
")",
"# threads, sigh",
"if",
"not",
"io",
":",
"io",
"=",
"sys",... | Displays dots until returned event is set | [
"Displays",
"dots",
"until",
"returned",
"event",
"is",
"set"
] | 6162610a1876282ff1cc8eeca6c8669b8f605482 | https://github.com/progrium/skypipe/blob/6162610a1876282ff1cc8eeca6c8669b8f605482/skypipe/cloud.py#L37-L53 |
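The `wait_for` record above runs a background thread that prints dots until an `Event` is set. A simplified sketch with a shortened poll interval and an in-memory stream so it finishes immediately (the two-value return is a change made here for testability):

```python
import threading
from io import StringIO

def wait_for(text, io):
    finish = threading.Event()
    io.write(text)
    def _wait():
        # Dots accumulate until the caller sets the event.
        while not finish.is_set():
            io.write('.')
            finish.wait(timeout=0.01)
        io.write('\n')
    worker = threading.Thread(target=_wait)
    worker.start()
    return finish, worker

buf = StringIO()
done, worker = wait_for("Working", buf)
done.set()
worker.join()
```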
10,349 | progrium/skypipe | skypipe/cloud.py | lookup_endpoint | def lookup_endpoint(cli):
"""Looks up the application endpoint from dotcloud"""
url = '/applications/{0}/environment'.format(APPNAME)
environ = cli.user.get(url).item
port = environ['DOTCLOUD_SATELLITE_ZMQ_PORT']
host = socket.gethostbyname(environ['DOTCLOUD_SATELLITE_ZMQ_HOST'])
return "tcp://{0}:{1}".format(host, port) | python | def lookup_endpoint(cli):
url = '/applications/{0}/environment'.format(APPNAME)
environ = cli.user.get(url).item
port = environ['DOTCLOUD_SATELLITE_ZMQ_PORT']
host = socket.gethostbyname(environ['DOTCLOUD_SATELLITE_ZMQ_HOST'])
return "tcp://{0}:{1}".format(host, port) | [
"def",
"lookup_endpoint",
"(",
"cli",
")",
":",
"url",
"=",
"'/applications/{0}/environment'",
".",
"format",
"(",
"APPNAME",
")",
"environ",
"=",
"cli",
".",
"user",
".",
"get",
"(",
"url",
")",
".",
"item",
"port",
"=",
"environ",
"[",
"'DOTCLOUD_SATELLI... | Looks up the application endpoint from dotcloud | [
"Looks",
"up",
"the",
"application",
"endpoint",
"from",
"dotcloud"
] | 6162610a1876282ff1cc8eeca6c8669b8f605482 | https://github.com/progrium/skypipe/blob/6162610a1876282ff1cc8eeca6c8669b8f605482/skypipe/cloud.py#L56-L62 |
10,350 | progrium/skypipe | skypipe/cloud.py | setup | def setup(cli):
"""Everything to make skypipe ready to use"""
if not cli.global_config.loaded:
setup_dotcloud_account(cli)
discover_satellite(cli)
cli.success("Skypipe is ready for action") | python | def setup(cli):
if not cli.global_config.loaded:
setup_dotcloud_account(cli)
discover_satellite(cli)
cli.success("Skypipe is ready for action") | [
"def",
"setup",
"(",
"cli",
")",
":",
"if",
"not",
"cli",
".",
"global_config",
".",
"loaded",
":",
"setup_dotcloud_account",
"(",
"cli",
")",
"discover_satellite",
"(",
"cli",
")",
"cli",
".",
"success",
"(",
"\"Skypipe is ready for action\"",
")"
] | Everything to make skypipe ready to use | [
"Everything",
"to",
"make",
"skypipe",
"ready",
"to",
"use"
] | 6162610a1876282ff1cc8eeca6c8669b8f605482 | https://github.com/progrium/skypipe/blob/6162610a1876282ff1cc8eeca6c8669b8f605482/skypipe/cloud.py#L86-L91 |
10,351 | progrium/skypipe | skypipe/cloud.py | discover_satellite | def discover_satellite(cli, deploy=True, timeout=5):
"""Looks to make sure a satellite exists, returns endpoint
First makes sure we have dotcloud account credentials. Then it looks
up the environment for the satellite app. This will contain host and
port to construct an endpoint. However, if app doesn't exist, or
endpoint does not check out, we call `launch_satellite` to deploy,
which calls `discover_satellite` again when finished. Ultimately we
return a working endpoint. If deploy is False it will not try to
deploy.
"""
if not cli.global_config.loaded:
cli.die("Please setup skypipe by running `skypipe --setup`")
try:
endpoint = lookup_endpoint(cli)
ok = client.check_skypipe_endpoint(endpoint, timeout)
if ok:
return endpoint
else:
return launch_satellite(cli) if deploy else None
except (RESTAPIError, KeyError):
return launch_satellite(cli) if deploy else None | python | def discover_satellite(cli, deploy=True, timeout=5):
if not cli.global_config.loaded:
cli.die("Please setup skypipe by running `skypipe --setup`")
try:
endpoint = lookup_endpoint(cli)
ok = client.check_skypipe_endpoint(endpoint, timeout)
if ok:
return endpoint
else:
return launch_satellite(cli) if deploy else None
except (RESTAPIError, KeyError):
return launch_satellite(cli) if deploy else None | [
"def",
"discover_satellite",
"(",
"cli",
",",
"deploy",
"=",
"True",
",",
"timeout",
"=",
"5",
")",
":",
"if",
"not",
"cli",
".",
"global_config",
".",
"loaded",
":",
"cli",
".",
"die",
"(",
"\"Please setup skypipe by running `skypipe --setup`\"",
")",
"try",
... | Looks to make sure a satellite exists, returns endpoint
First makes sure we have dotcloud account credentials. Then it looks
up the environment for the satellite app. This will contain host and
port to construct an endpoint. However, if app doesn't exist, or
endpoint does not check out, we call `launch_satellite` to deploy,
which calls `discover_satellite` again when finished. Ultimately we
return a working endpoint. If deploy is False it will not try to
deploy. | [
"Looks",
"to",
"make",
"sure",
"a",
"satellite",
"exists",
"returns",
"endpoint"
] | 6162610a1876282ff1cc8eeca6c8669b8f605482 | https://github.com/progrium/skypipe/blob/6162610a1876282ff1cc8eeca6c8669b8f605482/skypipe/cloud.py#L94-L116 |
10,352 | progrium/skypipe | skypipe/cloud.py | launch_satellite | def launch_satellite(cli):
"""Deploys a new satellite app over any existing app"""
cli.info("Launching skypipe satellite:")
finish = wait_for(" Pushing to dotCloud")
# destroy any existing satellite
destroy_satellite(cli)
# create new satellite app
url = '/applications'
try:
cli.user.post(url, {
'name': APPNAME,
'flavor': 'sandbox'
})
except RESTAPIError as e:
if e.code == 409:
cli.die('Application "{0}" already exists.'.format(APPNAME))
else:
cli.die('Creating application "{0}" failed: {1}'.format(APPNAME, e))
class args: application = APPNAME
#cli._connect(args)
# push satellite code
protocol = 'rsync'
url = '/applications/{0}/push-endpoints{1}'.format(APPNAME, '')
endpoint = cli._select_endpoint(cli.user.get(url).items, protocol)
class args: path = satellite_path
cli.push_with_rsync(args, endpoint)
# tell dotcloud to deploy, then wait for it to finish
revision = None
clean = False
url = '/applications/{0}/deployments'.format(APPNAME)
response = cli.user.post(url, {'revision': revision, 'clean': clean})
deploy_trace_id = response.trace_id
deploy_id = response.item['deploy_id']
original_stdout = sys.stdout
finish = wait_for(" Waiting for deployment", finish, original_stdout)
try:
sys.stdout = StringIO()
res = cli._stream_deploy_logs(APPNAME, deploy_id,
deploy_trace_id=deploy_trace_id, follow=True)
if res != 0:
return res
except KeyboardInterrupt:
cli.error('You\'ve closed your log stream with Ctrl-C, ' \
'but the deployment is still running in the background.')
cli.error('If you aborted because of an error ' \
'(e.g. the deployment got stuck), please e-mail\n' \
'support@dotcloud.com and mention this trace ID: {0}'
.format(deploy_trace_id))
cli.error('If you want to continue following your deployment, ' \
'try:\n{0}'.format(
cli._fmt_deploy_logs_command(deploy_id)))
cli.die()
except RuntimeError:
# workaround for a bug in the current dotcloud client code
pass
finally:
sys.stdout = original_stdout
finish = wait_for(" Satellite coming online", finish)
endpoint = lookup_endpoint(cli)
ok = client.check_skypipe_endpoint(endpoint, 120)
finish.set()
time.sleep(0.1) # sigh, threads
if ok:
return endpoint
else:
cli.die("Satellite failed to come online") | python | def launch_satellite(cli):
cli.info("Launching skypipe satellite:")
finish = wait_for(" Pushing to dotCloud")
# destroy any existing satellite
destroy_satellite(cli)
# create new satellite app
url = '/applications'
try:
cli.user.post(url, {
'name': APPNAME,
'flavor': 'sandbox'
})
except RESTAPIError as e:
if e.code == 409:
cli.die('Application "{0}" already exists.'.format(APPNAME))
else:
cli.die('Creating application "{0}" failed: {1}'.format(APPNAME, e))
class args: application = APPNAME
#cli._connect(args)
# push satellite code
protocol = 'rsync'
url = '/applications/{0}/push-endpoints{1}'.format(APPNAME, '')
endpoint = cli._select_endpoint(cli.user.get(url).items, protocol)
class args: path = satellite_path
cli.push_with_rsync(args, endpoint)
# tell dotcloud to deploy, then wait for it to finish
revision = None
clean = False
url = '/applications/{0}/deployments'.format(APPNAME)
response = cli.user.post(url, {'revision': revision, 'clean': clean})
deploy_trace_id = response.trace_id
deploy_id = response.item['deploy_id']
original_stdout = sys.stdout
finish = wait_for(" Waiting for deployment", finish, original_stdout)
try:
sys.stdout = StringIO()
res = cli._stream_deploy_logs(APPNAME, deploy_id,
deploy_trace_id=deploy_trace_id, follow=True)
if res != 0:
return res
except KeyboardInterrupt:
cli.error('You\'ve closed your log stream with Ctrl-C, ' \
'but the deployment is still running in the background.')
cli.error('If you aborted because of an error ' \
'(e.g. the deployment got stuck), please e-mail\n' \
'support@dotcloud.com and mention this trace ID: {0}'
.format(deploy_trace_id))
cli.error('If you want to continue following your deployment, ' \
'try:\n{0}'.format(
cli._fmt_deploy_logs_command(deploy_id)))
cli.die()
except RuntimeError:
# workaround for a bug in the current dotcloud client code
pass
finally:
sys.stdout = original_stdout
finish = wait_for(" Satellite coming online", finish)
endpoint = lookup_endpoint(cli)
ok = client.check_skypipe_endpoint(endpoint, 120)
finish.set()
time.sleep(0.1) # sigh, threads
if ok:
return endpoint
else:
cli.die("Satellite failed to come online") | [
"def",
"launch_satellite",
"(",
"cli",
")",
":",
"cli",
".",
"info",
"(",
"\"Launching skypipe satellite:\"",
")",
"finish",
"=",
"wait_for",
"(",
"\" Pushing to dotCloud\"",
")",
"# destroy any existing satellite",
"destroy_satellite",
"(",
"cli",
")",
"# create new... | Deploys a new satellite app over any existing app | [
"Deploys",
"a",
"new",
"satellite",
"app",
"over",
"any",
"existing",
"app"
] | 6162610a1876282ff1cc8eeca6c8669b8f605482 | https://github.com/progrium/skypipe/blob/6162610a1876282ff1cc8eeca6c8669b8f605482/skypipe/cloud.py#L125-L204 |
10,353 | opengisch/pum | pum/core/dumper.py | Dumper.pg_backup | def pg_backup(self, pg_dump_exe='pg_dump', exclude_schema=None):
"""Call the pg_dump command to create a db backup
Parameters
----------
pg_dump_exe: str
the pg_dump command path
exclude_schema: str[]
list of schemas to be skipped
"""
command = [
pg_dump_exe, '-Fc', '-f', self.file,
'service={}'.format(self.pg_service)
]
if exclude_schema:
command.append(' '.join("--exclude-schema={}".format(schema) for schema in exclude_schema))
subprocess.check_output(command, stderr=subprocess.STDOUT) | python | def pg_backup(self, pg_dump_exe='pg_dump', exclude_schema=None):
command = [
pg_dump_exe, '-Fc', '-f', self.file,
'service={}'.format(self.pg_service)
]
if exclude_schema:
command.append(' '.join("--exclude-schema={}".format(schema) for schema in exclude_schema))
subprocess.check_output(command, stderr=subprocess.STDOUT) | [
"def",
"pg_backup",
"(",
"self",
",",
"pg_dump_exe",
"=",
"'pg_dump'",
",",
"exclude_schema",
"=",
"None",
")",
":",
"command",
"=",
"[",
"pg_dump_exe",
",",
"'-Fc'",
",",
"'-f'",
",",
"self",
".",
"file",
",",
"'service={}'",
".",
"format",
"(",
"self",... | Call the pg_dump command to create a db backup
Parameters
----------
pg_dump_exe: str
the pg_dump command path
exclude_schema: str[]
list of schemas to be skipped | [
"Call",
"the",
"pg_dump",
"command",
"to",
"create",
"a",
"db",
"backup"
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/dumper.py#L17-L35 |
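The `pg_backup` record above mostly builds an argv list for `subprocess`. A sketch of that construction as a pure function, so it can be inspected without running `pg_dump`:

```python
def build_pg_dump_command(pg_service, out_file, exclude_schema=None,
                          pg_dump_exe='pg_dump'):
    command = [pg_dump_exe, '-Fc', '-f', out_file,
               'service={}'.format(pg_service)]
    if exclude_schema:
        # NOTE: the record space-joins all --exclude-schema flags into one
        # string, which subprocess passes as a single argv entry; one flag
        # per list element, as sketched here, is the safer form.
        command.extend('--exclude-schema={}'.format(s)
                       for s in exclude_schema)
    return command
```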
10,354 | opengisch/pum | pum/core/dumper.py | Dumper.pg_restore | def pg_restore(self, pg_restore_exe='pg_restore', exclude_schema=None):
"""Call the pg_restore command to restore a db backup
Parameters
----------
pg_restore_exe: str
the pg_restore command path
"""
command = [
pg_restore_exe, '-d',
'service={}'.format(self.pg_service),
'--no-owner'
]
if exclude_schema:
exclude_schema_available = False
try:
pg_version = subprocess.check_output(['pg_restore','--version'])
pg_version = str(pg_version).replace('\\n', '').replace("'", '').split(' ')[-1]
exclude_schema_available = LooseVersion(pg_version) >= LooseVersion("10.0")
except subprocess.CalledProcessError as e:
print("*** Could not get pg_restore version:\n", e.stderr)
if exclude_schema_available:
command.append(' '.join("--exclude-schema={}".format(schema) for schema in exclude_schema))
command.append(self.file)
try:
subprocess.check_output(command)
except subprocess.CalledProcessError as e:
print("*** pg_restore failed:\n", command, '\n', e.stderr) | python | def pg_restore(self, pg_restore_exe='pg_restore', exclude_schema=None):
command = [
pg_restore_exe, '-d',
'service={}'.format(self.pg_service),
'--no-owner'
]
if exclude_schema:
exclude_schema_available = False
try:
pg_version = subprocess.check_output(['pg_restore','--version'])
pg_version = str(pg_version).replace('\\n', '').replace("'", '').split(' ')[-1]
exclude_schema_available = LooseVersion(pg_version) >= LooseVersion("10.0")
except subprocess.CalledProcessError as e:
print("*** Could not get pg_restore version:\n", e.stderr)
if exclude_schema_available:
command.append(' '.join("--exclude-schema={}".format(schema) for schema in exclude_schema))
command.append(self.file)
try:
subprocess.check_output(command)
except subprocess.CalledProcessError as e:
print("*** pg_restore failed:\n", command, '\n', e.stderr) | [
"def",
"pg_restore",
"(",
"self",
",",
"pg_restore_exe",
"=",
"'pg_restore'",
",",
"exclude_schema",
"=",
"None",
")",
":",
"command",
"=",
"[",
"pg_restore_exe",
",",
"'-d'",
",",
"'service={}'",
".",
"format",
"(",
"self",
".",
"pg_service",
")",
",",
"'... | Call the pg_restore command to restore a db backup
Parameters
----------
pg_restore_exe: str
the pg_restore command path | [
"Call",
"the",
"pg_restore",
"command",
"to",
"restore",
"a",
"db",
"backup"
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/dumper.py#L37-L67 |
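The `pg_restore` record above enables `--exclude-schema` only when the detected `pg_restore` version is at least 10.0 (using the now-deprecated `LooseVersion` from `distutils`). A dependency-free sketch of that comparison — the version strings below are illustrative, not real `pg_restore` output:

```python
def parse_version(version):
    # "10.4" -> (10, 4); non-numeric fragments are ignored
    return tuple(int(p) for p in version.split(".") if p.isdigit())

def supports_exclude_schema(version, minimum="10.0"):
    # pg_restore gained --exclude-schema with PostgreSQL 10
    return parse_version(version) >= parse_version(minimum)
```

Tuple comparison gives the same ordering `LooseVersion` provides for plain dotted versions; strings with letter suffixes would need extra handling.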
10,355 | opengisch/pum | pum/core/upgrader.py | Upgrader.__get_delta_files | def __get_delta_files(self):
"""Search for delta files and return a dict of Delta objects, keyed by directory names."""
files = [(d, f) for d in self.dirs for f in listdir(d) if isfile(join(d, f))]
deltas = OrderedDict()
for d, f in files:
file_ = join(d, f)
if not Delta.is_valid_delta_name(file_):
continue
delta = Delta(file_)
if d not in deltas:
deltas[d] = []
deltas[d].append(delta)
# sort delta objects in each bucket
for d in deltas:
deltas[d].sort(key=lambda x: (x.get_version(), x.get_priority(), x.get_name()))
return deltas | python | def __get_delta_files(self):
files = [(d, f) for d in self.dirs for f in listdir(d) if isfile(join(d, f))]
deltas = OrderedDict()
for d, f in files:
file_ = join(d, f)
if not Delta.is_valid_delta_name(file_):
continue
delta = Delta(file_)
if d not in deltas:
deltas[d] = []
deltas[d].append(delta)
# sort delta objects in each bucket
for d in deltas:
deltas[d].sort(key=lambda x: (x.get_version(), x.get_priority(), x.get_name()))
return deltas | [
"def",
"__get_delta_files",
"(",
"self",
")",
":",
"files",
"=",
"[",
"(",
"d",
",",
"f",
")",
"for",
"d",
"in",
"self",
".",
"dirs",
"for",
"f",
"in",
"listdir",
"(",
"d",
")",
"if",
"isfile",
"(",
"join",
"(",
"d",
",",
"f",
")",
")",
"]",
... | Search for delta files and return a dict of Delta objects, keyed by directory names. | [
"Search",
"for",
"delta",
"files",
"and",
"return",
"a",
"dict",
"of",
"Delta",
"objects",
"keyed",
"by",
"directory",
"names",
"."
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/upgrader.py#L121-L142 |
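`__get_delta_files` buckets deltas per directory in an `OrderedDict` and sorts each bucket by `(version, priority, name)`. A minimal stand-in that uses plain tuples as the sort keys (the directory names and key values are made up):

```python
from collections import OrderedDict

def bucket_and_sort(entries):
    # entries: iterable of (directory, sort_key) pairs
    buckets = OrderedDict()
    for directory, key in entries:
        buckets.setdefault(directory, []).append(key)
    for directory in buckets:
        buckets[directory].sort()  # tuples sort element-wise, like the key lambda
    return buckets

buckets = bucket_and_sort([
    ("deltas_a", ("1.0.1", 0, "delta_1.0.1.sql")),
    ("deltas_b", ("0.9.0", 0, "delta_0.9.0.sql")),
    ("deltas_a", ("1.0.0", 1, "delta_1.0.0.sql")),
])
```

Note that sorting raw version strings is only safe while every component stays single-digit ("1.0.10" would sort before "1.0.9"); the real code sorts whatever `get_version()` returns.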
10,356 | opengisch/pum | pum/core/upgrader.py | Upgrader.__run_delta_sql | def __run_delta_sql(self, delta):
"""Execute the delta sql file on the database"""
self.__run_sql_file(delta.get_file())
self.__update_upgrades_table(delta) | python | def __run_delta_sql(self, delta):
self.__run_sql_file(delta.get_file())
self.__update_upgrades_table(delta) | [
"def",
"__run_delta_sql",
"(",
"self",
",",
"delta",
")",
":",
"self",
".",
"__run_sql_file",
"(",
"delta",
".",
"get_file",
"(",
")",
")",
"self",
".",
"__update_upgrades_table",
"(",
"delta",
")"
] | Execute the delta sql file on the database | [
"Execute",
"the",
"delta",
"sql",
"file",
"on",
"the",
"database"
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/upgrader.py#L144-L148 |
10,357 | opengisch/pum | pum/core/upgrader.py | Upgrader.__run_delta_py | def __run_delta_py(self, delta):
"""Execute the delta py file"""
self.__run_py_file(delta.get_file(), delta.get_name())
self.__update_upgrades_table(delta) | python | def __run_delta_py(self, delta):
self.__run_py_file(delta.get_file(), delta.get_name())
self.__update_upgrades_table(delta) | [
"def",
"__run_delta_py",
"(",
"self",
",",
"delta",
")",
":",
"self",
".",
"__run_py_file",
"(",
"delta",
".",
"get_file",
"(",
")",
",",
"delta",
".",
"get_name",
"(",
")",
")",
"self",
".",
"__update_upgrades_table",
"(",
"delta",
")"
] | Execute the delta py file | [
"Execute",
"the",
"delta",
"py",
"file"
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/upgrader.py#L150-L154 |
10,358 | opengisch/pum | pum/core/upgrader.py | Upgrader.__run_pre_all | def __run_pre_all(self):
"""Execute the pre-all.py and pre-all.sql files if they exist"""
# if the list of delta dirs is [delta1, delta2] the pre scripts of delta2 are
# executed before the pre scripts of delta1
for d in reversed(self.dirs):
pre_all_py_path = os.path.join(d, 'pre-all.py')
if os.path.isfile(pre_all_py_path):
print(' Applying pre-all.py...', end=' ')
self.__run_py_file(pre_all_py_path, 'pre-all')
print('OK')
pre_all_sql_path = os.path.join(d, 'pre-all.sql')
if os.path.isfile(pre_all_sql_path):
print(' Applying pre-all.sql...', end=' ')
self.__run_sql_file(pre_all_sql_path)
print('OK') | python | def __run_pre_all(self):
# if the list of delta dirs is [delta1, delta2] the pre scripts of delta2 are
# executed before the pre scripts of delta1
for d in reversed(self.dirs):
pre_all_py_path = os.path.join(d, 'pre-all.py')
if os.path.isfile(pre_all_py_path):
print(' Applying pre-all.py...', end=' ')
self.__run_py_file(pre_all_py_path, 'pre-all')
print('OK')
pre_all_sql_path = os.path.join(d, 'pre-all.sql')
if os.path.isfile(pre_all_sql_path):
print(' Applying pre-all.sql...', end=' ')
self.__run_sql_file(pre_all_sql_path)
print('OK') | [
"def",
"__run_pre_all",
"(",
"self",
")",
":",
"# if the list of delta dirs is [delta1, delta2] the pre scripts of delta2 are",
"# executed before the pre scripts of delta1",
"for",
"d",
"in",
"reversed",
"(",
"self",
".",
"dirs",
")",
":",
"pre_all_py_path",
"=",
"os",
"."... | Execute the pre-all.py and pre-all.sql files if they exist | [
"Execute",
"the",
"pre",
"-",
"all",
".",
"py",
"and",
"pre",
"-",
"all",
".",
"sql",
"files",
"if",
"they",
"exist"
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/upgrader.py#L156-L173 |
10,359 | opengisch/pum | pum/core/upgrader.py | Upgrader.__run_post_all | def __run_post_all(self):
"""Execute the post-all.py and post-all.sql files if they exist"""
# if the list of delta dirs is [delta1, delta2] the post scripts of delta1 are
# executed before the post scripts of delta2
for d in self.dirs:
post_all_py_path = os.path.join(d, 'post-all.py')
if os.path.isfile(post_all_py_path):
print(' Applying post-all.py...', end=' ')
self.__run_py_file(post_all_py_path, 'post-all')
print('OK')
post_all_sql_path = os.path.join(d, 'post-all.sql')
if os.path.isfile(post_all_sql_path):
print(' Applying post-all.sql...', end=' ')
self.__run_sql_file(post_all_sql_path)
print('OK') | python | def __run_post_all(self):
# if the list of delta dirs is [delta1, delta2] the post scripts of delta1 are
# executed before the post scripts of delta2
for d in self.dirs:
post_all_py_path = os.path.join(d, 'post-all.py')
if os.path.isfile(post_all_py_path):
print(' Applying post-all.py...', end=' ')
self.__run_py_file(post_all_py_path, 'post-all')
print('OK')
post_all_sql_path = os.path.join(d, 'post-all.sql')
if os.path.isfile(post_all_sql_path):
print(' Applying post-all.sql...', end=' ')
self.__run_sql_file(post_all_sql_path)
print('OK') | [
"def",
"__run_post_all",
"(",
"self",
")",
":",
"# if the list of delta dirs is [delta1, delta2] the post scripts of delta1 are",
"# executed before the post scripts of delta2",
"for",
"d",
"in",
"self",
".",
"dirs",
":",
"post_all_py_path",
"=",
"os",
".",
"path",
".",
"jo... | Execute the post-all.py and post-all.sql files if they exist | [
"Execute",
"the",
"post",
"-",
"all",
".",
"py",
"and",
"post",
"-",
"all",
".",
"sql",
"files",
"if",
"they",
"exist"
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/upgrader.py#L175-L192 |
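Across the two records above, pre-all scripts run over `reversed(self.dirs)` while post-all scripts run in forward order, so the first delta directory's hooks wrap the others. A tiny sketch of the resulting execution order (directory names are illustrative):

```python
dirs = ["delta1", "delta2"]

# pre scripts: delta2 before delta1; post scripts: delta1 before delta2
pre_order = ["{}/pre-all".format(d) for d in reversed(dirs)]
post_order = ["{}/post-all".format(d) for d in dirs]
```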
10,360 | opengisch/pum | pum/core/upgrader.py | Upgrader.__run_sql_file | def __run_sql_file(self, filepath):
"""Execute the sql file at the passed path
Parameters
----------
filepath: str
the path of the file to execute"""
with open(filepath, 'r') as delta_file:
sql = delta_file.read()
if self.variables:
self.cursor.execute(sql, self.variables)
else:
self.cursor.execute(sql)
self.connection.commit() | python | def __run_sql_file(self, filepath):
with open(filepath, 'r') as delta_file:
sql = delta_file.read()
if self.variables:
self.cursor.execute(sql, self.variables)
else:
self.cursor.execute(sql)
self.connection.commit() | [
"def",
"__run_sql_file",
"(",
"self",
",",
"filepath",
")",
":",
"with",
"open",
"(",
"filepath",
",",
"'r'",
")",
"as",
"delta_file",
":",
"sql",
"=",
"delta_file",
".",
"read",
"(",
")",
"if",
"self",
".",
"variables",
":",
"self",
".",
"cursor",
"... | Execute the sql file at the passed path
Parameters
----------
filepath: str
the path of the file to execute | [
"Execute",
"the",
"sql",
"file",
"at",
"the",
"passed",
"path"
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/upgrader.py#L194-L208 |
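`__run_sql_file` reads a file and hands the text to `cursor.execute`, optionally with a variables mapping. pum targets PostgreSQL through psycopg2; the sketch below substitutes `sqlite3` (and its `:name` placeholder style) so it runs standalone:

```python
import os
import sqlite3
import tempfile

with tempfile.TemporaryDirectory() as workdir:
    sql_path = os.path.join(workdir, "delta_0.0.1.sql")
    with open(sql_path, "w") as f:
        f.write("CREATE TABLE upgrades (version TEXT);")

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    with open(sql_path) as delta_file:
        cur.execute(delta_file.read())  # sqlite3 runs a single statement here
    cur.execute("INSERT INTO upgrades VALUES (:version)", {"version": "0.0.1"})
    conn.commit()
    rows = cur.execute("SELECT version FROM upgrades").fetchall()
```

psycopg2 uses `%(name)s` placeholders and accepts multi-statement scripts in one `execute`; sqlite3's `execute` does not, which is the main liberty taken here.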
10,361 | opengisch/pum | pum/core/upgrader.py | Upgrader.__run_py_file | def __run_py_file(self, filepath, module_name):
"""Execute the python file at the passed path
Parameters
----------
filepath: str
the path of the file to execute
module_name: str
the name of the python module
"""
# Import the module
spec = importlib.util.spec_from_file_location(module_name, filepath)
delta_py = importlib.util.module_from_spec(spec)
spec.loader.exec_module(delta_py)
# Get the python file's directory path
# Note: we add a separator for backward compatibility, as existing DeltaPy subclasses
# may assume that delta_dir ends with a separator
dir_ = dirname(filepath) + os.sep
# Search for subclasses of DeltaPy
for name in dir(delta_py):
obj = getattr(delta_py, name)
if inspect.isclass(obj) and not obj == DeltaPy and issubclass(
obj, DeltaPy):
delta_py_inst = obj(
self.current_db_version(), dir_, self.dirs, self.pg_service,
self.upgrades_table, variables=self.variables)
delta_py_inst.run() | python | def __run_py_file(self, filepath, module_name):
# Import the module
spec = importlib.util.spec_from_file_location(module_name, filepath)
delta_py = importlib.util.module_from_spec(spec)
spec.loader.exec_module(delta_py)
# Get the python file's directory path
# Note: we add a separator for backward compatibility, as existing DeltaPy subclasses
# may assume that delta_dir ends with a separator
dir_ = dirname(filepath) + os.sep
# Search for subclasses of DeltaPy
for name in dir(delta_py):
obj = getattr(delta_py, name)
if inspect.isclass(obj) and not obj == DeltaPy and issubclass(
obj, DeltaPy):
delta_py_inst = obj(
self.current_db_version(), dir_, self.dirs, self.pg_service,
self.upgrades_table, variables=self.variables)
delta_py_inst.run() | [
"def",
"__run_py_file",
"(",
"self",
",",
"filepath",
",",
"module_name",
")",
":",
"# Import the module",
"spec",
"=",
"importlib",
".",
"util",
".",
"spec_from_file_location",
"(",
"module_name",
",",
"filepath",
")",
"delta_py",
"=",
"importlib",
".",
"util",... | Execute the python file at the passed path
Parameters
----------
filepath: str
the path of the file to execute
module_name: str
the name of the python module | [
"Execute",
"the",
"python",
"file",
"at",
"the",
"passed",
"path"
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/upgrader.py#L210-L240 |
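`__run_py_file` imports a delta script by file path with `importlib.util.spec_from_file_location`, then scans it for `DeltaPy` subclasses. The self-contained sketch below shows just the load-by-path step, using a throwaway module written to a temporary directory:

```python
import importlib.util
import os
import tempfile

def load_module_from_path(filepath, module_name):
    spec = importlib.util.spec_from_file_location(module_name, filepath)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs the file's top-level code
    return module

with tempfile.TemporaryDirectory() as workdir:
    path = os.path.join(workdir, "delta_demo.py")
    with open(path, "w") as f:
        f.write("def answer():\n    return 42\n")
    mod = load_module_from_path(path, "delta_demo")
    result = mod.answer()
```

After loading, the original iterates `dir(module)` with `inspect.isclass`/`issubclass` to find the delta classes to instantiate.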
10,362 | opengisch/pum | pum/core/upgrader.py | Delta.is_valid_delta_name | def is_valid_delta_name(file):
"""Return whether a file has a valid name
A delta file name can be:
- pre-all.py
- pre-all.sql
- delta_x.x.x_ddmmyyyy.pre.py
- delta_x.x.x_ddmmyyyy.pre.sql
- delta_x.x.x_ddmmyyyy.py
- delta_x.x.x_ddmmyyyy.sql
- delta_x.x.x_ddmmyyyy.post.py
- delta_x.x.x_ddmmyyyy.post.sql
- post-all.py
- post-all.sql
where x.x.x is the version number and _ddmmyyyy is an optional
description, usually representing the date of the delta file
"""
filename = basename(file)
pattern = re.compile(Delta.FILENAME_PATTERN)
if re.match(pattern, filename):
return True
return False | python | def is_valid_delta_name(file):
filename = basename(file)
pattern = re.compile(Delta.FILENAME_PATTERN)
if re.match(pattern, filename):
return True
return False | [
"def",
"is_valid_delta_name",
"(",
"file",
")",
":",
"filename",
"=",
"basename",
"(",
"file",
")",
"pattern",
"=",
"re",
".",
"compile",
"(",
"Delta",
".",
"FILENAME_PATTERN",
")",
"if",
"re",
".",
"match",
"(",
"pattern",
",",
"filename",
")",
":",
"... | Return whether a file has a valid name
A delta file name can be:
- pre-all.py
- pre-all.sql
- delta_x.x.x_ddmmyyyy.pre.py
- delta_x.x.x_ddmmyyyy.pre.sql
- delta_x.x.x_ddmmyyyy.py
- delta_x.x.x_ddmmyyyy.sql
- delta_x.x.x_ddmmyyyy.post.py
- delta_x.x.x_ddmmyyyy.post.sql
- post-all.py
- post-all.sql
where x.x.x is the version number and _ddmmyyyy is an optional
description, usually representing the date of the delta file | [
"Return",
"whether",
"a",
"file",
"has",
"a",
"valid",
"name"
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/upgrader.py#L505-L527 |
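The actual `Delta.FILENAME_PATTERN` is not included in the record, but the docstring fully describes the accepted names. An illustrative pattern reconstructed from that description (not the project's real regex):

```python
import re

# delta_x.x.x[_description][.pre|.post].(sql|py), plus pre-all/post-all hooks
DELTA_NAME_RE = re.compile(
    r"^(?:delta_\d+\.\d+\.\d+(?:_\w+)?\.(?:pre\.|post\.)?(?:sql|py)"
    r"|(?:pre|post)-all\.(?:sql|py))$"
)

def is_valid_delta_name(filename):
    return DELTA_NAME_RE.match(filename) is not None
```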
10,363 | opengisch/pum | pum/core/upgrader.py | Delta.get_checksum | def get_checksum(self):
"""Return the md5 checksum of the delta file."""
with open(self.file, 'rb') as f:
cs = md5(f.read()).hexdigest()
return cs | python | def get_checksum(self):
with open(self.file, 'rb') as f:
cs = md5(f.read()).hexdigest()
return cs | [
"def",
"get_checksum",
"(",
"self",
")",
":",
"with",
"open",
"(",
"self",
".",
"file",
",",
"'rb'",
")",
"as",
"f",
":",
"cs",
"=",
"md5",
"(",
"f",
".",
"read",
"(",
")",
")",
".",
"hexdigest",
"(",
")",
"return",
"cs"
] | Return the md5 checksum of the delta file. | [
"Return",
"the",
"md5",
"checksum",
"of",
"the",
"delta",
"file",
"."
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/upgrader.py#L546-L550 |
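`get_checksum` fingerprints a delta file with its md5 hex digest, which can later be compared against the value recorded in the upgrades table. A runnable sketch on a temporary file:

```python
import os
import tempfile
from hashlib import md5

def file_checksum(path):
    # Read the whole file at once; delta scripts are small
    with open(path, "rb") as f:
        return md5(f.read()).hexdigest()

with tempfile.TemporaryDirectory() as workdir:
    path = os.path.join(workdir, "delta_0.0.1.sql")
    with open(path, "wb") as f:
        f.write(b"SELECT 1;")
    digest = file_checksum(path)
```

For large files you would feed the hash incrementally with `update()` on chunked reads instead of a single `read()`.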
10,364 | opengisch/pum | pum/core/upgrader.py | Delta.get_type | def get_type(self):
"""Return the type of the delta file.
Returns
-------
type: int
"""
ext = self.match.group(5)
if ext == 'pre.py':
return DeltaType.PRE_PYTHON
elif ext == 'pre.sql':
return DeltaType.PRE_SQL
elif ext == 'py':
return DeltaType.PYTHON
elif ext == 'sql':
return DeltaType.SQL
elif ext == 'post.py':
return DeltaType.POST_PYTHON
elif ext == 'post.sql':
return DeltaType.POST_SQL | python | def get_type(self):
ext = self.match.group(5)
if ext == 'pre.py':
return DeltaType.PRE_PYTHON
elif ext == 'pre.sql':
return DeltaType.PRE_SQL
elif ext == 'py':
return DeltaType.PYTHON
elif ext == 'sql':
return DeltaType.SQL
elif ext == 'post.py':
return DeltaType.POST_PYTHON
elif ext == 'post.sql':
return DeltaType.POST_SQL | [
"def",
"get_type",
"(",
"self",
")",
":",
"ext",
"=",
"self",
".",
"match",
".",
"group",
"(",
"5",
")",
"if",
"ext",
"==",
"'pre.py'",
":",
"return",
"DeltaType",
".",
"PRE_PYTHON",
"elif",
"ext",
"==",
"'pre.sql'",
":",
"return",
"DeltaType",
".",
... | Return the type of the delta file.
Returns
-------
type: int | [
"Return",
"the",
"type",
"of",
"the",
"delta",
"file",
"."
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/upgrader.py#L552-L573 |
10,365 | opengisch/pum | pum/core/deltapy.py | DeltaPy.variable | def variable(self, name: str, default_value=None):
"""
Safely returns the value of the variable given in PUM
Parameters
----------
name
the name of the variable
default_value
the default value for the variable if it does not exist
"""
return self.__variables.get(name, default_value) | python | def variable(self, name: str, default_value=None):
return self.__variables.get(name, default_value) | [
"def",
"variable",
"(",
"self",
",",
"name",
":",
"str",
",",
"default_value",
"=",
"None",
")",
":",
"return",
"self",
".",
"__variables",
".",
"get",
"(",
"name",
",",
"default_value",
")"
] | Safely returns the value of the variable given in PUM
Parameters
----------
name
the name of the variable
default_value
the default value for the variable if it does not exist | [
"Safely",
"returns",
"the",
"value",
"of",
"the",
"variable",
"given",
"in",
"PUM"
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/deltapy.py#L64-L75 |
10,366 | opengisch/pum | pum/core/checker.py | Checker.run_checks | def run_checks(self):
"""Run all the checks functions.
Returns
-------
bool
True if all the checks are true
False otherwise
dict
Dictionary of lists of differences
"""
result = True
differences_dict = {}
if 'tables' not in self.ignore_list:
tmp_result, differences_dict['tables'] = self.check_tables()
result = False if not tmp_result else result
if 'columns' not in self.ignore_list:
tmp_result, differences_dict['columns'] = self.check_columns(
'views' not in self.ignore_list)
result = False if not tmp_result else result
if 'constraints' not in self.ignore_list:
tmp_result, differences_dict['constraints'] = \
self.check_constraints()
result = False if not tmp_result else result
if 'views' not in self.ignore_list:
tmp_result, differences_dict['views'] = self.check_views()
result = False if not tmp_result else result
if 'sequences' not in self.ignore_list:
tmp_result, differences_dict['sequences'] = self.check_sequences()
result = False if not tmp_result else result
if 'indexes' not in self.ignore_list:
tmp_result, differences_dict['indexes'] = self.check_indexes()
result = False if not tmp_result else result
if 'triggers' not in self.ignore_list:
tmp_result, differences_dict['triggers'] = self.check_triggers()
result = False if not tmp_result else result
if 'functions' not in self.ignore_list:
tmp_result, differences_dict['functions'] = self.check_functions()
result = False if not tmp_result else result
if 'rules' not in self.ignore_list:
tmp_result, differences_dict['rules'] = self.check_rules()
result = False if not tmp_result else result
if self.verbose_level == 0:
differences_dict = None
return result, differences_dict | python | def run_checks(self):
result = True
differences_dict = {}
if 'tables' not in self.ignore_list:
tmp_result, differences_dict['tables'] = self.check_tables()
result = False if not tmp_result else result
if 'columns' not in self.ignore_list:
tmp_result, differences_dict['columns'] = self.check_columns(
'views' not in self.ignore_list)
result = False if not tmp_result else result
if 'constraints' not in self.ignore_list:
tmp_result, differences_dict['constraints'] = \
self.check_constraints()
result = False if not tmp_result else result
if 'views' not in self.ignore_list:
tmp_result, differences_dict['views'] = self.check_views()
result = False if not tmp_result else result
if 'sequences' not in self.ignore_list:
tmp_result, differences_dict['sequences'] = self.check_sequences()
result = False if not tmp_result else result
if 'indexes' not in self.ignore_list:
tmp_result, differences_dict['indexes'] = self.check_indexes()
result = False if not tmp_result else result
if 'triggers' not in self.ignore_list:
tmp_result, differences_dict['triggers'] = self.check_triggers()
result = False if not tmp_result else result
if 'functions' not in self.ignore_list:
tmp_result, differences_dict['functions'] = self.check_functions()
result = False if not tmp_result else result
if 'rules' not in self.ignore_list:
tmp_result, differences_dict['rules'] = self.check_rules()
result = False if not tmp_result else result
if self.verbose_level == 0:
differences_dict = None
return result, differences_dict | [
"def",
"run_checks",
"(",
"self",
")",
":",
"result",
"=",
"True",
"differences_dict",
"=",
"{",
"}",
"if",
"'tables'",
"not",
"in",
"self",
".",
"ignore_list",
":",
"tmp_result",
",",
"differences_dict",
"[",
"'tables'",
"]",
"=",
"self",
".",
"check_tabl... | Run all the checks functions.
Returns
-------
bool
True if all the checks are true
False otherwise
dict
Dictionary of lists of differences | [
"Run",
"all",
"the",
"checks",
"functions",
"."
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/checker.py#L51-L97 |
10,367 | opengisch/pum | pum/core/checker.py | Checker.__check_equals | def __check_equals(self, query):
"""Check if the query results on the two databases are equal.
Returns
-------
bool
True if the results are the same
False otherwise
list
A list with the differences
"""
self.cur1.execute(query)
records1 = self.cur1.fetchall()
self.cur2.execute(query)
records2 = self.cur2.fetchall()
result = True
differences = []
d = difflib.Differ()
records1 = [str(x) for x in records1]
records2 = [str(x) for x in records2]
for line in d.compare(records1, records2):
if line[0] in ('-', '+'):
result = False
if self.verbose_level == 1:
differences.append(line[0:79])
elif self.verbose_level == 2:
differences.append(line)
return result, differences | python | def __check_equals(self, query):
self.cur1.execute(query)
records1 = self.cur1.fetchall()
self.cur2.execute(query)
records2 = self.cur2.fetchall()
result = True
differences = []
d = difflib.Differ()
records1 = [str(x) for x in records1]
records2 = [str(x) for x in records2]
for line in d.compare(records1, records2):
if line[0] in ('-', '+'):
result = False
if self.verbose_level == 1:
differences.append(line[0:79])
elif self.verbose_level == 2:
differences.append(line)
return result, differences | [
"def",
"__check_equals",
"(",
"self",
",",
"query",
")",
":",
"self",
".",
"cur1",
".",
"execute",
"(",
"query",
")",
"records1",
"=",
"self",
".",
"cur1",
".",
"fetchall",
"(",
")",
"self",
".",
"cur2",
".",
"execute",
"(",
"query",
")",
"records2",... | Check if the query results on the two databases are equal.
Returns
-------
bool
True if the results are the same
False otherwise
list
A list with the differences | [
"Check",
"if",
"the",
"query",
"results",
"on",
"the",
"two",
"databases",
"are",
"equal",
"."
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/core/checker.py#L375-L407 |
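`__check_equals` stringifies the row sets fetched from both databases and feeds them to `difflib.Differ`, keeping only lines flagged `-` or `+`. A standalone version of that comparison:

```python
import difflib

def diff_records(records1, records2):
    differ = difflib.Differ()
    lines = differ.compare([str(r) for r in records1],
                           [str(r) for r in records2])
    # '-' = only in the first set, '+' = only in the second; ' '/'?' lines are dropped
    differences = [line for line in lines if line[0] in ("-", "+")]
    return len(differences) == 0, differences

equal, differences = diff_records([("t1",), ("t2",)], [("t1",), ("t3",)])
```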
10,368 | opengisch/pum | pum/utils/utils.py | ask_for_confirmation | def ask_for_confirmation(prompt=None, resp=False):
"""Prompt for a yes or no response from the user.
Parameters
----------
prompt: basestring
The question to be prompted to the user.
resp: bool
The default value assumed by the caller when user simply
types ENTER.
Returns
-------
bool
True if the user response is 'y' or 'Y'
False if the user response is 'n' or 'N'
"""
global input
if prompt is None:
prompt = 'Confirm'
if resp:
prompt = '%s [%s]|%s: ' % (prompt, 'y', 'n')
else:
prompt = '%s [%s]|%s: ' % (prompt, 'n', 'y')
while True:
# Fix for Python2. In python3 raw_input() is now input()
try:
input = raw_input
except NameError:
pass
ans = input(prompt)
if not ans:
return resp
if ans not in ['y', 'Y', 'n', 'N']:
print('please enter y or n.')
continue
if ans == 'y' or ans == 'Y':
return True
if ans == 'n' or ans == 'N':
return False | python | def ask_for_confirmation(prompt=None, resp=False):
global input
if prompt is None:
prompt = 'Confirm'
if resp:
prompt = '%s [%s]|%s: ' % (prompt, 'y', 'n')
else:
prompt = '%s [%s]|%s: ' % (prompt, 'n', 'y')
while True:
# Fix for Python2. In python3 raw_input() is now input()
try:
input = raw_input
except NameError:
pass
ans = input(prompt)
if not ans:
return resp
if ans not in ['y', 'Y', 'n', 'N']:
print('please enter y or n.')
continue
if ans == 'y' or ans == 'Y':
return True
if ans == 'n' or ans == 'N':
return False | [
"def",
"ask_for_confirmation",
"(",
"prompt",
"=",
"None",
",",
"resp",
"=",
"False",
")",
":",
"global",
"input",
"if",
"prompt",
"is",
"None",
":",
"prompt",
"=",
"'Confirm'",
"if",
"resp",
":",
"prompt",
"=",
"'%s [%s]|%s: '",
"%",
"(",
"prompt",
",",... | Prompt for a yes or no response from the user.
Parameters
----------
prompt: basestring
The question to be prompted to the user.
resp: bool
The default value assumed by the caller when user simply
types ENTER.
Returns
-------
bool
True if the user response is 'y' or 'Y'
False if the user response is 'n' or 'N' | [
"Prompt",
"for",
"a",
"yes",
"or",
"no",
"response",
"from",
"the",
"user",
"."
] | eaf6af92d723ace60b9e982d7f69b98e00606959 | https://github.com/opengisch/pum/blob/eaf6af92d723ace60b9e982d7f69b98e00606959/pum/utils/utils.py#L4-L45 |
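`ask_for_confirmation` loops on `input()` until it sees y/Y, n/N, or an empty answer (which returns the default). The variant below injects the input source so the loop can be exercised without a terminal — the extra `input_fn` parameter is my addition, not part of the project's API:

```python
def ask_for_confirmation(prompt="Confirm", default=False, input_fn=input):
    suffix = " [y]|n: " if default else " [n]|y: "
    while True:
        answer = input_fn(prompt + suffix)
        if not answer:
            return default          # bare ENTER keeps the default
        if answer in ("y", "Y"):
            return True
        if answer in ("n", "N"):
            return False
        # anything else: re-prompt, as the original does

answers = iter(["maybe", "y"])      # first reply is invalid, second is accepted
accepted = ask_for_confirmation(input_fn=lambda _: next(answers))
```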
10,369 | Jaymon/endpoints | endpoints/decorators/auth.py | AuthDecorator.handle_target | def handle_target(self, request, controller_args, controller_kwargs):
"""Only here to set self.request and get rid of it after
this will set self.request so the target method can access request using
self.request, just like in the controller.
"""
self.request = request
super(AuthDecorator, self).handle_target(request, controller_args, controller_kwargs)
del self.request | python | def handle_target(self, request, controller_args, controller_kwargs):
self.request = request
super(AuthDecorator, self).handle_target(request, controller_args, controller_kwargs)
del self.request | [
"def",
"handle_target",
"(",
"self",
",",
"request",
",",
"controller_args",
",",
"controller_kwargs",
")",
":",
"self",
".",
"request",
"=",
"request",
"super",
"(",
"AuthDecorator",
",",
"self",
")",
".",
"handle_target",
"(",
"request",
",",
"controller_arg... | Only here to set self.request and get rid of it after
this will set self.request so the target method can access request using
self.request, just like in the controller. | [
"Only",
"here",
"to",
"set",
"self",
".",
"request",
"and",
"get",
"rid",
"of",
"it",
"after"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/decorators/auth.py#L190-L198 |
10,370 | Jaymon/endpoints | endpoints/client.py | HTTPClient.get | def get(self, uri, query=None, **kwargs):
"""make a GET request"""
return self.fetch('get', uri, query, **kwargs) | python | def get(self, uri, query=None, **kwargs):
return self.fetch('get', uri, query, **kwargs) | [
"def",
"get",
"(",
"self",
",",
"uri",
",",
"query",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"self",
".",
"fetch",
"(",
"'get'",
",",
"uri",
",",
"query",
",",
"*",
"*",
"kwargs",
")"
] | make a GET request | [
"make",
"a",
"GET",
"request"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/client.py#L41-L43 |
10,371 | Jaymon/endpoints | endpoints/client.py | HTTPClient.post | def post(self, uri, body=None, **kwargs):
"""make a POST request"""
return self.fetch('post', uri, kwargs.pop("query", {}), body, **kwargs) | python | def post(self, uri, body=None, **kwargs):
return self.fetch('post', uri, kwargs.pop("query", {}), body, **kwargs) | [
"def",
"post",
"(",
"self",
",",
"uri",
",",
"body",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"self",
".",
"fetch",
"(",
"'post'",
",",
"uri",
",",
"kwargs",
".",
"pop",
"(",
"\"query\"",
",",
"{",
"}",
")",
",",
"body",
",",
... | make a POST request | [
"make",
"a",
"POST",
"request"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/client.py#L45-L47 |
10,372 | Jaymon/endpoints | endpoints/client.py | HTTPClient.post_file | def post_file(self, uri, body, files, **kwargs):
"""POST a file"""
# requests doesn't actually need us to open the files but we do anyway because
# if we don't then the filename isn't preserved, so we assume each string
# value is a filepath
for key in files.keys():
if isinstance(files[key], basestring):
files[key] = open(files[key], 'rb')
kwargs["files"] = files
# we ignore content type for posting files since it requires very specific things
ct = self.headers.pop("content-type", None)
ret = self.fetch('post', uri, {}, body, **kwargs)
if ct:
self.headers["content-type"] = ct
# close all the files
for fp in files.values():
fp.close()
return ret | python | def post_file(self, uri, body, files, **kwargs):
# requests doesn't actually need us to open the files but we do anyway because
# if we don't then the filename isn't preserved, so we assume each string
# value is a filepath
for key in files.keys():
if isinstance(files[key], basestring):
files[key] = open(files[key], 'rb')
kwargs["files"] = files
# we ignore content type for posting files since it requires very specific things
ct = self.headers.pop("content-type", None)
ret = self.fetch('post', uri, {}, body, **kwargs)
if ct:
self.headers["content-type"] = ct
# close all the files
for fp in files.values():
fp.close()
return ret | [
"def",
"post_file",
"(",
"self",
",",
"uri",
",",
"body",
",",
"files",
",",
"*",
"*",
"kwargs",
")",
":",
"# requests doesn't actually need us to open the files but we do anyway because",
"# if we don't then the filename isn't preserved, so we assume each string",
"# value is a ... | POST a file | [
"POST",
"a",
"file"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/client.py#L49-L68 |
10,373 | Jaymon/endpoints | endpoints/client.py | HTTPClient.delete | def delete(self, uri, query=None, **kwargs):
"""make a DELETE request"""
return self.fetch('delete', uri, query, **kwargs) | python | def delete(self, uri, query=None, **kwargs):
return self.fetch('delete', uri, query, **kwargs) | [
"def",
"delete",
"(",
"self",
",",
"uri",
",",
"query",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"self",
".",
"fetch",
"(",
"'delete'",
",",
"uri",
",",
"query",
",",
"*",
"*",
"kwargs",
")"
] | make a DELETE request | [
"make",
"a",
"DELETE",
"request"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/client.py#L70-L72 |
10,374 | Jaymon/endpoints | endpoints/client.py | HTTPClient.get_fetch_headers | def get_fetch_headers(self, method, headers):
"""merge class headers with passed in headers
:param method: string, (eg, GET or POST), this is passed in so you can customize
headers based on the method that you are calling
:param headers: dict, all the headers passed into the fetch method
:returns: passed in headers merged with global class headers
"""
all_headers = self.headers.copy()
if headers:
all_headers.update(headers)
return Headers(all_headers) | python | def get_fetch_headers(self, method, headers):
all_headers = self.headers.copy()
if headers:
all_headers.update(headers)
return Headers(all_headers) | [
"def",
"get_fetch_headers",
"(",
"self",
",",
"method",
",",
"headers",
")",
":",
"all_headers",
"=",
"self",
".",
"headers",
".",
"copy",
"(",
")",
"if",
"headers",
":",
"all_headers",
".",
"update",
"(",
"headers",
")",
"return",
"Headers",
"(",
"all_h... | merge class headers with passed in headers
:param method: string, (eg, GET or POST), this is passed in so you can customize
headers based on the method that you are calling
:param headers: dict, all the headers passed into the fetch method
:returns: passed in headers merged with global class headers | [
"merge",
"class",
"headers",
"with",
"passed",
"in",
"headers"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/client.py#L142-L153 |
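`get_fetch_headers` overlays per-request headers on a copy of the client-wide ones (the result is wrapped in a `Headers` class not shown in this record). The merge itself is a plain copy-and-update:

```python
def merge_headers(class_headers, request_headers=None):
    merged = dict(class_headers)        # copy, so the client defaults survive
    if request_headers:
        merged.update(request_headers)  # per-request values win on conflict
    return merged

merged = merge_headers(
    {"Accept": "application/json", "X-Token": "abc"},
    {"X-Token": "override"},
)
```

Real HTTP header names are case-insensitive — presumably what the `Headers` wrapper handles — whereas plain dicts are not.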
10,375 | Jaymon/endpoints | endpoints/client.py | HTTPClient.get_fetch_request | def get_fetch_request(self, method, fetch_url, *args, **kwargs):
"""This is handy if you want to modify the request right before passing it
to requests, or you want to do something extra special customized
:param method: string, the http method (eg, GET, POST)
:param fetch_url: string, the full url with query params
:param *args: any other positional arguments
:param **kwargs: any keyword arguments to pass to requests
:returns: a requests.Response compatible object instance
"""
return requests.request(method, fetch_url, *args, **kwargs) | python | def get_fetch_request(self, method, fetch_url, *args, **kwargs):
return requests.request(method, fetch_url, *args, **kwargs) | [
"def",
"get_fetch_request",
"(",
"self",
",",
"method",
",",
"fetch_url",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"requests",
".",
"request",
"(",
"method",
",",
"fetch_url",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | This is handy if you want to modify the request right before passing it
to requests, or you want to do something extra special customized
:param method: string, the http method (eg, GET, POST)
:param fetch_url: string, the full url with query params
:param *args: any other positional arguments
:param **kwargs: any keyword arguments to pass to requests
:returns: a requests.Response compatible object instance | [
"This",
"is",
"handy",
"if",
"you",
"want",
"to",
"modify",
"the",
"request",
"right",
"before",
"passing",
"it",
"to",
"requests",
"or",
"you",
"want",
"to",
"do",
"something",
"extra",
"special",
"customized"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/client.py#L158-L168 |
10,376 | Jaymon/endpoints | endpoints/client.py | HTTPClient.get_fetch_response | def get_fetch_response(self, res):
"""the goal of this method is to make the requests object more endpoints like
res -- requests Response -- the native requests response instance, we manipulate
it a bit to make it look a bit more like the internal endpoints.Response object
"""
res.code = res.status_code
res.headers = Headers(res.headers)
res._body = None
res.body = ''
body = res.content
if body:
if self.is_json(res.headers):
res._body = res.json()
else:
res._body = body
res.body = String(body, res.encoding)
return res | python | def get_fetch_response(self, res):
res.code = res.status_code
res.headers = Headers(res.headers)
res._body = None
res.body = ''
body = res.content
if body:
if self.is_json(res.headers):
res._body = res.json()
else:
res._body = body
res.body = String(body, res.encoding)
return res | [
"def",
"get_fetch_response",
"(",
"self",
",",
"res",
")",
":",
"res",
".",
"code",
"=",
"res",
".",
"status_code",
"res",
".",
"headers",
"=",
"Headers",
"(",
"res",
".",
"headers",
")",
"res",
".",
"_body",
"=",
"None",
"res",
".",
"body",
"=",
"... | the goal of this method is to make the requests object more endpoints like
res -- requests Response -- the native requests response instance, we manipulate
it a bit to make it look a bit more like the internal endpoints.Response object | [
"the",
"goal",
"of",
"this",
"method",
"is",
"to",
"make",
"the",
"requests",
"object",
"more",
"endpoints",
"like"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/client.py#L170-L189 |
10,377 | Jaymon/endpoints | endpoints/client.py | HTTPClient.is_json | def is_json(self, headers):
"""return true if content_type is a json content type"""
ret = False
ct = headers.get("content-type", "").lower()
if ct:
ret = ct.rfind("json") >= 0
return ret | python | def is_json(self, headers):
ret = False
ct = headers.get("content-type", "").lower()
if ct:
ret = ct.rfind("json") >= 0
return ret | [
"def",
"is_json",
"(",
"self",
",",
"headers",
")",
":",
"ret",
"=",
"False",
"ct",
"=",
"headers",
".",
"get",
"(",
"\"content-type\"",
",",
"\"\"",
")",
".",
"lower",
"(",
")",
"if",
"ct",
":",
"ret",
"=",
"ct",
".",
"lower",
"(",
")",
".",
"... | return true if content_type is a json content type | [
"return",
"true",
"if",
"content_type",
"is",
"a",
"json",
"content",
"type"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/client.py#L191-L197 |
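The `is_json` row above decides whether a Content-Type header names JSON by substring-matching on the lowercased value. A minimal standalone sketch of the same check, using a plain dict in place of the endpoints `Headers` class (the helper name here is just for illustration):

```python
def is_json(headers):
    """Return True if the content-type header mentions json."""
    ct = headers.get("content-type", "").lower()
    return ct.rfind("json") >= 0

# substring matching also catches vendor types like application/vnd.api+json
assert is_json({"content-type": "application/json; charset=utf-8"})
assert is_json({"content-type": "application/vnd.api+json"})
assert not is_json({"content-type": "text/html"})
assert not is_json({})
```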
10,378 | Jaymon/endpoints | endpoints/reflection.py | ReflectMethod.params | def params(self):
"""return information about the params that the given http option takes"""
ret = {}
for rd in self.decorators:
args = rd.args
kwargs = rd.kwargs
if param in rd:
is_required = kwargs.get('required', 'default' not in kwargs)
ret[args[0]] = {'required': is_required, 'other_names': args[1:], 'options': kwargs}
return ret | python | def params(self):
ret = {}
for rd in self.decorators:
args = rd.args
kwargs = rd.kwargs
if param in rd:
is_required = kwargs.get('required', 'default' not in kwargs)
ret[args[0]] = {'required': is_required, 'other_names': args[1:], 'options': kwargs}
return ret | [
"def",
"params",
"(",
"self",
")",
":",
"ret",
"=",
"{",
"}",
"for",
"rd",
"in",
"self",
".",
"decorators",
":",
"args",
"=",
"rd",
".",
"args",
"kwargs",
"=",
"rd",
".",
"kwargs",
"if",
"param",
"in",
"rd",
":",
"is_required",
"=",
"kwargs",
"."... | return information about the params that the given http option takes | [
"return",
"information",
"about",
"the",
"params",
"that",
"the",
"given",
"http",
"option",
"takes"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/reflection.py#L107-L117 |
10,379 | Jaymon/endpoints | endpoints/interface/__init__.py | BaseServer.create_call | def create_call(self, raw_request, **kwargs):
"""create a call object that has endpoints understandable request and response
instances"""
req = self.create_request(raw_request, **kwargs)
res = self.create_response(**kwargs)
rou = self.create_router(**kwargs)
c = self.call_class(req, res, rou)
return c | python | def create_call(self, raw_request, **kwargs):
req = self.create_request(raw_request, **kwargs)
res = self.create_response(**kwargs)
rou = self.create_router(**kwargs)
c = self.call_class(req, res, rou)
return c | [
"def",
"create_call",
"(",
"self",
",",
"raw_request",
",",
"*",
"*",
"kwargs",
")",
":",
"req",
"=",
"self",
".",
"create_request",
"(",
"raw_request",
",",
"*",
"*",
"kwargs",
")",
"res",
"=",
"self",
".",
"create_response",
"(",
"*",
"*",
"kwargs",
... | create a call object that has endpoints understandable request and response
instances | [
"create",
"a",
"call",
"object",
"that",
"has",
"endpoints",
"understandable",
"request",
"and",
"response",
"instances"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/interface/__init__.py#L88-L95 |
10,380 | Jaymon/endpoints | endpoints/decorators/limit.py | RateLimitDecorator.decorate | def decorate(self, func, limit=0, ttl=0, *anoop, **kwnoop):
"""see target for an explanation of limit and ttl"""
self.limit = int(limit)
self.ttl = int(ttl)
return super(RateLimitDecorator, self).decorate(func, target=None, *anoop, **kwnoop) | python | def decorate(self, func, limit=0, ttl=0, *anoop, **kwnoop):
self.limit = int(limit)
self.ttl = int(ttl)
return super(RateLimitDecorator, self).decorate(func, target=None, *anoop, **kwnoop) | [
"def",
"decorate",
"(",
"self",
",",
"func",
",",
"limit",
"=",
"0",
",",
"ttl",
"=",
"0",
",",
"*",
"anoop",
",",
"*",
"*",
"kwnoop",
")",
":",
"self",
".",
"limit",
"=",
"int",
"(",
"limit",
")",
"self",
".",
"ttl",
"=",
"int",
"(",
"ttl",
... | see target for an explanation of limit and ttl | [
"see",
"target",
"for",
"an",
"explanation",
"of",
"limit",
"and",
"ttl"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/decorators/limit.py#L152-L156 |
10,381 | Jaymon/endpoints | endpoints/decorators/limit.py | ratelimit.decorate | def decorate(self, func, limit, ttl, *anoop, **kwnoop):
"""make limit and ttl required"""
return super(ratelimit, self).decorate(func, limit, ttl, *anoop, **kwnoop) | python | def decorate(self, func, limit, ttl, *anoop, **kwnoop):
return super(ratelimit, self).decorate(func, limit, ttl, *anoop, **kwnoop) | [
"def",
"decorate",
"(",
"self",
",",
"func",
",",
"limit",
",",
"ttl",
",",
"*",
"anoop",
",",
"*",
"*",
"kwnoop",
")",
":",
"return",
"super",
"(",
"ratelimit",
",",
"self",
")",
".",
"decorate",
"(",
"func",
",",
"limit",
",",
"ttl",
",",
"*",
... | make limit and ttl required | [
"make",
"limit",
"and",
"ttl",
"required"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/decorators/limit.py#L179-L181 |
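The two decorator rows above only store `limit` and `ttl`; the enforcement lives elsewhere in the repo. As a hedged sketch of the underlying idea — at most `limit` calls per `ttl` seconds per key — here is a small in-memory sliding-window limiter (the class name and `allow` API are hypothetical, not from the repo):

```python
import time

class RateLimiter:
    """Allow at most `limit` calls per `ttl` seconds per key (in-memory sketch)."""
    def __init__(self, limit, ttl):
        self.limit = int(limit)
        self.ttl = int(ttl)
        self.calls = {}  # key -> timestamps of recent calls

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        # drop timestamps that have aged out of the window
        window = [t for t in self.calls.get(key, []) if now - t < self.ttl]
        if len(window) >= self.limit:
            self.calls[key] = window
            return False
        window.append(now)
        self.calls[key] = window
        return True

rl = RateLimiter(limit=2, ttl=60)
assert rl.allow("ip1", now=0.0)
assert rl.allow("ip1", now=1.0)
assert not rl.allow("ip1", now=2.0)   # third call inside the window
assert rl.allow("ip1", now=61.0)      # window has expired
```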
10,382 | Jaymon/endpoints | endpoints/utils.py | Base64.encode | def encode(cls, s):
"""converts a plain text string to base64 encoding
:param s: unicode str|bytes, the plain text string to encode
:returns: unicode str
"""
b = ByteString(s)
be = base64.b64encode(b).strip()
return String(be) | python | def encode(cls, s):
b = ByteString(s)
be = base64.b64encode(b).strip()
return String(be) | [
"def",
"encode",
"(",
"cls",
",",
"s",
")",
":",
"b",
"=",
"ByteString",
"(",
"s",
")",
"be",
"=",
"base64",
".",
"b64encode",
"(",
"b",
")",
".",
"strip",
"(",
")",
"return",
"String",
"(",
"be",
")"
] | converts a plain text string to base64 encoding
:param s: unicode str|bytes, the plain text string to encode
:returns: unicode str | [
"converts",
"a",
"plain",
"text",
"string",
"to",
"base64",
"encoding"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/utils.py#L119-L127 |
10,383 | Jaymon/endpoints | endpoints/utils.py | Base64.decode | def decode(cls, s):
"""decodes a base64 string to plain text
:param s: unicode str|bytes, the base64 encoded string
:returns: unicode str
"""
b = ByteString(s)
bd = base64.b64decode(b)
return String(bd) | python | def decode(cls, s):
b = ByteString(s)
bd = base64.b64decode(b)
return String(bd) | [
"def",
"decode",
"(",
"cls",
",",
"s",
")",
":",
"b",
"=",
"ByteString",
"(",
"s",
")",
"bd",
"=",
"base64",
".",
"b64decode",
"(",
"b",
")",
"return",
"String",
"(",
"bd",
")"
] | decodes a base64 string to plain text
:param s: unicode str|bytes, the base64 encoded string
:returns: unicode str | [
"decodes",
"a",
"base64",
"string",
"to",
"plain",
"text"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/utils.py#L130-L138 |
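The `Base64.encode`/`Base64.decode` rows above are thin wrappers that coerce to bytes, call the stdlib, and coerce back to text. The same round trip using only the stdlib, without the repo's `String`/`ByteString` helpers (function names here are illustrative):

```python
import base64

def b64_encode(s):
    """Encode a text string to a base64 text string."""
    return base64.b64encode(s.encode("utf-8")).decode("ascii").strip()

def b64_decode(s):
    """Decode a base64 text string back to plain text."""
    return base64.b64decode(s.encode("ascii")).decode("utf-8")

encoded = b64_encode("foo bar")
assert encoded == "Zm9vIGJhcg=="
assert b64_decode(encoded) == "foo bar"
```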
10,384 | Jaymon/endpoints | endpoints/utils.py | MimeType.find_type | def find_type(cls, val):
"""return the mimetype from the given string value
if value is a path, then the extension will be found, if val is an extension then
that will be used to find the mimetype
"""
mt = ""
index = val.rfind(".")
if index == -1:
val = "fake.{}".format(val)
elif index == 0:
val = "fake{}".format(val)
mt = mimetypes.guess_type(val)[0]
if mt is None:
mt = ""
return mt | python | def find_type(cls, val):
mt = ""
index = val.rfind(".")
if index == -1:
val = "fake.{}".format(val)
elif index == 0:
val = "fake{}".format(val)
mt = mimetypes.guess_type(val)[0]
if mt is None:
mt = ""
return mt | [
"def",
"find_type",
"(",
"cls",
",",
"val",
")",
":",
"mt",
"=",
"\"\"",
"index",
"=",
"val",
".",
"rfind",
"(",
"\".\"",
")",
"if",
"index",
"==",
"-",
"1",
":",
"val",
"=",
"\"fake.{}\"",
".",
"format",
"(",
"val",
")",
"elif",
"index",
"==",
... | return the mimetype from the given string value
if value is a path, then the extension will be found, if val is an extension then
that will be used to find the mimetype | [
"return",
"the",
"mimetype",
"from",
"the",
"given",
"string",
"value"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/utils.py#L154-L171 |
10,385 | Jaymon/endpoints | endpoints/utils.py | AcceptHeader.filter | def filter(self, media_type, **params):
"""
iterate all the accept media types that match media_type
media_type -- string -- the media type to filter by
**params -- dict -- further filter by key: val
return -- generator -- yields all matching media type info things
"""
mtype, msubtype = self._split_media_type(media_type)
for x in self.__iter__():
# all the params have to match to make the media type valid
matched = True
for k, v in params.items():
if x[2].get(k, None) != v:
matched = False
break
if matched:
if x[0][0] == '*':
if x[0][1] == '*':
yield x
elif x[0][1] == msubtype:
yield x
elif mtype == '*':
if msubtype == '*':
yield x
elif x[0][1] == msubtype:
yield x
elif x[0][0] == mtype:
if msubtype == '*':
yield x
elif x[0][1] == '*':
yield x
elif x[0][1] == msubtype:
yield x | python | def filter(self, media_type, **params):
mtype, msubtype = self._split_media_type(media_type)
for x in self.__iter__():
# all the params have to match to make the media type valid
matched = True
for k, v in params.items():
if x[2].get(k, None) != v:
matched = False
break
if matched:
if x[0][0] == '*':
if x[0][1] == '*':
yield x
elif x[0][1] == msubtype:
yield x
elif mtype == '*':
if msubtype == '*':
yield x
elif x[0][1] == msubtype:
yield x
elif x[0][0] == mtype:
if msubtype == '*':
yield x
elif x[0][1] == '*':
yield x
elif x[0][1] == msubtype:
yield x | [
"def",
"filter",
"(",
"self",
",",
"media_type",
",",
"*",
"*",
"params",
")",
":",
"mtype",
",",
"msubtype",
"=",
"self",
".",
"_split_media_type",
"(",
"media_type",
")",
"for",
"x",
"in",
"self",
".",
"__iter__",
"(",
")",
":",
"# all the params have ... | iterate all the accept media types that match media_type
media_type -- string -- the media type to filter by
**params -- dict -- further filter by key: val
return -- generator -- yields all matching media type info things | [
"iterate",
"all",
"the",
"accept",
"media",
"types",
"that",
"match",
"media_type"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/utils.py#L259-L300 |
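The nested if/elif ladder in `AcceptHeader.filter` above is wildcard matching over `type/subtype` pairs, where `*` on either side matches anything. A simplified sketch of just that matching rule, without the q-value params handling (the helper name is hypothetical):

```python
def media_type_matches(accept, target):
    """True if an Accept entry like 'text/*' matches a concrete media type."""
    atype, asub = accept.split("/")
    ttype, tsub = target.split("/")
    type_ok = atype == "*" or ttype == "*" or atype == ttype
    sub_ok = asub == "*" or tsub == "*" or asub == tsub
    return type_ok and sub_ok

assert media_type_matches("*/*", "application/json")
assert media_type_matches("text/*", "text/html")
assert not media_type_matches("text/*", "application/json")
assert media_type_matches("application/json", "application/*")
```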
10,386 | Jaymon/endpoints | endpoints/interface/wsgi/__init__.py | Application.create_request | def create_request(self, raw_request, **kwargs):
"""
create instance of request
raw_request -- the raw request object retrieved from a WSGI server
"""
r = self.request_class()
for k, v in raw_request.items():
if k.startswith('HTTP_'):
r.set_header(k[5:], v)
else:
r.environ[k] = v
r.method = raw_request['REQUEST_METHOD']
r.path = raw_request['PATH_INFO']
r.query = raw_request['QUERY_STRING']
# handle headers not prefixed with http
for k, t in {'CONTENT_TYPE': None, 'CONTENT_LENGTH': int}.items():
v = r.environ.pop(k, None)
if v:
r.set_header(k, t(v) if t else v)
if 'wsgi.input' in raw_request:
if "CONTENT_LENGTH" in raw_request and int(r.get_header("CONTENT_LENGTH", 0)) <= 0:
r.body_kwargs = {}
else:
if r.get_header('transfer-encoding', "").lower().startswith('chunked'):
raise IOError("Server does not support chunked requests")
else:
r.body_input = raw_request['wsgi.input']
else:
r.body_kwargs = {}
return r | python | def create_request(self, raw_request, **kwargs):
r = self.request_class()
for k, v in raw_request.items():
if k.startswith('HTTP_'):
r.set_header(k[5:], v)
else:
r.environ[k] = v
r.method = raw_request['REQUEST_METHOD']
r.path = raw_request['PATH_INFO']
r.query = raw_request['QUERY_STRING']
# handle headers not prefixed with http
for k, t in {'CONTENT_TYPE': None, 'CONTENT_LENGTH': int}.items():
v = r.environ.pop(k, None)
if v:
r.set_header(k, t(v) if t else v)
if 'wsgi.input' in raw_request:
if "CONTENT_LENGTH" in raw_request and int(r.get_header("CONTENT_LENGTH", 0)) <= 0:
r.body_kwargs = {}
else:
if r.get_header('transfer-encoding', "").lower().startswith('chunked'):
raise IOError("Server does not support chunked requests")
else:
r.body_input = raw_request['wsgi.input']
else:
r.body_kwargs = {}
return r | [
"def",
"create_request",
"(",
"self",
",",
"raw_request",
",",
"*",
"*",
"kwargs",
")",
":",
"r",
"=",
"self",
".",
"request_class",
"(",
")",
"for",
"k",
",",
"v",
"in",
"raw_request",
".",
"items",
"(",
")",
":",
"if",
"k",
".",
"startswith",
"("... | create instance of request
raw_request -- the raw request object retrieved from a WSGI server | [
"create",
"instance",
"of",
"request"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/interface/wsgi/__init__.py#L51-L89 |
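The WSGI `create_request` row above pulls request headers out of `HTTP_*` environ keys, treating `CONTENT_TYPE` and `CONTENT_LENGTH` as the two unprefixed exceptions the WSGI spec defines. A standalone sketch of that extraction (helper name is illustrative):

```python
def environ_headers(environ):
    """Extract request headers from a WSGI environ dict."""
    headers = {}
    for k, v in environ.items():
        if k.startswith("HTTP_"):
            name = k[5:].replace("_", "-").title()
            headers[name] = v
    # these two arrive without the HTTP_ prefix per the WSGI spec
    for k in ("CONTENT_TYPE", "CONTENT_LENGTH"):
        if k in environ:
            headers[k.replace("_", "-").title()] = environ[k]
    return headers

env = {
    "REQUEST_METHOD": "GET",
    "HTTP_HOST": "example.com",
    "HTTP_X_FORWARDED_FOR": "1.2.3.4",
    "CONTENT_TYPE": "application/json",
}
hs = environ_headers(env)
assert hs["Host"] == "example.com"
assert hs["X-Forwarded-For"] == "1.2.3.4"
assert hs["Content-Type"] == "application/json"
```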
10,387 | Jaymon/endpoints | endpoints/interface/uwsgi/async.py | WebsocketApplication.create_environ | def create_environ(self, req, payload):
"""This will take the original request and the new websocket payload and
merge them into a new request instance"""
ws_req = req.copy()
del ws_req.controller_info
ws_req.environ.pop("wsgi.input", None)
ws_req.body_kwargs = payload.body
ws_req.environ["REQUEST_METHOD"] = payload.method
ws_req.method = payload.method
ws_req.environ["PATH_INFO"] = payload.path
ws_req.path = payload.path
ws_req.environ["WS_PAYLOAD"] = payload
ws_req.environ["WS_ORIGINAL"] = req
ws_req.payload = payload
ws_req.parent = req
return {"WS_REQUEST": ws_req} | python | def create_environ(self, req, payload):
ws_req = req.copy()
del ws_req.controller_info
ws_req.environ.pop("wsgi.input", None)
ws_req.body_kwargs = payload.body
ws_req.environ["REQUEST_METHOD"] = payload.method
ws_req.method = payload.method
ws_req.environ["PATH_INFO"] = payload.path
ws_req.path = payload.path
ws_req.environ["WS_PAYLOAD"] = payload
ws_req.environ["WS_ORIGINAL"] = req
ws_req.payload = payload
ws_req.parent = req
return {"WS_REQUEST": ws_req} | [
"def",
"create_environ",
"(",
"self",
",",
"req",
",",
"payload",
")",
":",
"ws_req",
"=",
"req",
".",
"copy",
"(",
")",
"del",
"ws_req",
".",
"controller_info",
"ws_req",
".",
"environ",
".",
"pop",
"(",
"\"wsgi.input\"",
",",
"None",
")",
"ws_req",
"... | This will take the original request and the new websocket payload and
merge them into a new request instance | [
"This",
"will",
"take",
"the",
"original",
"request",
"and",
"the",
"new",
"websocket",
"payload",
"and",
"merge",
"them",
"into",
"a",
"new",
"request",
"instance"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/interface/uwsgi/async.py#L126-L147 |
10,388 | Jaymon/endpoints | endpoints/interface/wsgi/client.py | find_module_path | def find_module_path():
"""find where the master module is located"""
master_modname = __name__.split(".", 1)[0]
master_module = sys.modules[master_modname]
#return os.path.dirname(os.path.realpath(os.path.join(inspect.getsourcefile(endpoints), "..")))
path = os.path.dirname(inspect.getsourcefile(master_module))
return path | python | def find_module_path():
master_modname = __name__.split(".", 1)[0]
master_module = sys.modules[master_modname]
#return os.path.dirname(os.path.realpath(os.path.join(inspect.getsourcefile(endpoints), "..")))
path = os.path.dirname(inspect.getsourcefile(master_module))
return path | [
"def",
"find_module_path",
"(",
")",
":",
"master_modname",
"=",
"__name__",
".",
"split",
"(",
"\".\"",
",",
"1",
")",
"[",
"0",
"]",
"master_module",
"=",
"sys",
".",
"modules",
"[",
"master_modname",
"]",
"#return os.path.dirname(os.path.realpath(os.path.join(i... | find where the master module is located | [
"find",
"where",
"the",
"master",
"module",
"is",
"located"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/interface/wsgi/client.py#L17-L23 |
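`find_module_path` above resolves the directory of the top-level package for whatever module name it is handed. The same idea as a standalone sketch, demonstrated against the stdlib `json` package (the module choice is just for the example):

```python
import inspect
import os
import sys
import json  # example target module; must already be imported to be in sys.modules

def find_module_path(modname):
    """Directory containing the top-level package of `modname`."""
    master_modname = modname.split(".", 1)[0]
    master_module = sys.modules[master_modname]
    return os.path.dirname(inspect.getsourcefile(master_module))

path = find_module_path("json.decoder")
assert os.path.isdir(path)
assert os.path.isfile(os.path.join(path, "__init__.py"))
```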
10,389 | Jaymon/endpoints | endpoints/http.py | Headers._convert_string_name | def _convert_string_name(self, k):
"""converts things like FOO_BAR to Foo-Bar which is the normal form"""
k = String(k, "iso-8859-1")
klower = k.lower().replace('_', '-')
bits = klower.split('-')
return "-".join((bit.title() for bit in bits)) | python | def _convert_string_name(self, k):
k = String(k, "iso-8859-1")
klower = k.lower().replace('_', '-')
bits = klower.split('-')
return "-".join((bit.title() for bit in bits)) | [
"def",
"_convert_string_name",
"(",
"self",
",",
"k",
")",
":",
"k",
"=",
"String",
"(",
"k",
",",
"\"iso-8859-1\"",
")",
"klower",
"=",
"k",
".",
"lower",
"(",
")",
".",
"replace",
"(",
"'_'",
",",
"'-'",
")",
"bits",
"=",
"klower",
".",
"split",
... | converts things like FOO_BAR to Foo-Bar which is the normal form | [
"converts",
"things",
"like",
"FOO_BAR",
"to",
"Foo",
"-",
"Bar",
"which",
"is",
"the",
"normal",
"form"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/http.py#L50-L55 |
10,390 | Jaymon/endpoints | endpoints/http.py | Url._normalize_params | def _normalize_params(self, *paths, **query_kwargs):
"""a lot of the helper methods are very similar, this handles their arguments"""
kwargs = {}
if paths:
fragment = paths[-1]
if fragment:
if fragment.startswith("#"):
kwargs["fragment"] = fragment
paths.pop(-1)
kwargs["path"] = "/".join(self.normalize_paths(*paths))
kwargs["query_kwargs"] = query_kwargs
return kwargs | python | def _normalize_params(self, *paths, **query_kwargs):
kwargs = {}
if paths:
fragment = paths[-1]
if fragment:
if fragment.startswith("#"):
kwargs["fragment"] = fragment
paths.pop(-1)
kwargs["path"] = "/".join(self.normalize_paths(*paths))
kwargs["query_kwargs"] = query_kwargs
return kwargs | [
"def",
"_normalize_params",
"(",
"self",
",",
"*",
"paths",
",",
"*",
"*",
"query_kwargs",
")",
":",
"kwargs",
"=",
"{",
"}",
"if",
"paths",
":",
"fragment",
"=",
"paths",
"[",
"-",
"1",
"]",
"if",
"fragment",
":",
"if",
"fragment",
".",
"startswith"... | a lot of the helper methods are very similar, this handles their arguments | [
"a",
"lot",
"of",
"the",
"helper",
"methods",
"are",
"very",
"similar",
"this",
"handles",
"their",
"arguments"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/http.py#L465-L479 |
10,391 | Jaymon/endpoints | endpoints/http.py | Url.controller | def controller(self, *paths, **query_kwargs):
"""create a new url object using the controller path as a base
if you have a controller `foo.BarController` then this would create a new
Url instance with `host/foo/bar` as the base path, so any *paths will be
appended to `/foo/bar`
:example:
# controller foo.BarController
print url # http://host.com/foo/bar/some_random_path
print url.controller() # http://host.com/foo/bar
print url.controller("che", boom="bam") # http://host/foo/bar/che?boom=bam
:param *paths: list, the paths to append to the controller path
:param **query_kwargs: dict, any query string params to add
"""
kwargs = self._normalize_params(*paths, **query_kwargs)
if self.controller_path:
if "path" in kwargs:
paths = self.normalize_paths(self.controller_path, kwargs["path"])
kwargs["path"] = "/".join(paths)
else:
kwargs["path"] = self.controller_path
return self.create(self.root, **kwargs) | python | def controller(self, *paths, **query_kwargs):
kwargs = self._normalize_params(*paths, **query_kwargs)
if self.controller_path:
if "path" in kwargs:
paths = self.normalize_paths(self.controller_path, kwargs["path"])
kwargs["path"] = "/".join(paths)
else:
kwargs["path"] = self.controller_path
return self.create(self.root, **kwargs) | [
"def",
"controller",
"(",
"self",
",",
"*",
"paths",
",",
"*",
"*",
"query_kwargs",
")",
":",
"kwargs",
"=",
"self",
".",
"_normalize_params",
"(",
"*",
"paths",
",",
"*",
"*",
"query_kwargs",
")",
"if",
"self",
".",
"controller_path",
":",
"if",
"\"pa... | create a new url object using the controller path as a base
if you have a controller `foo.BarController` then this would create a new
Url instance with `host/foo/bar` as the base path, so any *paths will be
appended to `/foo/bar`
:example:
# controller foo.BarController
print url # http://host.com/foo/bar/some_random_path
print url.controller() # http://host.com/foo/bar
print url.controller("che", boom="bam") # http://host/foo/bar/che?boom=bam
:param *paths: list, the paths to append to the controller path
:param **query_kwargs: dict, any query string params to add | [
"create",
"a",
"new",
"url",
"object",
"using",
"the",
"controller",
"path",
"as",
"a",
"base"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/http.py#L535-L560 |
10,392 | Jaymon/endpoints | endpoints/http.py | Url.base | def base(self, *paths, **query_kwargs):
"""create a new url object using the current base path as a base
if you had requested /foo/bar, then this would append *paths and **query_kwargs
to /foo/bar
:example:
# current path: /foo/bar
print url # http://host.com/foo/bar
print url.base() # http://host.com/foo/bar
print url.base("che", boom="bam") # http://host/foo/bar/che?boom=bam
:param *paths: list, the paths to append to the current path without query params
:param **query_kwargs: dict, any query string params to add
"""
kwargs = self._normalize_params(*paths, **query_kwargs)
if self.path:
if "path" in kwargs:
paths = self.normalize_paths(self.path, kwargs["path"])
kwargs["path"] = "/".join(paths)
else:
kwargs["path"] = self.path
return self.create(self.root, **kwargs) | python | def base(self, *paths, **query_kwargs):
kwargs = self._normalize_params(*paths, **query_kwargs)
if self.path:
if "path" in kwargs:
paths = self.normalize_paths(self.path, kwargs["path"])
kwargs["path"] = "/".join(paths)
else:
kwargs["path"] = self.path
return self.create(self.root, **kwargs) | [
"def",
"base",
"(",
"self",
",",
"*",
"paths",
",",
"*",
"*",
"query_kwargs",
")",
":",
"kwargs",
"=",
"self",
".",
"_normalize_params",
"(",
"*",
"paths",
",",
"*",
"*",
"query_kwargs",
")",
"if",
"self",
".",
"path",
":",
"if",
"\"path\"",
"in",
... | create a new url object using the current base path as a base
if you had requested /foo/bar, then this would append *paths and **query_kwargs
to /foo/bar
:example:
# current path: /foo/bar
print url # http://host.com/foo/bar
print url.base() # http://host.com/foo/bar
print url.base("che", boom="bam") # http://host/foo/bar/che?boom=bam
:param *paths: list, the paths to append to the current path without query params
:param **query_kwargs: dict, any query string params to add | [
"create",
"a",
"new",
"url",
"object",
"using",
"the",
"current",
"base",
"path",
"as",
"a",
"base"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/http.py#L562-L586 |
10,393 | Jaymon/endpoints | endpoints/http.py | Url.host | def host(self, *paths, **query_kwargs):
"""create a new url object using the host as a base
if you had requested http://host/foo/bar, then this would append *paths and **query_kwargs
to http://host
:example:
# current url: http://host/foo/bar
print url # http://host.com/foo/bar
print url.host() # http://host.com/
print url.host("che", boom="bam") # http://host/che?boom=bam
:param *paths: list, the paths to append to the current path without query params
:param **query_kwargs: dict, any query string params to add
"""
kwargs = self._normalize_params(*paths, **query_kwargs)
return self.create(self.root, **kwargs) | python | def host(self, *paths, **query_kwargs):
kwargs = self._normalize_params(*paths, **query_kwargs)
return self.create(self.root, **kwargs) | [
"def",
"host",
"(",
"self",
",",
"*",
"paths",
",",
"*",
"*",
"query_kwargs",
")",
":",
"kwargs",
"=",
"self",
".",
"_normalize_params",
"(",
"*",
"paths",
",",
"*",
"*",
"query_kwargs",
")",
"return",
"self",
".",
"create",
"(",
"self",
".",
"root",... | create a new url object using the host as a base
if you had requested http://host/foo/bar, then this would append *paths and **query_kwargs
to http://host
:example:
# current url: http://host/foo/bar
print url # http://host.com/foo/bar
print url.host() # http://host.com/
print url.host("che", boom="bam") # http://host/che?boom=bam
:param *paths: list, the paths to append to the current path without query params
:param **query_kwargs: dict, any query string params to add | [
"create",
"a",
"new",
"url",
"object",
"using",
"the",
"host",
"as",
"a",
"base"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/http.py#L588-L606 |
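The `Url.controller`/`Url.base`/`Url.host` rows above all follow the same pattern: join path segments onto some base and append query kwargs. A hedged stdlib-only sketch of that pattern (this `build_url` helper is an illustration, not the repo's API):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def build_url(root, *paths, **query_kwargs):
    """Append path segments and query params to a root url."""
    scheme, netloc, path, query, fragment = urlsplit(root)
    segments = [p.strip("/") for p in (path,) + paths if p.strip("/")]
    new_path = "/" + "/".join(segments) if segments else ""
    new_query = urlencode(query_kwargs) if query_kwargs else query
    return urlunsplit((scheme, netloc, new_path, new_query, fragment))

assert build_url("http://host.com", "foo", "bar") == "http://host.com/foo/bar"
assert build_url("http://host.com/foo", "che", boom="bam") == "http://host.com/foo/che?boom=bam"
```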
10,394 | Jaymon/endpoints | endpoints/http.py | Request.accept_encoding | def accept_encoding(self):
"""The encoding the client requested the response to use"""
# https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Charset
ret = ""
accept_encoding = self.get_header("Accept-Charset", "")
if accept_encoding:
bits = re.split(r"\s+", accept_encoding)
bits = bits[0].split(";")
ret = bits[0]
return ret | python | def accept_encoding(self):
# https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Charset
ret = ""
accept_encoding = self.get_header("Accept-Charset", "")
if accept_encoding:
bits = re.split(r"\s+", accept_encoding)
bits = bits[0].split(";")
ret = bits[0]
return ret | [
"def",
"accept_encoding",
"(",
"self",
")",
":",
"# https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Charset",
"ret",
"=",
"\"\"",
"accept_encoding",
"=",
"self",
".",
"get_header",
"(",
"\"Accept-Charset\"",
",",
"\"\"",
")",
"if",
"accept_encoding",
":",... | The encoding the client requested the response to use | [
"The",
"encoding",
"the",
"client",
"requested",
"the",
"response",
"to",
"use"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/http.py#L816-L825 |
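`Request.accept_encoding` above takes the first entry of an `Accept-Charset` header, splitting on whitespace and then on `;` to drop q-values. A standalone sketch of the same parse (it additionally strips a trailing comma left by whitespace splitting, which the row's code does not):

```python
import re

def accept_encoding(headers):
    """First charset named in an Accept-Charset header, e.g. 'utf-8'."""
    val = headers.get("accept-charset", "")
    if not val:
        return ""
    first = re.split(r"\s+", val)[0]
    return first.split(";")[0].rstrip(",")

assert accept_encoding({"accept-charset": "utf-8, iso-8859-1;q=0.5"}) == "utf-8"
assert accept_encoding({"accept-charset": "iso-8859-5"}) == "iso-8859-5"
assert accept_encoding({}) == ""
```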
10,395 | Jaymon/endpoints | endpoints/http.py | Request.encoding | def encoding(self):
"""the character encoding of the request, usually only set in POST type requests"""
encoding = None
ct = self.get_header('content-type')
if ct:
ah = AcceptHeader(ct)
if ah.media_types:
encoding = ah.media_types[0][2].get("charset", None)
return encoding | python | def encoding(self):
encoding = None
ct = self.get_header('content-type')
if ct:
ah = AcceptHeader(ct)
if ah.media_types:
encoding = ah.media_types[0][2].get("charset", None)
return encoding | [
"def",
"encoding",
"(",
"self",
")",
":",
"encoding",
"=",
"None",
"ct",
"=",
"self",
".",
"get_header",
"(",
"'content-type'",
")",
"if",
"ct",
":",
"ah",
"=",
"AcceptHeader",
"(",
"ct",
")",
"if",
"ah",
".",
"media_types",
":",
"encoding",
"=",
"ah... | the character encoding of the request, usually only set in POST type requests | [
"the",
"character",
"encoding",
"of",
"the",
"request",
"usually",
"only",
"set",
"in",
"POST",
"type",
"requests"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/http.py#L828-L837 |
10,396 | Jaymon/endpoints | endpoints/http.py | Request.access_token | def access_token(self):
"""return an Oauth 2.0 Bearer access token if it can be found"""
access_token = self.get_auth_bearer()
if not access_token:
access_token = self.query_kwargs.get('access_token', '')
if not access_token:
access_token = self.body_kwargs.get('access_token', '')
return access_token | python | def access_token(self):
access_token = self.get_auth_bearer()
if not access_token:
access_token = self.query_kwargs.get('access_token', '')
if not access_token:
access_token = self.body_kwargs.get('access_token', '')
return access_token | [
"def",
"access_token",
"(",
"self",
")",
":",
"access_token",
"=",
"self",
".",
"get_auth_bearer",
"(",
")",
"if",
"not",
"access_token",
":",
"access_token",
"=",
"self",
".",
"query_kwargs",
".",
"get",
"(",
"'access_token'",
",",
"''",
")",
"if",
"not",... | return an Oauth 2.0 Bearer access token if it can be found | [
"return",
"an",
"Oauth",
"2",
".",
"0",
"Bearer",
"access",
"token",
"if",
"it",
"can",
"be",
"found"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/http.py#L840-L848 |
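`Request.access_token` above looks in three places in order: the Bearer authorization header, then query params, then body params. A standalone sketch of that lookup order, with a simple regex-based Bearer extractor standing in for the repo's `get_auth_bearer` (both helper names here are illustrative):

```python
import re

def get_auth_bearer(headers):
    """Pull the token out of an 'Authorization: Bearer <token>' header."""
    auth = headers.get("authorization", "")
    m = re.match(r"\s*Bearer\s+(\S+)", auth, re.I)
    return m.group(1) if m else ""

def find_access_token(headers, query_kwargs, body_kwargs):
    """Mirror the lookup order in the row above: header, then query, then body."""
    return (get_auth_bearer(headers)
            or query_kwargs.get("access_token", "")
            or body_kwargs.get("access_token", ""))

assert find_access_token({"authorization": "Bearer abc123"}, {}, {}) == "abc123"
assert find_access_token({}, {"access_token": "qtoken"}, {}) == "qtoken"
assert find_access_token({}, {}, {"access_token": "btoken"}) == "btoken"
assert find_access_token({}, {}, {}) == ""
```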
10,397 | Jaymon/endpoints | endpoints/http.py | Request.client_tokens | def client_tokens(self):
"""try and get Oauth 2.0 client id and secret first from basic auth header,
then from GET or POST parameters
return -- tuple -- client_id, client_secret
"""
client_id, client_secret = self.get_auth_basic()
if not client_id and not client_secret:
client_id = self.query_kwargs.get('client_id', '')
client_secret = self.query_kwargs.get('client_secret', '')
if not client_id and not client_secret:
client_id = self.body_kwargs.get('client_id', '')
client_secret = self.body_kwargs.get('client_secret', '')
return client_id, client_secret | python | def client_tokens(self):
client_id, client_secret = self.get_auth_basic()
if not client_id and not client_secret:
client_id = self.query_kwargs.get('client_id', '')
client_secret = self.query_kwargs.get('client_secret', '')
if not client_id and not client_secret:
client_id = self.body_kwargs.get('client_id', '')
client_secret = self.body_kwargs.get('client_secret', '')
return client_id, client_secret | [
"def",
"client_tokens",
"(",
"self",
")",
":",
"client_id",
",",
"client_secret",
"=",
"self",
".",
"get_auth_basic",
"(",
")",
"if",
"not",
"client_id",
"and",
"not",
"client_secret",
":",
"client_id",
"=",
"self",
".",
"query_kwargs",
".",
"get",
"(",
"'... | try and get Oauth 2.0 client id and secret first from basic auth header,
then from GET or POST parameters
return -- tuple -- client_id, client_secret | [
"try",
"and",
"get",
"Oauth",
"2",
".",
"0",
"client",
"id",
"and",
"secret",
"first",
"from",
"basic",
"auth",
"header",
"then",
"from",
"GET",
"or",
"POST",
"parameters"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/http.py#L851-L865 |
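`client_tokens` relies on a `get_auth_basic` helper that is not shown in this row. A hedged sketch of what such a helper typically does — decode an `Authorization: Basic` header into an `(id, secret)` pair; the exact behavior of the real helper in `Jaymon/endpoints` may differ:

```python
import base64

def get_auth_basic(headers):
    # Decode "Authorization: Basic base64(client_id:client_secret)";
    # return ("", "") when the header is missing or malformed
    auth = headers.get("Authorization", "")
    if not auth.lower().startswith("basic "):
        return "", ""
    try:
        decoded = base64.b64decode(auth[6:]).decode("utf-8")
        client_id, _, client_secret = decoded.partition(":")
        return client_id, client_secret
    except Exception:
        return "", ""
```

With this in place, the `client_tokens` fallback to query and body kwargs works exactly like the `access_token` cascade above it.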
10,398 | Jaymon/endpoints | endpoints/http.py | Request.ips | def ips(self):
"""return all the possible ips of this request, this will include public and private ips"""
r = []
names = ['X_FORWARDED_FOR', 'CLIENT_IP', 'X_REAL_IP', 'X_FORWARDED',
'X_CLUSTER_CLIENT_IP', 'FORWARDED_FOR', 'FORWARDED', 'VIA',
'REMOTE_ADDR']
for name in names:
vs = self.get_header(name, '')
if vs:
r.extend(map(lambda v: v.strip(), vs.split(',')))
vs = self.environ.get(name, '')
if vs:
r.extend(map(lambda v: v.strip(), vs.split(',')))
return r | python | def ips(self):
r = []
names = ['X_FORWARDED_FOR', 'CLIENT_IP', 'X_REAL_IP', 'X_FORWARDED',
'X_CLUSTER_CLIENT_IP', 'FORWARDED_FOR', 'FORWARDED', 'VIA',
'REMOTE_ADDR']
for name in names:
vs = self.get_header(name, '')
if vs:
r.extend(map(lambda v: v.strip(), vs.split(',')))
vs = self.environ.get(name, '')
if vs:
r.extend(map(lambda v: v.strip(), vs.split(',')))
return r | [
"def",
"ips",
"(",
"self",
")",
":",
"r",
"=",
"[",
"]",
"names",
"=",
"[",
"'X_FORWARDED_FOR'",
",",
"'CLIENT_IP'",
",",
"'X_REAL_IP'",
",",
"'X_FORWARDED'",
",",
"'X_CLUSTER_CLIENT_IP'",
",",
"'FORWARDED_FOR'",
",",
"'FORWARDED'",
",",
"'VIA'",
",",
"'REMO... | return all the possible ips of this request, this will include public and private ips | [
"return",
"all",
"the",
"possible",
"ips",
"of",
"this",
"request",
"this",
"will",
"include",
"public",
"and",
"private",
"ips"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/http.py#L868-L884 |
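The `ips` property harvests candidate client addresses from every common proxy/forwarding header, splitting comma-separated hop lists. A self-contained sketch of that harvesting step, assuming a plain dict of headers rather than the class's paired `get_header`/`environ` lookups:

```python
def candidate_ips(headers):
    # Same header priority order as the dataset row above; comma-separated
    # values (as X-Forwarded-For produces) are split into individual hops
    names = ['X_FORWARDED_FOR', 'CLIENT_IP', 'X_REAL_IP', 'X_FORWARDED',
             'X_CLUSTER_CLIENT_IP', 'FORWARDED_FOR', 'FORWARDED', 'VIA',
             'REMOTE_ADDR']
    ips = []
    for name in names:
        value = headers.get(name, '')
        if value:
            ips.extend(part.strip() for part in value.split(','))
    return ips
```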
10,399 | Jaymon/endpoints | endpoints/http.py | Request.ip | def ip(self):
"""return the public ip address"""
r = ''
# this was compiled from here:
# https://github.com/un33k/django-ipware
# http://www.ietf.org/rfc/rfc3330.txt (IPv4)
# http://www.ietf.org/rfc/rfc5156.txt (IPv6)
# https://en.wikipedia.org/wiki/Reserved_IP_addresses
format_regex = re.compile(r'\s')
ip_regex = re.compile(r'^(?:{})'.format(r'|'.join([
r'0\.', # reserved for 'self-identification'
r'10\.', # class A
r'169\.254', # link local block
r'172\.(?:1[6-9]|2[0-9]|3[0-1])\.', # class B
r'192\.0\.2\.', # documentation/examples
r'192\.168', # class C
r'255\.{3}', # broadcast address
r'2001\:db8', # documentation/examples
r'fc00\:', # private
r'fe80\:', # link local unicast
r'ff00\:', # multicast
r'127\.', # localhost
r'\:\:1' # localhost
])))
ips = self.ips
for ip in ips:
if not format_regex.search(ip) and not ip_regex.match(ip):
r = ip
break
return r | python | def ip(self):
r = ''
# this was compiled from here:
# https://github.com/un33k/django-ipware
# http://www.ietf.org/rfc/rfc3330.txt (IPv4)
# http://www.ietf.org/rfc/rfc5156.txt (IPv6)
# https://en.wikipedia.org/wiki/Reserved_IP_addresses
format_regex = re.compile(r'\s')
ip_regex = re.compile(r'^(?:{})'.format(r'|'.join([
r'0\.', # reserved for 'self-identification'
r'10\.', # class A
r'169\.254', # link local block
r'172\.(?:1[6-9]|2[0-9]|3[0-1])\.', # class B
r'192\.0\.2\.', # documentation/examples
r'192\.168', # class C
r'255\.{3}', # broadcast address
r'2001\:db8', # documentation/examples
r'fc00\:', # private
r'fe80\:', # link local unicast
r'ff00\:', # multicast
r'127\.', # localhost
r'\:\:1' # localhost
])))
ips = self.ips
for ip in ips:
if not format_regex.search(ip) and not ip_regex.match(ip):
r = ip
break
return r | [
"def",
"ip",
"(",
"self",
")",
":",
"r",
"=",
"''",
"# this was compiled from here:",
"# https://github.com/un33k/django-ipware",
"# http://www.ietf.org/rfc/rfc3330.txt (IPv4)",
"# http://www.ietf.org/rfc/rfc5156.txt (IPv6)",
"# https://en.wikipedia.org/wiki/Reserved_IP_addresses",
"form... | return the public ip address | [
"return",
"the",
"public",
"ip",
"address"
] | 2f1c4ae2c69a168e69447d3d8395ada7becaa5fb | https://github.com/Jaymon/endpoints/blob/2f1c4ae2c69a168e69447d3d8395ada7becaa5fb/endpoints/http.py#L887-L919 |
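The `ip` property then filters those candidates down to the first one that is not in a reserved/private range. A sketch of that filtering with the same prefix list — note the sketch spells out the broadcast address in full, whereas the row's `r'255\.{3}'` literally matches `255...` (three dots), which looks like a latent bug in the original:

```python
import re

# Reserved/private prefixes, per RFC 3330 (IPv4) and RFC 5156 (IPv6)
_PRIVATE_RE = re.compile(r'^(?:' + r'|'.join([
    r'0\.',                               # 'self-identification'
    r'10\.',                              # class A private
    r'169\.254',                          # link local
    r'172\.(?:1[6-9]|2[0-9]|3[0-1])\.',   # class B private
    r'192\.0\.2\.',                       # documentation/examples
    r'192\.168',                          # class C private
    r'255\.255\.255\.255',                # broadcast
    r'2001\:db8',                         # IPv6 documentation
    r'fc00\:',                            # IPv6 private
    r'fe80\:',                            # IPv6 link local
    r'ff00\:',                            # IPv6 multicast
    r'127\.',                             # localhost
    r'\:\:1',                             # IPv6 localhost
]) + r')')

def public_ip(candidates):
    # First candidate that is well-formed (no whitespace) and not reserved
    for ip in candidates:
        ip = ip.strip()
        if ip and not _PRIVATE_RE.match(ip):
            return ip
    return ''
```

Typical usage pairs this with the header-harvesting step: `public_ip(candidate_ips_from_headers)` yields the client's public address even behind several proxies.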