| repo | path | url | code | code_tokens | docstring | docstring_tokens | language | partition |
|---|---|---|---|---|---|---|---|---|
pjuren/pyokit | src/pyokit/io/genomeAlignment.py | https://github.com/pjuren/pyokit/blob/fddae123b5d817daa39496183f19c000d9c3791f/src/pyokit/io/genomeAlignment.py#L63-L72 | def __trim_extensions_dot(exts):
"""trim leading dots from extensions and drop any empty strings."""
if exts is None:
return None
res = []
for i in range(0, len(exts)):
if exts[i] == "":
continue
res.append(__trim_extension_dot(exts[i]))
return res | [
"def",
"__trim_extensions_dot",
"(",
"exts",
")",
":",
"if",
"exts",
"is",
"None",
":",
"return",
"None",
"res",
"=",
"[",
"]",
"for",
"i",
"in",
"range",
"(",
"0",
",",
"len",
"(",
"exts",
")",
")",
":",
"if",
"exts",
"[",
"i",
"]",
"==",
"\"\... | trim leading dots from extensions and drop any empty strings. | [
"trim",
"leading",
"dots",
"from",
"extensions",
"and",
"drop",
"any",
"empty",
"strings",
"."
] | python | train |
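The pyokit row above trims leading dots from a list of file extensions, skipping empty strings. A minimal stdlib sketch of the same behaviour — `trim_extension_dot` below is a hypothetical stand-in for the private `__trim_extension_dot` helper, which the row does not show:

```python
def trim_extension_dot(ext):
    # Hypothetical stand-in for pyokit's private __trim_extension_dot:
    # strip any leading dots from a single extension string.
    return ext.lstrip(".")


def trim_extensions_dot(exts):
    """Trim leading dots from extensions and drop any empty strings."""
    if exts is None:
        return None
    return [trim_extension_dot(e) for e in exts if e != ""]
```

The list comprehension folds the original's explicit loop and `continue` into one expression without changing the result.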
pybel/pybel | src/pybel/io/nodelink.py | https://github.com/pybel/pybel/blob/c8a7a1bdae4c475fa2a8c77f3a9a5f6d79556ca0/src/pybel/io/nodelink.py#L77-L80 | def from_json_file(file: TextIO, check_version=True) -> BELGraph:
"""Build a graph from the Node-Link JSON contained in the given file."""
graph_json_dict = json.load(file)
return from_json(graph_json_dict, check_version=check_version) | [
"def",
"from_json_file",
"(",
"file",
":",
"TextIO",
",",
"check_version",
"=",
"True",
")",
"->",
"BELGraph",
":",
"graph_json_dict",
"=",
"json",
".",
"load",
"(",
"file",
")",
"return",
"from_json",
"(",
"graph_json_dict",
",",
"check_version",
"=",
"chec... | Build a graph from the Node-Link JSON contained in the given file. | [
"Build",
"a",
"graph",
"from",
"the",
"Node",
"-",
"Link",
"JSON",
"contained",
"in",
"the",
"given",
"file",
"."
] | python | train |
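pybel's `from_json_file` is a thin wrapper: parse JSON from a file handle, then hand the resulting dict to a graph builder. A dependency-free sketch of the same pattern — the `"nodes"`/`"links"` keys follow the usual node-link convention, and `build_adjacency` is an illustrative stand-in for pybel's `from_json`, not its actual implementation:

```python
import json
from io import StringIO


def build_adjacency(graph_json_dict):
    # Illustrative stand-in for from_json: turn a node-link dict
    # into a plain adjacency mapping {node_id: [successor_ids]}.
    adj = {node["id"]: [] for node in graph_json_dict["nodes"]}
    for link in graph_json_dict["links"]:
        adj[link["source"]].append(link["target"])
    return adj


def from_json_file(file):
    """Build a graph from the node-link JSON contained in the given file."""
    graph_json_dict = json.load(file)
    return build_adjacency(graph_json_dict)


# StringIO stands in for an open file handle.
fh = StringIO('{"nodes": [{"id": "a"}, {"id": "b"}], '
              '"links": [{"source": "a", "target": "b"}]}')
```

The two-step shape (decode, then build) keeps file handling out of the graph constructor, so the same builder can serve JSON arriving from any source.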
wavycloud/pyboto3 | pyboto3/cloudwatch.py | https://github.com/wavycloud/pyboto3/blob/924957ccf994303713a4eed90b775ff2ab95b2e5/pyboto3/cloudwatch.py#L571-L778 | def put_metric_alarm(AlarmName=None, AlarmDescription=None, ActionsEnabled=None, OKActions=None, AlarmActions=None, InsufficientDataActions=None, MetricName=None, Namespace=None, Statistic=None, ExtendedStatistic=None, Dimensions=None, Period=None, Unit=None, EvaluationPeriods=None, Threshold=None, ComparisonOperator=None, TreatMissingData=None, EvaluateLowSampleCountPercentile=None):
"""
Creates or updates an alarm and associates it with the specified metric. Optionally, this operation can associate one or more Amazon SNS resources with the alarm.
When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA . The alarm is evaluated and its state is set appropriately. Any actions associated with the state are then executed.
When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm.
If you are an AWS Identity and Access Management (IAM) user, you must have Amazon EC2 permissions for some operations:
If you have read/write permissions for Amazon CloudWatch but not for Amazon EC2, you can still create an alarm, but the stop or terminate actions won't be performed. However, if you are later granted the required permissions, the alarm actions that you created earlier will be performed.
If you are using an IAM role (for example, an Amazon EC2 instance profile), you cannot stop or terminate the instance using alarm actions. However, you can still see the alarm state and perform any other actions such as Amazon SNS notifications or Auto Scaling policies.
If you are using temporary security credentials granted using the AWS Security Token Service (AWS STS), you cannot stop or terminate an Amazon EC2 instance using alarm actions.
Note that you must create at least one stop, terminate, or reboot alarm using the Amazon EC2 or CloudWatch console to create the EC2ActionsAccess IAM role. After this IAM role is created, you can create stop, terminate, or reboot alarms using a command-line interface or an API.
See also: AWS API Documentation
:example: response = client.put_metric_alarm(
AlarmName='string',
AlarmDescription='string',
ActionsEnabled=True|False,
OKActions=[
'string',
],
AlarmActions=[
'string',
],
InsufficientDataActions=[
'string',
],
MetricName='string',
Namespace='string',
Statistic='SampleCount'|'Average'|'Sum'|'Minimum'|'Maximum',
ExtendedStatistic='string',
Dimensions=[
{
'Name': 'string',
'Value': 'string'
},
],
Period=123,
Unit='Seconds'|'Microseconds'|'Milliseconds'|'Bytes'|'Kilobytes'|'Megabytes'|'Gigabytes'|'Terabytes'|'Bits'|'Kilobits'|'Megabits'|'Gigabits'|'Terabits'|'Percent'|'Count'|'Bytes/Second'|'Kilobytes/Second'|'Megabytes/Second'|'Gigabytes/Second'|'Terabytes/Second'|'Bits/Second'|'Kilobits/Second'|'Megabits/Second'|'Gigabits/Second'|'Terabits/Second'|'Count/Second'|'None',
EvaluationPeriods=123,
Threshold=123.0,
ComparisonOperator='GreaterThanOrEqualToThreshold'|'GreaterThanThreshold'|'LessThanThreshold'|'LessThanOrEqualToThreshold',
TreatMissingData='string',
EvaluateLowSampleCountPercentile='string'
)
:type AlarmName: string
:param AlarmName: [REQUIRED]
The name for the alarm. This name must be unique within the AWS account.
:type AlarmDescription: string
:param AlarmDescription: The description for the alarm.
:type ActionsEnabled: boolean
:param ActionsEnabled: Indicates whether actions should be executed during any changes to the alarm state.
:type OKActions: list
:param OKActions: The actions to execute when this alarm transitions to an OK state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
:type AlarmActions: list
:param AlarmActions: The actions to execute when this alarm transitions to the ALARM state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
:type InsufficientDataActions: list
:param InsufficientDataActions: The actions to execute when this alarm transitions to the INSUFFICIENT_DATA state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
:type MetricName: string
:param MetricName: [REQUIRED]
The name for the metric associated with the alarm.
:type Namespace: string
:param Namespace: [REQUIRED]
The namespace for the metric associated with the alarm.
:type Statistic: string
:param Statistic: The statistic for the metric associated with the alarm, other than percentile. For percentile statistics, use ExtendedStatistic .
:type ExtendedStatistic: string
:param ExtendedStatistic: The percentile statistic for the metric associated with the alarm. Specify a value between p0.0 and p100.
:type Dimensions: list
:param Dimensions: The dimensions for the metric associated with the alarm.
(dict) --Expands the identity of a metric.
Name (string) -- [REQUIRED]The name of the dimension.
Value (string) -- [REQUIRED]The value representing the dimension measurement.
:type Period: integer
:param Period: [REQUIRED]
The period, in seconds, over which the specified statistic is applied.
:type Unit: string
:param Unit: The unit of measure for the statistic. For example, the units for the Amazon EC2 NetworkIn metric are Bytes because NetworkIn tracks the number of bytes that an instance receives on all network interfaces. You can also specify a unit when you create a custom metric. Units help provide conceptual meaning to your data. Metric data points that specify a unit of measure, such as Percent, are aggregated separately.
If you specify a unit, you must use a unit that is appropriate for the metric. Otherwise, the Amazon CloudWatch alarm can get stuck in the INSUFFICIENT DATA state.
:type EvaluationPeriods: integer
:param EvaluationPeriods: [REQUIRED]
The number of periods over which data is compared to the specified threshold.
:type Threshold: float
:param Threshold: [REQUIRED]
The value against which the specified statistic is compared.
:type ComparisonOperator: string
:param ComparisonOperator: [REQUIRED]
The arithmetic operation to use when comparing the specified statistic and threshold. The specified statistic value is used as the first operand.
:type TreatMissingData: string
:param TreatMissingData: Sets how this alarm is to handle missing data points. If TreatMissingData is omitted, the default behavior of missing is used. For more information, see Configuring How CloudWatch Alarms Treats Missing Data .
Valid Values: breaching | notBreaching | ignore | missing
:type EvaluateLowSampleCountPercentile: string
:param EvaluateLowSampleCountPercentile: Used only for alarms based on percentiles. If you specify ignore , the alarm state will not change during periods with too few data points to be statistically significant. If you specify evaluate or omit this parameter, the alarm will always be evaluated and possibly change state no matter how many data points are available. For more information, see Percentile-Based CloudWatch Alarms and Low Data Samples .
Valid Values: evaluate | ignore
:returns:
AlarmName (string) -- [REQUIRED]
The name for the alarm. This name must be unique within the AWS account.
AlarmDescription (string) -- The description for the alarm.
ActionsEnabled (boolean) -- Indicates whether actions should be executed during any changes to the alarm state.
OKActions (list) -- The actions to execute when this alarm transitions to an OK state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
AlarmActions (list) -- The actions to execute when this alarm transitions to the ALARM state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
InsufficientDataActions (list) -- The actions to execute when this alarm transitions to the INSUFFICIENT_DATA state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
MetricName (string) -- [REQUIRED]
The name for the metric associated with the alarm.
Namespace (string) -- [REQUIRED]
The namespace for the metric associated with the alarm.
Statistic (string) -- The statistic for the metric associated with the alarm, other than percentile. For percentile statistics, use ExtendedStatistic .
ExtendedStatistic (string) -- The percentile statistic for the metric associated with the alarm. Specify a value between p0.0 and p100.
Dimensions (list) -- The dimensions for the metric associated with the alarm.
(dict) --Expands the identity of a metric.
Name (string) -- [REQUIRED]The name of the dimension.
Value (string) -- [REQUIRED]The value representing the dimension measurement.
Period (integer) -- [REQUIRED]
The period, in seconds, over which the specified statistic is applied.
Unit (string) -- The unit of measure for the statistic. For example, the units for the Amazon EC2 NetworkIn metric are Bytes because NetworkIn tracks the number of bytes that an instance receives on all network interfaces. You can also specify a unit when you create a custom metric. Units help provide conceptual meaning to your data. Metric data points that specify a unit of measure, such as Percent, are aggregated separately.
If you specify a unit, you must use a unit that is appropriate for the metric. Otherwise, the Amazon CloudWatch alarm can get stuck in the INSUFFICIENT DATA state.
EvaluationPeriods (integer) -- [REQUIRED]
The number of periods over which data is compared to the specified threshold.
Threshold (float) -- [REQUIRED]
The value against which the specified statistic is compared.
ComparisonOperator (string) -- [REQUIRED]
The arithmetic operation to use when comparing the specified statistic and threshold. The specified statistic value is used as the first operand.
TreatMissingData (string) -- Sets how this alarm is to handle missing data points. If TreatMissingData is omitted, the default behavior of missing is used. For more information, see Configuring How CloudWatch Alarms Treats Missing Data .
Valid Values: breaching | notBreaching | ignore | missing
EvaluateLowSampleCountPercentile (string) -- Used only for alarms based on percentiles. If you specify ignore , the alarm state will not change during periods with too few data points to be statistically significant. If you specify evaluate or omit this parameter, the alarm will always be evaluated and possibly change state no matter how many data points are available. For more information, see Percentile-Based CloudWatch Alarms and Low Data Samples .
Valid Values: evaluate | ignore
"""
pass | [
"def",
"put_metric_alarm",
"(",
"AlarmName",
"=",
"None",
",",
"AlarmDescription",
"=",
"None",
",",
"ActionsEnabled",
"=",
"None",
",",
"OKActions",
"=",
"None",
",",
"AlarmActions",
"=",
"None",
",",
"InsufficientDataActions",
"=",
"None",
",",
"MetricName",
... | Creates or updates an alarm and associates it with the specified metric. Optionally, this operation can associate one or more Amazon SNS resources with the alarm.
When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA . The alarm is evaluated and its state is set appropriately. Any actions associated with the state are then executed.
When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm.
If you are an AWS Identity and Access Management (IAM) user, you must have Amazon EC2 permissions for some operations:
If you have read/write permissions for Amazon CloudWatch but not for Amazon EC2, you can still create an alarm, but the stop or terminate actions won't be performed. However, if you are later granted the required permissions, the alarm actions that you created earlier will be performed.
If you are using an IAM role (for example, an Amazon EC2 instance profile), you cannot stop or terminate the instance using alarm actions. However, you can still see the alarm state and perform any other actions such as Amazon SNS notifications or Auto Scaling policies.
If you are using temporary security credentials granted using the AWS Security Token Service (AWS STS), you cannot stop or terminate an Amazon EC2 instance using alarm actions.
Note that you must create at least one stop, terminate, or reboot alarm using the Amazon EC2 or CloudWatch console to create the EC2ActionsAccess IAM role. After this IAM role is created, you can create stop, terminate, or reboot alarms using a command-line interface or an API.
See also: AWS API Documentation
:example: response = client.put_metric_alarm(
AlarmName='string',
AlarmDescription='string',
ActionsEnabled=True|False,
OKActions=[
'string',
],
AlarmActions=[
'string',
],
InsufficientDataActions=[
'string',
],
MetricName='string',
Namespace='string',
Statistic='SampleCount'|'Average'|'Sum'|'Minimum'|'Maximum',
ExtendedStatistic='string',
Dimensions=[
{
'Name': 'string',
'Value': 'string'
},
],
Period=123,
Unit='Seconds'|'Microseconds'|'Milliseconds'|'Bytes'|'Kilobytes'|'Megabytes'|'Gigabytes'|'Terabytes'|'Bits'|'Kilobits'|'Megabits'|'Gigabits'|'Terabits'|'Percent'|'Count'|'Bytes/Second'|'Kilobytes/Second'|'Megabytes/Second'|'Gigabytes/Second'|'Terabytes/Second'|'Bits/Second'|'Kilobits/Second'|'Megabits/Second'|'Gigabits/Second'|'Terabits/Second'|'Count/Second'|'None',
EvaluationPeriods=123,
Threshold=123.0,
ComparisonOperator='GreaterThanOrEqualToThreshold'|'GreaterThanThreshold'|'LessThanThreshold'|'LessThanOrEqualToThreshold',
TreatMissingData='string',
EvaluateLowSampleCountPercentile='string'
)
:type AlarmName: string
:param AlarmName: [REQUIRED]
The name for the alarm. This name must be unique within the AWS account.
:type AlarmDescription: string
:param AlarmDescription: The description for the alarm.
:type ActionsEnabled: boolean
:param ActionsEnabled: Indicates whether actions should be executed during any changes to the alarm state.
:type OKActions: list
:param OKActions: The actions to execute when this alarm transitions to an OK state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
:type AlarmActions: list
:param AlarmActions: The actions to execute when this alarm transitions to the ALARM state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
:type InsufficientDataActions: list
:param InsufficientDataActions: The actions to execute when this alarm transitions to the INSUFFICIENT_DATA state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
:type MetricName: string
:param MetricName: [REQUIRED]
The name for the metric associated with the alarm.
:type Namespace: string
:param Namespace: [REQUIRED]
The namespace for the metric associated with the alarm.
:type Statistic: string
:param Statistic: The statistic for the metric associated with the alarm, other than percentile. For percentile statistics, use ExtendedStatistic .
:type ExtendedStatistic: string
:param ExtendedStatistic: The percentile statistic for the metric associated with the alarm. Specify a value between p0.0 and p100.
:type Dimensions: list
:param Dimensions: The dimensions for the metric associated with the alarm.
(dict) --Expands the identity of a metric.
Name (string) -- [REQUIRED]The name of the dimension.
Value (string) -- [REQUIRED]The value representing the dimension measurement.
:type Period: integer
:param Period: [REQUIRED]
The period, in seconds, over which the specified statistic is applied.
:type Unit: string
:param Unit: The unit of measure for the statistic. For example, the units for the Amazon EC2 NetworkIn metric are Bytes because NetworkIn tracks the number of bytes that an instance receives on all network interfaces. You can also specify a unit when you create a custom metric. Units help provide conceptual meaning to your data. Metric data points that specify a unit of measure, such as Percent, are aggregated separately.
If you specify a unit, you must use a unit that is appropriate for the metric. Otherwise, the Amazon CloudWatch alarm can get stuck in the INSUFFICIENT DATA state.
:type EvaluationPeriods: integer
:param EvaluationPeriods: [REQUIRED]
The number of periods over which data is compared to the specified threshold.
:type Threshold: float
:param Threshold: [REQUIRED]
The value against which the specified statistic is compared.
:type ComparisonOperator: string
:param ComparisonOperator: [REQUIRED]
The arithmetic operation to use when comparing the specified statistic and threshold. The specified statistic value is used as the first operand.
:type TreatMissingData: string
:param TreatMissingData: Sets how this alarm is to handle missing data points. If TreatMissingData is omitted, the default behavior of missing is used. For more information, see Configuring How CloudWatch Alarms Treats Missing Data .
Valid Values: breaching | notBreaching | ignore | missing
:type EvaluateLowSampleCountPercentile: string
:param EvaluateLowSampleCountPercentile: Used only for alarms based on percentiles. If you specify ignore , the alarm state will not change during periods with too few data points to be statistically significant. If you specify evaluate or omit this parameter, the alarm will always be evaluated and possibly change state no matter how many data points are available. For more information, see Percentile-Based CloudWatch Alarms and Low Data Samples .
Valid Values: evaluate | ignore
:returns:
AlarmName (string) -- [REQUIRED]
The name for the alarm. This name must be unique within the AWS account.
AlarmDescription (string) -- The description for the alarm.
ActionsEnabled (boolean) -- Indicates whether actions should be executed during any changes to the alarm state.
OKActions (list) -- The actions to execute when this alarm transitions to an OK state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
AlarmActions (list) -- The actions to execute when this alarm transitions to the ALARM state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
InsufficientDataActions (list) -- The actions to execute when this alarm transitions to the INSUFFICIENT_DATA state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region :ec2:stop | arn:aws:automate:region :ec2:terminate | arn:aws:automate:region :ec2:recover
Valid Values (for use with IAM roles): arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:us-east-1:{customer-account }:action/actions/AWS_EC2.InstanceId.Reboot/1.0
(string) --
MetricName (string) -- [REQUIRED]
The name for the metric associated with the alarm.
Namespace (string) -- [REQUIRED]
The namespace for the metric associated with the alarm.
Statistic (string) -- The statistic for the metric associated with the alarm, other than percentile. For percentile statistics, use ExtendedStatistic .
ExtendedStatistic (string) -- The percentile statistic for the metric associated with the alarm. Specify a value between p0.0 and p100.
Dimensions (list) -- The dimensions for the metric associated with the alarm.
(dict) --Expands the identity of a metric.
Name (string) -- [REQUIRED]The name of the dimension.
Value (string) -- [REQUIRED]The value representing the dimension measurement.
Period (integer) -- [REQUIRED]
The period, in seconds, over which the specified statistic is applied.
Unit (string) -- The unit of measure for the statistic. For example, the units for the Amazon EC2 NetworkIn metric are Bytes because NetworkIn tracks the number of bytes that an instance receives on all network interfaces. You can also specify a unit when you create a custom metric. Units help provide conceptual meaning to your data. Metric data points that specify a unit of measure, such as Percent, are aggregated separately.
If you specify a unit, you must use a unit that is appropriate for the metric. Otherwise, the Amazon CloudWatch alarm can get stuck in the INSUFFICIENT DATA state.
EvaluationPeriods (integer) -- [REQUIRED]
The number of periods over which data is compared to the specified threshold.
Threshold (float) -- [REQUIRED]
The value against which the specified statistic is compared.
ComparisonOperator (string) -- [REQUIRED]
The arithmetic operation to use when comparing the specified statistic and threshold. The specified statistic value is used as the first operand.
TreatMissingData (string) -- Sets how this alarm is to handle missing data points. If TreatMissingData is omitted, the default behavior of missing is used. For more information, see Configuring How CloudWatch Alarms Treats Missing Data .
Valid Values: breaching | notBreaching | ignore | missing
EvaluateLowSampleCountPercentile (string) -- Used only for alarms based on percentiles. If you specify ignore , the alarm state will not change during periods with too few data points to be statistically significant. If you specify evaluate or omit this parameter, the alarm will always be evaluated and possibly change state no matter how many data points are available. For more information, see Percentile-Based CloudWatch Alarms and Low Data Samples .
Valid Values: evaluate | ignore | [
"Creates",
"or",
"updates",
"an",
"alarm",
"and",
"associates",
"it",
"with",
"the",
"specified",
"metric",
".",
"Optionally",
"this",
"operation",
"can",
"associate",
"one",
"or",
"more",
"Amazon",
"SNS",
"resources",
"with",
"the",
"alarm",
".",
"When",
"t... | python | train |
jeremymcrae/denovonear | denovonear/ensembl_requester.py | https://github.com/jeremymcrae/denovonear/blob/feaab0fc77e89d70b31e8092899e4f0e68bac9fe/denovonear/ensembl_requester.py#L324-L344 | def get_exon_ranges_for_transcript(self, transcript_id):
""" obtain the sequence for a transcript from ensembl
"""
headers = {"content-type": "application/json"}
self.attempt = 0
ext = "/overlap/id/{}?feature=exon".format(transcript_id)
r = self.ensembl_request(ext, headers)
exon_ranges = []
for exon in json.loads(r):
if exon["Parent"] != transcript_id:
continue
start = exon["start"]
end = exon["end"]
exon_ranges.append((start, end))
return exon_ranges | [
"def",
"get_exon_ranges_for_transcript",
"(",
"self",
",",
"transcript_id",
")",
":",
"headers",
"=",
"{",
"\"content-type\"",
":",
"\"application/json\"",
"}",
"self",
".",
"attempt",
"=",
"0",
"ext",
"=",
"\"/overlap/id/{}?feature=exon\"",
".",
"format",
"(",
"t... | obtain the sequence for a transcript from ensembl | [
"obtain",
"the",
"sequence",
"for",
"a",
"transcript",
"from",
"ensembl"
] | python | train |
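The denovonear method queries Ensembl's `/overlap/id/...?feature=exon` REST endpoint and keeps only exons whose `Parent` matches the requested transcript. That response handling can be exercised offline with a canned JSON payload standing in for the HTTP response:

```python
import json


def exon_ranges_from_response(response_text, transcript_id):
    # Same filtering as get_exon_ranges_for_transcript, minus the
    # HTTP call: keep (start, end) for exons parented by the
    # requested transcript.
    return [(exon["start"], exon["end"])
            for exon in json.loads(response_text)
            if exon["Parent"] == transcript_id]


# Hypothetical payload in the shape the overlap endpoint returns.
canned = json.dumps([
    {"Parent": "ENST0001", "start": 100, "end": 200},
    {"Parent": "ENST0999", "start": 300, "end": 400},  # other transcript
    {"Parent": "ENST0001", "start": 500, "end": 650},
])
```

Separating the parsing/filtering from the request, as sketched here, also makes the overlap handling unit-testable without network access.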
scoutapp/scout_apm_python | src/scout_apm/core/config.py | https://github.com/scoutapp/scout_apm_python/blob/e5539ee23b8129be9b75d5007c88b6158b51294f/src/scout_apm/core/config.py#L91-L98 | def set(cls, **kwargs):
"""
Sets a configuration value for the Scout agent. Values set here will
not override values set in ENV.
"""
global SCOUT_PYTHON_VALUES
for key, value in kwargs.items():
SCOUT_PYTHON_VALUES[key] = value | [
"def",
"set",
"(",
"cls",
",",
"*",
"*",
"kwargs",
")",
":",
"global",
"SCOUT_PYTHON_VALUES",
"for",
"key",
",",
"value",
"in",
"kwargs",
".",
"items",
"(",
")",
":",
"SCOUT_PYTHON_VALUES",
"[",
"key",
"]",
"=",
"value"
] | Sets a configuration value for the Scout agent. Values set here will
not override values set in ENV. | [
"Sets",
"a",
"configuration",
"value",
"for",
"the",
"Scout",
"agent",
".",
"Values",
"set",
"here",
"will",
"not",
"override",
"values",
"set",
"in",
"ENV",
"."
] | python | train |
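scout_apm's `Config.set` writes into a module-level dict that is consulted only when the environment does not supply a value. A minimal sketch of that precedence — the names and the `SCOUT_`-prefix lookup here are illustrative, not scout_apm's actual resolution chain:

```python
import os

PYTHON_VALUES = {}  # values set from code; ENV wins over these


def set_config(**kwargs):
    # Mirrors Config.set: record values without consulting ENV.
    for key, value in kwargs.items():
        PYTHON_VALUES[key] = value


def get_config(key, env_prefix="SCOUT_"):
    # Environment takes precedence over values set in code.
    env_key = env_prefix + key.upper()
    if env_key in os.environ:
        return os.environ[env_key]
    return PYTHON_VALUES.get(key)


set_config(monitor=True, name="my-app")
os.environ["SCOUT_NAME"] = "from-env"  # simulate a deployed override
```

Keeping code-set values in a separate layer means operators can always override application defaults at deploy time without touching the code.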
arne-cl/discoursegraphs | src/discoursegraphs/readwrite/exmaralda.py | https://github.com/arne-cl/discoursegraphs/blob/842f0068a3190be2c75905754521b176b25a54fb/src/discoursegraphs/readwrite/exmaralda.py#L127-L158 | def __add_annotation_tier(self, docgraph, body, annotation_layer):
"""
adds a span-based annotation layer as a <tier> to the Exmaralda <body>.
Parameter
---------
docgraph : DiscourseDocumentGraph
the document graph from which the chains will be extracted
body : etree._Element
an etree representation of the <basic_body> element (and all its
descendants) of the Exmaralda file
annotation_layer : str
the name of a layer, e.g. 'tiger', 'tiger:token' or 'mmax:sentence'
"""
layer_cat = annotation_layer.split(':')[-1]
temp_tier = self.E('tier',
{'id': "TIE{}".format(self.tier_count),
'category': layer_cat, 'type': "t",
'display-name': "[{}]".format(annotation_layer)})
self.tier_count += 1
for node_id in select_nodes_by_layer(docgraph, annotation_layer):
span_node_ids = get_span(docgraph, node_id)
if span_node_ids:
start_id, end_id = self.__span2event(span_node_ids)
event_label = docgraph.node[node_id].get('label', '')
event = self.E('event',
{'start': "T{}".format(start_id),
'end': "T{}".format(end_id)},
event_label)
temp_tier.append(event)
body.append(temp_tier) | [
"def",
"__add_annotation_tier",
"(",
"self",
",",
"docgraph",
",",
"body",
",",
"annotation_layer",
")",
":",
"layer_cat",
"=",
"annotation_layer",
".",
"split",
"(",
"':'",
")",
"[",
"-",
"1",
"]",
"temp_tier",
"=",
"self",
".",
"E",
"(",
"'tier'",
",",... | adds a span-based annotation layer as a <tier> to the Exmaralda <body>.
Parameters
----------
docgraph : DiscourseDocumentGraph
the document graph from which the chains will be extracted
body : etree._Element
an etree representation of the <basic_body> element (and all its
descendants) of the Exmaralda file
annotation_layer : str
the name of a layer, e.g. 'tiger', 'tiger:token' or 'mmax:sentence' | [
"adds",
"a",
"span",
"-",
"based",
"annotation",
"layer",
"as",
"a",
"<tier",
">",
"to",
"the",
"Exmaralda",
"<body",
">",
"."
] | python | train |
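The `__add_annotation_tier` record above builds `<tier>` and `<event>` elements with an lxml-style `E` factory. Below is a minimal stdlib sketch of the same span-to-event mapping using `xml.etree.ElementTree`; the `make_tier` helper and its tuple format are illustrative, not part of discoursegraphs:

```python
import xml.etree.ElementTree as ET

def make_tier(tier_id, category, events):
    """Build a <tier> element with one <event> per annotated span.

    `events` is a list of (start_id, end_id, label) tuples, mirroring the
    T-prefixed timeline anchors used in the Exmaralda format.
    """
    tier = ET.Element("tier", {"id": tier_id, "category": category, "type": "t"})
    for start_id, end_id, label in events:
        event = ET.SubElement(
            tier, "event",
            {"start": "T{}".format(start_id), "end": "T{}".format(end_id)},
        )
        event.text = label
    return tier

tier = make_tier("TIE0", "sentence", [(0, 5, "S1"), (5, 9, "S2")])
xml_string = ET.tostring(tier, encoding="unicode")
```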
MonashBI/arcana | arcana/data/input.py | https://github.com/MonashBI/arcana/blob/d6271a29d13733d00422d11417af8d200be62acc/arcana/data/input.py#L135-L141 | def pipeline_getter(self):
"For duck-typing with *Spec types"
if not self.derivable:
raise ArcanaUsageError(
"There is no pipeline getter for {} because it doesn't "
"fallback to a derived spec".format(self))
return self._fallback.pipeline_getter | [
"def",
"pipeline_getter",
"(",
"self",
")",
":",
"if",
"not",
"self",
".",
"derivable",
":",
"raise",
"ArcanaUsageError",
"(",
"\"There is no pipeline getter for {} because it doesn't \"",
"\"fallback to a derived spec\"",
".",
"format",
"(",
"self",
")",
")",
"return",... | For duck-typing with *Spec types | [
"For",
"duck",
"-",
"typing",
"with",
"*",
"Spec",
"types"
] | python | train |
ubernostrum/django-registration | src/django_registration/validators.py | https://github.com/ubernostrum/django-registration/blob/cf10b13423669346a1f4cfaa31aae0b42856b416/src/django_registration/validators.py#L253-L269 | def validate_confusables_email(value):
"""
Validator which disallows 'dangerous' email addresses likely to
represent homograph attacks.
An email address is 'dangerous' if either the local-part or the
domain, considered on its own, is mixed-script and contains one
or more characters appearing in the Unicode Visually Confusable
Characters file.
"""
if '@' not in value:
return
local_part, domain = value.split('@')
if confusables.is_dangerous(local_part) or \
confusables.is_dangerous(domain):
raise ValidationError(CONFUSABLE_EMAIL, code='invalid') | [
"def",
"validate_confusables_email",
"(",
"value",
")",
":",
"if",
"'@'",
"not",
"in",
"value",
":",
"return",
"local_part",
",",
"domain",
"=",
"value",
".",
"split",
"(",
"'@'",
")",
"if",
"confusables",
".",
"is_dangerous",
"(",
"local_part",
")",
"or",... | Validator which disallows 'dangerous' email addresses likely to
represent homograph attacks.
An email address is 'dangerous' if either the local-part or the
domain, considered on its own, is mixed-script and contains one
or more characters appearing in the Unicode Visually Confusable
Characters file. | [
"Validator",
"which",
"disallows",
"dangerous",
"email",
"addresses",
"likely",
"to",
"represent",
"homograph",
"attacks",
"."
] | python | train |
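`validate_confusables_email` above relies on the third-party `confusable_homoglyphs` package for its dangerousness check. As a rough stdlib stand-in for the mixed-script half of that check, one can compare the leading script word of each character's Unicode name; this is a crude heuristic for illustration only, not the real script-property lookup:

```python
import unicodedata

def scripts(text):
    """Rough script detection: collect the first word of each alphabetic
    character's Unicode name (e.g. 'LATIN', 'CYRILLIC'). This is a crude
    stand-in for proper Unicode script properties."""
    found = set()
    for char in text:
        name = unicodedata.name(char, "")
        if name and char.isalpha():
            found.add(name.split()[0])
    return found

def is_mixed_script(text):
    return len(scripts(text)) > 1

latin_only = is_mixed_script("paypal")
homograph = is_mixed_script("p\u0430ypal")  # U+0430 is CYRILLIC SMALL LETTER A
```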
bcb/jsonrpcserver | jsonrpcserver/methods.py | https://github.com/bcb/jsonrpcserver/blob/26bb70e868f81691816cabfc4b60a83428842b2f/jsonrpcserver/methods.py#L16-L31 | def validate_args(func: Method, *args: Any, **kwargs: Any) -> Method:
"""
Check if the request's arguments match a function's signature.
Raises TypeError exception if arguments cannot be passed to a function.
Args:
func: The function to check.
args: Positional arguments.
kwargs: Keyword arguments.
Raises:
TypeError: If the arguments cannot be passed to the function.
"""
signature(func).bind(*args, **kwargs)
return func | [
"def",
"validate_args",
"(",
"func",
":",
"Method",
",",
"*",
"args",
":",
"Any",
",",
"*",
"*",
"kwargs",
":",
"Any",
")",
"->",
"Method",
":",
"signature",
"(",
"func",
")",
".",
"bind",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"return",... | Check if the request's arguments match a function's signature.
Raises TypeError exception if arguments cannot be passed to a function.
Args:
func: The function to check.
args: Positional arguments.
kwargs: Keyword arguments.
Raises:
TypeError: If the arguments cannot be passed to the function. | [
"Check",
"if",
"the",
"request",
"s",
"arguments",
"match",
"a",
"function",
"s",
"signature",
"."
] | python | train |
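The `validate_args` record shows the stdlib idiom for pre-flight argument checking: `inspect.signature(func).bind(...)` raises `TypeError` without calling the function. A self-contained sketch of that idiom (the `ping` function is just a stand-in):

```python
from inspect import signature

def validate_args(func, *args, **kwargs):
    """Raise TypeError if (args, kwargs) cannot be bound to func's signature."""
    signature(func).bind(*args, **kwargs)
    return func

def ping(name, count=1):
    return [name] * count

ok = validate_args(ping, "pong", count=2)   # binds cleanly, returns ping
try:
    validate_args(ping)                     # missing required 'name'
    bind_failed = False
except TypeError:
    bind_failed = True
```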
fedora-infra/fedmsg | fedmsg/crypto/gpg.py | https://github.com/fedora-infra/fedmsg/blob/c21d6b3ce023fc3c0e881c704f5b55fb6e6392d7/fedmsg/crypto/gpg.py#L56-L87 | def verify(self, data, signature=None, keyrings=None, homedir=None):
'''
`data` <string> the data to verify.
`signature` <string> The signature, if detached from the data.
`keyrings` <list of string> Additional keyrings to search in.
`homedir` <string> Override the configured homedir.
'''
if isinstance(data, six.text_type):
data = data.encode('utf-8')
tmpdir = tempfile.mkdtemp()
data_file, data_path = tempfile.mkstemp(dir=tmpdir)
data_file = os.fdopen(data_file, 'wb')
data_file.write(data)
data_file.close()
if signature:
sig_file, sig_path = tempfile.mkstemp(dir=tmpdir)
sig_file = os.fdopen(sig_file, 'wb')
sig_file.write(signature)
sig_file.close()
else:
sig_path = None
try:
return self.verify_from_file(
data_path,
sig_path=sig_path,
keyrings=keyrings,
homedir=homedir
)
finally:
shutil.rmtree(tmpdir) | [
"def",
"verify",
"(",
"self",
",",
"data",
",",
"signature",
"=",
"None",
",",
"keyrings",
"=",
"None",
",",
"homedir",
"=",
"None",
")",
":",
"if",
"isinstance",
"(",
"data",
",",
"six",
".",
"text_type",
")",
":",
"data",
"=",
"data",
".",
"encod... | `data` <string> the data to verify.
`signature` <string> The signature, if detached from the data.
`keyrings` <list of string> Additional keyrings to search in.
`homedir` <string> Override the configured homedir. | [
"data",
"<string",
">",
"the",
"data",
"to",
"verify",
".",
"signature",
"<string",
">",
"The",
"signature",
"if",
"detached",
"from",
"the",
"data",
".",
"keyrings",
"<list",
"of",
"string",
">",
"Additional",
"keyrings",
"to",
"search",
"in",
".",
"homed... | python | train |
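The `verify` record stages the data (and an optional detached signature) as temporary files, then removes the whole directory in a `finally` block. A hedged stdlib sketch of that stage-then-clean pattern; `with_temp_copy` is an illustrative name, not fedmsg API:

```python
import os
import shutil
import tempfile

def with_temp_copy(data, action):
    """Write `data` (bytes) to a file inside a fresh temp directory, run
    `action(path)` on it, and always remove the directory afterwards."""
    tmpdir = tempfile.mkdtemp()
    fd, path = tempfile.mkstemp(dir=tmpdir)
    try:
        with os.fdopen(fd, "wb") as handle:
            handle.write(data)
        return action(path)
    finally:
        shutil.rmtree(tmpdir)

def read_back(path):
    with open(path, "rb") as handle:
        return handle.read()

result = with_temp_copy(b"hello", read_back)
```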
greenbender/pynntp | nntp/nntp.py | https://github.com/greenbender/pynntp/blob/991a76331cdf5d8f9dbf5b18f6e29adc80749a2f/nntp/nntp.py#L364-L382 | def info_gen(self, code, message, compressed=False):
"""Dispatcher for the info generators.
Determines which __info_*_gen() should be used based on the supplied
parameters.
Args:
code: The status code for the command response.
message: The status message for the command response.
compressed: Force decompression. Useful for xz* commands.
Returns:
An info generator.
"""
if "COMPRESS=GZIP" in message:
return self.__info_gzip_gen()
if compressed:
return self.__info_yenczlib_gen()
return self.__info_plain_gen() | [
"def",
"info_gen",
"(",
"self",
",",
"code",
",",
"message",
",",
"compressed",
"=",
"False",
")",
":",
"if",
"\"COMPRESS=GZIP\"",
"in",
"message",
":",
"return",
"self",
".",
"__info_gzip_gen",
"(",
")",
"if",
"compressed",
":",
"return",
"self",
".",
"... | Dispatcher for the info generators.
Determines which __info_*_gen() should be used based on the supplied
parameters.
Args:
code: The status code for the command response.
message: The status message for the command response.
compressed: Force decompression. Useful for xz* commands.
Returns:
An info generator. | [
"Dispatcher",
"for",
"the",
"info",
"generators",
"."
] | python | test |
mitsei/dlkit | dlkit/json_/utilities.py | https://github.com/mitsei/dlkit/blob/445f968a175d61c8d92c0f617a3c17dc1dc7c584/dlkit/json_/utilities.py#L105-L135 | def clean_up_datetime(obj_map):
"""convert datetime objects to dictionaries for storage"""
clean_map = {}
for key, value in obj_map.items():
if isinstance(value, datetime.datetime):
clean_map[key] = {
'year': value.year,
'month': value.month,
'day': value.day,
'hour': value.hour,
'minute': value.minute,
'second': value.second,
'microsecond': value.microsecond,
'tzinfo': value.tzinfo
}
elif isinstance(value, dict):
clean_map[key] = clean_up_datetime(value)
elif isinstance(value, list):
if key not in clean_map:
clean_map[key] = []
if len(value) > 0:
for index, list_value in enumerate(value):
if isinstance(list_value, dict):
clean_map[key].append(clean_up_datetime(list_value))
else:
clean_map[key].append(list_value)
else:
clean_map[key] = value
else:
clean_map[key] = value
return clean_map | [
"def",
"clean_up_datetime",
"(",
"obj_map",
")",
":",
"clean_map",
"=",
"{",
"}",
"for",
"key",
",",
"value",
"in",
"obj_map",
".",
"items",
"(",
")",
":",
"if",
"isinstance",
"(",
"value",
",",
"datetime",
".",
"datetime",
")",
":",
"clean_map",
"[",
... | convert datetime objects to dictionaries for storage | [
"convert",
"datetime",
"objects",
"to",
"dictionaries",
"for",
"storage"
] | python | train |
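The `clean_up_datetime` record walks dicts and lists, replacing `datetime.datetime` values with plain dicts for storage. A compact comprehension-based sketch of the same idea (tzinfo handling dropped here for brevity):

```python
import datetime

def clean_up_datetime(obj):
    """Recursively replace datetime values with storable dicts (simplified)."""
    if isinstance(obj, datetime.datetime):
        return {
            "year": obj.year, "month": obj.month, "day": obj.day,
            "hour": obj.hour, "minute": obj.minute, "second": obj.second,
            "microsecond": obj.microsecond,
        }
    if isinstance(obj, dict):
        return {key: clean_up_datetime(value) for key, value in obj.items()}
    if isinstance(obj, list):
        return [clean_up_datetime(item) for item in obj]
    return obj

cleaned = clean_up_datetime(
    {"created": datetime.datetime(2020, 1, 2, 3, 4, 5), "tags": ["a"], "n": 7}
)
```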
williamFalcon/test-tube | examples/hpc_cpu_example.py | https://github.com/williamFalcon/test-tube/blob/db5a47067a854f76d89f8066582023c1e184bccb/examples/hpc_cpu_example.py#L5-L31 | def train(hparams, *args):
"""Train your awesome model.
:param hparams: The arguments to run the model with.
"""
# Initialize experiments and track all the hyperparameters
exp = Experiment(
name=hparams.test_tube_exp_name,
# Location to save the metrics.
save_dir=hparams.log_path,
# The experiment version is optional, but using the one
# from SLURM means the exp will not collide with other
# versions if SLURM runs multiple at once.
version=hparams.hpc_exp_number,
autosave=False,
)
exp.argparse(hparams)
# Pretend to train.
x = hparams.x_val
for train_step in range(0, 100):
y = hparams.y_val
out = x * y
exp.log({'fake_err': out.item()}) # Log metrics.
# Save exp when done.
exp.save() | [
"def",
"train",
"(",
"hparams",
",",
"*",
"args",
")",
":",
"# Initialize experiments and track all the hyperparameters",
"exp",
"=",
"Experiment",
"(",
"name",
"=",
"hparams",
".",
"test_tube_exp_name",
",",
"# Location to save the metrics.",
"save_dir",
"=",
"hparams"... | Train your awesome model.
:param hparams: The arguments to run the model with. | [
"Train",
"your",
"awesome",
"model",
"."
] | python | test |
sbusard/wagoner | wagoner/tree.py | https://github.com/sbusard/wagoner/blob/7f83d66bbd0e009e4d4232ffdf319bd5a2a5683b/wagoner/tree.py#L33-L70 | def from_table(cls, table, length, prefix=0, flatten=False):
"""
Extract from the given table a tree for word length, taking only
prefixes of prefix length (if greater than 0) into account to compute
successors.
:param table: the table to extract the tree from;
:param length: the length of words generated by the extracted tree;
greater than or equal to 1;
:param prefix: if greater than 0, the length of the prefixes used for
computing successors;
:param flatten: whether to flatten the table or not;
:return: the tree corresponding to words of length from table.
"""
# Build the expanded tree with necessary suffix and length
tree = defaultdict(dict) # The tree
pending = {(">", 0)} # The nodes to expand
while pending:
suffix, size = pending.pop()
if size < length:
choices = table.weighted_choices(suffix, exclude={"<"},
flatten=flatten)
# The word length is not reached yet, expand
for successor, weight in choices.items():
expanded = suffix + successor
if prefix > 0:
expanded = expanded[-prefix:]
new_node = (expanded, size + 1)
tree[(suffix, size)][new_node] = weight
pending.add(new_node)
else:
choices = table.weighted_choices(suffix, flatten=flatten)
# The word length is reached, only add < if present
if "<" in choices:
tree[(suffix, size)][("<", size + 1)] = 1
else:
tree[(suffix, size)] = dict()
return cls(cls.trim_tree(tree)) | [
"def",
"from_table",
"(",
"cls",
",",
"table",
",",
"length",
",",
"prefix",
"=",
"0",
",",
"flatten",
"=",
"False",
")",
":",
"# Build the expanded tree with necessary suffix and length",
"tree",
"=",
"defaultdict",
"(",
"dict",
")",
"# The tree",
"pending",
"=... | Extract from the given table a tree for word length, taking only
prefixes of prefix length (if greater than 0) into account to compute
successors.
:param table: the table to extract the tree from;
:param length: the length of words generated by the extracted tree;
greater than or equal to 1;
:param prefix: if greater than 0, the length of the prefixes used for
computing successors;
:param flatten: whether to flatten the table or not;
:return: the tree corresponding to words of length from table. | [
"Extract",
"from",
"the",
"given",
"table",
"a",
"tree",
"for",
"word",
"length",
"taking",
"only",
"prefixes",
"of",
"prefix",
"length",
"(",
"if",
"greater",
"than",
"0",
")",
"into",
"account",
"to",
"compute",
"successors",
".",
":",
"param",
"table",
... | python | train |
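The `from_table` record grows its tree with a `pending` worklist: pop an unexpanded node, attach its weighted successors, and queue any children still below the target depth. A generic stdlib sketch of that pattern with a toy successor function (the table and weights are stand-ins for wagoner's):

```python
from collections import defaultdict

def expand_tree(successors, root, max_depth):
    """Worklist expansion: pop a pending (label, depth) node, attach its
    weighted successors, and queue those still below max_depth."""
    tree = defaultdict(dict)
    pending = {(root, 0)}
    while pending:
        label, depth = pending.pop()
        if depth >= max_depth:
            tree[(label, depth)] = {}  # leaf: depth limit reached
            continue
        for succ, weight in successors(label).items():
            child = (succ, depth + 1)
            tree[(label, depth)][child] = weight
            pending.add(child)
    return dict(tree)

# toy successor table: every node has the same two weighted children
tree = expand_tree(lambda label: {"a": 2, "b": 1}, ">", 2)
```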
pypa/pipenv | pipenv/vendor/requirementslib/utils.py | https://github.com/pypa/pipenv/blob/cae8d76c210b9777e90aab76e9c4b0e53bb19cde/pipenv/vendor/requirementslib/utils.py#L148-L160 | def is_vcs(pipfile_entry):
# type: (PipfileType) -> bool
"""Determine if dictionary entry from Pipfile is for a vcs dependency."""
if isinstance(pipfile_entry, Mapping):
return any(key for key in pipfile_entry.keys() if key in VCS_LIST)
elif isinstance(pipfile_entry, six.string_types):
if not is_valid_url(pipfile_entry) and pipfile_entry.startswith("git+"):
pipfile_entry = add_ssh_scheme_to_git_uri(pipfile_entry)
parsed_entry = urlsplit(pipfile_entry)
return parsed_entry.scheme in VCS_SCHEMES
return False | [
"def",
"is_vcs",
"(",
"pipfile_entry",
")",
":",
"# type: (PipfileType) -> bool",
"if",
"isinstance",
"(",
"pipfile_entry",
",",
"Mapping",
")",
":",
"return",
"any",
"(",
"key",
"for",
"key",
"in",
"pipfile_entry",
".",
"keys",
"(",
")",
"if",
"key",
"in",
... | Determine if dictionary entry from Pipfile is for a vcs dependency. | [
"Determine",
"if",
"dictionary",
"entry",
"from",
"Pipfile",
"is",
"for",
"a",
"vcs",
"dependency",
"."
] | python | train |
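The `is_vcs` record decides VCS-ness from either Pipfile dict keys or a URL scheme. A hedged minimal version using `urllib.parse.urlsplit`; the scheme and key sets below are an illustrative subset modeled on pip-style tooling, not the exact `VCS_LIST`/`VCS_SCHEMES` from requirementslib:

```python
from urllib.parse import urlsplit

# illustrative subset of schemes pip-style tooling treats as VCS URLs
VCS_SCHEMES = {"git", "git+https", "git+ssh", "hg+https", "svn+https", "bzr+https"}
VCS_KEYS = {"git", "hg", "svn", "bzr"}

def is_vcs(entry):
    """Return True if a Pipfile-style entry points at a VCS dependency."""
    if isinstance(entry, dict):
        return any(key in VCS_KEYS for key in entry)
    if isinstance(entry, str):
        return urlsplit(entry).scheme in VCS_SCHEMES
    return False

from_dict = is_vcs({"git": "https://github.com/pypa/pipenv.git", "ref": "master"})
from_url = is_vcs("git+https://github.com/pypa/pipenv.git")
plain = is_vcs("requests>=2.0")
```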
bethgelab/foolbox | foolbox/attacks/adef_attack.py | https://github.com/bethgelab/foolbox/blob/8ab54248c70e45d8580a7d9ee44c9c0fb5755c4a/foolbox/attacks/adef_attack.py#L76-L117 | def _compose(image, vec_field, color_axis):
"""Calculate the composition of the function image with the vector
field vec_field by interpolation.
new_func = compose(image, vec_field)
In:
image: numpy.ndarray
of shape C x h x w with C = 3 or C = 1 (color channels),
h, w >= 2, and [type] = 'Float' or 'Double'.
Contains the values of a function f: R ^ 2 -> R ^ C
on the grid {0, ..., h - 1} x {0, ..., w - 1}.
vec_field: numpy.array
of shape (h, w, 2)
vec_field[y, x, 0] is the x-coordinate of the vector vec_field[y, x]
vec_field[y, x, 1] is the y-coordinate of the vector vec_field[y, x]
positive x-direction is along rows from left to right
positive y-direction is along columns from above to below
"""
if color_axis == 2:
image = _transpose_image(image)
c, h, w = image.shape # colors, height, width
hrange = np.arange(h)
wrange = np.arange(w)
MGx, MGy = np.meshgrid(wrange, hrange)
defMGx = (MGx + vec_field[:, :, 0]).clip(0, w - 1)
defMGy = (MGy + vec_field[:, :, 1]).clip(0, h - 1)
new_image = np.empty_like(image)
for channel in range(c):
# Get a linear interpolation for this color channel.
interpolation = RectBivariateSpline(hrange, wrange, image[channel],
kx=1, ky=1)
# grid = False since the deformed grid is irregular
new_image[channel] = interpolation(defMGy, defMGx, grid=False)
if color_axis == 2:
return _re_transpose_image(new_image)
else:
return new_image | [
"def",
"_compose",
"(",
"image",
",",
"vec_field",
",",
"color_axis",
")",
":",
"if",
"color_axis",
"==",
"2",
":",
"image",
"=",
"_transpose_image",
"(",
"image",
")",
"c",
",",
"h",
",",
"w",
"=",
"image",
".",
"shape",
"# colors, height, width",
"hran... | Calculate the composition of the function image with the vector
field vec_field by interpolation.
new_func = compose(image, vec_field)
In:
image: numpy.ndarray
of shape C x h x w with C = 3 or C = 1 (color channels),
h, w >= 2, and [type] = 'Float' or 'Double'.
Contains the values of a function f: R ^ 2 -> R ^ C
on the grid {0, ..., h - 1} x {0, ..., w - 1}.
vec_field: numpy.array
of shape (h, w, 2)
vec_field[y, x, 0] is the x-coordinate of the vector vec_field[y, x]
vec_field[y, x, 1] is the y-coordinate of the vector vec_field[y, x]
positive x-direction is along rows from left to right
positive y-direction is along columns from above to below | [
"Calculate",
"the",
"composition",
"of",
"the",
"function",
"image",
"with",
"the",
"vector",
"field",
"vec_field",
"by",
"interpolation",
".",
"new_func",
"=",
"compose",
"(",
"image",
"vec_field",
")",
"In",
":",
"image",
":",
"numpy",
".",
"ndarray",
"of"... | python | valid |
keenlabs/KeenClient-Python | keen/api.py | https://github.com/keenlabs/KeenClient-Python/blob/266387c3376d1e000d117e17c45045ae3439d43f/keen/api.py#L184-L207 | def query(self, analysis_type, params, all_keys=False):
"""
Performs a query using the Keen IO analysis API. A read key must be set first.
"""
if not self._order_by_is_valid_or_none(params):
raise ValueError("order_by given is invalid or is missing required group_by.")
if not self._limit_is_valid_or_none(params):
raise ValueError("limit given is invalid or is missing required order_by.")
url = "{0}/{1}/projects/{2}/queries/{3}".format(self.base_url, self.api_version,
self.project_id, analysis_type)
headers = utilities.headers(self.read_key)
payload = params
response = self.fulfill(HTTPMethods.GET, url, params=payload, headers=headers, timeout=self.get_timeout)
self._error_handling(response)
response = response.json()
if not all_keys:
response = response["result"]
return response | [
"def",
"query",
"(",
"self",
",",
"analysis_type",
",",
"params",
",",
"all_keys",
"=",
"False",
")",
":",
"if",
"not",
"self",
".",
"_order_by_is_valid_or_none",
"(",
"params",
")",
":",
"raise",
"ValueError",
"(",
"\"order_by given is invalid or is missing requi... | Performs a query using the Keen IO analysis API. A read key must be set first. | [
"Performs",
"a",
"query",
"using",
"the",
"Keen",
"IO",
"analysis",
"API",
".",
"A",
"read",
"key",
"must",
"be",
"set",
"first",
"."
] | python | train |
KrzyHonk/bpmn-python | bpmn_python/bpmn_diagram_metrics.py | https://github.com/KrzyHonk/bpmn-python/blob/6e5e28e3d656dbf5bd3d85d78fe8e3f2fb462629/bpmn_python/bpmn_diagram_metrics.py#L179-L191 | def TNE_metric(bpmn_graph):
"""
Returns the value of the TNE metric (Total Number of Events of the Model)
for the BPMNDiagramGraph instance.
:param bpmn_graph: an instance of BpmnDiagramGraph representing BPMN model.
"""
events_counts = get_events_counts(bpmn_graph)
return sum(
[count for _, count in events_counts.items()]
) | [
"def",
"TNE_metric",
"(",
"bpmn_graph",
")",
":",
"events_counts",
"=",
"get_events_counts",
"(",
"bpmn_graph",
")",
"return",
"sum",
"(",
"[",
"count",
"for",
"_",
",",
"count",
"in",
"events_counts",
".",
"items",
"(",
")",
"]",
")"
] | Returns the value of the TNE metric (Total Number of Events of the Model)
for the BPMNDiagramGraph instance.
:param bpmn_graph: an instance of BpmnDiagramGraph representing BPMN model. | [
"Returns",
"the",
"value",
"of",
"the",
"TNE",
"metric",
"(",
"Total",
"Number",
"of",
"Events",
"of",
"the",
"Model",
")",
"for",
"the",
"BPMNDiagramGraph",
"instance",
"."
] | python | train |
saltstack/salt | salt/modules/vsphere.py | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/modules/vsphere.py#L5150-L5278 | def _apply_cluster_dict(cluster_spec, cluster_dict, vsan_spec=None,
vsan_61=True):
'''
Applies the values of cluster_dict dictionary to a cluster spec
(vim.ClusterConfigSpecEx).
All vsan values (cluster_dict['vsan']) will be applied to
vsan_spec (vim.vsan.cluster.ConfigInfoEx); this can be omitted
if not required.
VSAN 6.1 config needs to be applied differently than the post VSAN 6.1 way.
The type of configuration desired is dictated by the flag vsan_61.
'''
log.trace('Applying cluster dict %s', cluster_dict)
if cluster_dict.get('ha'):
ha_dict = cluster_dict['ha']
if not cluster_spec.dasConfig:
cluster_spec.dasConfig = vim.ClusterDasConfigInfo()
das_config = cluster_spec.dasConfig
if 'enabled' in ha_dict:
das_config.enabled = ha_dict['enabled']
if ha_dict['enabled']:
# Default values when ha is enabled
das_config.failoverLevel = 1
if 'admission_control_enabled' in ha_dict:
das_config.admissionControlEnabled = \
ha_dict['admission_control_enabled']
if 'admission_control_policy' in ha_dict:
adm_pol_dict = ha_dict['admission_control_policy']
if not das_config.admissionControlPolicy or \
not isinstance(
das_config.admissionControlPolicy,
vim.ClusterFailoverResourcesAdmissionControlPolicy):
das_config.admissionControlPolicy = \
vim.ClusterFailoverResourcesAdmissionControlPolicy(
cpuFailoverResourcesPercent=
adm_pol_dict['cpu_failover_percent'],
memoryFailoverResourcesPercent=
adm_pol_dict['memory_failover_percent'])
if 'default_vm_settings' in ha_dict:
vm_set_dict = ha_dict['default_vm_settings']
if not das_config.defaultVmSettings:
das_config.defaultVmSettings = vim.ClusterDasVmSettings()
if 'isolation_response' in vm_set_dict:
das_config.defaultVmSettings.isolationResponse = \
vm_set_dict['isolation_response']
if 'restart_priority' in vm_set_dict:
das_config.defaultVmSettings.restartPriority = \
vm_set_dict['restart_priority']
if 'hb_ds_candidate_policy' in ha_dict:
das_config.hBDatastoreCandidatePolicy = \
ha_dict['hb_ds_candidate_policy']
if 'host_monitoring' in ha_dict:
das_config.hostMonitoring = ha_dict['host_monitoring']
if 'options' in ha_dict:
das_config.option = []
for opt_dict in ha_dict['options']:
das_config.option.append(
vim.OptionValue(key=opt_dict['key']))
if 'value' in opt_dict:
das_config.option[-1].value = opt_dict['value']
if 'vm_monitoring' in ha_dict:
das_config.vmMonitoring = ha_dict['vm_monitoring']
cluster_spec.dasConfig = das_config
if cluster_dict.get('drs'):
drs_dict = cluster_dict['drs']
drs_config = vim.ClusterDrsConfigInfo()
if 'enabled' in drs_dict:
drs_config.enabled = drs_dict['enabled']
if 'vmotion_rate' in drs_dict:
drs_config.vmotionRate = 6 - drs_dict['vmotion_rate']
if 'default_vm_behavior' in drs_dict:
drs_config.defaultVmBehavior = \
vim.DrsBehavior(drs_dict['default_vm_behavior'])
cluster_spec.drsConfig = drs_config
if cluster_dict.get('vm_swap_placement'):
cluster_spec.vmSwapPlacement = cluster_dict['vm_swap_placement']
if cluster_dict.get('vsan'):
vsan_dict = cluster_dict['vsan']
if not vsan_61: # VSAN is 6.2 and above
if 'enabled' in vsan_dict:
if not vsan_spec.vsanClusterConfig:
vsan_spec.vsanClusterConfig = \
vim.vsan.cluster.ConfigInfo()
vsan_spec.vsanClusterConfig.enabled = vsan_dict['enabled']
if 'auto_claim_storage' in vsan_dict:
if not vsan_spec.vsanClusterConfig:
vsan_spec.vsanClusterConfig = \
vim.vsan.cluster.ConfigInfo()
if not vsan_spec.vsanClusterConfig.defaultConfig:
vsan_spec.vsanClusterConfig.defaultConfig = \
vim.VsanClusterConfigInfoHostDefaultInfo()
elif vsan_spec.vsanClusterConfig.defaultConfig.uuid:
# If this remains set it causes an error
vsan_spec.vsanClusterConfig.defaultConfig.uuid = None
vsan_spec.vsanClusterConfig.defaultConfig.autoClaimStorage = \
vsan_dict['auto_claim_storage']
if 'compression_enabled' in vsan_dict:
if not vsan_spec.dataEfficiencyConfig:
vsan_spec.dataEfficiencyConfig = \
vim.vsan.DataEfficiencyConfig()
vsan_spec.dataEfficiencyConfig.compressionEnabled = \
vsan_dict['compression_enabled']
if 'dedup_enabled' in vsan_dict:
if not vsan_spec.dataEfficiencyConfig:
vsan_spec.dataEfficiencyConfig = \
vim.vsan.DataEfficiencyConfig()
vsan_spec.dataEfficiencyConfig.dedupEnabled = \
vsan_dict['dedup_enabled']
# In all cases we need to configure the vsan on the cluster
# directly so as not to have a mismatch between vsan_spec and
# cluster_spec
if not cluster_spec.vsanConfig:
cluster_spec.vsanConfig = \
vim.VsanClusterConfigInfo()
vsan_config = cluster_spec.vsanConfig
if 'enabled' in vsan_dict:
vsan_config.enabled = vsan_dict['enabled']
if 'auto_claim_storage' in vsan_dict:
if not vsan_config.defaultConfig:
vsan_config.defaultConfig = \
vim.VsanClusterConfigInfoHostDefaultInfo()
elif vsan_config.defaultConfig.uuid:
# If this remains set it causes an error
vsan_config.defaultConfig.uuid = None
vsan_config.defaultConfig.autoClaimStorage = \
vsan_dict['auto_claim_storage']
log.trace('cluster_spec = %s', cluster_spec) | [
"def",
"_apply_cluster_dict",
"(",
"cluster_spec",
",",
"cluster_dict",
",",
"vsan_spec",
"=",
"None",
",",
"vsan_61",
"=",
"True",
")",
":",
"log",
".",
"trace",
"(",
"'Applying cluster dict %s'",
",",
"cluster_dict",
")",
"if",
"cluster_dict",
".",
"get",
"(... | Applies the values of cluster_dict dictionary to a cluster spec
(vim.ClusterConfigSpecEx).
All vsan values (cluster_dict['vsan']) will be applied to
vsan_spec (vim.vsan.cluster.ConfigInfoEx); this can be omitted
if not required.
VSAN 6.1 config needs to be applied differently than the post VSAN 6.1 way.
The type of configuration desired is dictated by the flag vsan_61. | [
"Applies",
"the",
"values",
"of",
"cluster_dict",
"dictionary",
"to",
"a",
"cluster",
"spec",
"(",
"vim",
".",
"ClusterConfigSpecEx",
")",
"."
] | python | train |
ChrisCummins/labm8 | modules.py | https://github.com/ChrisCummins/labm8/blob/dd10d67a757aefb180cb508f86696f99440c94f5/modules.py#L28-L68 | def import_foreign(name, custom_name=None):
"""
Import a module with a custom name.
NOTE this is only needed for Python2. For Python3, import the
module using the "as" keyword to declare the custom name.
For implementation details, see:
http://stackoverflow.com/a/6032023
Example:
To import the standard module "math" as "std_math":
if labm8.is_python3():
import math as std_math
else:
std_math = modules.import_foreign("math", "std_math")
Arguments:
name (str): The name of the module to import.
custom_name (str, optional): The custom name to assign the module to.
Raises:
ImportError: If the module is not found.
"""
if lab.is_python3():
io.error(("Ignoring attempt to import foreign module '{mod}' "
"using python version {major}.{minor}"
.format(mod=name, major=sys.version_info[0],
minor=sys.version_info[1])))
return
custom_name = custom_name or name
f, pathname, desc = imp.find_module(name, sys.path[1:])
module = imp.load_module(custom_name, f, pathname, desc)
f.close()
return module | [
"def",
"import_foreign",
"(",
"name",
",",
"custom_name",
"=",
"None",
")",
":",
"if",
"lab",
".",
"is_python3",
"(",
")",
":",
"io",
".",
"error",
"(",
"(",
"\"Ignoring attempt to import foreign module '{mod}' \"",
"\"using python version {major}.{minor}\"",
".",
"... | Import a module with a custom name.
NOTE this is only needed for Python2. For Python3, import the
module using the "as" keyword to declare the custom name.
For implementation details, see:
http://stackoverflow.com/a/6032023
Example:
To import the standard module "math" as "std_math":
if labm8.is_python3():
import math as std_math
else:
std_math = modules.import_foreign("math", "std_math")
Arguments:
name (str): The name of the module to import.
custom_name (str, optional): The custom name to assign the module to.
Raises:
ImportError: If the module is not found. | [
"Import",
"a",
"module",
"with",
"a",
"custom",
"name",
"."
] | python | train |
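The `import_foreign` record targets Python 2's `imp` module, which is removed in modern Python. On Python 3 the same "import under a custom name" reduces to an `importlib` call plus an assignment; a sketch:

```python
import importlib
import sys

def import_foreign(name, custom_name=None):
    """Python 3 counterpart: import `name`, register it in sys.modules
    under `custom_name`, and return the module object."""
    custom_name = custom_name or name
    module = importlib.import_module(name)
    sys.modules[custom_name] = module
    return module

std_math = import_foreign("math", "std_math")
```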
apache/incubator-mxnet | tools/caffe_translator/scripts/convert_caffe_model.py | https://github.com/apache/incubator-mxnet/blob/1af29e9c060a4c7d60eeaacba32afdb9a7775ba7/tools/caffe_translator/scripts/convert_caffe_model.py#L33-L36 | def add_param(self, param_name, layer_index, blob_index):
"""Add a param to the .params file"""
blobs = self.layers[layer_index].blobs
self.dict_param[param_name] = mx.nd.array(caffe.io.blobproto_to_array(blobs[blob_index])) | [
"def",
"add_param",
"(",
"self",
",",
"param_name",
",",
"layer_index",
",",
"blob_index",
")",
":",
"blobs",
"=",
"self",
".",
"layers",
"[",
"layer_index",
"]",
".",
"blobs",
"self",
".",
"dict_param",
"[",
"param_name",
"]",
"=",
"mx",
".",
"nd",
".... | Add a param to the .params file | [
"Add",
"a",
"param",
"to",
"the",
".",
"params",
"file"
] | python | train |
sentinel-hub/sentinelhub-py | sentinelhub/areas.py | https://github.com/sentinel-hub/sentinelhub-py/blob/08a83b7f1e289187159a643336995d8369860fea/sentinelhub/areas.py#L181-L184 | def _reduce_sizes(self, bbox_list):
"""Reduces sizes of bounding boxes
"""
return [BBox(self._intersection_area(bbox).bounds, self.crs).transform(bbox.crs) for bbox in bbox_list] | [
"def",
"_reduce_sizes",
"(",
"self",
",",
"bbox_list",
")",
":",
"return",
"[",
"BBox",
"(",
"self",
".",
"_intersection_area",
"(",
"bbox",
")",
".",
"bounds",
",",
"self",
".",
"crs",
")",
".",
"transform",
"(",
"bbox",
".",
"crs",
")",
"for",
"bbo... | Reduces sizes of bounding boxes | [
"Reduces",
"sizes",
"of",
"bounding",
"boxes"
] | python | train |
saltstack/salt | salt/cloud/clouds/ec2.py | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/cloud/clouds/ec2.py#L3792-L3809 | def _toggle_term_protect(name, value):
'''
Enable or Disable termination protection on a node
'''
instance_id = _get_node(name)['instanceId']
params = {'Action': 'ModifyInstanceAttribute',
'InstanceId': instance_id,
'DisableApiTermination.Value': value}
result = aws.query(params,
location=get_location(),
provider=get_provider(),
return_root=True,
opts=__opts__,
sigver='4')
return show_term_protect(name=name, instance_id=instance_id, call='action') | [
"def",
"_toggle_term_protect",
"(",
"name",
",",
"value",
")",
":",
"instance_id",
"=",
"_get_node",
"(",
"name",
")",
"[",
"'instanceId'",
"]",
"params",
"=",
"{",
"'Action'",
":",
"'ModifyInstanceAttribute'",
",",
"'InstanceId'",
":",
"instance_id",
",",
"'D... | Enable or Disable termination protection on a node | [
"Enable",
"or",
"Disable",
"termination",
"protection",
"on",
"a",
"node"
] | python | train |
ghukill/pyfc4 | pyfc4/models.py | https://github.com/ghukill/pyfc4/blob/59011df592f08978c4a901a908862d112a5dcf02/pyfc4/models.py#L1085-L1106 | def _build_rdf(self, data=None):
'''
Parse incoming rdf as self.rdf.orig_graph, create copy at self.rdf.graph
Args:
data (): payload from GET request, expected RDF content in various serialization formats
Returns:
None
'''
# recreate rdf data
self.rdf = SimpleNamespace()
self.rdf.data = data
self.rdf.prefixes = SimpleNamespace()
self.rdf.uris = SimpleNamespace()
# populate prefixes
for prefix,uri in self.repo.context.items():
setattr(self.rdf.prefixes, prefix, rdflib.Namespace(uri))
# graph
self._parse_graph() | [
"def",
"_build_rdf",
"(",
"self",
",",
"data",
"=",
"None",
")",
":",
"# recreate rdf data",
"self",
".",
"rdf",
"=",
"SimpleNamespace",
"(",
")",
"self",
".",
"rdf",
".",
"data",
"=",
"data",
"self",
".",
"rdf",
".",
"prefixes",
"=",
"SimpleNamespace",
... | Parse incoming rdf as self.rdf.orig_graph, create copy at self.rdf.graph
Args:
data (): payload from GET request, expected RDF content in various serialization formats
Returns:
None | [
"Parse",
"incoming",
"rdf",
"as",
"self",
".",
"rdf",
".",
"orig_graph",
"create",
"copy",
"at",
"self",
".",
"rdf",
".",
"graph"
] | python | train |
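The `_build_rdf` record hangs one attribute per prefix off a `types.SimpleNamespace` via `setattr`. That pattern works with any mapping; a stdlib-only sketch with plain URI strings standing in for `rdflib.Namespace` objects:

```python
from types import SimpleNamespace

def build_prefixes(context):
    """Attach each prefix/URI pair in `context` as an attribute on a
    SimpleNamespace, mirroring how _build_rdf exposes self.rdf.prefixes."""
    prefixes = SimpleNamespace()
    for prefix, uri in context.items():
        setattr(prefixes, prefix, uri)
    return prefixes

prefixes = build_prefixes({
    "dc": "http://purl.org/dc/elements/1.1/",
    "foaf": "http://xmlns.com/foaf/0.1/",
})
```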
zagaran/mongolia | mongolia/database_object.py | https://github.com/zagaran/mongolia/blob/82c499345f0a8610c7289545e19f5f633e8a81c0/mongolia/database_object.py#L454-L492 | def json_update(self, json_str, exclude=[], ignore_non_defaults=True):
"""
Updates a database object based on a json object. The intent of this
method is to allow passing json to an interface which then subsequently
manipulates the object and then sends back an update.
Mongolia will also automatically convert any json values that were
initially converted from ObjectId and datetime.datetime objects back
to their native python object types.
Note: if using AngularJS, make sure to pass json back using
`angular.toJson(obj)` instead of `JSON.stringify(obj)` since angular
sometimes adds `$$hashkey` to javascript objects and this will cause
a mongo error due to the "$" prefix in keys.
@param json_str: the json string containing the new object to use for
the update
@param exclude: a list of top-level keys to exclude from the update
(ID_KEY need not be included in this list; it is automatically
deleted since it can't be part of a mongo update operation)
@param ignore_non_defaults: if this is True and the database object
has non-empty DEFAULTS, then any top-level keys in the update json
that do not appear in DEFAULTS will also be excluded from the update
"""
update_dict = json.loads(json_str, cls=MongoliaJSONDecoder, encoding="utf-8")
# Remove ID_KEY since it can't be part of a mongo update operation
if ID_KEY in update_dict:
del update_dict[ID_KEY]
# Remove all keys in the exclude list from the update
for key in frozenset(exclude).intersection(frozenset(update_dict)):
del update_dict[key]
# Remove all keys not in DEFAULTS if ignore_non_defaults is True
if self.DEFAULTS and ignore_non_defaults:
for key in frozenset(update_dict).difference(frozenset(self.DEFAULTS)):
del update_dict[key]
self.update(update_dict) | [
"def",
"json_update",
"(",
"self",
",",
"json_str",
",",
"exclude",
"=",
"[",
"]",
",",
"ignore_non_defaults",
"=",
"True",
")",
":",
"update_dict",
"=",
"json",
".",
"loads",
"(",
"json_str",
",",
"cls",
"=",
"MongoliaJSONDecoder",
",",
"encoding",
"=",
... | Updates a database object based on a json object. The intent of this
method is to allow passing json to an interface which then subsequently
manipulates the object and then sends back an update.
Mongolia will also automatically convert any json values that were
initially converted from ObjectId and datetime.datetime objects back
to their native python object types.
Note: if using AngularJS, make sure to pass json back using
`angular.toJson(obj)` instead of `JSON.stringify(obj)` since angular
sometimes adds `$$hashkey` to javascript objects and this will cause
a mongo error due to the "$" prefix in keys.
@param json_str: the json string containing the new object to use for
the update
@param exclude: a list of top-level keys to exclude from the update
(ID_KEY need not be included in this list; it is automatically
deleted since it can't be part of a mongo update operation)
@param ignore_non_defaults: if this is True and the database object
has non-empty DEFAULTS, then any top-level keys in the update json
that do not appear in DEFAULTS will also be excluded from the update | [
"Updates",
"a",
"database",
"object",
"based",
"on",
"a",
"json",
"object",
".",
"The",
"intent",
"of",
"this",
"method",
"is",
"to",
"allow",
"passing",
"json",
"to",
"an",
"interface",
"which",
"then",
"subsequently",
"manipulates",
"the",
"object",
"and",... | python | train |
saltstack/salt | salt/grains/core.py | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/grains/core.py#L538-L563 | def _aix_memdata():
'''
Return the memory information for AIX systems
'''
grains = {'mem_total': 0, 'swap_total': 0}
prtconf = salt.utils.path.which('prtconf')
if prtconf:
for line in __salt__['cmd.run'](prtconf, python_shell=True).splitlines():
comps = [x for x in line.strip().split(' ') if x]
if len(comps) > 2 and 'Memory' in comps[0] and 'Size' in comps[1]:
grains['mem_total'] = int(comps[2])
break
else:
log.error('The \'prtconf\' binary was not found in $PATH.')
swap_cmd = salt.utils.path.which('swap')
if swap_cmd:
swap_data = __salt__['cmd.run']('{0} -s'.format(swap_cmd)).split()
try:
swap_total = (int(swap_data[-2]) + int(swap_data[-6])) * 4
except ValueError:
swap_total = None
grains['swap_total'] = swap_total
else:
log.error('The \'swap\' binary was not found in $PATH.')
return grains | [
"def",
"_aix_memdata",
"(",
")",
":",
"grains",
"=",
"{",
"'mem_total'",
":",
"0",
",",
"'swap_total'",
":",
"0",
"}",
"prtconf",
"=",
"salt",
".",
"utils",
".",
"path",
".",
"which",
"(",
"'prtconf'",
")",
"if",
"prtconf",
":",
"for",
"line",
"in",
... | Return the memory information for AIX systems | [
"Return",
"the",
"memory",
"information",
"for",
"AIX",
"systems"
] | python | train |
squaresLab/BugZoo | bugzoo/mgr/container.py | https://github.com/squaresLab/BugZoo/blob/68664f1977e85b37a78604f7c570382ffae1fa3b/bugzoo/mgr/container.py#L364-L379 | def coverage(self,
container: Container,
tests: Optional[Iterable[TestCase]] = None,
*,
instrument: bool = True
) -> TestSuiteCoverage:
"""
Computes line coverage information over a provided set of tests for
the program inside a given container.
"""
extractor = self.coverage_extractor(container)
if tests is None:
bugs = self.__installation.bugs
bug = bugs[container.bug]
tests = bug.tests
return extractor.run(tests, instrument=instrument) | [
"def",
"coverage",
"(",
"self",
",",
"container",
":",
"Container",
",",
"tests",
":",
"Optional",
"[",
"Iterable",
"[",
"TestCase",
"]",
"]",
"=",
"None",
",",
"*",
",",
"instrument",
":",
"bool",
"=",
"True",
")",
"->",
"TestSuiteCoverage",
":",
"ext... | Computes line coverage information over a provided set of tests for
the program inside a given container. | [
"Computes",
"line",
"coverage",
"information",
"over",
"a",
"provided",
"set",
"of",
"tests",
"for",
"the",
"program",
"inside",
"a",
"given",
"container",
"."
] | python | train |
juju/charm-helpers | charmhelpers/contrib/hardening/utils.py | https://github.com/juju/charm-helpers/blob/aa785c40c3b7a8c69dbfbc7921d6b9f30142e171/charmhelpers/contrib/hardening/utils.py#L63-L84 | def _get_user_provided_overrides(modules):
"""Load user-provided config overrides.
:param modules: stack modules to lookup in user overrides yaml file.
:returns: overrides dictionary.
"""
overrides = os.path.join(os.environ['JUJU_CHARM_DIR'],
'hardening.yaml')
if os.path.exists(overrides):
log("Found user-provided config overrides file '%s'" %
(overrides), level=DEBUG)
settings = yaml.safe_load(open(overrides))
if settings and settings.get(modules):
log("Applying '%s' overrides" % (modules), level=DEBUG)
return settings.get(modules)
log("No overrides found for '%s'" % (modules), level=DEBUG)
else:
log("No hardening config overrides file '%s' found in charm "
"root dir" % (overrides), level=DEBUG)
return {} | [
"def",
"_get_user_provided_overrides",
"(",
"modules",
")",
":",
"overrides",
"=",
"os",
".",
"path",
".",
"join",
"(",
"os",
".",
"environ",
"[",
"'JUJU_CHARM_DIR'",
"]",
",",
"'hardening.yaml'",
")",
"if",
"os",
".",
"path",
".",
"exists",
"(",
"override... | Load user-provided config overrides.
:param modules: stack modules to lookup in user overrides yaml file.
:returns: overrides dictionary. | [
"Load",
"user",
"-",
"provided",
"config",
"overrides",
"."
] | python | train |
lemieuxl/pyGenClean | pyGenClean/Ethnicity/check_ethnicity.py | https://github.com/lemieuxl/pyGenClean/blob/6173a48ccc0cf3a3bd711b1f2a1fa16248b8bf55/pyGenClean/Ethnicity/check_ethnicity.py#L303-L327 | def compute_eigenvalues(in_prefix, out_prefix):
"""Computes the Eigenvalues using smartpca from Eigensoft.
:param in_prefix: the prefix of the input files.
:param out_prefix: the prefix of the output files.
:type in_prefix: str
:type out_prefix: str
Creates a "parameter file" used by smartpca and runs it.
"""
# First, we create the parameter file
with open(out_prefix + ".parameters", "w") as o_file:
print >>o_file, "genotypename: " + in_prefix + ".bed"
print >>o_file, "snpname: " + in_prefix + ".bim"
print >>o_file, "indivname: " + in_prefix + ".fam"
print >>o_file, "evecoutname: " + out_prefix + ".evec.txt"
print >>o_file, "evaloutname: " + out_prefix + ".eval.txt"
print >>o_file, "numoutlieriter: 0"
print >>o_file, "altnormstyle: NO"
# Executing smartpca
command = ["smartpca", "-p", out_prefix + ".parameters"]
runCommand(command) | [
"def",
"compute_eigenvalues",
"(",
"in_prefix",
",",
"out_prefix",
")",
":",
"# First, we create the parameter file",
"with",
"open",
"(",
"out_prefix",
"+",
"\".parameters\"",
",",
"\"w\"",
")",
"as",
"o_file",
":",
"print",
">>",
"o_file",
",",
"\"genotypename: ... | Computes the Eigenvalues using smartpca from Eigensoft.
:param in_prefix: the prefix of the input files.
:param out_prefix: the prefix of the output files.
:type in_prefix: str
:type out_prefix: str
Creates a "parameter file" used by smartpca and runs it. | [
"Computes",
"the",
"Eigenvalues",
"using",
"smartpca",
"from",
"Eigensoft",
"."
] | python | train |
mardix/Yass | yass/yass.py | https://github.com/mardix/Yass/blob/32f804c1a916f5b0a13d13fa750e52be3b6d666d/yass/yass.py#L398-L471 | def create_page(self, build_dir, filepath, context={}, content=None, template=None, markup=None, layout=None):
"""
To dynamically create a page and save it in the build_dir
:param build_dir: (path) The base directory that will hold the created page
:param filepath: (string) the name of the file to create. May contain slash to indicate directory
It will also create the url based on that name
If the filename doesn't end with .html, it will create a subdirectory
and create `index.html`
If file contains `.html` it will stay as is
ie:
post/waldo/where-is-waldo/ -> post/waldo/where-is-waldo/index.html
another/music/new-rap-song.html -> another/music/new-rap-song.html
post/page/5 -> post/page/5/index.html
:param context: (dict) context data
:param content: (text) The content of the file to be created. Will be overridden by template
:param template: (path) if source is not provided, template can be used to create the page.
Along with context it allows to create dynamic pages.
The file is relative to `/templates/`
file can be in html|jade|md
:param markup: (string: html|jade|md), when using content. To indicate which markup to use.
based on the markup it will parse the data
html: will render as is
jade and md: convert to the appropriate format
:param layout: (string) when using content. The layout to use.
The file location is relative to `/templates/`
file can be in html|jade|md
:return:
"""
build_dir = build_dir.rstrip("/")
filepath = filepath.lstrip("/").rstrip("/")
if not filepath.endswith(".html"):
filepath += "/index.html"
dest_file = os.path.join(build_dir, filepath)
dest_dir = os.path.dirname(dest_file)
if not os.path.isdir(dest_dir):
os.makedirs(dest_dir)
_context = context
if "page" not in _context:
_context["page"] = self.default_page_meta.copy()
if "url" not in _context["page"]:
_context["page"]["url"] = "/" + filepath.lstrip("/").replace(
"index.html", "")
if template:
if template not in self._templates:
self._templates[template] = self.tpl_env.get_template(template)
tpl = self._templates[template]
else:
if markup == "md":
_context["page"]["__toc__"] = md.get_toc(content)
content = md.convert(content)
elif markup == "jade":
content = jade.convert(content)
# Page must be extended by a layout and have a block 'body'
# These tags will be included if they are missing
if re.search(self.RE_EXTENDS, content) is None:
layout = layout or self.default_layout
content = "\n{% extends '{}' %} \n\n".replace("{}",
layout) + content
if re.search(self.RE_BLOCK_BODY, content) is None:
_layout_block = re.search(self.RE_EXTENDS, content).group(0)
content = content.replace(_layout_block, "")
content = "\n" + _layout_block + "\n" + \
"{% block body %} \n" + content.strip() + "\n{% endblock %}"
tpl = self.tpl_env.from_string(content)
with open(dest_file, "w") as fw:
fw.write(tpl.render(**_context)) | [
"def",
"create_page",
"(",
"self",
",",
"build_dir",
",",
"filepath",
",",
"context",
"=",
"{",
"}",
",",
"content",
"=",
"None",
",",
"template",
"=",
"None",
",",
"markup",
"=",
"None",
",",
"layout",
"=",
"None",
")",
":",
"build_dir",
"=",
"build... | To dynamically create a page and save it in the build_dir
:param build_dir: (path) The base directory that will hold the created page
:param filepath: (string) the name of the file to create. May contain slash to indicate directory
It will also create the url based on that name
If the filename doesn't end with .html, it will create a subdirectory
and create `index.html`
If file contains `.html` it will stay as is
ie:
post/waldo/where-is-waldo/ -> post/waldo/where-is-waldo/index.html
another/music/new-rap-song.html -> another/music/new-rap-song.html
post/page/5 -> post/page/5/index.html
:param context: (dict) context data
:param content: (text) The content of the file to be created. Will be overridden by template
:param template: (path) if source is not provided, template can be used to create the page.
Along with context it allows to create dynamic pages.
The file is relative to `/templates/`
file can be in html|jade|md
:param markup: (string: html|jade|md), when using content. To indicate which markup to use.
based on the markup it will parse the data
html: will render as is
jade and md: convert to the appropriate format
:param layout: (string) when using content. The layout to use.
The file location is relative to `/templates/`
file can be in html|jade|md
:return: | [
"To",
"dynamically",
"create",
"a",
"page",
"and",
"save",
"it",
"in",
"the",
"build_dir",
":",
"param",
"build_dir",
":",
"(",
"path",
")",
"The",
"base",
"directory",
"that",
"will",
"hold",
"the",
"created",
"page",
":",
"param",
"filepath",
":",
"(",... | python | train |
briandilley/ebs-deploy | ebs_deploy/__init__.py | https://github.com/briandilley/ebs-deploy/blob/4178c9c1282a9025fb987dab3470bea28c202e10/ebs_deploy/__init__.py#L326-L334 | def environment_exists(self, env_name):
"""
Returns whether or not the given environment exists
"""
response = self.ebs.describe_environments(application_name=self.app_name, environment_names=[env_name],
include_deleted=False)
return len(response['DescribeEnvironmentsResponse']['DescribeEnvironmentsResult']['Environments']) > 0 \
and response['DescribeEnvironmentsResponse']['DescribeEnvironmentsResult']['Environments'][0][
'Status'] != 'Terminated' | [
"def",
"environment_exists",
"(",
"self",
",",
"env_name",
")",
":",
"response",
"=",
"self",
".",
"ebs",
".",
"describe_environments",
"(",
"application_name",
"=",
"self",
".",
"app_name",
",",
"environment_names",
"=",
"[",
"env_name",
"]",
",",
"include_de... | Returns whether or not the given environment exists | [
"Returns",
"whether",
"or",
"not",
"the",
"given",
"environment",
"exists"
] | python | valid |
Ex-Mente/auxi.0 | auxi/tools/transportphenomena/dimensionlessquantities.py | https://github.com/Ex-Mente/auxi.0/blob/2dcdae74154f136f8ca58289fe5b20772f215046/auxi/tools/transportphenomena/dimensionlessquantities.py#L66-L77 | def Re(L: float, v: float, nu: float) -> float:
"""
Calculate the Reynolds number.
:param L: [m] surface characteristic length.
:param v: [m/s] fluid velocity relative to the object.
:param nu: [m2/s] fluid kinematic viscosity.
:returns: float
"""
return v * L / nu | [
"def",
"Re",
"(",
"L",
":",
"float",
",",
"v",
":",
"float",
",",
"nu",
":",
"float",
")",
"->",
"float",
":",
"return",
"v",
"*",
"L",
"/",
"nu"
] | Calculate the Reynolds number.
:param L: [m] surface characteristic length.
:param v: [m/s] fluid velocity relative to the object.
:param nu: [m2/s] fluid kinematic viscosity.
:returns: float | [
"Calculate",
"the",
"Reynolds",
"number",
"."
] | python | valid |
fp12/achallonge | challonge/tournament.py | https://github.com/fp12/achallonge/blob/25780b3c48b66400a50ff9f884e4287afd4c89e4/challonge/tournament.py#L713-L737 | async def process_check_ins(self):
""" finalize the check in phase
|methcoro|
Warning:
|unstable|
Note:
|from_api| This should be invoked after a tournament's check-in window closes before the tournament is started.
1. Marks participants who have not checked in as inactive.
2. Moves inactive participants to bottom seeds (ordered by original seed).
3. Transitions the tournament state from 'checking_in' to 'checked_in'
NOTE: Checked in participants on the waiting list will be promoted if slots become available.
Raises:
APIException
"""
params = {
'include_participants': 1, # forced to 1 since we need to update the Participant instances
'include_matches': 1 if AUTO_GET_MATCHES else 0
}
res = await self.connection('POST', 'tournaments/{}/process_check_ins'.format(self._id), **params)
self._refresh_from_json(res) | [
"async",
"def",
"process_check_ins",
"(",
"self",
")",
":",
"params",
"=",
"{",
"'include_participants'",
":",
"1",
",",
"# forced to 1 since we need to update the Participant instances",
"'include_matches'",
":",
"1",
"if",
"AUTO_GET_MATCHES",
"else",
"0",
"}",
"res",
... | finalize the check in phase
|methcoro|
Warning:
|unstable|
Note:
|from_api| This should be invoked after a tournament's check-in window closes before the tournament is started.
1. Marks participants who have not checked in as inactive.
2. Moves inactive participants to bottom seeds (ordered by original seed).
3. Transitions the tournament state from 'checking_in' to 'checked_in'
NOTE: Checked in participants on the waiting list will be promoted if slots become available.
Raises:
APIException | [
"finalize",
"the",
"check",
"in",
"phase"
] | python | train |
n8henrie/pycookiecheat | src/pycookiecheat/pycookiecheat.py | https://github.com/n8henrie/pycookiecheat/blob/1e0ba783da31689f5b37f9706205ff366d72f03d/src/pycookiecheat/pycookiecheat.py#L150-L244 | def chrome_cookies(
url: str,
cookie_file: str = None,
browser: str = "Chrome",
curl_cookie_file: str = None,
) -> dict:
"""Retrieve cookies from Chrome/Chromium on OSX or Linux.
Args:
url: Domain from which to retrieve cookies, starting with http(s)
cookie_file: Path to alternate file to search for cookies
browser: Name of the browser's cookies to read ('Chrome' or 'Chromium')
curl_cookie_file: Path to save the cookie file to be used with cURL
Returns:
Dictionary of cookie values for URL
"""
# If running Chrome on OSX
if sys.platform == 'darwin':
config = get_osx_config(browser)
elif sys.platform.startswith('linux'):
config = get_linux_config(browser)
else:
raise OSError("This script only works on OSX or Linux.")
config.update({
'init_vector': b' ' * 16,
'length': 16,
'salt': b'saltysalt',
})
if cookie_file:
cookie_file = str(pathlib.Path(cookie_file).expanduser())
else:
cookie_file = str(pathlib.Path(config['cookie_file']).expanduser())
enc_key = pbkdf2_hmac(hash_name='sha1',
password=config['my_pass'].encode('utf8'),
salt=config['salt'],
iterations=config['iterations'],
dklen=config['length'])
parsed_url = urllib.parse.urlparse(url)
if parsed_url.scheme:
domain = parsed_url.netloc
else:
raise urllib.error.URLError("You must include a scheme with your URL.")
try:
conn = sqlite3.connect(cookie_file)
except sqlite3.OperationalError:
print("Unable to connect to cookie_file at: {}\n".format(cookie_file))
raise
# Check whether the column name is `secure` or `is_secure`
secure_column_name = 'is_secure'
for sl_no, column_name, data_type, is_null, default_val, pk \
in conn.execute('PRAGMA table_info(cookies)'):
if column_name == 'secure':
secure_column_name = 'secure'
break
sql = ('select host_key, path, ' + secure_column_name +
', expires_utc, name, value, encrypted_value '
'from cookies where host_key like ?')
cookies = dict()
curl_cookies = []
for host_key in generate_host_keys(domain):
for hk, path, is_secure, expires_utc, cookie_key, val, enc_val \
in conn.execute(sql, (host_key,)):
# if there is a not encrypted value or if the encrypted value
# doesn't start with the 'v1[01]' prefix, return v
if val or (enc_val[:3] not in (b'v10', b'v11')):
pass
else:
val = chrome_decrypt(enc_val, key=enc_key,
init_vector=config['init_vector'])
cookies[cookie_key] = val
if curl_cookie_file:
# http://www.cookiecentral.com/faq/#3.5
curl_cookies.append('\t'.join(
[hk, 'TRUE', path, 'TRUE' if is_secure else 'FALSE',
str(expires_utc), cookie_key, val]
))
conn.rollback()
# Save the file to destination
if curl_cookie_file:
with open(curl_cookie_file, "w") as text_file:
text_file.write('\n'.join(curl_cookies) + '\n')
return cookies | [
"def",
"chrome_cookies",
"(",
"url",
":",
"str",
",",
"cookie_file",
":",
"str",
"=",
"None",
",",
"browser",
":",
"str",
"=",
"\"Chrome\"",
",",
"curl_cookie_file",
":",
"str",
"=",
"None",
",",
")",
"->",
"dict",
":",
"# If running Chrome on OSX",
"if",
... | Retrieve cookies from Chrome/Chromium on OSX or Linux.
Args:
url: Domain from which to retrieve cookies, starting with http(s)
cookie_file: Path to alternate file to search for cookies
browser: Name of the browser's cookies to read ('Chrome' or 'Chromium')
curl_cookie_file: Path to save the cookie file to be used with cURL
Returns:
Dictionary of cookie values for URL | [
"Retrieve",
"cookies",
"from",
"Chrome",
"/",
"Chromium",
"on",
"OSX",
"or",
"Linux",
"."
] | python | train |
timkpaine/pyEX | pyEX/marketdata/http.py | https://github.com/timkpaine/pyEX/blob/91cf751dafdb208a0c8b5377945e5808b99f94ba/pyEX/marketdata/http.py#L85-L106 | def deep(symbol=None, token='', version=''):
'''DEEP is used to receive real-time depth of book quotations direct from IEX.
The depth of book quotations received via DEEP provide an aggregated size of resting displayed orders at a price and side,
and do not indicate the size or number of individual orders at any price level.
Non-displayed orders and non-displayed portions of reserve orders are not represented in DEEP.
DEEP also provides last trade price and size information. Trades resulting from either displayed or non-displayed orders matching on IEX will be reported. Routed executions will not be reported.
https://iexcloud.io/docs/api/#deep
Args:
symbol (string); Ticker to request
token (string); Access token
version (string); API version
Returns:
dict: result
'''
_raiseIfNotStr(symbol)
if symbol:
return _getJson('deep?symbols=' + symbol, token, version)
return _getJson('deep', token, version) | [
"def",
"deep",
"(",
"symbol",
"=",
"None",
",",
"token",
"=",
"''",
",",
"version",
"=",
"''",
")",
":",
"_raiseIfNotStr",
"(",
"symbol",
")",
"if",
"symbol",
":",
"return",
"_getJson",
"(",
"'deep?symbols='",
"+",
"symbol",
",",
"token",
",",
"version... | DEEP is used to receive real-time depth of book quotations direct from IEX.
The depth of book quotations received via DEEP provide an aggregated size of resting displayed orders at a price and side,
and do not indicate the size or number of individual orders at any price level.
Non-displayed orders and non-displayed portions of reserve orders are not represented in DEEP.
DEEP also provides last trade price and size information. Trades resulting from either displayed or non-displayed orders matching on IEX will be reported. Routed executions will not be reported.
https://iexcloud.io/docs/api/#deep
Args:
symbol (string); Ticker to request
token (string); Access token
version (string); API version
Returns:
dict: result | [
"DEEP",
"is",
"used",
"to",
"receive",
"real",
"-",
"time",
"depth",
"of",
"book",
"quotations",
"direct",
"from",
"IEX",
".",
"The",
"depth",
"of",
"book",
"quotations",
"received",
"via",
"DEEP",
"provide",
"an",
"aggregated",
"size",
"of",
"resting",
"d... | python | valid |
astropy/astropy-healpix | astropy_healpix/core.py | https://github.com/astropy/astropy-healpix/blob/c7fbe36305aadda9946dd37969d5dcb9ff6b1440/astropy_healpix/core.py#L529-L558 | def interpolate_bilinear_lonlat(lon, lat, values, order='ring'):
"""
Interpolate values at specific longitudes/latitudes using bilinear interpolation
Parameters
----------
lon, lat : :class:`~astropy.units.Quantity`
The longitude and latitude values as :class:`~astropy.units.Quantity` instances
with angle units.
values : `~numpy.ndarray`
Array with the values in each HEALPix pixel. The first dimension should
have length 12 * nside ** 2 (and nside is determined automatically from
this).
order : { 'nested' | 'ring' }
Order of HEALPix pixels
Returns
-------
result : float `~numpy.ndarray`
The interpolated values
"""
nside = npix_to_nside(values.shape[0])
indices, weights = bilinear_interpolation_weights(lon, lat, nside, order=order)
values = values[indices]
# At this point values has shape (N, M) where both N and M might be several
# dimensions, and weights has shape (N,), so we need to transpose in order
# to benefit from broadcasting, then transpose back so that the dimension
# with length 4 is at the start again, ready for summing.
result = (values.T * weights.T).T
return result.sum(axis=0) | [
"def",
"interpolate_bilinear_lonlat",
"(",
"lon",
",",
"lat",
",",
"values",
",",
"order",
"=",
"'ring'",
")",
":",
"nside",
"=",
"npix_to_nside",
"(",
"values",
".",
"shape",
"[",
"0",
"]",
")",
"indices",
",",
"weights",
"=",
"bilinear_interpolation_weight... | Interpolate values at specific longitudes/latitudes using bilinear interpolation
Parameters
----------
lon, lat : :class:`~astropy.units.Quantity`
The longitude and latitude values as :class:`~astropy.units.Quantity` instances
with angle units.
values : `~numpy.ndarray`
Array with the values in each HEALPix pixel. The first dimension should
have length 12 * nside ** 2 (and nside is determined automatically from
this).
order : { 'nested' | 'ring' }
Order of HEALPix pixels
Returns
-------
result : float `~numpy.ndarray`
The interpolated values | [
"Interpolate",
"values",
"at",
"specific",
"longitudes",
"/",
"latitudes",
"using",
"bilinear",
"interpolation"
] | python | train |
JdeRobot/base | src/drivers/MAVLinkServer/MAVProxy/pymavlink/dialects/v10/matrixpilot.py | https://github.com/JdeRobot/base/blob/303b18992785b2fe802212f2d758a60873007f1f/src/drivers/MAVLinkServer/MAVProxy/pymavlink/dialects/v10/matrixpilot.py#L8826-L8835 | def param_request_list_send(self, target_system, target_component, force_mavlink1=False):
'''
Request all parameters of this component. After this request, all
parameters are emitted.
target_system : System ID (uint8_t)
target_component : Component ID (uint8_t)
'''
return self.send(self.param_request_list_encode(target_system, target_component), force_mavlink1=force_mavlink1) | [
"def",
"param_request_list_send",
"(",
"self",
",",
"target_system",
",",
"target_component",
",",
"force_mavlink1",
"=",
"False",
")",
":",
"return",
"self",
".",
"send",
"(",
"self",
".",
"param_request_list_encode",
"(",
"target_system",
",",
"target_component",
... | Request all parameters of this component. After this request, all
parameters are emitted.
target_system : System ID (uint8_t)
target_component : Component ID (uint8_t) | [
"Request",
"all",
"parameters",
"of",
"this",
"component",
".",
"After",
"this",
"request",
"all",
"parameters",
"are",
"emitted",
"."
] | python | train |
vals/umis | umis/umis.py | https://github.com/vals/umis/blob/e8adb8486d9e9134ab8a6cad9811a7e74dcc4a2c/umis/umis.py#L959-L985 | def cb_histogram(fastq, umi_histogram):
''' Counts the number of reads for each cellular barcode
Expects formatted fastq files.
'''
annotations = detect_fastq_annotations(fastq)
re_string = construct_transformed_regex(annotations)
parser_re = re.compile(re_string)
cb_counter = collections.Counter()
umi_counter = collections.Counter()
for read in read_fastq(fastq):
match = parser_re.search(read).groupdict()
cb = match['CB']
cb_counter[cb] += 1
if umi_histogram:
umi = match['MB']
umi_counter[(cb, umi)] += 1
for bc, count in cb_counter.most_common():
sys.stdout.write('{}\t{}\n'.format(bc, count))
if umi_histogram:
with open(umi_histogram, "w") as umi_handle:
for cbumi, count in umi_counter.most_common():
umi_handle.write('{}\t{}\t{}\n'.format(cbumi[0], cbumi[1], count)) | [
"def",
"cb_histogram",
"(",
"fastq",
",",
"umi_histogram",
")",
":",
"annotations",
"=",
"detect_fastq_annotations",
"(",
"fastq",
")",
"re_string",
"=",
"construct_transformed_regex",
"(",
"annotations",
")",
"parser_re",
"=",
"re",
".",
"compile",
"(",
"re_strin... | Counts the number of reads for each cellular barcode
Expects formatted fastq files. | [
"Counts",
"the",
"number",
"of",
"reads",
"for",
"each",
"cellular",
"barcode"
] | python | train |
digidotcom/python-wvalib | wva/cli.py | https://github.com/digidotcom/python-wvalib/blob/4252735e2775f80ebaffd813fbe84046d26906b3/wva/cli.py#L110-L123 | def cli(ctx, hostname, username, password, config_dir, https):
"""Command-line interface for interacting with a WVA device"""
ctx.is_root = True
ctx.user_values_entered = False
ctx.config_dir = os.path.abspath(os.path.expanduser(config_dir))
ctx.config = load_config(ctx)
ctx.hostname = hostname
ctx.username = username
ctx.password = password
ctx.https = https
# Creating the WVA object is deferred as some commands like clearconfig
# should not require a username/password to perform them
ctx.wva = None | [
"def",
"cli",
"(",
"ctx",
",",
"hostname",
",",
"username",
",",
"password",
",",
"config_dir",
",",
"https",
")",
":",
"ctx",
".",
"is_root",
"=",
"True",
"ctx",
".",
"user_values_entered",
"=",
"False",
"ctx",
".",
"config_dir",
"=",
"os",
".",
"path... | Command-line interface for interacting with a WVA device | [
"Command",
"-",
"line",
"interface",
"for",
"interacting",
"with",
"a",
"WVA",
"device"
] | python | train |
JukeboxPipeline/jukebox-core | src/jukeboxcore/addons/guerilla/guerillamgmt.py | https://github.com/JukeboxPipeline/jukebox-core/blob/bac2280ca49940355270e4b69400ce9976ab2e6f/src/jukeboxcore/addons/guerilla/guerillamgmt.py#L2176-L2191 | def dep_add_prj(self, *args, **kwargs):
"""Add projects to the current department
:returns: None
:rtype: None
:raises: None
"""
if not self.cur_dep:
return
dialog = ProjectAdderDialog(department=self.cur_dep)
dialog.exec_()
prjs = dialog.projects
for prj in prjs:
prjdata = djitemdata.ProjectItemData(prj)
treemodel.TreeItem(prjdata, self.dep_prj_model.root) | [
"def",
"dep_add_prj",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"not",
"self",
".",
"cur_dep",
":",
"return",
"dialog",
"=",
"ProjectAdderDialog",
"(",
"department",
"=",
"self",
".",
"cur_dep",
")",
"dialog",
".",
"exec_"... | Add projects to the current department
:returns: None
:rtype: None
:raises: None | [
"Add",
"projects",
"to",
"the",
"current",
"department"
] | python | train |
SheffieldML/GPy | GPy/likelihoods/likelihood.py | https://github.com/SheffieldML/GPy/blob/54c32d79d289d622fb18b898aee65a2a431d90cf/GPy/likelihoods/likelihood.py#L413-L438 | def predictive_mean(self, mu, variance, Y_metadata=None):
"""
Quadrature calculation of the predictive mean: E(Y_star|Y) = E( E(Y_star|f_star, Y) )
:param mu: mean of posterior
:param sigma: standard deviation of posterior
"""
    #conditional_mean: the expected value of y given some f, under this likelihood
fmin = -np.inf
fmax = np.inf
def int_mean(f,m,v):
exponent = -(0.5/v)*np.square(f - m)
            #If exponent is under -30 then exp(exponent) will be very small, so don't exp it!
#If p is zero then conditional_mean will overflow
assert v.all() > 0
p = safe_exp(exponent)
#If p is zero then conditional_variance will overflow
if p < 1e-10:
return 0.
else:
return self.conditional_mean(f)*p
scaled_mean = [quad(int_mean, fmin, fmax,args=(mj,s2j))[0] for mj,s2j in zip(mu,variance)]
mean = np.array(scaled_mean)[:,None] / np.sqrt(2*np.pi*(variance))
return mean | [
"def",
"predictive_mean",
"(",
"self",
",",
"mu",
",",
"variance",
",",
"Y_metadata",
"=",
"None",
")",
":",
"#conditional_mean: the edpected value of y given some f, under this likelihood",
"fmin",
"=",
"-",
"np",
".",
"inf",
"fmax",
"=",
"np",
".",
"inf",
"def",... | Quadrature calculation of the predictive mean: E(Y_star|Y) = E( E(Y_star|f_star, Y) )
:param mu: mean of posterior
:param sigma: standard deviation of posterior | [
"Quadrature",
"calculation",
"of",
"the",
"predictive",
"mean",
":",
"E",
"(",
"Y_star|Y",
")",
"=",
"E",
"(",
"E",
"(",
"Y_star|f_star",
"Y",
")",
")"
] | python | train |
qacafe/cdrouter.py | cdrouter/results.py | https://github.com/qacafe/cdrouter.py/blob/aacf2c6ab0b987250f7b1892f4bba14bb2b7dbe5/cdrouter/results.py#L623-L636 | def updates(self, id, update_id=None): # pylint: disable=invalid-name,redefined-builtin
"""Get updates of a running result via long-polling. If no updates are available, CDRouter waits up to 10 seconds before sending an empty response.
:param id: Result ID as an int.
:param update_id: (optional) Update ID as an int.
:return: :class:`results.Update <results.Update>` object
:rtype: results.Update
"""
if update_id is None:
update_id = -1
schema = UpdateSchema()
resp = self.service.get_id(self.base, id, params={'updates': update_id})
return self.service.decode(schema, resp) | [
"def",
"updates",
"(",
"self",
",",
"id",
",",
"update_id",
"=",
"None",
")",
":",
"# pylint: disable=invalid-name,redefined-builtin",
"if",
"update_id",
"is",
"None",
":",
"update_id",
"=",
"-",
"1",
"schema",
"=",
"UpdateSchema",
"(",
")",
"resp",
"=",
"se... | Get updates of a running result via long-polling. If no updates are available, CDRouter waits up to 10 seconds before sending an empty response.
:param id: Result ID as an int.
:param update_id: (optional) Update ID as an int.
:return: :class:`results.Update <results.Update>` object
:rtype: results.Update | [
"Get",
"updates",
"of",
"a",
"running",
"result",
"via",
"long",
"-",
"polling",
".",
"If",
"no",
"updates",
"are",
"available",
"CDRouter",
"waits",
"up",
"to",
"10",
"seconds",
"before",
"sending",
"an",
"empty",
"response",
"."
] | python | train |
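The `updates()` method above defaults a missing `update_id` to -1 before polling the service. A minimal sketch of that long-polling pattern, with a fake in-memory service standing in for the real CDRouter API (the class and payloads are illustrative):

```python
class FakeUpdateService:
    """Stand-in for the CDRouter service: returns the first update newer
    than `since`, or None (the real API waits up to 10 s, then replies empty)."""
    def __init__(self, updates):
        self._updates = updates  # list of (update_id, payload)

    def get(self, since):
        newer = [u for u in self._updates if u[0] > since]
        return newer[0] if newer else None

def poll_updates(service, update_id=None):
    """Mirror the defaulting in updates(): None becomes -1 ('from the start')."""
    if update_id is None:
        update_id = -1
    return service.get(update_id)

svc = FakeUpdateService([(1, "running"), (2, "passed")])
first = poll_updates(svc)             # update_id defaults to -1 -> first update
second = poll_updates(svc, first[0])  # ask only for updates after id 1
```

The caller threads the last seen id back into the next poll, which is what makes long-polling resumable.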
Qiskit/qiskit-terra | qiskit/visualization/text.py | https://github.com/Qiskit/qiskit-terra/blob/d4f58d903bc96341b816f7c35df936d6421267d1/qiskit/visualization/text.py#L935-L944 | def set_cl_multibox(self, creg, label, top_connect='┴'):
"""
Sets the multi clbit box.
Args:
creg (string): The affected classical register.
label (string): The label for the multi clbit box.
top_connect (char): The char to connect the box on the top.
"""
clbit = [bit for bit in self.cregs if bit[0] == creg]
self._set_multibox("cl", clbit, label, top_connect=top_connect) | [
"def",
"set_cl_multibox",
"(",
"self",
",",
"creg",
",",
"label",
",",
"top_connect",
"=",
"'┴'):",
"",
"",
"clbit",
"=",
"[",
"bit",
"for",
"bit",
"in",
"self",
".",
"cregs",
"if",
"bit",
"[",
"0",
"]",
"==",
"creg",
"]",
"self",
".",
"_set_multib... | Sets the multi clbit box.
Args:
creg (string): The affected classical register.
label (string): The label for the multi clbit box.
top_connect (char): The char to connect the box on the top. | [
"Sets",
"the",
"multi",
"clbit",
"box",
".",
"Args",
":",
"creg",
"(",
"string",
")",
":",
"The",
"affected",
"classical",
"register",
".",
"label",
"(",
"string",
")",
":",
"The",
"label",
"for",
"the",
"multi",
"clbit",
"box",
".",
"top_connect",
"("... | python | test |
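Before drawing the box, `set_cl_multibox` filters the drawer's classical bits down to those belonging to the named register with `[bit for bit in self.cregs if bit[0] == creg]`. That selection step in isolation (register names here are illustrative):

```python
def bits_for_register(bits, reg_name):
    """Select the (register, index) pairs that belong to one classical
    register -- the same filter set_cl_multibox applies before drawing."""
    return [bit for bit in bits if bit[0] == reg_name]

cregs = [("c0", 0), ("c0", 1), ("c1", 0)]
selected = bits_for_register(cregs, "c0")
```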
arista-eosplus/pyeapi | pyeapi/api/mlag.py | https://github.com/arista-eosplus/pyeapi/blob/96a74faef1fe3bd79c4e900aed29c9956a0587d6/pyeapi/api/mlag.py#L227-L239 | def set_peer_address(self, value=None, default=False, disable=False):
"""Configures the mlag peer-address value
Args:
value (str): The value to configure the peer-address
default (bool): Configures the peer-address using the
default keyword
disable (bool): Negates the peer-address using the no keyword
Returns:
bool: Returns True if the commands complete successfully
"""
return self._configure_mlag('peer-address', value, default, disable) | [
"def",
"set_peer_address",
"(",
"self",
",",
"value",
"=",
"None",
",",
"default",
"=",
"False",
",",
"disable",
"=",
"False",
")",
":",
"return",
"self",
".",
"_configure_mlag",
"(",
"'peer-address'",
",",
"value",
",",
"default",
",",
"disable",
")"
] | Configures the mlag peer-address value
Args:
value (str): The value to configure the peer-address
default (bool): Configures the peer-address using the
default keyword
disable (bool): Negates the peer-address using the no keyword
Returns:
bool: Returns True if the commands complete successfully | [
"Configures",
"the",
"mlag",
"peer",
"-",
"address",
"value"
] | python | train |
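`set_peer_address` delegates to a shared `_configure_mlag` helper that turns the value/default/disable flags into one EOS CLI line. A hypothetical sketch of such a command builder — the precedence (`default` over `disable` over a plain assignment) is an assumption, not pyeapi's verified behavior:

```python
def build_command(keyword, value=None, default=False, disable=False):
    """Hypothetical mirror of a pyeapi-style command builder."""
    if default:
        return "default {0}".format(keyword)     # revert to platform default
    if disable or value is None:
        return "no {0}".format(keyword)          # negate the setting
    return "{0} {1}".format(keyword, value)      # plain assignment

cmd_set = build_command("peer-address", "10.0.0.1")
cmd_neg = build_command("peer-address", disable=True)
cmd_def = build_command("peer-address", default=True)
```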
Gorialis/jishaku | jishaku/cog.py | https://github.com/Gorialis/jishaku/blob/fc7c479b9d510ede189a929c8aa6f7c8ef7f9a6e/jishaku/cog.py#L446-L477 | async def jsk_curl(self, ctx: commands.Context, url: str):
"""
Download and display a text file from the internet.
This command is similar to jsk cat, but accepts a URL.
"""
# remove embed maskers if present
url = url.lstrip("<").rstrip(">")
async with ReplResponseReactor(ctx.message):
async with aiohttp.ClientSession() as session:
async with session.get(url) as response:
data = await response.read()
hints = (
response.content_type,
url
)
code = response.status
if not data:
return await ctx.send(f"HTTP response was empty (status code {code}).")
try:
paginator = WrappedFilePaginator(io.BytesIO(data), language_hints=hints, max_size=1985)
except UnicodeDecodeError:
return await ctx.send(f"Couldn't determine the encoding of the response. (status code {code})")
except ValueError as exc:
return await ctx.send(f"Couldn't read response (status code {code}), {exc}")
interface = PaginatorInterface(ctx.bot, paginator, owner=ctx.author)
await interface.send_to(ctx) | [
"async",
"def",
"jsk_curl",
"(",
"self",
",",
"ctx",
":",
"commands",
".",
"Context",
",",
"url",
":",
"str",
")",
":",
"# remove embed maskers if present",
"url",
"=",
"url",
".",
"lstrip",
"(",
"\"<\"",
")",
".",
"rstrip",
"(",
"\">\"",
")",
"async",
... | Download and display a text file from the internet.
This command is similar to jsk cat, but accepts a URL. | [
"Download",
"and",
"display",
"a",
"text",
"file",
"from",
"the",
"internet",
"."
] | python | train |
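The first thing `jsk_curl` does is strip the `<...>` wrapper Discord users add around links to suppress embeds. That one-liner as a reusable function:

```python
def strip_embed_maskers(url):
    """Remove leading '<' and trailing '>' characters, exactly as jsk curl
    does with url.lstrip("<").rstrip(">") before fetching."""
    return url.lstrip("<").rstrip(">")

cleaned = strip_embed_maskers("<https://example.com/data.txt>")
```

A bare URL without maskers passes through unchanged.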
wavefrontHQ/python-client | wavefront_api_client/models/chart.py | https://github.com/wavefrontHQ/python-client/blob/b0f1046a8f68c2c7d69e395f7167241f224c738a/wavefront_api_client/models/chart.py#L323-L338 | def summarization(self, summarization):
"""Sets the summarization of this Chart.
Summarization strategy for the chart. MEAN is default # noqa: E501
:param summarization: The summarization of this Chart. # noqa: E501
:type: str
"""
allowed_values = ["MEAN", "MEDIAN", "MIN", "MAX", "SUM", "COUNT", "LAST", "FIRST"] # noqa: E501
if summarization not in allowed_values:
raise ValueError(
"Invalid value for `summarization` ({0}), must be one of {1}" # noqa: E501
.format(summarization, allowed_values)
)
self._summarization = summarization | [
"def",
"summarization",
"(",
"self",
",",
"summarization",
")",
":",
"allowed_values",
"=",
"[",
"\"MEAN\"",
",",
"\"MEDIAN\"",
",",
"\"MIN\"",
",",
"\"MAX\"",
",",
"\"SUM\"",
",",
"\"COUNT\"",
",",
"\"LAST\"",
",",
"\"FIRST\"",
"]",
"# noqa: E501",
"if",
"s... | Sets the summarization of this Chart.
Summarization strategy for the chart. MEAN is default # noqa: E501
:param summarization: The summarization of this Chart. # noqa: E501
:type: str | [
"Sets",
"the",
"summarization",
"of",
"this",
"Chart",
"."
] | python | train |
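The generated setter above validates against a whitelist and raises `ValueError` otherwise. A minimal property-based sketch of the same validating-setter pattern (the class is trimmed to just this attribute):

```python
class Chart:
    """Minimal sketch of a model with a validating summarization setter."""
    _ALLOWED = ("MEAN", "MEDIAN", "MIN", "MAX", "SUM", "COUNT", "LAST", "FIRST")

    @property
    def summarization(self):
        return self._summarization

    @summarization.setter
    def summarization(self, value):
        if value not in self._ALLOWED:
            raise ValueError(
                "Invalid value for `summarization` ({0}), must be one of {1}"
                .format(value, list(self._ALLOWED)))
        self._summarization = value

chart = Chart()
chart.summarization = "MEAN"
try:
    chart.summarization = "MODE"   # not in the whitelist
    rejected = False
except ValueError:
    rejected = True
```

A rejected assignment leaves the previously set value untouched.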
SHDShim/pytheos | pytheos/eqn_hugoniot.py | https://github.com/SHDShim/pytheos/blob/be079624405e92fbec60c5ead253eb5917e55237/pytheos/eqn_hugoniot.py#L146-L163 | def hugoniot_rho_single(p, rho0, c0, s, min_strain=0.01):
"""
calculate density in g/cm^3 from a hugoniot curve
:param p: pressure in GPa
:param rho0: density at 1 bar in g/cm^3
:param c0: velocity at 1 bar in km/s
:param s: slope of the velocity change
:param min_strain: defining minimum v/v0 value to search volume for
:return: density in g/cm^3
"""
if p <= 1.e-5:
return rho0
def f_diff(rho):
return hugoniot_p(rho, rho0, c0, s) - p
rho = brenth(f_diff, rho0, rho0 / min_strain)
return rho | [
"def",
"hugoniot_rho_single",
"(",
"p",
",",
"rho0",
",",
"c0",
",",
"s",
",",
"min_strain",
"=",
"0.01",
")",
":",
"if",
"p",
"<=",
"1.e-5",
":",
"return",
"rho0",
"def",
"f_diff",
"(",
"rho",
")",
":",
"return",
"hugoniot_p",
"(",
"rho",
",",
"rh... | calculate density in g/cm^3 from a hugoniot curve
:param p: pressure in GPa
:param rho0: density at 1 bar in g/cm^3
:param c0: velocity at 1 bar in km/s
:param s: slope of the velocity change
:param min_strain: defining minimum v/v0 value to search volume for
:return: density in g/cm^3 | [
"calculate",
"density",
"in",
"g",
"/",
"cm^3",
"from",
"a",
"hugoniot",
"curve"
] | python | train |
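`hugoniot_rho_single` inverts the Hugoniot pressure numerically with `scipy.optimize.brenth` on the bracket `[rho0, rho0/min_strain]`. A stdlib sketch of the same inversion using plain bisection — note the body of `hugoniot_p` is not shown in the row above, so the standard linear Us-up form used here is an assumption:

```python
def hugoniot_p(rho, rho0, c0, s):
    """Assumed standard linear Us-up Hugoniot: P = rho0*c0^2*eta/(1-s*eta)^2,
    with eta = 1 - rho0/rho (GPa for rho in g/cm^3 and c0 in km/s)."""
    eta = 1.0 - rho0 / rho
    return rho0 * c0 * c0 * eta / (1.0 - s * eta) ** 2

def hugoniot_rho(p, rho0, c0, s, min_strain=0.01, tol=1e-12):
    """Invert hugoniot_p by bisection on [rho0, rho0/min_strain],
    standing in for scipy.optimize.brenth."""
    if p <= 1.e-5:
        return rho0
    lo, hi = rho0, rho0 / min_strain
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if hugoniot_p(mid, rho0, c0, s) < p:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

rho = hugoniot_rho(50.0, rho0=3.22, c0=6.6, s=0.92)  # forsterite-like numbers
```

Bisection is slower than Brent's method but needs only that the pressure be monotone in density over the bracket.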
pandas-dev/pandas | pandas/core/generic.py | https://github.com/pandas-dev/pandas/blob/9feb3ad92cc0397a04b665803a49299ee7aa1037/pandas/core/generic.py#L6899-L7067 | def asof(self, where, subset=None):
"""
Return the last row(s) without any NaNs before `where`.
The last row (for each element in `where`, if list) without any
NaN is taken.
In case of a :class:`~pandas.DataFrame`, the last row without NaN
considering only the subset of columns (if not `None`)
.. versionadded:: 0.19.0 For DataFrame
If there is no good value, NaN is returned for a Series or
a Series of NaN values for a DataFrame
Parameters
----------
where : date or array-like of dates
Date(s) before which the last row(s) are returned.
subset : str or array-like of str, default `None`
For DataFrame, if not `None`, only use these columns to
check for NaNs.
Returns
-------
scalar, Series, or DataFrame
The return can be:
* scalar : when `self` is a Series and `where` is a scalar
* Series: when `self` is a Series and `where` is an array-like,
or when `self` is a DataFrame and `where` is a scalar
* DataFrame : when `self` is a DataFrame and `where` is an
array-like
Return scalar, Series, or DataFrame.
See Also
--------
merge_asof : Perform an asof merge. Similar to left join.
Notes
-----
Dates are assumed to be sorted. Raises if this is not the case.
Examples
--------
A Series and a scalar `where`.
>>> s = pd.Series([1, 2, np.nan, 4], index=[10, 20, 30, 40])
>>> s
10 1.0
20 2.0
30 NaN
40 4.0
dtype: float64
>>> s.asof(20)
2.0
For a sequence `where`, a Series is returned. The first value is
NaN, because the first element of `where` is before the first
index value.
>>> s.asof([5, 20])
5 NaN
20 2.0
dtype: float64
Missing values are not considered. The following is ``2.0``, not
NaN, even though NaN is at the index location for ``30``.
>>> s.asof(30)
2.0
Take all columns into consideration
>>> df = pd.DataFrame({'a': [10, 20, 30, 40, 50],
... 'b': [None, None, None, None, 500]},
... index=pd.DatetimeIndex(['2018-02-27 09:01:00',
... '2018-02-27 09:02:00',
... '2018-02-27 09:03:00',
... '2018-02-27 09:04:00',
... '2018-02-27 09:05:00']))
>>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
... '2018-02-27 09:04:30']))
a b
2018-02-27 09:03:30 NaN NaN
2018-02-27 09:04:30 NaN NaN
Take a single column into consideration
>>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
... '2018-02-27 09:04:30']),
... subset=['a'])
a b
2018-02-27 09:03:30 30.0 NaN
2018-02-27 09:04:30 40.0 NaN
"""
if isinstance(where, str):
from pandas import to_datetime
where = to_datetime(where)
if not self.index.is_monotonic:
raise ValueError("asof requires a sorted index")
is_series = isinstance(self, ABCSeries)
if is_series:
if subset is not None:
raise ValueError("subset is not valid for Series")
elif self.ndim > 2:
raise NotImplementedError("asof is not implemented "
"for {type}".format(type=type(self)))
else:
if subset is None:
subset = self.columns
if not is_list_like(subset):
subset = [subset]
is_list = is_list_like(where)
if not is_list:
start = self.index[0]
if isinstance(self.index, PeriodIndex):
where = Period(where, freq=self.index.freq).ordinal
start = start.ordinal
if where < start:
if not is_series:
from pandas import Series
return Series(index=self.columns, name=where)
return np.nan
# It's always much faster to use a *while* loop here for
# Series than pre-computing all the NAs. However a
# *while* loop is extremely expensive for DataFrame
# so we later pre-compute all the NAs and use the same
# code path whether *where* is a scalar or list.
# See PR: https://github.com/pandas-dev/pandas/pull/14476
if is_series:
loc = self.index.searchsorted(where, side='right')
if loc > 0:
loc -= 1
values = self._values
while loc > 0 and isna(values[loc]):
loc -= 1
return values[loc]
if not isinstance(where, Index):
where = Index(where) if is_list else Index([where])
nulls = self.isna() if is_series else self[subset].isna().any(1)
if nulls.all():
if is_series:
return self._constructor(np.nan, index=where, name=self.name)
elif is_list:
from pandas import DataFrame
return DataFrame(np.nan, index=where, columns=self.columns)
else:
from pandas import Series
return Series(np.nan, index=self.columns, name=where[0])
locs = self.index.asof_locs(where, ~(nulls.values))
# mask the missing
missing = locs == -1
data = self.take(locs, is_copy=False)
data.index = where
data.loc[missing] = np.nan
return data if is_list else data.iloc[-1] | [
"def",
"asof",
"(",
"self",
",",
"where",
",",
"subset",
"=",
"None",
")",
":",
"if",
"isinstance",
"(",
"where",
",",
"str",
")",
":",
"from",
"pandas",
"import",
"to_datetime",
"where",
"=",
"to_datetime",
"(",
"where",
")",
"if",
"not",
"self",
".... | Return the last row(s) without any NaNs before `where`.
The last row (for each element in `where`, if list) without any
NaN is taken.
In case of a :class:`~pandas.DataFrame`, the last row without NaN
considering only the subset of columns (if not `None`)
.. versionadded:: 0.19.0 For DataFrame
If there is no good value, NaN is returned for a Series or
a Series of NaN values for a DataFrame
Parameters
----------
where : date or array-like of dates
Date(s) before which the last row(s) are returned.
subset : str or array-like of str, default `None`
For DataFrame, if not `None`, only use these columns to
check for NaNs.
Returns
-------
scalar, Series, or DataFrame
The return can be:
* scalar : when `self` is a Series and `where` is a scalar
* Series: when `self` is a Series and `where` is an array-like,
or when `self` is a DataFrame and `where` is a scalar
* DataFrame : when `self` is a DataFrame and `where` is an
array-like
Return scalar, Series, or DataFrame.
See Also
--------
merge_asof : Perform an asof merge. Similar to left join.
Notes
-----
Dates are assumed to be sorted. Raises if this is not the case.
Examples
--------
A Series and a scalar `where`.
>>> s = pd.Series([1, 2, np.nan, 4], index=[10, 20, 30, 40])
>>> s
10 1.0
20 2.0
30 NaN
40 4.0
dtype: float64
>>> s.asof(20)
2.0
For a sequence `where`, a Series is returned. The first value is
NaN, because the first element of `where` is before the first
index value.
>>> s.asof([5, 20])
5 NaN
20 2.0
dtype: float64
Missing values are not considered. The following is ``2.0``, not
NaN, even though NaN is at the index location for ``30``.
>>> s.asof(30)
2.0
Take all columns into consideration
>>> df = pd.DataFrame({'a': [10, 20, 30, 40, 50],
... 'b': [None, None, None, None, 500]},
... index=pd.DatetimeIndex(['2018-02-27 09:01:00',
... '2018-02-27 09:02:00',
... '2018-02-27 09:03:00',
... '2018-02-27 09:04:00',
... '2018-02-27 09:05:00']))
>>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
... '2018-02-27 09:04:30']))
a b
2018-02-27 09:03:30 NaN NaN
2018-02-27 09:04:30 NaN NaN
Take a single column into consideration
>>> df.asof(pd.DatetimeIndex(['2018-02-27 09:03:30',
... '2018-02-27 09:04:30']),
... subset=['a'])
a b
2018-02-27 09:03:30 30.0 NaN
2018-02-27 09:04:30 40.0 NaN | [
"Return",
"the",
"last",
"row",
"(",
"s",
")",
"without",
"any",
"NaNs",
"before",
"where",
"."
] | python | train |
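For the scalar-Series path, `asof` does a right-sided binary search on the sorted index and then walks backwards past NaNs. A stdlib sketch of that core logic on plain lists (a simplification of pandas' full scalar/list/DataFrame handling):

```python
import bisect
import math

def asof(index, values, where):
    """Stdlib sketch of Series.asof for a sorted index: locate the last
    position at or before `where`, then step back over NaNs."""
    if index != sorted(index):
        raise ValueError("asof requires a sorted index")
    loc = bisect.bisect_right(index, where) - 1
    if loc < 0:
        return float("nan")  # `where` precedes the first index value
    while loc > 0 and math.isnan(values[loc]):
        loc -= 1
    return values[loc]

# Mirrors the docstring example: NaN at index 30 is skipped, giving 2.0.
result = asof([10, 20, 30, 40], [1.0, 2.0, float("nan"), 4.0], 30)
```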
librosa/librosa | librosa/feature/spectral.py | https://github.com/librosa/librosa/blob/180e8e6eb8f958fa6b20b8cba389f7945d508247/librosa/feature/spectral.py#L1631-L1744 | def melspectrogram(y=None, sr=22050, S=None, n_fft=2048, hop_length=512,
win_length=None, window='hann', center=True, pad_mode='reflect',
power=2.0, **kwargs):
"""Compute a mel-scaled spectrogram.
If a spectrogram input `S` is provided, then it is mapped directly onto
the mel basis `mel_f` by `mel_f.dot(S)`.
If a time-series input `y, sr` is provided, then its magnitude spectrogram
`S` is first computed, and then mapped onto the mel scale by
`mel_f.dot(S**power)`. By default, `power=2` operates on a power spectrum.
Parameters
----------
y : np.ndarray [shape=(n,)] or None
audio time-series
sr : number > 0 [scalar]
sampling rate of `y`
S : np.ndarray [shape=(d, t)]
spectrogram
n_fft : int > 0 [scalar]
length of the FFT window
hop_length : int > 0 [scalar]
number of samples between successive frames.
See `librosa.core.stft`
win_length : int <= n_fft [scalar]
Each frame of audio is windowed by `window()`.
The window will be of length `win_length` and then padded
with zeros to match `n_fft`.
If unspecified, defaults to ``win_length = n_fft``.
window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]
- a window specification (string, tuple, or number);
see `scipy.signal.get_window`
- a window function, such as `scipy.signal.hanning`
- a vector or array of length `n_fft`
.. see also:: `filters.get_window`
center : boolean
- If `True`, the signal `y` is padded so that frame
`t` is centered at `y[t * hop_length]`.
- If `False`, then frame `t` begins at `y[t * hop_length]`
pad_mode : string
If `center=True`, the padding mode to use at the edges of the signal.
By default, STFT uses reflection padding.
power : float > 0 [scalar]
Exponent for the magnitude melspectrogram.
e.g., 1 for energy, 2 for power, etc.
kwargs : additional keyword arguments
Mel filter bank parameters.
See `librosa.filters.mel` for details.
Returns
-------
S : np.ndarray [shape=(n_mels, t)]
Mel spectrogram
See Also
--------
librosa.filters.mel
Mel filter bank construction
librosa.core.stft
Short-time Fourier Transform
Examples
--------
>>> y, sr = librosa.load(librosa.util.example_audio_file())
>>> librosa.feature.melspectrogram(y=y, sr=sr)
array([[ 2.891e-07, 2.548e-03, ..., 8.116e-09, 5.633e-09],
[ 1.986e-07, 1.162e-02, ..., 9.332e-08, 6.716e-09],
...,
[ 3.668e-09, 2.029e-08, ..., 3.208e-09, 2.864e-09],
[ 2.561e-10, 2.096e-09, ..., 7.543e-10, 6.101e-10]])
Using a pre-computed power spectrogram
>>> D = np.abs(librosa.stft(y))**2
>>> S = librosa.feature.melspectrogram(S=D)
>>> # Passing through arguments to the Mel filters
>>> S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128,
... fmax=8000)
>>> import matplotlib.pyplot as plt
>>> plt.figure(figsize=(10, 4))
>>> librosa.display.specshow(librosa.power_to_db(S,
... ref=np.max),
... y_axis='mel', fmax=8000,
... x_axis='time')
>>> plt.colorbar(format='%+2.0f dB')
>>> plt.title('Mel spectrogram')
>>> plt.tight_layout()
"""
S, n_fft = _spectrogram(y=y, S=S, n_fft=n_fft, hop_length=hop_length, power=power,
win_length=win_length, window=window, center=center,
pad_mode=pad_mode)
# Build a Mel filter
mel_basis = filters.mel(sr, n_fft, **kwargs)
return np.dot(mel_basis, S) | [
"def",
"melspectrogram",
"(",
"y",
"=",
"None",
",",
"sr",
"=",
"22050",
",",
"S",
"=",
"None",
",",
"n_fft",
"=",
"2048",
",",
"hop_length",
"=",
"512",
",",
"win_length",
"=",
"None",
",",
"window",
"=",
"'hann'",
",",
"center",
"=",
"True",
",",... | Compute a mel-scaled spectrogram.
If a spectrogram input `S` is provided, then it is mapped directly onto
the mel basis `mel_f` by `mel_f.dot(S)`.
If a time-series input `y, sr` is provided, then its magnitude spectrogram
`S` is first computed, and then mapped onto the mel scale by
`mel_f.dot(S**power)`. By default, `power=2` operates on a power spectrum.
Parameters
----------
y : np.ndarray [shape=(n,)] or None
audio time-series
sr : number > 0 [scalar]
sampling rate of `y`
S : np.ndarray [shape=(d, t)]
spectrogram
n_fft : int > 0 [scalar]
length of the FFT window
hop_length : int > 0 [scalar]
number of samples between successive frames.
See `librosa.core.stft`
win_length : int <= n_fft [scalar]
Each frame of audio is windowed by `window()`.
The window will be of length `win_length` and then padded
with zeros to match `n_fft`.
If unspecified, defaults to ``win_length = n_fft``.
window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]
- a window specification (string, tuple, or number);
see `scipy.signal.get_window`
- a window function, such as `scipy.signal.hanning`
- a vector or array of length `n_fft`
.. see also:: `filters.get_window`
center : boolean
- If `True`, the signal `y` is padded so that frame
`t` is centered at `y[t * hop_length]`.
- If `False`, then frame `t` begins at `y[t * hop_length]`
pad_mode : string
If `center=True`, the padding mode to use at the edges of the signal.
By default, STFT uses reflection padding.
power : float > 0 [scalar]
Exponent for the magnitude melspectrogram.
e.g., 1 for energy, 2 for power, etc.
kwargs : additional keyword arguments
Mel filter bank parameters.
See `librosa.filters.mel` for details.
Returns
-------
S : np.ndarray [shape=(n_mels, t)]
Mel spectrogram
See Also
--------
librosa.filters.mel
Mel filter bank construction
librosa.core.stft
Short-time Fourier Transform
Examples
--------
>>> y, sr = librosa.load(librosa.util.example_audio_file())
>>> librosa.feature.melspectrogram(y=y, sr=sr)
array([[ 2.891e-07, 2.548e-03, ..., 8.116e-09, 5.633e-09],
[ 1.986e-07, 1.162e-02, ..., 9.332e-08, 6.716e-09],
...,
[ 3.668e-09, 2.029e-08, ..., 3.208e-09, 2.864e-09],
[ 2.561e-10, 2.096e-09, ..., 7.543e-10, 6.101e-10]])
Using a pre-computed power spectrogram
>>> D = np.abs(librosa.stft(y))**2
>>> S = librosa.feature.melspectrogram(S=D)
>>> # Passing through arguments to the Mel filters
>>> S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128,
... fmax=8000)
>>> import matplotlib.pyplot as plt
>>> plt.figure(figsize=(10, 4))
>>> librosa.display.specshow(librosa.power_to_db(S,
... ref=np.max),
... y_axis='mel', fmax=8000,
... x_axis='time')
>>> plt.colorbar(format='%+2.0f dB')
>>> plt.title('Mel spectrogram')
>>> plt.tight_layout() | [
"Compute",
"a",
"mel",
"-",
"scaled",
"spectrogram",
"."
] | python | test |
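The mel basis that `filters.mel` builds sits on a grid of frequencies evenly spaced on the mel scale. A stdlib sketch of that grid using the HTK-style formula — one common convention; librosa's default is actually the Slaney formulation, so treat this as illustrative:

```python
import math

def hz_to_mel(f):
    """HTK-style mel scale: m = 2595 * log10(1 + f/700)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_centers(n_mels, fmin, fmax):
    """Evenly spaced mel-band centre frequencies: the grid a mel filter
    bank is built on."""
    lo, hi = hz_to_mel(fmin), hz_to_mel(fmax)
    return [mel_to_hz(lo + (hi - lo) * i / (n_mels - 1)) for i in range(n_mels)]

centers = mel_centers(5, 0.0, 8000.0)
```

The centres are linear in mel, hence roughly logarithmic in Hz, which is what gives mel spectrograms their perceptual frequency axis.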
google/openhtf | openhtf/util/conf.py | https://github.com/google/openhtf/blob/655e85df7134db7bdf8f8fdd6ff9a6bf932e7b09/openhtf/util/conf.py#L368-L397 | def load_from_file(self, yamlfile, _override=True, _allow_undeclared=False):
"""Loads the configuration from a file.
Parsed contents must be a single dict mapping config key to value.
Args:
yamlfile: The opened file object to load configuration from.
See load_from_dict() for descriptions of the other args.
Raises:
ConfigurationInvalidError: If configuration file can't be read, or can't
be parsed as either YAML (or JSON, which is a subset of YAML).
"""
self._logger.info('Loading configuration from file: %s', yamlfile)
try:
parsed_yaml = self._modules['yaml'].safe_load(yamlfile.read())
except self._modules['yaml'].YAMLError:
self._logger.exception('Problem parsing YAML')
raise self.ConfigurationInvalidError(
'Failed to load from %s as YAML' % yamlfile)
if not isinstance(parsed_yaml, dict):
# Parsed YAML, but it's not a dict.
raise self.ConfigurationInvalidError(
'YAML parsed, but wrong type, should be dict', parsed_yaml)
self._logger.debug('Configuration loaded from file: %s', parsed_yaml)
self.load_from_dict(
parsed_yaml, _override=_override, _allow_undeclared=_allow_undeclared) | [
"def",
"load_from_file",
"(",
"self",
",",
"yamlfile",
",",
"_override",
"=",
"True",
",",
"_allow_undeclared",
"=",
"False",
")",
":",
"self",
".",
"_logger",
".",
"info",
"(",
"'Loading configuration from file: %s'",
",",
"yamlfile",
")",
"try",
":",
"parsed... | Loads the configuration from a file.
Parsed contents must be a single dict mapping config key to value.
Args:
yamlfile: The opened file object to load configuration from.
See load_from_dict() for other args' descriptions.
Raises:
ConfigurationInvalidError: If configuration file can't be read, or can't
be parsed as either YAML (or JSON, which is a subset of YAML). | [
"Loads",
"the",
"configuration",
"from",
"a",
"file",
"."
] | python | train |
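The loader above parses the file and then rejects anything that is not a single dict. Since the docstring notes JSON is a subset of YAML, the same parse-then-validate shape can be sketched with stdlib `json` (the config keys are illustrative):

```python
import io
import json

class ConfigurationInvalidError(Exception):
    pass

def load_from_file(fp):
    """Sketch of the loader using stdlib json in place of yaml.safe_load:
    parsed contents must be a single dict mapping key to value."""
    try:
        parsed = json.loads(fp.read())
    except ValueError:
        raise ConfigurationInvalidError("Failed to parse configuration")
    if not isinstance(parsed, dict):
        raise ConfigurationInvalidError("Parsed, but wrong type, should be dict")
    return parsed

config = load_from_file(io.StringIO('{"station_id": "test-rig", "retries": 3}'))
try:
    load_from_file(io.StringIO('[1, 2, 3]'))  # parses, but is not a dict
    rejected = False
except ConfigurationInvalidError:
    rejected = True
```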
hwmrocker/smtplibaio | smtplibaio/smtp.py | https://github.com/hwmrocker/smtplibaio/blob/84ce8e45b7e706476739d0efcb416c18ecabbbb6/smtplibaio/smtp.py#L723-L810 | async def sendmail(
self, sender, recipients, message, mail_options=None, rcpt_options=None
):
"""
Performs an entire e-mail transaction.
Example:
>>> try:
>>> with SMTP() as client:
>>> try:
>>> r = client.sendmail(sender, recipients, message)
>>> except SMTPException:
>>> print("Error while sending message.")
>>> else:
>>> print("Result: {}.".format(r))
>>> except ConnectionError as e:
>>> print(e)
Result: {}.
Args:
sender (str): E-mail address of the sender.
recipients (list of str or str): E-mail(s) address(es) of the
recipient(s).
message (str or bytes): Message body.
mail_options (list of str): ESMTP options (such as *8BITMIME*) to
send along the *MAIL* command.
rcpt_options (list of str): ESMTP options (such as *DSN*) to
send along all the *RCPT* commands.
Raises:
ConnectionResetError: If the connection with the server is
unexpectedly lost.
SMTPCommandFailedError: If the server refuses our EHLO/HELO
greeting.
SMTPCommandFailedError: If the server refuses our MAIL command.
SMTPCommandFailedError: If the server refuses our DATA command.
SMTPNoRecipientError: If the server refuses all given
recipients.
Returns:
dict: A dict containing an entry for each recipient that was
refused. Each entry is associated with a (code, message)
2-tuple containing the error code and message, as returned by
the server.
When everything runs smoothly, the returned dict is empty.
.. note:: The connection remains open afterwards. It's your responsibility
to close it. A good practice is to use the asynchronous context
manager instead. See :meth:`SMTP.__aenter__` for further details.
"""
# Make sure `recipients` is a list:
if isinstance(recipients, str):
recipients = [recipients]
# Set some defaults values:
if mail_options is None:
mail_options = []
if rcpt_options is None:
rcpt_options = []
# EHLO or HELO is required:
await self.ehlo_or_helo_if_needed()
if self.supports_esmtp:
if "size" in self.esmtp_extensions:
mail_options.append("size={}".format(len(message)))
await self.mail(sender, mail_options)
errors = []
for recipient in recipients:
try:
await self.rcpt(recipient, rcpt_options)
except SMTPCommandFailedError as e:
errors.append(e)
if len(recipients) == len(errors):
# The server refused all our recipients:
raise SMTPNoRecipientError(errors)
await self.data(message)
# If we got here then somebody got our mail:
return errors | [
"async",
"def",
"sendmail",
"(",
"self",
",",
"sender",
",",
"recipients",
",",
"message",
",",
"mail_options",
"=",
"None",
",",
"rcpt_options",
"=",
"None",
")",
":",
"# Make sure `recipients` is a list:",
"if",
"isinstance",
"(",
"recipients",
",",
"str",
"... | Performs an entire e-mail transaction.
Example:
>>> try:
>>> with SMTP() as client:
>>> try:
>>> r = client.sendmail(sender, recipients, message)
>>> except SMTPException:
>>> print("Error while sending message.")
>>> else:
>>> print("Result: {}.".format(r))
>>> except ConnectionError as e:
>>> print(e)
Result: {}.
Args:
sender (str): E-mail address of the sender.
recipients (list of str or str): E-mail(s) address(es) of the
recipient(s).
message (str or bytes): Message body.
mail_options (list of str): ESMTP options (such as *8BITMIME*) to
send along the *MAIL* command.
rcpt_options (list of str): ESMTP options (such as *DSN*) to
send along all the *RCPT* commands.
Raises:
ConnectionResetError: If the connection with the server is
unexpectedely lost.
SMTPCommandFailedError: If the server refuses our EHLO/HELO
greeting.
SMTPCommandFailedError: If the server refuses our MAIL command.
SMTPCommandFailedError: If the server refuses our DATA command.
SMTPNoRecipientError: If the server refuses all given
recipients.
Returns:
dict: A dict containing an entry for each recipient that was
refused. Each entry is associated with a (code, message)
2-tuple containing the error code and message, as returned by
the server.
When everything runs smoothly, the returned dict is empty.
.. note:: The connection remains open afterwards. It's your responsibility
to close it. A good practice is to use the asynchronous context
manager instead. See :meth:`SMTP.__aenter__` for further details. | [
"Performs",
"an",
"entire",
"e",
"-",
"mail",
"transaction",
"."
] | python | train |
saltstack/salt | salt/modules/network.py | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/modules/network.py#L1250-L1376 | def mod_hostname(hostname):
'''
Modify hostname
.. versionchanged:: 2015.8.0
Added support for SunOS (Solaris 10, Illumos, SmartOS)
CLI Example:
.. code-block:: bash
salt '*' network.mod_hostname master.saltstack.com
'''
#
# SunOS tested on SmartOS and OmniOS (Solaris 10 compatible)
# Oracle Solaris 11 uses smf, currently not supported
#
# /etc/nodename is the hostname only, not fqdn
# /etc/defaultdomain is the domain
# /etc/hosts should have both fqdn and hostname entries
#
if hostname is None:
return False
hostname_cmd = salt.utils.path.which('hostnamectl') or salt.utils.path.which('hostname')
if salt.utils.platform.is_sunos():
uname_cmd = '/usr/bin/uname' if salt.utils.platform.is_smartos() else salt.utils.path.which('uname')
check_hostname_cmd = salt.utils.path.which('check-hostname')
# Grab the old hostname so we know which hostname to change and then
# change the hostname using the hostname command
if hostname_cmd.endswith('hostnamectl'):
result = __salt__['cmd.run_all']('{0} status'.format(hostname_cmd))
if 0 == result['retcode']:
out = result['stdout']
for line in out.splitlines():
line = line.split(':')
if 'Static hostname' in line[0]:
o_hostname = line[1].strip()
else:
log.debug('%s was unable to get hostname', hostname_cmd)
o_hostname = __salt__['network.get_hostname']()
elif not salt.utils.platform.is_sunos():
# don't run hostname -f because -f is not supported on all platforms
o_hostname = socket.getfqdn()
else:
# output: Hostname core OK: fully qualified as core.acheron.be
o_hostname = __salt__['cmd.run'](check_hostname_cmd).split(' ')[-1]
if hostname_cmd.endswith('hostnamectl'):
result = __salt__['cmd.run_all']('{0} set-hostname {1}'.format(
hostname_cmd,
hostname,
))
if result['retcode'] != 0:
log.debug('%s was unable to set hostname. Error: %s',
hostname_cmd, result['stderr'])
return False
elif not salt.utils.platform.is_sunos():
__salt__['cmd.run']('{0} {1}'.format(hostname_cmd, hostname))
else:
__salt__['cmd.run']('{0} -S {1}'.format(uname_cmd, hostname.split('.')[0]))
# Modify the /etc/hosts file to replace the old hostname with the
# new hostname
with salt.utils.files.fopen('/etc/hosts', 'r') as fp_:
host_c = [salt.utils.stringutils.to_unicode(_l)
for _l in fp_.readlines()]
with salt.utils.files.fopen('/etc/hosts', 'w') as fh_:
for host in host_c:
host = host.split()
try:
host[host.index(o_hostname)] = hostname
if salt.utils.platform.is_sunos():
# also set a copy of the hostname
host[host.index(o_hostname.split('.')[0])] = hostname.split('.')[0]
except ValueError:
pass
fh_.write(salt.utils.stringutils.to_str('\t'.join(host) + '\n'))
# Modify the /etc/sysconfig/network configuration file to set the
# new hostname
if __grains__['os_family'] == 'RedHat':
with salt.utils.files.fopen('/etc/sysconfig/network', 'r') as fp_:
network_c = [salt.utils.stringutils.to_unicode(_l)
for _l in fp_.readlines()]
with salt.utils.files.fopen('/etc/sysconfig/network', 'w') as fh_:
for net in network_c:
if net.startswith('HOSTNAME'):
old_hostname = net.split('=', 1)[1].rstrip()
quote_type = salt.utils.stringutils.is_quoted(old_hostname)
fh_.write(salt.utils.stringutils.to_str(
'HOSTNAME={1}{0}{1}\n'.format(
salt.utils.stringutils.dequote(hostname),
quote_type)))
else:
fh_.write(salt.utils.stringutils.to_str(net))
elif __grains__['os_family'] in ('Debian', 'NILinuxRT'):
with salt.utils.files.fopen('/etc/hostname', 'w') as fh_:
fh_.write(salt.utils.stringutils.to_str(hostname + '\n'))
if __grains__['lsb_distrib_id'] == 'nilrt':
str_hostname = salt.utils.stringutils.to_str(hostname)
nirtcfg_cmd = '/usr/local/natinst/bin/nirtcfg'
nirtcfg_cmd += ' --set section=SystemSettings,token=\'Host_Name\',value=\'{0}\''.format(str_hostname)
if __salt__['cmd.run_all'](nirtcfg_cmd)['retcode'] != 0:
raise CommandExecutionError('Couldn\'t set hostname to: {0}\n'.format(str_hostname))
elif __grains__['os_family'] == 'OpenBSD':
with salt.utils.files.fopen('/etc/myname', 'w') as fh_:
fh_.write(salt.utils.stringutils.to_str(hostname + '\n'))
# Update /etc/nodename and /etc/defaultdomain on SunOS
if salt.utils.platform.is_sunos():
with salt.utils.files.fopen('/etc/nodename', 'w') as fh_:
fh_.write(salt.utils.stringutils.to_str(
hostname.split('.')[0] + '\n')
)
with salt.utils.files.fopen('/etc/defaultdomain', 'w') as fh_:
fh_.write(salt.utils.stringutils.to_str(
".".join(hostname.split('.')[1:]) + '\n')
)
return True | [
"def",
"mod_hostname",
"(",
"hostname",
")",
":",
"#",
"# SunOS tested on SmartOS and OmniOS (Solaris 10 compatible)",
"# Oracle Solaris 11 uses smf, currently not supported",
"#",
"# /etc/nodename is the hostname only, not fqdn",
"# /etc/defaultdomain is the domain",
"# /etc/hosts should ha... | Modify hostname
.. versionchanged:: 2015.8.0
Added support for SunOS (Solaris 10, Illumos, SmartOS)
CLI Example:
.. code-block:: bash
salt '*' network.mod_hostname master.saltstack.com | [
"Modify",
"hostname"
] | python | train |
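The /etc/hosts rewrite in `mod_hostname` splits each line into fields, swaps the old hostname (and, on SunOS, its short form) for the new one, and rejoins with tabs. A pure-function sketch of that step — this replaces every whole-field match, a slight simplification of salt's index-based try/except replacement:

```python
def rename_in_hosts(lines, old_hostname, new_hostname):
    """Replace the old fqdn and its short form wherever either appears as a
    whole field, joining fields back with tabs as mod_hostname does."""
    old_short = old_hostname.split('.')[0]
    new_short = new_hostname.split('.')[0]
    out = []
    for line in lines:
        fields = [new_hostname if f == old_hostname
                  else new_short if f == old_short
                  else f
                  for f in line.split()]
        out.append('\t'.join(fields) + '\n')
    return out

hosts = ["127.0.0.1 localhost\n", "192.0.2.10 old.example.com old\n"]
updated = rename_in_hosts(hosts, "old.example.com", "new.example.com")
```

Working on a list of lines keeps the logic testable without touching the real /etc/hosts.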
dstufft/crust | crust/query.py | https://github.com/dstufft/crust/blob/5d4011ecace12fd3f68a03a17dbefb78390a9fc0/crust/query.py#L382-L389 | def create(self, **kwargs):
"""
Creates a new object with the given kwargs, saving it to the api
and returning the created object.
"""
obj = self.resource(**kwargs)
obj.save(force_insert=True)
return obj | [
"def",
"create",
"(",
"self",
",",
"*",
"*",
"kwargs",
")",
":",
"obj",
"=",
"self",
".",
"resource",
"(",
"*",
"*",
"kwargs",
")",
"obj",
".",
"save",
"(",
"force_insert",
"=",
"True",
")",
"return",
"obj"
] | Creates a new object with the given kwargs, saving it to the api
and returning the created object. | [
"Creates",
"a",
"new",
"object",
"with",
"the",
"given",
"kwargs",
"saving",
"it",
"to",
"the",
"api",
"and",
"returning",
"the",
"created",
"object",
"."
] | python | train |
wummel/linkchecker | third_party/miniboa-r42/miniboa/telnet.py | https://github.com/wummel/linkchecker/blob/c2ce810c3fb00b895a841a7be6b2e78c64e7b042/third_party/miniboa-r42/miniboa/telnet.py#L185-L191 | def send_wrapped(self, text):
"""
Send text padded and wrapped to the user's screen width.
"""
lines = word_wrap(text, self.columns)
for line in lines:
self.send_cc(line + '\n') | [
"def",
"send_wrapped",
"(",
"self",
",",
"text",
")",
":",
"lines",
"=",
"word_wrap",
"(",
"text",
",",
"self",
".",
"columns",
")",
"for",
"line",
"in",
"lines",
":",
"self",
".",
"send_cc",
"(",
"line",
"+",
"'\\n'",
")"
] | Send text padded and wrapped to the user's screen width. | [
"Send",
"text",
"padded",
"and",
"wrapped",
"to",
"the",
"user",
"s",
"screen",
"width",
"."
] | python | train |
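`send_wrapped` above delegates to miniboa's own `word_wrap` helper and sends each line over the connection. A rough stand-in using the standard library's `textwrap` — the real `word_wrap` (e.g. any padding it applies) may behave differently:

```python
import textwrap

def send_wrapped_sketch(text, columns):
    """Wrap text to the given screen width and newline-terminate each
    line, approximating the word_wrap + send_cc loop above."""
    lines = textwrap.wrap(text, width=columns)
    return ''.join(line + '\n' for line in lines)
```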
mrstephenneal/mysql-toolkit | mysql/toolkit/commands/dump.py | https://github.com/mrstephenneal/mysql-toolkit/blob/6964f718f4b72eb30f2259adfcfaf3090526c53d/mysql/toolkit/commands/dump.py#L20-L39 | def set_dump_directory(base=None, sub_dir=None):
"""Create directory for dumping SQL commands."""
# Set current timestamp
timestamp = datetime.fromtimestamp(time()).strftime('%Y-%m-%d %H-%M-%S')
# Clean sub_dir
if sub_dir and '.' in sub_dir:
sub_dir = sub_dir.rsplit('.', 1)[0]
# Create a directory to save fail SQL scripts
# TODO: Replace with function that recursively creates directories until path exists
if not os.path.exists(base):
os.mkdir(base)
dump_dir = os.path.join(base, sub_dir) if sub_dir else base
if not os.path.exists(dump_dir):
os.mkdir(dump_dir)
dump_dir = os.path.join(dump_dir, timestamp)
if not os.path.exists(dump_dir):
os.mkdir(dump_dir)
return dump_dir | [
"def",
"set_dump_directory",
"(",
"base",
"=",
"None",
",",
"sub_dir",
"=",
"None",
")",
":",
"# Set current timestamp",
"timestamp",
"=",
"datetime",
".",
"fromtimestamp",
"(",
"time",
"(",
")",
")",
".",
"strftime",
"(",
"'%Y-%m-%d %H-%M-%S'",
")",
"# Clean ... | Create directory for dumping SQL commands. | [
"Create",
"directory",
"for",
"dumping",
"SQL",
"commands",
"."
] | python | train |
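`set_dump_directory` above builds `base/[sub_dir]/<timestamp>` one level at a time; as its own TODO comment notes, the repeated exists/mkdir pairs can be collapsed with `os.makedirs`. A condensed sketch (`make_dump_dir` is an illustrative name, not part of mysql-toolkit):

```python
import os
import tempfile
from datetime import datetime

def make_dump_dir(base, sub_dir=None):
    """Create base/[sub_dir/]<timestamp>, stripping any file
    extension from sub_dir, as set_dump_directory above does."""
    timestamp = datetime.now().strftime('%Y-%m-%d %H-%M-%S')
    if sub_dir and '.' in sub_dir:
        sub_dir = sub_dir.rsplit('.', 1)[0]
    path = os.path.join(base, sub_dir) if sub_dir else base
    path = os.path.join(path, timestamp)
    os.makedirs(path, exist_ok=True)  # replaces the repeated exists/mkdir pairs
    return path

dump_dir = make_dump_dir(tempfile.mkdtemp(), 'backup.sql')
```

The extension stripping means a `sub_dir` of `'backup.sql'` produces a `backup/` directory level, just as in the original.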
faroit/stempeg | stempeg/write.py | https://github.com/faroit/stempeg/blob/ebbaec87ea440fcbb06423d708e7847749e63d38/stempeg/write.py#L10-L35 | def check_available_aac_encoders():
"""Returns the available AAC encoders
Returns
----------
codecs : list(str)
List of available encoder codecs
"""
cmd = [
'ffmpeg',
'-v', 'error',
'-codecs'
]
output = sp.check_output(cmd)
aac_codecs = [
x for x in
output.splitlines() if "AAC (Advanced Audio Coding)" in str(x)
][0]
hay = aac_codecs.decode('ascii')
match = re.findall(r'\(encoders: ([^\)]*) \)', hay)
if match:
return match[0].split(" ")
else:
return None | [
"def",
"check_available_aac_encoders",
"(",
")",
":",
"cmd",
"=",
"[",
"'ffmpeg'",
",",
"'-v'",
",",
"'error'",
",",
"'-codecs'",
"]",
"output",
"=",
"sp",
".",
"check_output",
"(",
"cmd",
")",
"aac_codecs",
"=",
"[",
"x",
"for",
"x",
"in",
"output",
"... | Returns the available AAC encoders
Returns
----------
codecs : list(str)
List of available encoder codecs | [
"Returns",
"the",
"available",
"AAC",
"encoders"
] | python | train |
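The regex in `check_available_aac_encoders` above pulls encoder names out of the AAC line of `ffmpeg -codecs`. That parsing can be exercised without ffmpeg installed; the sample line below is shaped like real output but is only illustrative:

```python
import re

def parse_aac_encoders(codecs_output):
    """Extract encoder names from the AAC codec line, using the same
    regex as check_available_aac_encoders above."""
    aac_lines = [line for line in codecs_output.splitlines()
                 if 'AAC (Advanced Audio Coding)' in line]
    if not aac_lines:
        return None
    match = re.findall(r'\(encoders: ([^\)]*) \)', aac_lines[0])
    return match[0].split(' ') if match else None

# Illustrative line in the shape of `ffmpeg -codecs` output:
sample = ' DEA.L. aac    AAC (Advanced Audio Coding) (encoders: aac libfdk_aac )'
```

Note the regex requires a space before the closing parenthesis, which is how ffmpeg prints the encoder list.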
delph-in/pydelphin | delphin/tdl.py | https://github.com/delph-in/pydelphin/blob/7bd2cd63ab7cf74803e1d6547b9ebc014b382abd/delphin/tdl.py#L1872-L1906 | def format(obj, indent=0):
"""
Serialize TDL objects to strings.
Args:
obj: instance of :class:`Term`, :class:`Conjunction`, or
:class:`TypeDefinition` classes or subclasses
indent (int): number of spaces to indent the formatted object
Returns:
str: serialized form of *obj*
Example:
>>> conj = tdl.Conjunction([
... tdl.TypeIdentifier('lex-item'),
... tdl.AVM([('SYNSEM.LOCAL.CAT.HEAD.MOD',
... tdl.ConsList(end=tdl.EMPTY_LIST_TYPE))])
... ])
>>> t = tdl.TypeDefinition('non-mod-lex-item', conj)
>>> print(format(t))
non-mod-lex-item := lex-item &
[ SYNSEM.LOCAL.CAT.HEAD.MOD < > ].
"""
if isinstance(obj, TypeDefinition):
return _format_typedef(obj, indent)
elif isinstance(obj, Conjunction):
return _format_conjunction(obj, indent)
elif isinstance(obj, Term):
return _format_term(obj, indent)
elif isinstance(obj, _MorphSet):
return _format_morphset(obj, indent)
elif isinstance(obj, _Environment):
return _format_environment(obj, indent)
elif isinstance(obj, FileInclude):
return _format_include(obj, indent)
else:
raise ValueError('cannot format object as TDL: {!r}'.format(obj)) | [
"def",
"format",
"(",
"obj",
",",
"indent",
"=",
"0",
")",
":",
"if",
"isinstance",
"(",
"obj",
",",
"TypeDefinition",
")",
":",
"return",
"_format_typedef",
"(",
"obj",
",",
"indent",
")",
"elif",
"isinstance",
"(",
"obj",
",",
"Conjunction",
")",
":"... | Serialize TDL objects to strings.
Args:
obj: instance of :class:`Term`, :class:`Conjunction`, or
:class:`TypeDefinition` classes or subclasses
indent (int): number of spaces to indent the formatted object
Returns:
str: serialized form of *obj*
Example:
>>> conj = tdl.Conjunction([
... tdl.TypeIdentifier('lex-item'),
... tdl.AVM([('SYNSEM.LOCAL.CAT.HEAD.MOD',
... tdl.ConsList(end=tdl.EMPTY_LIST_TYPE))])
... ])
>>> t = tdl.TypeDefinition('non-mod-lex-item', conj)
>>> print(format(t))
non-mod-lex-item := lex-item &
[ SYNSEM.LOCAL.CAT.HEAD.MOD < > ]. | [
"Serialize",
"TDL",
"objects",
"to",
"strings",
"."
] | python | train |
proycon/pynlpl | pynlpl/formats/folia.py | https://github.com/proycon/pynlpl/blob/7707f69a91caaa6cde037f0d0379f1d42500a68b/pynlpl/formats/folia.py#L3336-L3341 | def xml(self, attribs = None,elements = None, skipchildren = False):
"""See :meth:`AbstractElement.xml`"""
if not attribs: attribs = {}
if self.idref:
attribs['id'] = self.idref
return super(AbstractTextMarkup,self).xml(attribs,elements, skipchildren) | [
"def",
"xml",
"(",
"self",
",",
"attribs",
"=",
"None",
",",
"elements",
"=",
"None",
",",
"skipchildren",
"=",
"False",
")",
":",
"if",
"not",
"attribs",
":",
"attribs",
"=",
"{",
"}",
"if",
"self",
".",
"idref",
":",
"attribs",
"[",
"'id'",
"]",
... | See :meth:`AbstractElement.xml` | [
"See",
":",
"meth",
":",
"AbstractElement",
".",
"xml"
] | python | train |
datacamp/pythonwhat | pythonwhat/checks/check_logic.py | https://github.com/datacamp/pythonwhat/blob/ffbf7f8436a51f77c22f3bed75ba3bc37a5c666f/pythonwhat/checks/check_logic.py#L233-L271 | def set_env(state, **kwargs):
"""Update/set environemnt variables for student and solution environments.
When ``has_equal_x()`` is used after this, the variables specified through this function will
be available in the student and solution process. Note that you will not see these variables
in the student process of the state produced by this function: the values are saved on the state
and are only added to the student and solution processes when ``has_equal_ast()`` is called.
:Example:
Student and Solution Code::
a = 1
if a > 4:
print('pretty large')
SCT::
# check if condition works with different values of a
Ex().check_if_else().check_test().multi(
set_env(a = 3).has_equal_value(),
set_env(a = 4).has_equal_value(),
set_env(a = 5).has_equal_value()
)
# equivalent SCT, by setting extra_env in has_equal_value()
Ex().check_if_else().check_test().\\
multi([has_equal_value(extra_env={'a': i}) for i in range(3, 6)])
"""
stu_crnt = state.student_env.context
sol_crnt = state.solution_env.context
stu_new = stu_crnt.update(kwargs)
sol_new = sol_crnt.update(kwargs)
return state.to_child(
student_env=stu_new, solution_env=sol_new, highlight=state.highlight
) | [
"def",
"set_env",
"(",
"state",
",",
"*",
"*",
"kwargs",
")",
":",
"stu_crnt",
"=",
"state",
".",
"student_env",
".",
"context",
"sol_crnt",
"=",
"state",
".",
"solution_env",
".",
"context",
"stu_new",
"=",
"stu_crnt",
".",
"update",
"(",
"kwargs",
")",... | Update/set environemnt variables for student and solution environments.
When ``has_equal_x()`` is used after this, the variables specified through this function will
be available in the student and solution process. Note that you will not see these variables
in the student process of the state produced by this function: the values are saved on the state
and are only added to the student and solution processes when ``has_equal_ast()`` is called.
:Example:
Student and Solution Code::
a = 1
if a > 4:
print('pretty large')
SCT::
# check if condition works with different values of a
Ex().check_if_else().check_test().multi(
set_env(a = 3).has_equal_value(),
set_env(a = 4).has_equal_value(),
set_env(a = 5).has_equal_value()
)
# equivalent SCT, by setting extra_env in has_equal_value()
Ex().check_if_else().check_test().\\
multi([has_equal_value(extra_env={'a': i}) for i in range(3, 6)]) | [
"Update",
"/",
"set",
"environemnt",
"variables",
"for",
"student",
"and",
"solution",
"environments",
"."
] | python | test |
amanusk/s-tui | s_tui/s_tui.py | https://github.com/amanusk/s-tui/blob/5e89d15081e716024db28ec03b1e3a7710330951/s_tui/s_tui.py#L387-L452 | def _generate_graph_controls(self):
""" Display sidebar controls. i.e. buttons, and controls"""
# setup mode radio buttons
stress_modes = self.controller.stress_conroller.get_modes()
group = []
for mode in stress_modes:
self.mode_buttons.append(radio_button(group, mode,
self.on_mode_button))
# Set default radio button to "Monitor" mode
self.mode_buttons[0].set_state(True, do_callback=False)
# Create list of buttons
control_options = list()
control_options.append(button('Graphs',
self.on_graphs_menu_open))
control_options.append(button('Summaries',
self.on_summary_menu_open))
if self.controller.stress_exe:
control_options.append(button('Stress Options',
self.on_stress_menu_open))
control_options.append(button("Reset", self.on_reset_button))
control_options.append(button('Help', self.on_help_menu_open))
control_options.append(button('About', self.on_about_menu_open))
control_options.append(button("Save Settings",
self.on_save_settings))
control_options.append(button("Quit", self.on_exit_program))
# Create the menu
animate_controls = urwid.GridFlow(control_options, 18, 2, 0, 'center')
# Create smooth graph selection button
default_smooth = self.controller.smooth_graph_mode
if urwid.get_encoding_mode() == "utf8":
unicode_checkbox = urwid.CheckBox(
"UTF-8", state=default_smooth,
on_state_change=self.on_unicode_checkbox)
# Init the state of the graph according to the selected mode
self.on_unicode_checkbox(state=default_smooth)
else:
unicode_checkbox = urwid.Text(
"[N/A] UTF-8")
install_stress_message = urwid.Text("")
if not self.controller.stress_exe:
install_stress_message = urwid.Text(
('button normal', u"(N/A) install stress"))
controls = [urwid.Text(('bold text', u"Modes"), align="center")]
controls += self.mode_buttons
controls += [
install_stress_message,
urwid.Text(('bold text', u"Stress Timer"), align="center"),
self.clock_view,
urwid.Divider(),
urwid.Text(('bold text', u"Control Options"), align="center"),
animate_controls,
urwid.Divider(),
urwid.Text(('bold text', u"Visual Options"), align="center"),
unicode_checkbox,
self.refresh_rate_ctrl,
urwid.Divider(),
urwid.Text(('bold text', u"Summaries"), align="center"),
]
return controls | [
"def",
"_generate_graph_controls",
"(",
"self",
")",
":",
"# setup mode radio buttons",
"stress_modes",
"=",
"self",
".",
"controller",
".",
"stress_conroller",
".",
"get_modes",
"(",
")",
"group",
"=",
"[",
"]",
"for",
"mode",
"in",
"stress_modes",
":",
"self",... | Display sidebar controls. i.e. buttons, and controls | [
"Display",
"sidebar",
"controls",
".",
"i",
".",
"e",
".",
"buttons",
"and",
"controls"
] | python | train |
pyroscope/pyrocore | src/pyrocore/util/osmagic.py | https://github.com/pyroscope/pyrocore/blob/89ad01346a570943d20311a0b488440975876612/src/pyrocore/util/osmagic.py#L41-L67 | def check_process(pidfile):
""" Read pid file and check process status.
Return (running, pid).
"""
# Check pid file
try:
handle = open(pidfile, 'r')
except IOError as exc:
if exc.errno == errno.ENOENT:
# pid file disappeared
return False, 0
raise
try:
pid = int(handle.read().strip(), 10)
except (TypeError, ValueError) as exc:
raise EnvironmentError("Invalid PID file '%s' (%s), won't start!" % (pidfile, exc))
finally:
handle.close()
# Check process
try:
os.kill(pid, 0)
except EnvironmentError as exc:
return False, pid
else:
return True, pid | [
"def",
"check_process",
"(",
"pidfile",
")",
":",
"# Check pid file",
"try",
":",
"handle",
"=",
"open",
"(",
"pidfile",
",",
"'r'",
")",
"except",
"IOError",
"as",
"exc",
":",
"if",
"exc",
".",
"errno",
"==",
"errno",
".",
"ENOENT",
":",
"# pid file dis... | Read pid file and check process status.
Return (running, pid). | [
"Read",
"pid",
"file",
"and",
"check",
"process",
"status",
".",
"Return",
"(",
"running",
"pid",
")",
"."
] | python | train |
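`check_process` above is nearly self-contained: it reads the PID file and probes the process with `os.kill(pid, 0)` — signal 0 checks for existence without delivering anything (POSIX only). A standalone copy (`check_pidfile` is an illustrative name) with a usage demo against the current process:

```python
import errno
import os
import tempfile

def check_pidfile(pidfile):
    """Return (running, pid) for the PID recorded in pidfile, with the
    same missing-file handling as check_process above."""
    try:
        with open(pidfile) as handle:
            pid = int(handle.read().strip(), 10)
    except IOError as exc:
        if exc.errno == errno.ENOENT:
            return False, 0  # pid file disappeared
        raise
    try:
        os.kill(pid, 0)  # signal 0: existence probe only
    except OSError:
        return False, pid
    return True, pid

# Demo: record this process's own PID, which is certainly running.
pidfile = os.path.join(tempfile.mkdtemp(), 'demo.pid')
with open(pidfile, 'w') as fh:
    fh.write(str(os.getpid()))
```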
ff0000/scarlet | scarlet/versioning/fields.py | https://github.com/ff0000/scarlet/blob/6c37befd810916a2d7ffff2cdb2dab57bcb6d12e/scarlet/versioning/fields.py#L42-L71 | def update_rel_to(self, klass):
"""
If we have a string for a model, see if we know about it yet,
if so, use it directly; otherwise take the lazy approach.
This check is needed because this is called before
the main M2M field contribute to class is called.
"""
if isinstance(self.remote_field.to, basestring):
relation = self.remote_field.to
try:
app_label, model_name = relation.split(".")
except ValueError:
# If we can't split, assume a model in current app
app_label = klass._meta.app_label
model_name = relation
model = None
try:
model = klass._meta.apps.get_registered_model(app_label, model_name)
# For django < 1.6
except AttributeError:
model = models.get_model(
app_label, model_name,
seed_cache=False, only_installed=False)
except LookupError:
pass
if model:
self.remote_field.model = model | [
"def",
"update_rel_to",
"(",
"self",
",",
"klass",
")",
":",
"if",
"isinstance",
"(",
"self",
".",
"remote_field",
".",
"to",
",",
"basestring",
")",
":",
"relation",
"=",
"self",
".",
"remote_field",
".",
"to",
"try",
":",
"app_label",
",",
"model_name"... | If we have a string for a model, see if we know about it yet,
if so, use it directly; otherwise take the lazy approach.
This check is needed because this is called before
the main M2M field contribute to class is called. | [
"If",
"we",
"have",
"a",
"string",
"for",
"a",
"model",
"see",
"if",
"we",
"know",
"about",
"it",
"yet",
"if",
"so",
"use",
"it",
"directly",
"otherwise",
"take",
"the",
"lazy",
"approach",
".",
"This",
"check",
"is",
"needed",
"because",
"this",
"is",... | python | train |
lesscpy/lesscpy | lesscpy/lessc/utility.py | https://github.com/lesscpy/lesscpy/blob/51e392fb4a3cd4ccfb6175e0e42ce7d2f6b78126/lesscpy/lessc/utility.py#L234-L243 | def split_unit(value):
""" Split a number from its unit
1px -> (q, 'px')
Args:
value (str): input
returns:
tuple
"""
r = re.search('^(\-?[\d\.]+)(.*)$', str(value))
return r.groups() if r else ('', '') | [
"def",
"split_unit",
"(",
"value",
")",
":",
"r",
"=",
"re",
".",
"search",
"(",
"'^(\\-?[\\d\\.]+)(.*)$'",
",",
"str",
"(",
"value",
")",
")",
"return",
"r",
".",
"groups",
"(",
")",
"if",
"r",
"else",
"(",
"''",
",",
"''",
")"
] | Split a number from its unit
1px -> (q, 'px')
Args:
value (str): input
returns:
tuple | [
"Split",
"a",
"number",
"from",
"its",
"unit",
"1px",
"-",
">",
"(",
"q",
"px",
")",
"Args",
":",
"value",
"(",
"str",
")",
":",
"input",
"returns",
":",
"tuple"
] | python | valid |
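`split_unit` above depends only on `re`, so it can be run as-is; a standalone copy with usage:

```python
import re

def split_unit(value):
    """Split a number from its unit: '1px' -> ('1', 'px')."""
    r = re.search(r'^(\-?[\d\.]+)(.*)$', str(value))
    return r.groups() if r else ('', '')

print(split_unit('1px'))     # ('1', 'px')
print(split_unit('-2.5em'))  # ('-2.5', 'em')
print(split_unit('auto'))    # ('', '')
```

Values with no leading digits fail the match entirely, so non-numeric CSS keywords come back as `('', '')` rather than raising.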
pantsbuild/pex | pex/vendor/_vendored/wheel/wheel/bdist_wheel.py | https://github.com/pantsbuild/pex/blob/87b2129d860250d3b9edce75b9cb62f9789ee521/pex/vendor/_vendored/wheel/wheel/bdist_wheel.py#L318-L375 | def egg2dist(self, egginfo_path, distinfo_path):
"""Convert an .egg-info directory into a .dist-info directory"""
def adios(p):
"""Appropriately delete directory, file or link."""
if os.path.exists(p) and not os.path.islink(p) and os.path.isdir(p):
shutil.rmtree(p)
elif os.path.exists(p):
os.unlink(p)
adios(distinfo_path)
if not os.path.exists(egginfo_path):
# There is no egg-info. This is probably because the egg-info
# file/directory is not named matching the distribution name used
# to name the archive file. Check for this case and report
# accordingly.
import glob
pat = os.path.join(os.path.dirname(egginfo_path), '*.egg-info')
possible = glob.glob(pat)
err = "Egg metadata expected at %s but not found" % (egginfo_path,)
if possible:
alt = os.path.basename(possible[0])
err += " (%s found - possible misnamed archive file?)" % (alt,)
raise ValueError(err)
if os.path.isfile(egginfo_path):
# .egg-info is a single file
pkginfo_path = egginfo_path
pkg_info = pkginfo_to_metadata(egginfo_path, egginfo_path)
os.mkdir(distinfo_path)
else:
# .egg-info is a directory
pkginfo_path = os.path.join(egginfo_path, 'PKG-INFO')
pkg_info = pkginfo_to_metadata(egginfo_path, pkginfo_path)
# ignore common egg metadata that is useless to wheel
shutil.copytree(egginfo_path, distinfo_path,
ignore=lambda x, y: {'PKG-INFO', 'requires.txt', 'SOURCES.txt',
'not-zip-safe'}
)
# delete dependency_links if it is only whitespace
dependency_links_path = os.path.join(distinfo_path, 'dependency_links.txt')
with open(dependency_links_path, 'r') as dependency_links_file:
dependency_links = dependency_links_file.read().strip()
if not dependency_links:
adios(dependency_links_path)
write_pkg_info(os.path.join(distinfo_path, 'METADATA'), pkg_info)
# XXX heuristically copy any LICENSE/LICENSE.txt?
license = self.license_file()
if license:
license_filename = 'LICENSE.txt'
shutil.copy(license, os.path.join(distinfo_path, license_filename))
adios(egginfo_path) | [
"def",
"egg2dist",
"(",
"self",
",",
"egginfo_path",
",",
"distinfo_path",
")",
":",
"def",
"adios",
"(",
"p",
")",
":",
"\"\"\"Appropriately delete directory, file or link.\"\"\"",
"if",
"os",
".",
"path",
".",
"exists",
"(",
"p",
")",
"and",
"not",
"os",
"... | Convert an .egg-info directory into a .dist-info directory | [
"Convert",
"an",
".",
"egg",
"-",
"info",
"directory",
"into",
"a",
".",
"dist",
"-",
"info",
"directory"
] | python | train |
tBaxter/python-card-me | card_me/icalendar.py | https://github.com/tBaxter/python-card-me/blob/ffebc7fed44f83983b7438e57263dcda67207664/card_me/icalendar.py#L361-L471 | def getrruleset(self, addRDate=False):
"""
Get an rruleset created from self.
If addRDate is True, add an RDATE for dtstart if it's not included in
an RRULE, and count is decremented if it exists.
Note that for rules which don't match DTSTART, DTSTART may not appear
in list(rruleset), although it should. By default, an RDATE is not
created in these cases, and count isn't updated, so dateutil may list
a spurious occurrence.
"""
rruleset = None
for name in DATESANDRULES:
addfunc = None
for line in self.contents.get(name, ()):
# don't bother creating a rruleset unless there's a rule
if rruleset is None:
rruleset = rrule.rruleset()
if addfunc is None:
addfunc = getattr(rruleset, name)
if name in DATENAMES:
if type(line.value[0]) == datetime.datetime:
map(addfunc, line.value)
elif type(line.value[0]) == datetime.date:
for dt in line.value:
addfunc(datetime.datetime(dt.year, dt.month, dt.day))
else:
# ignore RDATEs with PERIOD values for now
pass
elif name in RULENAMES:
try:
dtstart = self.dtstart.value
except (AttributeError, KeyError):
# Special for VTODO - try DUE property instead
try:
if self.name == "VTODO":
dtstart = self.due.value
else:
# if there's no dtstart, just return None
print('failed to get dtstart with VTODO')
return None
except (AttributeError, KeyError):
# if there's no due, just return None
print('failed to find DUE at all.')
return None
# a Ruby iCalendar library escapes semi-colons in rrules,
# so also remove any backslashes
value = str_(line.value).replace('\\', '')
rule = rrule.rrulestr(value, dtstart=dtstart)
until = rule._until
if until is not None and isinstance(dtstart, datetime.datetime) and \
(until.tzinfo != dtstart.tzinfo):
# dateutil converts the UNTIL date to a datetime,
# check to see if the UNTIL parameter value was a date
vals = dict(pair.split('=') for pair in
line.value.upper().split(';'))
if len(vals.get('UNTIL', '')) == 8:
until = datetime.datetime.combine(until.date(), dtstart.time())
# While RFC2445 says UNTIL MUST be UTC, Chandler allows
# floating recurring events, and uses floating UNTIL values.
# Also, some odd floating UNTIL but timezoned DTSTART values
# have shown up in the wild, so put floating UNTIL values
# DTSTART's timezone
if until.tzinfo is None:
until = until.replace(tzinfo=dtstart.tzinfo)
if dtstart.tzinfo is not None:
until = until.astimezone(dtstart.tzinfo)
# RFC2445 actually states that UNTIL must be a UTC value. Whilst the
# changes above work OK, one problem case is if DTSTART is floating but
# UNTIL is properly specified as UTC (or with a TZID). In that case dateutil
# will fail datetime comparisons. There is no easy solution to this as
# there is no obvious timezone (at this point) to do proper floating time
# offset compisons. The best we can do is treat the UNTIL value as floating.
# This could mean incorrect determination of the last instance. The better
# solution here is to encourage clients to use COUNT rather than UNTIL
# when DTSTART is floating.
if dtstart.tzinfo is None:
until = until.replace(tzinfo=None)
rule._until = until
# add the rrule or exrule to the rruleset
addfunc(rule)
if name == 'rrule' and addRDate:
try:
# dateutils does not work with all-day (datetime.date) items
# so we need to convert to a datetime.datetime
# (which is what dateutils does internally)
if not isinstance(dtstart, datetime.datetime):
adddtstart = datetime.datetime.fromordinal(dtstart.toordinal())
else:
adddtstart = dtstart
if rruleset._rrule[-1][0] != adddtstart:
rruleset.rdate(adddtstart)
added = True
else:
added = False
except IndexError:
# it's conceivable that an rrule might have 0 datetimes
added = False
if added and rruleset._rrule[-1]._count is not None:
rruleset._rrule[-1]._count -= 1
return rruleset | [
"def",
"getrruleset",
"(",
"self",
",",
"addRDate",
"=",
"False",
")",
":",
"rruleset",
"=",
"None",
"for",
"name",
"in",
"DATESANDRULES",
":",
"addfunc",
"=",
"None",
"for",
"line",
"in",
"self",
".",
"contents",
".",
"get",
"(",
"name",
",",
"(",
"... | Get an rruleset created from self.
If addRDate is True, add an RDATE for dtstart if it's not included in
an RRULE, and count is decremented if it exists.
Note that for rules which don't match DTSTART, DTSTART may not appear
in list(rruleset), although it should. By default, an RDATE is not
created in these cases, and count isn't updated, so dateutil may list
a spurious occurrence. | [
"Get",
"an",
"rruleset",
"created",
"from",
"self",
"."
] | python | train |
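The densest part of `getrruleset` above is reconciling the timezone of UNTIL with DTSTART: a floating UNTIL is moved into DTSTART's zone, while a floating DTSTART forces UNTIL to be treated as floating too. That logic condensed into a standalone helper — `normalize_until` is an illustrative name, and this sketch uses only the standard library, not dateutil:

```python
from datetime import datetime, timedelta, timezone

def normalize_until(until, dtstart):
    """Reconcile UNTIL's tzinfo with DTSTART's, as getrruleset does."""
    if until.tzinfo is None and dtstart.tzinfo is not None:
        until = until.replace(tzinfo=dtstart.tzinfo)
    if dtstart.tzinfo is not None:
        until = until.astimezone(dtstart.tzinfo)
    else:
        until = until.replace(tzinfo=None)  # treat UNTIL as floating too
    return until
```

After normalization, dateutil's internal datetime comparisons never mix naive and aware values, which is the failure mode the original comments describe.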
kejbaly2/metrique | metrique/utils.py | https://github.com/kejbaly2/metrique/blob/a10b076097441b7dde687949139f702f5c1e1b35/metrique/utils.py#L197-L227 | def clear_stale_pids(pids, pid_dir='/tmp', prefix='', multi=False):
'check for and remove any pids which have no corresponding process'
if isinstance(pids, (int, float, long)):
pids = [pids]
pids = str2list(pids, map_=unicode)
procs = map(unicode, os.listdir('/proc'))
running = [pid for pid in pids if pid in procs]
logger.warn(
"Found %s pids running: %s" % (len(running),
running))
prefix = prefix.rstrip('.') if prefix else None
for pid in pids:
if prefix:
_prefix = prefix
else:
_prefix = unicode(pid)
# remove non-running procs
if pid in running:
continue
if multi:
pid_file = '%s%s.pid' % (_prefix, pid)
else:
pid_file = '%s.pid' % (_prefix)
path = os.path.join(pid_dir, pid_file)
if os.path.exists(path):
logger.debug("Removing pidfile: %s" % path)
try:
remove_file(path)
except OSError as e:
logger.warn(e)
return running | [
"def",
"clear_stale_pids",
"(",
"pids",
",",
"pid_dir",
"=",
"'/tmp'",
",",
"prefix",
"=",
"''",
",",
"multi",
"=",
"False",
")",
":",
"if",
"isinstance",
"(",
"pids",
",",
"(",
"int",
",",
"float",
",",
"long",
")",
")",
":",
"pids",
"=",
"[",
"... | check for and remove any pids which have no corresponding process | [
"check",
"for",
"and",
"remove",
"any",
"pids",
"which",
"have",
"no",
"corresponding",
"process"
] | python | train |
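The core of `clear_stale_pids` above is comparing candidate PIDs against the live process list from `/proc`. That comparison isolated as a pure function, with the `/proc` listing passed in so it can be demonstrated without touching the filesystem (`find_running` is an illustrative name):

```python
def find_running(pids, proc_entries):
    """Split candidate PIDs into (running, stale) against a /proc
    listing, as clear_stale_pids does with os.listdir('/proc')."""
    pids = [str(pid) for pid in pids]
    running = [pid for pid in pids if pid in proc_entries]
    stale = [pid for pid in pids if pid not in proc_entries]
    return running, stale

# A fake /proc listing: numeric entries are PIDs, the rest are files.
running, stale = find_running([101, 202, 303], ['1', '101', '303', 'cpuinfo'])
```

In the original, each stale PID's `.pid` file is then removed and only the running list is returned.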
apple/turicreate | src/external/xgboost/python-package/xgboost/core.py | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/external/xgboost/python-package/xgboost/core.py#L688-L710 | def boost(self, dtrain, grad, hess):
"""
Boost the booster for one iteration, with customized gradient statistics.
Parameters
----------
dtrain : DMatrix
The training DMatrix.
grad : list
The first order of gradient.
hess : list
The second order of gradient.
"""
if len(grad) != len(hess):
raise ValueError('grad / hess length mismatch: {} / {}'.format(len(grad), len(hess)))
if not isinstance(dtrain, DMatrix):
raise TypeError('invalid training matrix: {}'.format(type(dtrain).__name__))
self._validate_features(dtrain)
_check_call(_LIB.XGBoosterBoostOneIter(self.handle, dtrain.handle,
c_array(ctypes.c_float, grad),
c_array(ctypes.c_float, hess),
len(grad))) | [
"def",
"boost",
"(",
"self",
",",
"dtrain",
",",
"grad",
",",
"hess",
")",
":",
"if",
"len",
"(",
"grad",
")",
"!=",
"len",
"(",
"hess",
")",
":",
"raise",
"ValueError",
"(",
"'grad / hess length mismatch: {} / {}'",
".",
"format",
"(",
"len",
"(",
"gr... | Boost the booster for one iteration, with customized gradient statistics.
Parameters
----------
dtrain : DMatrix
The training DMatrix.
grad : list
The first order of gradient.
hess : list
The second order of gradient. | [
"Boost",
"the",
"booster",
"for",
"one",
"iteration",
"with",
"customized",
"gradient",
"statistics",
"."
] | python | train |
ecell/ecell4 | ecell4/extra/azure_batch.py | https://github.com/ecell/ecell4/blob/a4a1229661c39b2059adbbacae9090e5ba664e01/ecell4/extra/azure_batch.py#L254-L302 | def add_tasks(batch_service_client, job_id, loads,
output_container_name, output_container_sas_token,
task_file, acount_name):
"""Adds a task for each input file in the collection to the specified job.
:param batch_service_client: A Batch service client.
:type batch_service_client: `azure.batch.BatchServiceClient`
:param str job_id: The ID of the job to which to add the tasks.
:param list input_files: A collection of input files. One task will be
created for each input file.
:param output_container_name: The ID of an Azure Blob storage container to
which the tasks will upload their results.
:param output_container_sas_token: A SAS token granting write access to
the specified Azure Blob storage container.
:param str task_file: A file name of the script
:param str account_name: A storage account
"""
_log.info('Adding {} tasks to job [{}]...'.format(len(loads), job_id))
# _log.info('Adding {} tasks to job [{}]...'.format(len(input_files), job_id))
tasks = list()
for (input_file, output_file, i, j) in loads:
command = ['python $AZ_BATCH_NODE_SHARED_DIR/{} '
'--filepath {} --output {} --storageaccount {} '
'--task_id {} --job_id {} '
'--storagecontainer {} --sastoken "{}"'.format(
os.path.basename(task_file),
input_file.file_path,
output_file,
acount_name,
i, j,
output_container_name,
output_container_sas_token)]
_log.debug('CMD : "{}"'.format(command[0]))
tasks.append(batch.models.TaskAddParameter(
id='topNtask{}-{}'.format(i, j),
command_line=command,
resource_files=[input_file]
)
)
batch_service_client.task.add_collection(job_id, tasks)
task_ids = [task.id for task in tasks]
_log.info('{} tasks were added.'.format(len(task_ids)))
return task_ids | [
"def",
"add_tasks",
"(",
"batch_service_client",
",",
"job_id",
",",
"loads",
",",
"output_container_name",
",",
"output_container_sas_token",
",",
"task_file",
",",
"acount_name",
")",
":",
"_log",
".",
"info",
"(",
"'Adding {} tasks to job [{}]...'",
".",
"format",
... | Adds a task for each input file in the collection to the specified job.
:param batch_service_client: A Batch service client.
:type batch_service_client: `azure.batch.BatchServiceClient`
:param str job_id: The ID of the job to which to add the tasks.
:param list input_files: A collection of input files. One task will be
created for each input file.
:param output_container_name: The ID of an Azure Blob storage container to
which the tasks will upload their results.
:param output_container_sas_token: A SAS token granting write access to
the specified Azure Blob storage container.
:param str task_file: A file name of the script
:param str account_name: A storage account | [
"Adds",
"a",
"task",
"for",
"each",
"input",
"file",
"in",
"the",
"collection",
"to",
"the",
"specified",
"job",
"."
] | python | train |
viatoriche/microservices | examples/http/basic.py | https://github.com/viatoriche/microservices/blob/3510563edd15dc6131b8a948d6062856cd904ac7/examples/http/basic.py#L51-L59 | def second_params_two(test, two):
"""Second resource
* POST: return [test, two, request data]
* GET: return [test, two]
"""
if request.method == 'POST':
return [test, two, request.data]
return {'result': [test, two]} | [
"def",
"second_params_two",
"(",
"test",
",",
"two",
")",
":",
"if",
"request",
".",
"method",
"==",
"'POST'",
":",
"return",
"[",
"test",
",",
"two",
",",
"request",
".",
"data",
"]",
"return",
"{",
"'result'",
":",
"[",
"test",
",",
"two",
"]",
"... | Second resource
* POST: return [test, two, request data]
* GET: return [test, two] | [
"Second",
"resource"
] | python | train |
pantsbuild/pants | src/python/pants/option/parser.py | https://github.com/pantsbuild/pants/blob/b72e650da0df685824ffdcc71988b8c282d0962d/src/python/pants/option/parser.py#L300-L331 | def register(self, *args, **kwargs):
"""Register an option."""
if self._frozen:
raise FrozenRegistration(self.scope, args[0])
# Prevent further registration in enclosing scopes.
ancestor = self._parent_parser
while ancestor:
ancestor._freeze()
ancestor = ancestor._parent_parser
if kwargs.get('type') == bool:
default = kwargs.get('default')
if default is None:
# Unless a tri-state bool is explicitly opted into with the `UnsetBool` default value,
# boolean options always have an implicit boolean-typed default. We make that default
# explicit here.
kwargs['default'] = not self._ensure_bool(kwargs.get('implicit_value', True))
elif default is UnsetBool:
kwargs['default'] = None
# Record the args. We'll do the underlying parsing on-demand.
self._option_registrations.append((args, kwargs))
if self._parent_parser:
for arg in args:
existing_scope = self._parent_parser._existing_scope(arg)
if existing_scope is not None:
raise Shadowing(self.scope, arg, outer_scope=self._scope_str(existing_scope))
for arg in args:
if arg in self._known_args:
raise OptionAlreadyRegistered(self.scope, arg)
self._known_args.update(args) | [
"def",
"register",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"self",
".",
"_frozen",
":",
"raise",
"FrozenRegistration",
"(",
"self",
".",
"scope",
",",
"args",
"[",
"0",
"]",
")",
"# Prevent further registration in enclosing ... | Register an option. | [
"Register",
"an",
"option",
"."
] | python | train |
LonamiWebs/Telethon | telethon/helpers.py | https://github.com/LonamiWebs/Telethon/blob/1ead9757d366b58c1e0567cddb0196e20f1a445f/telethon/helpers.py#L11-L13 | def generate_random_long(signed=True):
"""Generates a random long integer (8 bytes), which is optionally signed"""
return int.from_bytes(os.urandom(8), signed=signed, byteorder='little') | [
"def",
"generate_random_long",
"(",
"signed",
"=",
"True",
")",
":",
"return",
"int",
".",
"from_bytes",
"(",
"os",
".",
"urandom",
"(",
"8",
")",
",",
"signed",
"=",
"signed",
",",
"byteorder",
"=",
"'little'",
")"
] | Generates a random long integer (8 bytes), which is optionally signed | [
"Generates",
"a",
"random",
"long",
"integer",
"(",
"8",
"bytes",
")",
"which",
"is",
"optionally",
"signed"
] | python | train |
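This row's helper is pure standard library; with its `os` import restored it runs as-is:

```python
import os

def generate_random_long(signed=True):
    # Eight random bytes interpreted as a little-endian 64-bit integer.
    return int.from_bytes(os.urandom(8), signed=signed, byteorder='little')

assert -2**63 <= generate_random_long() < 2**63        # signed 64-bit range
assert 0 <= generate_random_long(signed=False) < 2**64
```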
qiniu/python-sdk | qiniu/auth.py | https://github.com/qiniu/python-sdk/blob/a69fbef4e3e6ea1ebe09f4610a5b18bb2c17de59/qiniu/auth.py#L127-L160 | def upload_token(
self,
bucket,
key=None,
expires=3600,
policy=None,
strict_policy=True):
"""生成上传凭证
Args:
bucket: 上传的空间名
key: 上传的文件名,默认为空
expires: 上传凭证的过期时间,默认为3600s
policy: 上传策略,默认为空
Returns:
上传凭证
"""
if bucket is None or bucket == '':
raise ValueError('invalid bucket name')
scope = bucket
if key is not None:
scope = '{0}:{1}'.format(bucket, key)
args = dict(
scope=scope,
deadline=int(time.time()) + expires,
)
if policy is not None:
self.__copy_policy(policy, args, strict_policy)
return self.__upload_token(args) | [
"def",
"upload_token",
"(",
"self",
",",
"bucket",
",",
"key",
"=",
"None",
",",
"expires",
"=",
"3600",
",",
"policy",
"=",
"None",
",",
"strict_policy",
"=",
"True",
")",
":",
"if",
"bucket",
"is",
"None",
"or",
"bucket",
"==",
"''",
":",
"raise",
... | Generate an upload token
Args:
bucket: name of the target bucket
key: name of the file to upload; defaults to None
expires: expiry time of the upload token in seconds; defaults to 3600
policy: upload policy; defaults to None
Returns:
the upload token | [
"生成上传凭证"
] | python | train |
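The scope/deadline skeleton that `upload_token` builds before signing can be sketched on its own; `build_scope_args` is a hypothetical helper name for illustration, not part of the qiniu SDK:

```python
import time

def build_scope_args(bucket, key=None, expires=3600):
    # "bucket" or "bucket:key" scope plus an absolute expiry deadline,
    # mirroring the dict built inside upload_token above.
    if bucket is None or bucket == '':
        raise ValueError('invalid bucket name')
    scope = bucket if key is None else '{0}:{1}'.format(bucket, key)
    return dict(scope=scope, deadline=int(time.time()) + expires)

args = build_scope_args('my-bucket', 'photo.jpg')
assert args['scope'] == 'my-bucket:photo.jpg'
assert args['deadline'] > time.time() - 1
```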
romanz/trezor-agent | libagent/gpg/__init__.py | https://github.com/romanz/trezor-agent/blob/513b1259c4d7aca5f88cd958edc11828d0712f1b/libagent/gpg/__init__.py#L279-L324 | def main(device_type):
"""Parse command-line arguments."""
epilog = ('See https://github.com/romanz/trezor-agent/blob/master/'
'doc/README-GPG.md for usage examples.')
parser = argparse.ArgumentParser(epilog=epilog)
agent_package = device_type.package_name()
resources_map = {r.key: r for r in pkg_resources.require(agent_package)}
resources = [resources_map[agent_package], resources_map['libagent']]
versions = '\n'.join('{}={}'.format(r.key, r.version) for r in resources)
parser.add_argument('--version', help='print the version info',
action='version', version=versions)
subparsers = parser.add_subparsers(title='Action', dest='action')
subparsers.required = True
p = subparsers.add_parser('init',
help='initialize hardware-based GnuPG identity')
p.add_argument('user_id')
p.add_argument('-e', '--ecdsa-curve', default='nist256p1')
p.add_argument('-t', '--time', type=int, default=int(time.time()))
p.add_argument('-v', '--verbose', default=0, action='count')
p.add_argument('-s', '--subkey', default=False, action='store_true')
p.add_argument('--homedir', type=str, default=os.environ.get('GNUPGHOME'),
help='Customize GnuPG home directory for the new identity.')
p.add_argument('--pin-entry-binary', type=str, default='pinentry',
help='Path to PIN entry UI helper.')
p.add_argument('--passphrase-entry-binary', type=str, default='pinentry',
help='Path to passphrase entry UI helper.')
p.add_argument('--cache-expiry-seconds', type=float, default=float('inf'),
help='Expire passphrase from cache after this duration.')
p.set_defaults(func=run_init)
p = subparsers.add_parser('unlock', help='unlock the hardware device')
p.add_argument('-v', '--verbose', default=0, action='count')
p.set_defaults(func=run_unlock)
args = parser.parse_args()
device_type.ui = device.ui.UI(device_type=device_type, config=vars(args))
device_type.ui.cached_passphrase_ack = util.ExpiringCache(
seconds=float(args.cache_expiry_seconds))
return args.func(device_type=device_type, args=args) | [
"def",
"main",
"(",
"device_type",
")",
":",
"epilog",
"=",
"(",
"'See https://github.com/romanz/trezor-agent/blob/master/'",
"'doc/README-GPG.md for usage examples.'",
")",
"parser",
"=",
"argparse",
".",
"ArgumentParser",
"(",
"epilog",
"=",
"epilog",
")",
"agent_packag... | Parse command-line arguments. | [
"Parse",
"command",
"-",
"line",
"arguments",
"."
] | python | train |
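The argparse pattern above (required sub-commands plus per-command flags) is reproducible with the standard library alone; the sketch below keeps only a couple of the flags for brevity:

```python
import argparse

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(title='Action', dest='action')
subparsers.required = True
p = subparsers.add_parser('init')
p.add_argument('user_id')
p.add_argument('-e', '--ecdsa-curve', default='nist256p1')
p.add_argument('-v', '--verbose', default=0, action='count')

args = parser.parse_args(['init', 'alice', '-e', 'ed25519', '-vv'])
assert args.action == 'init' and args.user_id == 'alice'
assert args.ecdsa_curve == 'ed25519' and args.verbose == 2
```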
erdewit/ib_insync | ib_insync/ib.py | https://github.com/erdewit/ib_insync/blob/d0646a482590f5cb7bfddbd1f0870f8c4bc1df80/ib_insync/ib.py#L1193-L1207 | def cancelTickByTickData(self, contract: Contract, tickType: str):
"""
Unsubscribe from tick-by-tick data
Args:
contract: The exact contract object that was used to
subscribe with.
"""
ticker = self.ticker(contract)
reqId = self.wrapper.endTicker(ticker, tickType)
if reqId:
self.client.cancelTickByTickData(reqId)
else:
self._logger.error(
f'cancelMktData: No reqId found for contract {contract}') | [
"def",
"cancelTickByTickData",
"(",
"self",
",",
"contract",
":",
"Contract",
",",
"tickType",
":",
"str",
")",
":",
"ticker",
"=",
"self",
".",
"ticker",
"(",
"contract",
")",
"reqId",
"=",
"self",
".",
"wrapper",
".",
"endTicker",
"(",
"ticker",
",",
... | Unsubscribe from tick-by-tick data
Args:
contract: The exact contract object that was used to
subscribe with. | [
"Unsubscribe",
"from",
"tick",
"-",
"by",
"-",
"tick",
"data"
] | python | train |
mlperf/training | reinforcement/tensorflow/minigo/oneoffs/heatmap.py | https://github.com/mlperf/training/blob/1c6ae725a81d15437a2b2df05cac0673fde5c3a4/reinforcement/tensorflow/minigo/oneoffs/heatmap.py#L45-L85 | def eval_policy(eval_positions):
"""Evaluate all positions with all models save the policy heatmaps as CSVs
CSV name is "heatmap-<position_name>-<model-index>.csv"
CSV format is: model number, value network output, policy network outputs
position_name is taken from the SGF file
Policy network outputs (19x19) are saved in flat order (see coord.from_flat)
"""
model_paths = oneoff_utils.get_model_paths(fsdb.models_dir())
idx_start = FLAGS.idx_start
eval_every = FLAGS.eval_every
print("Evaluating models {}-{}, eval_every={}".format(
idx_start, len(model_paths), eval_every))
player = None
for i, idx in enumerate(tqdm(range(idx_start, len(model_paths), eval_every))):
if player and i % 20 == 0:
player.network.sess.close()
tf.reset_default_graph()
player = None
if not player:
player = oneoff_utils.load_player(model_paths[idx])
else:
oneoff_utils.restore_params(model_paths[idx], player)
pos_names, positions = zip(*eval_positions)
# This should be batched at somepoint.
eval_probs, eval_values = player.network.run_many(positions)
for pos_name, probs, value in zip(pos_names, eval_probs, eval_values):
save_file = os.path.join(
FLAGS.data_dir, "heatmap-{}-{}.csv".format(pos_name, idx))
with open(save_file, "w") as data:
data.write("{}, {}, {}\n".format(
idx, value, ",".join(map(str, probs)))) | [
"def",
"eval_policy",
"(",
"eval_positions",
")",
":",
"model_paths",
"=",
"oneoff_utils",
".",
"get_model_paths",
"(",
"fsdb",
".",
"models_dir",
"(",
")",
")",
"idx_start",
"=",
"FLAGS",
".",
"idx_start",
"eval_every",
"=",
"FLAGS",
".",
"eval_every",
"print... | Evaluate all positions with all models save the policy heatmaps as CSVs
CSV name is "heatmap-<position_name>-<model-index>.csv"
CSV format is: model number, value network output, policy network outputs
position_name is taken from the SGF file
Policy network outputs (19x19) are saved in flat order (see coord.from_flat) | [
"Evaluate",
"all",
"positions",
"with",
"all",
"models",
"save",
"the",
"policy",
"heatmaps",
"as",
"CSVs"
] | python | train |
KeithSSmith/switcheo-python | switcheo/neo/signatures.py | https://github.com/KeithSSmith/switcheo-python/blob/22f943dea1ad7d692b2bfcd9f0822ec80f4641a6/switcheo/neo/signatures.py#L200-L230 | def sign_create_withdrawal(withdrawal_params, key_pair):
"""
Function to create the withdrawal request by signing the parameters necessary for withdrawal.
Execution of this function is as follows::
sign_create_withdrawal(withdrawal_params=signable_params, private_key=eth_private_key)
The expected return result for this function is as follows::
{
'blockchain': 'neo',
'asset_id': 'SWTH',
'amount': '100',
'timestamp': 1542090737236,
'contract_hash': 'a195c1549e7da61b8da315765a790ac7e7633b82',
'address': 'fea2b883725ef2d194c9060f606cd0a0468a2c59',
'signature': 'f66d604c0a80940bf70ce9e13c0fd47bc79de....'
}
:param withdrawal_params: Dictionary specifications for withdrawal from the Switcheo Smart Contract.
:type withdrawal_params: dict
:param key_pair: The NEO key pair to be used to sign messages for the NEO Blockchain.
:type key_pair: KeyPair
:return: Dictionary of parameters to be sent to the Switcheo API
"""
encoded_message = encode_message(withdrawal_params)
create_params = withdrawal_params.copy()
create_params['address'] = neo_get_scripthash_from_private_key(private_key=key_pair.PrivateKey).ToString()
create_params['signature'] = sign_message(encoded_message=encoded_message,
private_key_hex=private_key_to_hex(key_pair=key_pair))
return create_params | [
"def",
"sign_create_withdrawal",
"(",
"withdrawal_params",
",",
"key_pair",
")",
":",
"encoded_message",
"=",
"encode_message",
"(",
"withdrawal_params",
")",
"create_params",
"=",
"withdrawal_params",
".",
"copy",
"(",
")",
"create_params",
"[",
"'address'",
"]",
"... | Function to create the withdrawal request by signing the parameters necessary for withdrawal.
Execution of this function is as follows::
sign_create_withdrawal(withdrawal_params=signable_params, private_key=eth_private_key)
The expected return result for this function is as follows::
{
'blockchain': 'neo',
'asset_id': 'SWTH',
'amount': '100',
'timestamp': 1542090737236,
'contract_hash': 'a195c1549e7da61b8da315765a790ac7e7633b82',
'address': 'fea2b883725ef2d194c9060f606cd0a0468a2c59',
'signature': 'f66d604c0a80940bf70ce9e13c0fd47bc79de....'
}
:param withdrawal_params: Dictionary specifications for withdrawal from the Switcheo Smart Contract.
:type withdrawal_params: dict
:param key_pair: The NEO key pair to be used to sign messages for the NEO Blockchain.
:type key_pair: KeyPair
:return: Dictionary of parameters to be sent to the Switcheo API | [
"Function",
"to",
"create",
"the",
"withdrawal",
"request",
"by",
"signing",
"the",
"parameters",
"necessary",
"for",
"withdrawal",
".",
"Execution",
"of",
"this",
"function",
"is",
"as",
"follows",
"::"
] | python | train |
thespacedoctor/frankenstein | frankenstein/electric.py | https://github.com/thespacedoctor/frankenstein/blob/48d943d9757e92dfa9ea7407628fa2d633c840fb/frankenstein/electric.py#L135-L158 | def _join_all_filenames_and_text(
        self):
    """
    *join all file names, directory names and text content together*
    """
    self.log.info('starting the ``_join_all_filenames_and_text`` method')
    contentString = u""
    for i in self.directoryContents:
        contentString += u"%(i)s\n" % locals()
        if os.path.isfile(os.path.join(i)):
            if i[-4:] in [".png", ".jpg", ".gif"]:
                continue
            readFile = codecs.open(i, encoding='ISO-8859-1', mode='r')
            if ".DS_Store" in i:
                continue
            data = readFile.read()
            contentString += u"%(data)s\n" % locals()
            readFile.close()
    self.contentString = contentString
    self.log.info('completed the ``_join_all_filenames_and_text`` method')
    return None | [
"def",
"_join_all_filenames_and_text",
"(",
"self",
")",
":",
"self",
".",
"log",
".",
"info",
"(",
"'starting the ``_join_all_filenames_and_text`` method'",
")",
"contentString",
"=",
"u\"\"",
"for",
"i",
"in",
"self",
".",
"directoryContents",
":",
"contentString",
... | *join all file names, directory names and text content together* | [
"*",
"join",
"all",
"file",
"names",
"driectory",
"names",
"and",
"text",
"content",
"together",
"*"
] | python | train |
python-diamond/Diamond | src/collectors/ipvs/ipvs.py | https://github.com/python-diamond/Diamond/blob/0f3eb04327d6d3ed5e53a9967d6c9d2c09714a47/src/collectors/ipvs/ipvs.py#L45-L56 | def get_default_config(self):
"""
Returns the default collector settings
"""
config = super(IPVSCollector, self).get_default_config()
config.update({
'bin': '/usr/sbin/ipvsadm',
'use_sudo': True,
'sudo_cmd': '/usr/bin/sudo',
'path': 'ipvs'
})
return config | [
"def",
"get_default_config",
"(",
"self",
")",
":",
"config",
"=",
"super",
"(",
"IPVSCollector",
",",
"self",
")",
".",
"get_default_config",
"(",
")",
"config",
".",
"update",
"(",
"{",
"'bin'",
":",
"'/usr/sbin/ipvsadm'",
",",
"'use_sudo'",
":",
"True",
... | Returns the default collector settings | [
"Returns",
"the",
"default",
"collector",
"settings"
] | python | train |
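The inherit-then-override config pattern in this row is easy to exercise without diamond itself; `BaseCollector` below is a hypothetical stand-in for diamond's `Collector` base class, with made-up default keys:

```python
class BaseCollector:
    # Hypothetical stand-in for diamond's Collector base class.
    def get_default_config(self):
        return {'enabled': True, 'interval': 60}

class IPVSCollector(BaseCollector):
    def get_default_config(self):
        config = super(IPVSCollector, self).get_default_config()
        config.update({
            'bin': '/usr/sbin/ipvsadm',
            'use_sudo': True,
            'sudo_cmd': '/usr/bin/sudo',
            'path': 'ipvs',
        })
        return config

config = IPVSCollector().get_default_config()
assert config['interval'] == 60   # inherited default survives
assert config['path'] == 'ipvs'   # collector-specific key added
```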
cirruscluster/cirruscluster | cirruscluster/ext/ansible/utils/__init__.py | https://github.com/cirruscluster/cirruscluster/blob/977409929dd81322d886425cdced10608117d5d7/cirruscluster/ext/ansible/utils/__init__.py#L325-L359 | def _gitinfo():
    ''' returns a string containing git branch, commit id and commit date '''
    result = None
    repo_path = os.path.join(os.path.dirname(__file__), '..', '..', '..', '.git')
    if os.path.exists(repo_path):
        # Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
        if os.path.isfile(repo_path):
            try:
                gitdir = yaml.load(open(repo_path)).get('gitdir')
                # There is a possibility that the .git file has an absolute path.
                if os.path.isabs(gitdir):
                    repo_path = gitdir
                else:
                    repo_path = os.path.join(repo_path.split('.git')[0], gitdir)
            except (IOError, AttributeError):
                return ''
        f = open(os.path.join(repo_path, "HEAD"))
        branch = f.readline().split('/')[-1].rstrip("\n")
        f.close()
        branch_path = os.path.join(repo_path, "refs", "heads", branch)
        if os.path.exists(branch_path):
            f = open(branch_path)
            commit = f.readline()[:10]
            f.close()
            date = time.localtime(os.stat(branch_path).st_mtime)
            if time.daylight == 0:
                offset = time.timezone
            else:
                offset = time.altzone
            result = "({0} {1}) last updated {2} (GMT {3:+04d})".format(
                branch, commit, time.strftime("%Y/%m/%d %H:%M:%S", date), offset / -36)
    else:
        result = ''
    return result | [
"def",
"_gitinfo",
"(",
")",
":",
"result",
"=",
"None",
"repo_path",
"=",
"os",
".",
"path",
".",
"join",
"(",
"os",
".",
"path",
".",
"dirname",
"(",
"__file__",
")",
",",
"'..'",
",",
"'..'",
",",
"'..'",
",",
"'.git'",
")",
"if",
"os",
".",
... | returns a string containing git branch, commit id and commit date | [
"returns",
"a",
"string",
"containing",
"git",
"branch",
"commit",
"id",
"and",
"commit",
"date"
] | python | train |
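The branch extraction in `_gitinfo` is just string slicing on the first line of `.git/HEAD`; a standalone version (`branch_from_head` is our illustrative name, not an ansible helper):

```python
def branch_from_head(head_line):
    # 'ref: refs/heads/devel\n' -> 'devel', as in split('/')[-1] above.
    return head_line.split('/')[-1].rstrip('\n')

assert branch_from_head('ref: refs/heads/devel\n') == 'devel'
# A detached HEAD holds a bare SHA with no '/', so the line comes back whole:
assert branch_from_head('0123abc\n') == '0123abc'
```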
edx/edx-drf-extensions | edx_rest_framework_extensions/middleware.py | https://github.com/edx/edx-drf-extensions/blob/2f4c1682b8471bf894ea566a43fd9f91ba219f83/edx_rest_framework_extensions/middleware.py#L78-L102 | def _set_request_auth_type_metric(self, request):
"""
Add metric 'request_auth_type' for the authentication type used.
NOTE: This is a best guess at this point. Possible values include:
no-user
unauthenticated
jwt/bearer/other-token-type
session-or-unknown (catch all)
"""
if 'HTTP_AUTHORIZATION' in request.META and request.META['HTTP_AUTHORIZATION']:
token_parts = request.META['HTTP_AUTHORIZATION'].split()
# Example: "JWT eyJhbGciO..."
if len(token_parts) == 2:
auth_type = token_parts[0].lower() # 'jwt' or 'bearer' (for example)
else:
auth_type = 'other-token-type'
elif not hasattr(request, 'user') or not request.user:
auth_type = 'no-user'
elif not request.user.is_authenticated:
auth_type = 'unauthenticated'
else:
auth_type = 'session-or-unknown'
monitoring.set_custom_metric('request_auth_type', auth_type) | [
"def",
"_set_request_auth_type_metric",
"(",
"self",
",",
"request",
")",
":",
"if",
"'HTTP_AUTHORIZATION'",
"in",
"request",
".",
"META",
"and",
"request",
".",
"META",
"[",
"'HTTP_AUTHORIZATION'",
"]",
":",
"token_parts",
"=",
"request",
".",
"META",
"[",
"'... | Add metric 'request_auth_type' for the authentication type used.
NOTE: This is a best guess at this point. Possible values include:
no-user
unauthenticated
jwt/bearer/other-token-type
session-or-unknown (catch all) | [
"Add",
"metric",
"request_auth_type",
"for",
"the",
"authentication",
"type",
"used",
"."
] | python | train |
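The header classification above reduces to a small pure function; `auth_type_from_header` is our illustrative name, not part of the middleware:

```python
def auth_type_from_header(authorization):
    # "JWT eyJ..." -> 'jwt'; "Bearer abc" -> 'bearer'; anything that is
    # not exactly two whitespace-separated parts -> 'other-token-type'.
    parts = authorization.split()
    if len(parts) == 2:
        return parts[0].lower()
    return 'other-token-type'

assert auth_type_from_header('JWT eyJhbGciO') == 'jwt'
assert auth_type_from_header('Bearer abc123') == 'bearer'
assert auth_type_from_header('token') == 'other-token-type'
```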
solocompt/plugs-mail | plugs_mail/mail.py | https://github.com/solocompt/plugs-mail/blob/6139fa79ddb437562db1769d03bd3098c25a06fa/plugs_mail/mail.py#L33-L45 | def validate_context(self):
"""
Make sure there are no duplicate context objects
or we might end up with switched data
Converting the tuple to a set gets rid of the
eventual duplicate objects, comparing the length
of the original tuple and set tells us if we
have duplicates in the tuple or not
"""
if self.context and len(self.context) != len(set(self.context)):
LOGGER.error('Cannot have duplicated context objects')
raise Exception('Cannot have duplicated context objects.') | [
"def",
"validate_context",
"(",
"self",
")",
":",
"if",
"self",
".",
"context",
"and",
"len",
"(",
"self",
".",
"context",
")",
"!=",
"len",
"(",
"set",
"(",
"self",
".",
"context",
")",
")",
":",
"LOGGER",
".",
"error",
"(",
"'Cannot have duplicated c... | Make sure there are no duplicate context objects
or we might end up with switched data
Converting the tuple to a set gets rid of the
eventual duplicate objects, comparing the length
of the original tuple and set tells us if we
have duplicates in the tuple or not | [
"Make",
"sure",
"there",
"are",
"no",
"duplicate",
"context",
"objects",
"or",
"we",
"might",
"end",
"up",
"with",
"switched",
"data"
] | python | train |
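The duplicate check in `validate_context` relies on a set dropping repeated (hashable) items; the same idiom in isolation:

```python
def has_duplicates(items):
    # A set drops duplicates, so a length mismatch reveals them.
    return len(items) != len(set(items))

assert has_duplicates(('ctx_a', 'ctx_b', 'ctx_a'))
assert not has_duplicates(('ctx_a', 'ctx_b', 'ctx_c'))
```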
markovmodel/msmtools | msmtools/estimation/api.py | https://github.com/markovmodel/msmtools/blob/54dc76dd2113a0e8f3d15d5316abab41402941be/msmtools/estimation/api.py#L402-L455 | def connected_sets(C, directed=True):
r"""Compute connected sets of microstates.
Connected components for a directed graph with edge-weights
given by the count matrix.
Parameters
----------
C : scipy.sparse matrix
Count matrix specifying edge weights.
directed : bool, optional
Whether to compute connected components for a directed or
undirected graph. Default is True.
Returns
-------
cc : list of arrays of integers
Each entry is an array containing all vertices (states) in the
corresponding connected component. The list is sorted
according to the size of the individual components. The
largest connected set is the first entry in the list, lcc=cc[0].
Notes
-----
Viewing the count matrix as the adjacency matrix of a (directed) graph
the connected components are given by the connected components of that
graph. Connected components of a graph can be efficiently computed
using Tarjan's algorithm.
References
----------
.. [1] Tarjan, R E. 1972. Depth-first search and linear graph
algorithms. SIAM Journal on Computing 1 (2): 146-160.
Examples
--------
>>> import numpy as np
>>> from msmtools.estimation import connected_sets
>>> C = np.array([[10, 1, 0], [2, 0, 3], [0, 0, 4]])
>>> cc_directed = connected_sets(C)
>>> cc_directed
[array([0, 1]), array([2])]
>>> cc_undirected = connected_sets(C, directed=False)
>>> cc_undirected
[array([0, 1, 2])]
"""
if isdense(C):
return sparse.connectivity.connected_sets(csr_matrix(C), directed=directed)
else:
return sparse.connectivity.connected_sets(C, directed=directed) | [
"def",
"connected_sets",
"(",
"C",
",",
"directed",
"=",
"True",
")",
":",
"if",
"isdense",
"(",
"C",
")",
":",
"return",
"sparse",
".",
"connectivity",
".",
"connected_sets",
"(",
"csr_matrix",
"(",
"C",
")",
",",
"directed",
"=",
"directed",
")",
"el... | r"""Compute connected sets of microstates.
Connected components for a directed graph with edge-weights
given by the count matrix.
Parameters
----------
C : scipy.sparse matrix
Count matrix specifying edge weights.
directed : bool, optional
Whether to compute connected components for a directed or
undirected graph. Default is True.
Returns
-------
cc : list of arrays of integers
Each entry is an array containing all vertices (states) in the
corresponding connected component. The list is sorted
according to the size of the individual components. The
largest connected set is the first entry in the list, lcc=cc[0].
Notes
-----
Viewing the count matrix as the adjacency matrix of a (directed) graph
the connected components are given by the connected components of that
graph. Connected components of a graph can be efficiently computed
using Tarjan's algorithm.
References
----------
.. [1] Tarjan, R E. 1972. Depth-first search and linear graph
algorithms. SIAM Journal on Computing 1 (2): 146-160.
Examples
--------
>>> import numpy as np
>>> from msmtools.estimation import connected_sets
>>> C = np.array([[10, 1, 0], [2, 0, 3], [0, 0, 4]])
>>> cc_directed = connected_sets(C)
>>> cc_directed
[array([0, 1]), array([2])]
>>> cc_undirected = connected_sets(C, directed=False)
>>> cc_undirected
[array([0, 1, 2])] | [
"r",
"Compute",
"connected",
"sets",
"of",
"microstates",
"."
] | python | train |
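The docstring's example can be reproduced without scipy. The sketch below deliberately swaps Tarjan's algorithm for a brute-force transitive closure (cubic, fine only for small matrices), so it illustrates the semantics, not the library's implementation:

```python
def connected_sets(C, directed=True):
    # Components of the graph whose edges are the positive entries of C;
    # strongly connected if directed, plain connectivity otherwise.
    n = len(C)
    reach = [[C[i][j] > 0 or i == j for j in range(n)] for i in range(n)]
    if not directed:
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or C[j][i] > 0
    for k in range(n):  # Floyd-Warshall-style transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    comps, assigned = [], [False] * n
    for i in range(n):
        if not assigned[i]:
            comp = [j for j in range(n) if reach[i][j] and reach[j][i]]
            for j in comp:
                assigned[j] = True
            comps.append(comp)
    comps.sort(key=len, reverse=True)  # largest connected set first
    return comps

C = [[10, 1, 0], [2, 0, 3], [0, 0, 4]]
assert connected_sets(C) == [[0, 1], [2]]
assert connected_sets(C, directed=False) == [[0, 1, 2]]
```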
aliyun/aliyun-log-python-sdk | aliyun/log/logclient.py | https://github.com/aliyun/aliyun-log-python-sdk/blob/ac383db0a16abf1e5ef7df36074374184b43516e/aliyun/log/logclient.py#L2106-L2123 | def list_consumer_group(self, project, logstore):
""" List consumer group
:type project: string
:param project: project name
:type logstore: string
:param logstore: logstore name
:return: ListConsumerGroupResponse
"""
resource = "/logstores/" + logstore + "/consumergroups"
params = {}
headers = {}
(resp, header) = self._send("GET", project, None, resource, params, headers)
return ListConsumerGroupResponse(resp, header) | [
"def",
"list_consumer_group",
"(",
"self",
",",
"project",
",",
"logstore",
")",
":",
"resource",
"=",
"\"/logstores/\"",
"+",
"logstore",
"+",
"\"/consumergroups\"",
"params",
"=",
"{",
"}",
"headers",
"=",
"{",
"}",
"(",
"resp",
",",
"header",
")",
"=",
... | List consumer group
:type project: string
:param project: project name
:type logstore: string
:param logstore: logstore name
:return: ListConsumerGroupResponse | [
"List",
"consumer",
"group",
":",
"type",
"project",
":",
"string",
":",
"param",
"project",
":",
"project",
"name",
":",
"type",
"logstore",
":",
"string",
":",
"param",
"logstore",
":",
"logstore",
"name",
":",
"return",
":",
"ListConsumerGroupResponse"
] | python | train |
quantumlib/Cirq | cirq/sim/wave_function.py | https://github.com/quantumlib/Cirq/blob/0827da80dd7880e5b923eb69407e980ed9bc0bd2/cirq/sim/wave_function.py#L118-L138 | def bloch_vector_of(self, qubit: ops.Qid) -> np.ndarray:
"""Returns the bloch vector of a qubit in the state.
Calculates the bloch vector of the given qubit
in the state given by self.state_vector(), given that
self.state_vector() follows the standard Kronecker convention of
numpy.kron.
Args:
qubit: qubit who's bloch vector we want to find.
Returns:
A length 3 numpy array representing the qubit's bloch vector.
Raises:
ValueError: if the size of the state represents more than 25 qubits.
IndexError: if index is out of range for the number of qubits
corresponding to the state.
"""
return bloch_vector_from_state_vector(self.state_vector(),
self.qubit_map[qubit]) | [
"def",
"bloch_vector_of",
"(",
"self",
",",
"qubit",
":",
"ops",
".",
"Qid",
")",
"->",
"np",
".",
"ndarray",
":",
"return",
"bloch_vector_from_state_vector",
"(",
"self",
".",
"state_vector",
"(",
")",
",",
"self",
".",
"qubit_map",
"[",
"qubit",
"]",
"... | Returns the bloch vector of a qubit in the state.
Calculates the bloch vector of the given qubit
in the state given by self.state_vector(), given that
self.state_vector() follows the standard Kronecker convention of
numpy.kron.
Args:
qubit: qubit whose bloch vector we want to find.
Returns:
A length 3 numpy array representing the qubit's bloch vector.
Raises:
ValueError: if the size of the state represents more than 25 qubits.
IndexError: if index is out of range for the number of qubits
corresponding to the state. | [
"Returns",
"the",
"bloch",
"vector",
"of",
"a",
"qubit",
"in",
"the",
"state",
"."
] | python | train |
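For a single qubit the Bloch vector is just the three Pauli expectation values; here is a self-contained sketch for the one-qubit case (the multi-qubit partial trace that Cirq performs is out of scope, and `bloch_vector` is our name for it):

```python
import math

def bloch_vector(amp0, amp1):
    # <X>, <Y>, <Z> of the normalized single-qubit state amp0|0> + amp1|1>.
    inner = complex(amp0).conjugate() * complex(amp1)
    x = 2 * inner.real
    y = 2 * inner.imag
    z = abs(amp0) ** 2 - abs(amp1) ** 2
    return (x, y, z)

assert bloch_vector(1, 0) == (0.0, 0.0, 1)  # |0> points along +Z
s = 1 / math.sqrt(2)
x, y, z = bloch_vector(s, s)                # |+> points along +X
assert abs(x - 1) < 1e-9 and abs(y) < 1e-9 and abs(z) < 1e-9
```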
aws/sagemaker-python-sdk | src/sagemaker/model.py | https://github.com/aws/sagemaker-python-sdk/blob/a9e724c7d3f5572b68c3903548c792a59d99799a/src/sagemaker/model.py#L97-L118 | def _create_sagemaker_model(self, instance_type, accelerator_type=None, tags=None):
"""Create a SageMaker Model Entity
Args:
instance_type (str): The EC2 instance type that this Model will be used for, this is only
used to determine if the image needs GPU support or not.
accelerator_type (str): Type of Elastic Inference accelerator to attach to an endpoint for model loading
and inference, for example, 'ml.eia1.medium'. If not specified, no Elastic Inference accelerator
will be attached to the endpoint.
tags(List[dict[str, str]]): Optional. The list of tags to add to the model. Example:
>>> tags = [{'Key': 'tagname', 'Value': 'tagvalue'}]
For more information about tags, see https://boto3.amazonaws.com/v1/documentation\
/api/latest/reference/services/sagemaker.html#SageMaker.Client.add_tags
"""
container_def = self.prepare_container_def(instance_type, accelerator_type=accelerator_type)
self.name = self.name or utils.name_from_image(container_def['Image'])
enable_network_isolation = self.enable_network_isolation()
self.sagemaker_session.create_model(self.name, self.role,
container_def, vpc_config=self.vpc_config,
enable_network_isolation=enable_network_isolation,
tags=tags) | [
"def",
"_create_sagemaker_model",
"(",
"self",
",",
"instance_type",
",",
"accelerator_type",
"=",
"None",
",",
"tags",
"=",
"None",
")",
":",
"container_def",
"=",
"self",
".",
"prepare_container_def",
"(",
"instance_type",
",",
"accelerator_type",
"=",
"accelera... | Create a SageMaker Model Entity
Args:
instance_type (str): The EC2 instance type that this Model will be used for, this is only
used to determine if the image needs GPU support or not.
accelerator_type (str): Type of Elastic Inference accelerator to attach to an endpoint for model loading
and inference, for example, 'ml.eia1.medium'. If not specified, no Elastic Inference accelerator
will be attached to the endpoint.
tags(List[dict[str, str]]): Optional. The list of tags to add to the model. Example:
>>> tags = [{'Key': 'tagname', 'Value': 'tagvalue'}]
For more information about tags, see https://boto3.amazonaws.com/v1/documentation\
/api/latest/reference/services/sagemaker.html#SageMaker.Client.add_tags | [
"Create",
"a",
"SageMaker",
"Model",
"Entity"
] | python | train |
hyperledger/indy-plenum | plenum/server/node.py | https://github.com/hyperledger/indy-plenum/blob/dcd144e238af7f17a869ffc9412f13dc488b7020/plenum/server/node.py#L2423-L2435 | def applyReq(self, request: Request, cons_time: int):
"""
Apply request to appropriate ledger and state. `cons_time` is the
UTC epoch at which consensus was reached.
"""
self.execute_hook(NodeHooks.PRE_REQUEST_APPLICATION, request=request,
cons_time=cons_time)
req_handler = self.get_req_handler(txn_type=request.operation[TXN_TYPE])
seq_no, txn = req_handler.apply(request, cons_time)
ledger_id = self.ledger_id_for_request(request)
self.execute_hook(NodeHooks.POST_REQUEST_APPLICATION, request=request,
cons_time=cons_time, ledger_id=ledger_id,
seq_no=seq_no, txn=txn) | [
"def",
"applyReq",
"(",
"self",
",",
"request",
":",
"Request",
",",
"cons_time",
":",
"int",
")",
":",
"self",
".",
"execute_hook",
"(",
"NodeHooks",
".",
"PRE_REQUEST_APPLICATION",
",",
"request",
"=",
"request",
",",
"cons_time",
"=",
"cons_time",
")",
... | Apply request to appropriate ledger and state. `cons_time` is the
UTC epoch at which consensus was reached. | [
"Apply",
"request",
"to",
"appropriate",
"ledger",
"and",
"state",
".",
"cons_time",
"is",
"the",
"UTC",
"epoch",
"at",
"which",
"consensus",
"was",
"reached",
"."
] | python | train |
PyCQA/astroid | astroid/node_classes.py | https://github.com/PyCQA/astroid/blob/e0a298df55b15abcb77c2a93253f5ab7be52d0fb/astroid/node_classes.py#L638-L658 | def _fixed_source_line(self):
"""Attempt to find the line that this node appears on.
We need this method since not all nodes have :attr:`lineno` set.
:returns: The line number of this node,
or None if this could not be determined.
:rtype: int or None
"""
line = self.lineno
_node = self
try:
while line is None:
_node = next(_node.get_children())
line = _node.lineno
except StopIteration:
_node = self.parent
while _node and line is None:
line = _node.lineno
_node = _node.parent
return line | [
"def",
"_fixed_source_line",
"(",
"self",
")",
":",
"line",
"=",
"self",
".",
"lineno",
"_node",
"=",
"self",
"try",
":",
"while",
"line",
"is",
"None",
":",
"_node",
"=",
"next",
"(",
"_node",
".",
"get_children",
"(",
")",
")",
"line",
"=",
"_node"... | Attempt to find the line that this node appears on.
We need this method since not all nodes have :attr:`lineno` set.
:returns: The line number of this node,
or None if this could not be determined.
:rtype: int or None | [
"Attempt",
"to",
"find",
"the",
"line",
"that",
"this",
"node",
"appears",
"on",
"."
] | python | train |
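The child-first-then-ancestors search above can be exercised on a toy node class; `Node` below is a minimal hypothetical stand-in, not astroid's real node type:

```python
class Node:
    # Minimal stand-in for an AST node whose lineno may be missing.
    def __init__(self, lineno=None, children=()):
        self.lineno = lineno
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

    def get_children(self):
        return iter(self.children)

    def fixed_source_line(self):
        line, _node = self.lineno, self
        try:
            while line is None:          # descend through first children
                _node = next(_node.get_children())
                line = _node.lineno
        except StopIteration:
            _node = self.parent          # nothing below: climb the ancestors
            while _node and line is None:
                line = _node.lineno
                _node = _node.parent
        return line

leaf = Node()
root = Node(lineno=7, children=[Node(children=[leaf])])
assert leaf.fixed_source_line() == 7     # found on an ancestor
assert Node(children=[Node(lineno=3)]).fixed_source_line() == 3
```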
raamana/hiwenet | hiwenet/pairwise_dist.py | https://github.com/raamana/hiwenet/blob/b12699b3722fd0a6a835e7d7ca4baf58fb181809/hiwenet/pairwise_dist.py#L90-L307 | def extract(features, groups,
weight_method=default_weight_method,
num_bins=default_num_bins,
edge_range=default_edge_range,
trim_outliers=default_trim_behaviour,
trim_percentile=default_trim_percentile,
use_original_distribution=False,
relative_to_all=False,
asymmetric=False,
return_networkx_graph=default_return_networkx_graph,
out_weights_path=default_out_weights_path):
"""
Extracts the histogram-distance weighted adjacency matrix.
Parameters
----------
features : ndarray or str
1d array of scalar values, either provided directly as a 1d numpy array,
or as a path to a file containing these values
groups : ndarray or str
Membership array of same length as `features`, each value specifying which group that particular node belongs to.
Input can be either provided directly as a 1d numpy array,or as a path to a file containing these values.
For example, if you have cortical thickness values for 1000 vertices (`features` is ndarray of length 1000),
belonging to 100 patches, the groups array (of length 1000) could have numbers 1 to 100 (number of unique values)
specifying which element belongs to which cortical patch.
Grouping with numerical values (contiguous from 1 to num_patches) is strongly recommended for simplicity,
but this could also be a list of strings of length p, in which case a tuple is returned,
identifying which weight belongs to which pair of patches.
weight_method : string or callable, optional
Type of distance (or metric) to compute between the pair of histograms.
It can either be a string identifying one of the weights implemented below, or a valid callable.
If a string, it must be one of the following methods:
- 'chebyshev'
- 'chebyshev_neg'
- 'chi_square'
- 'correlate'
- 'correlate_1'
- 'cosine'
- 'cosine_1'
- 'cosine_2'
- 'cosine_alt'
- 'euclidean'
- 'fidelity_based'
- 'histogram_intersection'
- 'histogram_intersection_1'
- 'jensen_shannon'
- 'kullback_leibler'
- 'manhattan'
- 'minowski'
- 'noelle_1'
- 'noelle_2'
- 'noelle_3'
- 'noelle_4'
- 'noelle_5'
- 'relative_bin_deviation'
- 'relative_deviation'
Note only the following are *metrics*:
- 'manhattan'
- 'minowski'
- 'euclidean'
- 'noelle_2'
- 'noelle_4'
- 'noelle_5'
The following are *semi- or quasi-metrics*:
- 'kullback_leibler'
- 'jensen_shannon'
- 'chi_square'
- 'chebyshev'
- 'cosine_1'
- 'chebyshev_neg'
- 'correlate_1'
- 'histogram_intersection_1'
- 'relative_deviation'
- 'relative_bin_deviation'
- 'noelle_1'
- 'noelle_3'
The following are classified to be similarity functions:
- 'histogram_intersection'
- 'correlate'
- 'cosine'
- 'cosine_2'
- 'cosine_alt'
- 'fidelity_based'
*Default* choice: 'minowski'.
The method can also be one of the following identifying metrics that operate on the original data directly -
e.g. difference in the medians coming from the distributions of the pair of ROIs.
- 'diff_medians'
- 'diff_means'
- 'diff_medians_abs'
- 'diff_means_abs'
Please note this can lead to adjacency matrices that may not be symmetric
e.g. difference metric on two scalars is not symmetric).
In this case, be sure to use the flag: allow_non_symmetric=True
If weight_method is a callable, it must two accept two arrays as input and return one scalar as output.
Example: ``diff_in_skew = lambda x, y: abs(scipy.stats.skew(x)-scipy.stats.skew(y))``
NOTE: this method will be applied to histograms (not the original distribution of features from group/ROI).
In order to apply this callable directly on the original distribution (without trimming and histogram binning),
use ``use_original_distribution=True``.
num_bins : scalar, optional
Number of bins to use when computing histogram within each patch/group.
Note:
1) Please ensure same number of bins are used across different subjects
2) histogram shape can vary widely with number of bins (esp with fewer bins in the range of 3-20), and hence the features extracted based on them vary also.
3) It is recommended to study the impact of this parameter on the final results of the experiment.
This could also be optimized within an inner cross-validation loop if desired.
edge_range : tuple or None
The range of edges within which to bin the given values.
This can be helpful to ensure correspondence across multiple invocations of hiwenet (for different subjects),
in terms of range across all bins as well as individual bin edges.
Default is to automatically compute from the given values.
Accepted format:
- tuple of finite values: (range_min, range_max)
- None, triggering automatic calculation (default)
Notes : when controlling the ``edge_range``, it is not possible to trim the tails (e.g. using the parameters
``trim_outliers`` and ``trim_percentile``) for the current set of features using its own range.
trim_outliers : bool, optional
Whether to trim a small percentile of outliers at the edges of feature range,
when features are expected to contain extreme outliers (like 0 or eps or Inf).
This is important to avoid numerical problems and also to stabilize the weight estimates.
trim_percentile : float
Small value specifying the percentile of outliers to trim.
Default: 5 (5%). Must be in open interval (0, 100).
use_original_distribution : bool, optional
When using a user-defined callable, this flag
1) allows skipping of pre-processing (trimming outliers) and histogram construction,
2) enables the application of arbitrary callable (user-defined) on the original distributions coming from the two groups/ROIs/nodes directly.
Example: ``diff_in_medians = lambda x, y: abs(np.median(x)-np.median(y))``
This option is valid only when weight_method is a valid callable,
which must take two inputs (possibly of different lengths) and return a single scalar.
relative_to_all : bool
Flag to instruct the computation of a grand histogram (distribution pooled from values in all ROIs),
and compute distances (based on the distance specified by ``weight_method``) from each ROI to the grand histogram.
This would result in only N distances for N ROIs, instead of the usual N*(N-1) pair-wise distances.
asymmetric : bool
Flag to identify that the resulting adjacency matrix is expected to be non-symmetric.
Note: this results in twice the computation time!
Default: False, as the histogram metrics implemented here are symmetric.
return_networkx_graph : bool, optional
Specifies the need for a networkx graph populated with weights computed. Default: False.
out_weights_path : str, optional
Where to save the extracted weight matrix. If networkx output is returned, it would be saved in GraphML format.
Default: nothing saved unless instructed.
Returns
-------
edge_weights : ndarray
numpy 2d array of pair-wise edge-weights (of size: num_groups x num_groups),
wherein num_groups is determined by the total number of unique values in `groups`.
**Note**:
- Only the upper triangular matrix is filled as the distance between node i and j would be the same as j and i.
- The edge weights from the upper triangular matrix can easily be obtained by
.. code-block:: python
weights_array = edge_weights[ np.triu_indices_from(edge_weights, 1) ]
"""
# parameter check
features, groups, num_bins, edge_range, group_ids, num_groups, num_links = check_params(
features, groups, num_bins, edge_range, trim_outliers, trim_percentile)
weight_func, use_orig_distr, non_symmetric = check_weight_method(weight_method,
use_original_distribution, asymmetric)
# using the same bin edges for all nodes/groups to ensure correspondence
# NOTE: common bin edges are important for the distances to be meaningful
edges = compute_bin_edges(features, num_bins, edge_range,
trim_outliers, trim_percentile, use_orig_distr)
# handling the special case: distances relative to the grand histogram
if relative_to_all:
result = non_pairwise.relative_to_all(features, groups, edges, weight_func,
use_orig_distr, group_ids, num_groups,
return_networkx_graph, out_weights_path)
else:
result = pairwise_extract(features, groups, edges, weight_func, use_orig_distr,
group_ids, num_groups, num_links,
non_symmetric, return_networkx_graph, out_weights_path)
# this can be a networkx graph or numpy array depending on request
return result | [
"def",
"extract",
"(",
"features",
",",
"groups",
",",
"weight_method",
"=",
"default_weight_method",
",",
"num_bins",
"=",
"default_num_bins",
",",
"edge_range",
"=",
"default_edge_range",
",",
"trim_outliers",
"=",
"default_trim_behaviour",
",",
"trim_percentile",
"... | Extracts the histogram-distance weighted adjacency matrix.
Parameters
----------
features : ndarray or str
1d array of scalar values, either provided directly as a 1d numpy array,
or as a path to a file containing these values
groups : ndarray or str
Membership array of same length as `features`, each value specifying which group that particular node belongs to.
Input can be either provided directly as a 1d numpy array,or as a path to a file containing these values.
For example, if you have cortical thickness values for 1000 vertices (`features` is ndarray of length 1000),
belonging to 100 patches, the groups array (of length 1000) could have numbers 1 to 100 (number of unique values)
specifying which element belongs to which cortical patch.
Grouping with numerical values (contiguous from 1 to num_patches) is strongly recommended for simplicity,
but this could also be a list of strings of length p, in which case a tuple is returned,
identifying which weight belongs to which pair of patches.
weight_method : string or callable, optional
Type of distance (or metric) to compute between the pair of histograms.
It can either be a string identifying one of the weights implemented below, or a valid callable.
If a string, it must be one of the following methods:
- 'chebyshev'
- 'chebyshev_neg'
- 'chi_square'
- 'correlate'
- 'correlate_1'
- 'cosine'
- 'cosine_1'
- 'cosine_2'
- 'cosine_alt'
- 'euclidean'
- 'fidelity_based'
- 'histogram_intersection'
- 'histogram_intersection_1'
- 'jensen_shannon'
- 'kullback_leibler'
- 'manhattan'
- 'minowski'
- 'noelle_1'
- 'noelle_2'
- 'noelle_3'
- 'noelle_4'
- 'noelle_5'
- 'relative_bin_deviation'
- 'relative_deviation'
Note only the following are *metrics*:
- 'manhattan'
- 'minowski'
- 'euclidean'
- 'noelle_2'
- 'noelle_4'
- 'noelle_5'
The following are *semi- or quasi-metrics*:
- 'kullback_leibler'
- 'jensen_shannon'
- 'chi_square'
- 'chebyshev'
- 'cosine_1'
- 'chebyshev_neg'
- 'correlate_1'
- 'histogram_intersection_1'
- 'relative_deviation'
- 'relative_bin_deviation'
- 'noelle_1'
- 'noelle_3'
The following are classified as similarity functions:
- 'histogram_intersection'
- 'correlate'
- 'cosine'
- 'cosine_2'
- 'cosine_alt'
- 'fidelity_based'
*Default* choice: 'minowski'.
The method can also be one of the following identifying metrics that operate on the original data directly -
e.g. difference in the medians coming from the distributions of the pair of ROIs.
- 'diff_medians'
- 'diff_means'
- 'diff_medians_abs'
- 'diff_means_abs'
Please note this can lead to adjacency matrices that may not be symmetric
(e.g. the difference metric on two scalars is not symmetric).
In this case, be sure to use the flag: allow_non_symmetric=True
If weight_method is a callable, it must accept two arrays as input and return one scalar as output.
Example: ``diff_in_skew = lambda x, y: abs(scipy.stats.skew(x)-scipy.stats.skew(y))``
NOTE: this method will be applied to histograms (not the original distribution of features from group/ROI).
In order to apply this callable directly on the original distribution (without trimming and histogram binning),
use ``use_original_distribution=True``.
num_bins : scalar, optional
Number of bins to use when computing histogram within each patch/group.
Note:
1) Please ensure same number of bins are used across different subjects
2) histogram shape can vary widely with number of bins (esp with fewer bins in the range of 3-20), and hence the features extracted based on them vary also.
3) It is recommended to study the impact of this parameter on the final results of the experiment.
This could also be optimized within an inner cross-validation loop if desired.
edge_range : tuple or None
The range of edges within which to bin the given values.
This can be helpful to ensure correspondence across multiple invocations of hiwenet (for different subjects),
in terms of range across all bins as well as individual bin edges.
Default is to automatically compute from the given values.
Accepted format:
- tuple of finite values: (range_min, range_max)
- None, triggering automatic calculation (default)
Notes : when controlling the ``edge_range``, it is not possible to trim the tails (e.g. using the parameters
``trim_outliers`` and ``trim_percentile``) for the current set of features using its own range.
trim_outliers : bool, optional
Whether to trim a small percentile of outliers at the edges of feature range,
when features are expected to contain extreme outliers (like 0 or eps or Inf).
This is important to avoid numerical problems and also to stabilize the weight estimates.
trim_percentile : float
Small value specifying the percentile of outliers to trim.
Default: 5 (5%). Must be in open interval (0, 100).
use_original_distribution : bool, optional
When using a user-defined callable, this flag
1) allows skipping of pre-processing (trimming outliers) and histogram construction,
2) enables the application of arbitrary callable (user-defined) on the original distributions coming from the two groups/ROIs/nodes directly.
Example: ``diff_in_medians = lambda x, y: abs(np.median(x)-np.median(y))``
This option is valid only when weight_method is a valid callable,
which must take two inputs (possibly of different lengths) and return a single scalar.
relative_to_all : bool
Flag to instruct the computation of a grand histogram (distribution pooled from values in all ROIs),
and compute distances (based on the distance specified by ``weight_method``) from each ROI to the grand histogram.
This would result in only N distances for N ROIs, instead of the usual N*(N-1) pair-wise distances.
asymmetric : bool
Flag to identify that the resulting adjacency matrix is expected to be non-symmetric.
Note: this results in twice the computation time!
Default: False, as the histogram metrics implemented here are symmetric.
return_networkx_graph : bool, optional
Specifies the need for a networkx graph populated with weights computed. Default: False.
out_weights_path : str, optional
Where to save the extracted weight matrix. If networkx output is returned, it would be saved in GraphML format.
Default: nothing saved unless instructed.
Returns
-------
edge_weights : ndarray
numpy 2d array of pair-wise edge-weights (of size: num_groups x num_groups),
wherein num_groups is determined by the total number of unique values in `groups`.
**Note**:
- Only the upper triangular matrix is filled as the distance between node i and j would be the same as j and i.
- The edge weights from the upper triangular matrix can easily be obtained by
.. code-block:: python
weights_array = edge_weights[ np.triu_indices_from(edge_weights, 1) ] | [
"Extracts",
"the",
"histogram",
"-",
"distance",
"weighted",
"adjacency",
"matrix",
"."
] | python | train |
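The `extract` docstring above describes pairwise histogram distances between ROIs with shared bin edges, of which only the upper triangle is kept. A minimal pure-Python sketch of that idea (hypothetical ROI values, Manhattan distance standing in for the configurable `weight_method`; not the hiwenet implementation itself):

```python
from itertools import combinations

def manhattan(p, q):
    # L1 distance between two normalized histograms
    return sum(abs(a - b) for a, b in zip(p, q))

def histogram(values, num_bins, edge_range):
    # fixed bin edges shared across all groups, as the docstring recommends
    lo, hi = edge_range
    counts = [0] * num_bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * num_bins), num_bins - 1)
        counts[idx] += 1
    return [c / len(values) for c in counts]

# hypothetical feature values for three ROIs
groups = {
    "roi_a": [0.1, 0.2, 0.2, 0.3],
    "roi_b": [0.6, 0.7, 0.8, 0.8],
    "roi_c": [0.1, 0.5, 0.9, 0.9],
}
hists = {k: histogram(v, num_bins=4, edge_range=(0.0, 1.0)) for k, v in groups.items()}
# only the upper triangle: the distance between i and j equals that between j and i
weights = {pair: manhattan(hists[pair[0]], hists[pair[1]])
           for pair in combinations(sorted(groups), 2)}
print(len(weights))  # 3
```

For N ROIs this yields N*(N-1)/2 upper-triangular weights, matching the note about `np.triu_indices_from` in the docstring.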
alpha-xone/xbbg | xbbg/blp.py | https://github.com/alpha-xone/xbbg/blob/70226eb19a72a08144b5d8cea9db4913200f7bc5/xbbg/blp.py#L260-L347 | def bdib(ticker, dt, typ='TRADE', **kwargs) -> pd.DataFrame:
"""
Bloomberg intraday bar data
Args:
ticker: ticker name
dt: date to download
typ: [TRADE, BID, ASK, BID_BEST, ASK_BEST, BEST_BID, BEST_ASK]
**kwargs:
batch: whether is batch process to download data
log: level of logs
Returns:
pd.DataFrame
"""
from xbbg.core import missing
logger = logs.get_logger(bdib, level=kwargs.pop('log', logs.LOG_LEVEL))
t_1 = pd.Timestamp('today').date() - pd.Timedelta('1D')
whole_day = pd.Timestamp(dt).date() < t_1
batch = kwargs.pop('batch', False)
if (not whole_day) and batch:
logger.warning(f'querying date {t_1} is too close, ignoring download ...')
return pd.DataFrame()
cur_dt = pd.Timestamp(dt).strftime('%Y-%m-%d')
asset = ticker.split()[-1]
info_log = f'{ticker} / {cur_dt} / {typ}'
if asset in ['Equity', 'Curncy', 'Index', 'Comdty']:
exch = const.exch_info(ticker=ticker)
if exch.empty: return pd.DataFrame()
else:
logger.error(f'unknown asset type: {asset}')
return pd.DataFrame()
time_fmt = '%Y-%m-%dT%H:%M:%S'
time_idx = pd.DatetimeIndex([
f'{cur_dt} {exch.allday[0]}', f'{cur_dt} {exch.allday[-1]}']
).tz_localize(exch.tz).tz_convert(DEFAULT_TZ).tz_convert('UTC')
if time_idx[0] > time_idx[1]: time_idx -= pd.TimedeltaIndex(['1D', '0D'])
q_tckr = ticker
if exch.get('is_fut', False):
if 'freq' not in exch:
logger.error(f'[freq] missing in info for {info_log} ...')
is_sprd = exch.get('has_sprd', False) and (len(ticker[:-1]) != exch['tickers'][0])
if not is_sprd:
q_tckr = fut_ticker(gen_ticker=ticker, dt=dt, freq=exch['freq'])
if q_tckr == '':
logger.error(f'cannot find futures ticker for {ticker} ...')
return pd.DataFrame()
info_log = f'{q_tckr} / {cur_dt} / {typ}'
miss_kw = dict(ticker=ticker, dt=dt, typ=typ, func='bdib')
cur_miss = missing.current_missing(**miss_kw)
if cur_miss >= 2:
if batch: return pd.DataFrame()
logger.info(f'{cur_miss} trials with no data {info_log}')
return pd.DataFrame()
logger.info(f'loading data from Bloomberg: {info_log} ...')
con, _ = create_connection()
try:
data = con.bdib(
ticker=q_tckr, event_type=typ, interval=1,
start_datetime=time_idx[0].strftime(time_fmt),
end_datetime=time_idx[1].strftime(time_fmt),
)
except KeyError:
# Ignores missing data errors from pdblp library
# Warning msg will be displayed later
data = pd.DataFrame()
if not isinstance(data, pd.DataFrame):
raise ValueError(f'unknown output format: {type(data)}')
if data.empty:
logger.warning(f'no data for {info_log} ...')
missing.update_missing(**miss_kw)
return pd.DataFrame()
data = data.tz_localize('UTC').tz_convert(exch.tz)
storage.save_intraday(data=data, ticker=ticker, dt=dt, typ=typ)
return pd.DataFrame() if batch else assist.format_intraday(data=data, ticker=ticker) | [
"def",
"bdib",
"(",
"ticker",
",",
"dt",
",",
"typ",
"=",
"'TRADE'",
",",
"*",
"*",
"kwargs",
")",
"->",
"pd",
".",
"DataFrame",
":",
"from",
"xbbg",
".",
"core",
"import",
"missing",
"logger",
"=",
"logs",
".",
"get_logger",
"(",
"bdib",
",",
"lev... | Bloomberg intraday bar data
Args:
ticker: ticker name
dt: date to download
typ: [TRADE, BID, ASK, BID_BEST, ASK_BEST, BEST_BID, BEST_ASK]
**kwargs:
batch: whether is batch process to download data
log: level of logs
Returns:
pd.DataFrame | [
"Bloomberg",
"intraday",
"bar",
"data"
] | python | valid |
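The guard at the top of `bdib` skips batch downloads when the requested date is not strictly before yesterday. That check can be sketched with the standard library alone (the function name here is ours, not part of xbbg):

```python
from datetime import date, timedelta

def too_close_for_batch(dt, today):
    # mirrors: whole_day = dt < today - 1 day; batch runs only for whole days
    t_1 = today - timedelta(days=1)
    whole_day = dt < t_1
    return not whole_day

print(too_close_for_batch(date(2020, 1, 2), today=date(2020, 1, 10)))  # False
print(too_close_for_batch(date(2020, 1, 9), today=date(2020, 1, 10)))  # True
```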
Synerty/peek-plugin-base | peek_plugin_base/client/PeekPlatformDesktopHttpHookABC.py | https://github.com/Synerty/peek-plugin-base/blob/276101d028e1ee0678af514c761b74cce5a5cda9/peek_plugin_base/client/PeekPlatformDesktopHttpHookABC.py#L31-L42 | def addDesktopResource(self, pluginSubPath: bytes, resource: BasicResource) -> None:
""" Add Site Resource
Add a custom implementation of a served http resource.
:param pluginSubPath: The resource path where you want to serve this resource.
:param resource: The resource to serve.
:return: None
"""
pluginSubPath = pluginSubPath.strip(b'/')
self.__rootDesktopResource.putChild(pluginSubPath, resource) | [
"def",
"addDesktopResource",
"(",
"self",
",",
"pluginSubPath",
":",
"bytes",
",",
"resource",
":",
"BasicResource",
")",
"->",
"None",
":",
"pluginSubPath",
"=",
"pluginSubPath",
".",
"strip",
"(",
"b'/'",
")",
"self",
".",
"__rootDesktopResource",
".",
"putC... | Add Site Resource
Add a custom implementation of a served http resource.
:param pluginSubPath: The resource path where you want to serve this resource.
:param resource: The resource to serve.
:return: None | [
"Add",
"Site",
"Resource"
] | python | train |
crs4/pydoop | pydoop/hadut.py | https://github.com/crs4/pydoop/blob/f375be2a06f9c67eaae3ce6f605195dbca143b2b/pydoop/hadut.py#L194-L227 | def get_task_trackers(properties=None, hadoop_conf_dir=None, offline=False):
"""
Get the list of task trackers in the Hadoop cluster.
Each element in the returned list is in the ``(host, port)`` format.
All arguments are passed to :func:`run_class`.
If ``offline`` is :obj:`True`, try getting the list of task trackers from
the ``slaves`` file in Hadoop's configuration directory (no attempt is
made to contact the Hadoop daemons). In this case, ports are set to 0.
"""
if offline:
if not hadoop_conf_dir:
hadoop_conf_dir = pydoop.hadoop_conf()
slaves = os.path.join(hadoop_conf_dir, "slaves")
try:
with open(slaves) as f:
task_trackers = [(l.strip(), 0) for l in f]
except IOError:
task_trackers = []
else:
# run JobClient directly (avoids "hadoop job" deprecation)
stdout = run_class(
"org.apache.hadoop.mapred.JobClient", ["-list-active-trackers"],
properties=properties, hadoop_conf_dir=hadoop_conf_dir,
keep_streams=True
)
task_trackers = []
for line in stdout.splitlines():
if not line:
continue
line = line.split(":")
task_trackers.append((line[0].split("_")[1], int(line[-1])))
return task_trackers | [
"def",
"get_task_trackers",
"(",
"properties",
"=",
"None",
",",
"hadoop_conf_dir",
"=",
"None",
",",
"offline",
"=",
"False",
")",
":",
"if",
"offline",
":",
"if",
"not",
"hadoop_conf_dir",
":",
"hadoop_conf_dir",
"=",
"pydoop",
".",
"hadoop_conf",
"(",
")"... | Get the list of task trackers in the Hadoop cluster.
Each element in the returned list is in the ``(host, port)`` format.
All arguments are passed to :func:`run_class`.
If ``offline`` is :obj:`True`, try getting the list of task trackers from
the ``slaves`` file in Hadoop's configuration directory (no attempt is
made to contact the Hadoop daemons). In this case, ports are set to 0. | [
"Get",
"the",
"list",
"of",
"task",
"trackers",
"in",
"the",
"Hadoop",
"cluster",
"."
] | python | train |
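The offline branch of `get_task_trackers` parses the `slaves` file into `(host, 0)` tuples. A small sketch of that parse over hypothetical file content (this version also drops blank lines, which the original list comprehension does not):

```python
import io

# hypothetical slaves-file content; in offline mode only hostnames are known,
# so every port is reported as 0
slaves_file = io.StringIO("node1\nnode2\n\nnode3\n")
task_trackers = [(line.strip(), 0) for line in slaves_file if line.strip()]
print(task_trackers)  # [('node1', 0), ('node2', 0), ('node3', 0)]
```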
ReFirmLabs/binwalk | src/binwalk/plugins/unpfs.py | https://github.com/ReFirmLabs/binwalk/blob/a0c5315fd2bae167e5c3d8469ce95d5defc743c2/src/binwalk/plugins/unpfs.py#L42-L45 | def _get_node(self):
"""Reads a chunk of meta data from file and returns a PFSNode."""
data = self.meta.read(self.node_size)
return PFSNode(data, self.endianness) | [
"def",
"_get_node",
"(",
"self",
")",
":",
"data",
"=",
"self",
".",
"meta",
".",
"read",
"(",
"self",
".",
"node_size",
")",
"return",
"PFSNode",
"(",
"data",
",",
"self",
".",
"endianness",
")"
] | Reads a chunk of meta data from file and returns a PFSNode. | [
"Reads",
"a",
"chunk",
"of",
"meta",
"data",
"from",
"file",
"and",
"returns",
"a",
"PFSNode",
"."
] | python | train |
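`_get_node` reads fixed-size chunks of meta data from a file object. The underlying pattern — read records of `node_size` bytes until a short read — can be sketched over an in-memory buffer (the record size and bytes here are made up; binwalk derives `node_size` from the PFS header):

```python
import io

NODE_SIZE = 8  # hypothetical fixed record size
meta = io.BytesIO(b"\x01\x00\x00\x00AAAA" b"\x02\x00\x00\x00BBBB" b"tail")
nodes = []
while True:
    data = meta.read(NODE_SIZE)
    if len(data) < NODE_SIZE:  # short read: end of the meta-data region
        break
    nodes.append(data)
print(len(nodes))  # 2
```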
HiPERCAM/hcam_widgets | hcam_widgets/widgets.py | https://github.com/HiPERCAM/hcam_widgets/blob/7219f0d96dd3a8ebe3139c7f542a72c02d02fce8/hcam_widgets/widgets.py#L3873-L3885 | def freeze(self):
"""
Freeze (disable) all settings
"""
for fields in zip(self.xsll, self.xsul, self.xslr, self.xsur,
self.ys, self.nx, self.ny):
for field in fields:
field.disable()
self.nquad.disable()
self.xbin.disable()
self.ybin.disable()
self.sbutt.disable()
self.frozen = True | [
"def",
"freeze",
"(",
"self",
")",
":",
"for",
"fields",
"in",
"zip",
"(",
"self",
".",
"xsll",
",",
"self",
".",
"xsul",
",",
"self",
".",
"xslr",
",",
"self",
".",
"xsur",
",",
"self",
".",
"ys",
",",
"self",
".",
"nx",
",",
"self",
".",
"n... | Freeze (disable) all settings | [
"Freeze",
"(",
"disable",
")",
"all",
"settings"
] | python | train |
tensorflow/probability | tensorflow_probability/python/vi/csiszar_divergence.py | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/vi/csiszar_divergence.py#L360-L389 | def pearson(logu, name=None):
"""The Pearson Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Pearson Csiszar-function is:
```none
f(u) = (u - 1)**2
```
Warning: this function makes non-log-space calculations and may therefore be
numerically unstable for `|logu| >> 0`.
Args:
logu: `float`-like `Tensor` representing `log(u)` from above.
name: Python `str` name prefixed to Ops created by this function.
Returns:
pearson_of_u: `float`-like `Tensor` of the Csiszar-function evaluated at
`u = exp(logu)`.
"""
with tf.compat.v1.name_scope(name, "pearson", [logu]):
logu = tf.convert_to_tensor(value=logu, name="logu")
return tf.square(tf.math.expm1(logu)) | [
"def",
"pearson",
"(",
"logu",
",",
"name",
"=",
"None",
")",
":",
"with",
"tf",
".",
"compat",
".",
"v1",
".",
"name_scope",
"(",
"name",
",",
"\"pearson\"",
",",
"[",
"logu",
"]",
")",
":",
"logu",
"=",
"tf",
".",
"convert_to_tensor",
"(",
"value... | The Pearson Csiszar-function in log-space.
A Csiszar-function is a member of,
```none
F = { f:R_+ to R : f convex }.
```
The Pearson Csiszar-function is:
```none
f(u) = (u - 1)**2
```
Warning: this function makes non-log-space calculations and may therefore be
numerically unstable for `|logu| >> 0`.
Args:
logu: `float`-like `Tensor` representing `log(u)` from above.
name: Python `str` name prefixed to Ops created by this function.
Returns:
pearson_of_u: `float`-like `Tensor` of the Csiszar-function evaluated at
`u = exp(logu)`. | [
"The",
"Pearson",
"Csiszar",
"-",
"function",
"in",
"log",
"-",
"space",
"."
] | python | test |
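The Pearson row computes `f(u) = (u - 1)**2` in log-space as `square(expm1(logu))`. That identity can be checked with the standard-library `math` module, without TensorFlow (the helper name is ours):

```python
import math

def pearson_logspace(logu):
    # f(u) = (u - 1)**2 evaluated from log(u); expm1 keeps precision near u = 1
    return math.expm1(logu) ** 2

for u in (0.5, 1.0, 2.0, 10.0):
    assert math.isclose(pearson_logspace(math.log(u)), (u - 1.0) ** 2)
print(round(pearson_logspace(math.log(3.0)), 6))  # 4.0
```

`expm1` computes `exp(x) - 1` accurately for small `x`, which is where the naive `exp(logu) - 1` loses digits — hence the docstring's warning only concerns `|logu| >> 0`, not the region near zero.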
developersociety/django-glitter | glitter/templatetags/glitter.py | https://github.com/developersociety/django-glitter/blob/2c0280ec83afee80deee94ee3934fc54239c2e87/glitter/templatetags/glitter.py#L25-L49 | def glitter_startbody(context):
"""
Template tag which renders the glitter overlay and sidebar. This is only
shown to users with permission to edit the page.
"""
user = context.get('user')
path_body = 'glitter/include/startbody.html'
path_plus = 'glitter/include/startbody_%s_%s.html'
rendered = ''
if user is not None and user.is_staff:
templates = [path_body]
# We've got a page with a glitter object:
# - May need a different startbody template
# - Check if user has permission to add
glitter = context.get('glitter')
if glitter is not None:
opts = glitter.obj._meta.app_label, glitter.obj._meta.model_name
template_path = path_plus % opts
templates.insert(0, template_path)
template = context.template.engine.select_template(templates)
rendered = template.render(context)
return rendered | [
"def",
"glitter_startbody",
"(",
"context",
")",
":",
"user",
"=",
"context",
".",
"get",
"(",
"'user'",
")",
"path_body",
"=",
"'glitter/include/startbody.html'",
"path_plus",
"=",
"'glitter/include/startbody_%s_%s.html'",
"rendered",
"=",
"''",
"if",
"user",
"is",... | Template tag which renders the glitter overlay and sidebar. This is only
shown to users with permission to edit the page. | [
"Template",
"tag",
"which",
"renders",
"the",
"glitter",
"overlay",
"and",
"sidebar",
".",
"This",
"is",
"only",
"shown",
"to",
"users",
"with",
"permission",
"to",
"edit",
"the",
"page",
"."
] | python | train |
zyga/python-glibc | tempfile_ext.py | https://github.com/zyga/python-glibc/blob/d6fdb306b123a995471584a5201155c60a34448a/tempfile_ext.py#L333-L355 | def _mkstemp_inner(dir, pre, suf, flags):
"""Code common to mkstemp, TemporaryFile, and NamedTemporaryFile."""
names = _get_candidate_names()
for seq in range(TMP_MAX):
name = next(names)
file = _os.path.join(dir, pre + name + suf)
try:
fd = _os.open(file, flags, 0o600)
return (fd, _os.path.abspath(file))
except FileExistsError:
continue # try again
except PermissionError:
# This exception is thrown when a directory with the chosen name
# already exists on windows.
if _os.name == 'nt':
continue
else:
raise
raise FileExistsError(_errno.EEXIST,
"No usable temporary file name found") | [
"def",
"_mkstemp_inner",
"(",
"dir",
",",
"pre",
",",
"suf",
",",
"flags",
")",
":",
"names",
"=",
"_get_candidate_names",
"(",
")",
"for",
"seq",
"in",
"range",
"(",
"TMP_MAX",
")",
":",
"name",
"=",
"next",
"(",
"names",
")",
"file",
"=",
"_os",
... | Code common to mkstemp, TemporaryFile, and NamedTemporaryFile. | [
"Code",
"common",
"to",
"mkstemp",
"TemporaryFile",
"and",
"NamedTemporaryFile",
"."
] | python | train |
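`_mkstemp_inner` is the retry loop behind the public `tempfile.mkstemp`, which opens the file exclusively and hands ownership of the descriptor to the caller. Typical use of the public API:

```python
import os
import tempfile

# mkstemp returns a raw file descriptor plus the absolute path;
# cleanup is the caller's responsibility
fd, path = tempfile.mkstemp(prefix="demo_", suffix=".txt")
try:
    with os.fdopen(fd, "w") as fh:
        fh.write("hello")
    with open(path) as fh:
        print(fh.read())  # hello
finally:
    os.remove(path)
```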
evansde77/dockerstache | src/dockerstache/__main__.py | https://github.com/evansde77/dockerstache/blob/929c102e9fffde322dbf17f8e69533a00976aacb/src/dockerstache/__main__.py#L68-L86 | def main():
"""
_main_
Create a CLI parser and use that to run
the template rendering process
"""
options = build_parser()
try:
run(**options)
except RuntimeError as ex:
msg = (
"An error occurred running dockerstache: {} "
"please see logging info above for details"
).format(ex)
LOGGER.error(msg)
sys.exit(1) | [
"def",
"main",
"(",
")",
":",
"options",
"=",
"build_parser",
"(",
")",
"try",
":",
"run",
"(",
"*",
"*",
"options",
")",
"except",
"RuntimeError",
"as",
"ex",
":",
"msg",
"=",
"(",
"\"An error occurred running dockerstache: {} \"",
"\"please see logging info ab... | _main_
Create a CLI parser and use that to run
the template rendering process | [
"_main_"
] | python | train |
ixc/django-model-settings | model_settings/templatetags/model_settings_tags.py | https://github.com/ixc/django-model-settings/blob/654233bf7f13619e4531741f9158e7034eac031b/model_settings/templatetags/model_settings_tags.py#L102-L113 | def render_tag(self, context, name, nodelist):
"""
Returns the value of the named setting.
"""
# Use `try` and `except` instead of `setdefault()` so we can skip
# rendering the nodelist when the setting already exists.
settings = self.setting_model.objects.filter(name=name).as_dict()
try:
value = settings[name]
except KeyError:
value = settings[name] = nodelist.render(context)
return value | [
"def",
"render_tag",
"(",
"self",
",",
"context",
",",
"name",
",",
"nodelist",
")",
":",
"# Use `try` and `except` instead of `setdefault()` so we can skip",
"# rendering the nodelist when the setting already exists.",
"settings",
"=",
"self",
".",
"setting_model",
".",
"obj... | Returns the value of the named setting. | [
"Returns",
"the",
"value",
"of",
"the",
"named",
"setting",
"."
] | python | valid |
aiogram/aiogram | aiogram/dispatcher/dispatcher.py | https://github.com/aiogram/aiogram/blob/2af930149ce2482547721e2c8755c10307295e48/aiogram/dispatcher/dispatcher.py#L655-L677 | def chosen_inline_handler(self, *custom_filters, state=None, run_task=None, **kwargs):
"""
Decorator for chosen inline query handler
Example:
.. code-block:: python3
@dp.chosen_inline_handler(lambda chosen_inline_query: True)
async def some_chosen_inline_handler(chosen_inline_query: types.ChosenInlineResult)
:param state:
:param custom_filters:
:param run_task: run callback in task (no wait results)
:param kwargs:
:return:
"""
def decorator(callback):
self.register_chosen_inline_handler(callback, *custom_filters, state=state, run_task=run_task, **kwargs)
return callback
return decorator | [
"def",
"chosen_inline_handler",
"(",
"self",
",",
"*",
"custom_filters",
",",
"state",
"=",
"None",
",",
"run_task",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"def",
"decorator",
"(",
"callback",
")",
":",
"self",
".",
"register_chosen_inline_handler",... | Decorator for chosen inline query handler
Example:
.. code-block:: python3
@dp.chosen_inline_handler(lambda chosen_inline_query: True)
async def some_chosen_inline_handler(chosen_inline_query: types.ChosenInlineResult)
:param state:
:param custom_filters:
:param run_task: run callback in a task (do not wait for results)
:param kwargs:
:return: | [
"Decorator",
"for",
"chosen",
"inline",
"query",
"handler"
] | python | train |
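`chosen_inline_handler` is a decorator factory: it registers the callback and returns it unchanged so the decorated function stays callable. The pattern, reduced to plain Python (the registry and names here are illustrative, not aiogram's internals):

```python
handlers = []

def chosen_inline_handler(*custom_filters, state=None):
    # decorator factory: register the callback, then return it unchanged
    def decorator(callback):
        handlers.append((callback, custom_filters, state))
        return callback
    return decorator

@chosen_inline_handler(lambda q: True, state="any_state")
def on_chosen(query):
    return "chosen: " + query

print(len(handlers))          # 1
print(on_chosen("result-1"))  # chosen: result-1
```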
datastax/python-driver | cassandra/policies.py | https://github.com/datastax/python-driver/blob/30a80d0b798b1f45f8cb77163b1fa791f3e3ca29/cassandra/policies.py#L998-L1011 | def translate(self, addr):
"""
Reverse DNS the public broadcast_address, then lookup that hostname to get the AWS-resolved IP, which
will point to the private IP address within the same datacenter.
"""
# get family of this address so we translate to the same
family = socket.getaddrinfo(addr, 0, socket.AF_UNSPEC, socket.SOCK_STREAM)[0][0]
host = socket.getfqdn(addr)
for a in socket.getaddrinfo(host, 0, family, socket.SOCK_STREAM):
try:
return a[4][0]
except Exception:
pass
return addr | [
"def",
"translate",
"(",
"self",
",",
"addr",
")",
":",
"# get family of this address so we translate to the same",
"family",
"=",
"socket",
".",
"getaddrinfo",
"(",
"addr",
",",
"0",
",",
"socket",
".",
"AF_UNSPEC",
",",
"socket",
".",
"SOCK_STREAM",
")",
"[",
... | Reverse DNS the public broadcast_address, then lookup that hostname to get the AWS-resolved IP, which
will point to the private IP address within the same datacenter. | [
"Reverse",
"DNS",
"the",
"public",
"broadcast_address",
"then",
"lookup",
"that",
"hostname",
"to",
"get",
"the",
"AWS",
"-",
"resolved",
"IP",
"which",
"will",
"point",
"to",
"the",
"private",
"IP",
"address",
"within",
"the",
"same",
"datacenter",
"."
] | python | train |