| repo | path | func_name | original_string | language | code | code_tokens | docstring | docstring_tokens | sha | url | partition | idx |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_column_to_exist | def expect_column_to_exist(
self, column, column_index=None, result_format=None, include_config=False,
catch_exceptions=None, meta=None
):
"""Expect the specified column to exist.
expect_column_to_exist is a :func:`expectation <great_expectations.data_asset.dataset.Dataset.expectation>`, not a \
`column_map_expectation` or `column_aggregate_expectation`.
Args:
column (str): \
The column name.
Other Parameters:
column_index (int or None): \
If not None, checks the order of the columns. The expectation will fail if the \
column is not in location column_index (zero-indexed).
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
"""
raise NotImplementedError | python | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L111-L149 | train | 216,900 |
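The base `Dataset` class leaves `expect_column_to_exist` unimplemented; concrete backends supply the logic. As a rough sketch of the documented semantics only — the body below is illustrative and operates on a plain list of column names, not on the library's `Dataset` object:

```python
def expect_column_to_exist(columns, column, column_index=None):
    """Illustrative sketch: presence check, plus an optional zero-indexed
    positional check, mirroring the docstring's column_index behavior."""
    if column not in columns:
        return {"success": False}
    # If column_index is given, the column must sit at exactly that position.
    if column_index is not None and columns.index(column) != column_index:
        return {"success": False}
    return {"success": True}
```

In the real library the call is a method on a dataset instance (e.g. `my_df.expect_column_to_exist("my_col")`) and returns a richer result object.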
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_table_row_count_to_be_between | def expect_table_row_count_to_be_between(self,
min_value=0,
max_value=None,
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""Expect the number of rows to be between two values.
expect_table_row_count_to_be_between is a :func:`expectation <great_expectations.data_asset.dataset.Dataset.expectation>`, \
not a `column_map_expectation` or `column_aggregate_expectation`.
Keyword Args:
min_value (int or None): \
The minimum number of rows, inclusive.
max_value (int or None): \
The maximum number of rows, inclusive.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
Notes:
* min_value and max_value are both inclusive.
* If min_value is None, then max_value is treated as an upper bound, and the number of acceptable rows has no minimum.
* If max_value is None, then min_value is treated as a lower bound, and the number of acceptable rows has no maximum.
See Also:
expect_table_row_count_to_equal
"""
raise NotImplementedError | python | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L188-L232 | train | 216,901 |
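The Notes section above pins down the bound semantics: both bounds inclusive, and a `None` bound means unbounded on that side. A minimal sketch of just that logic, taking the row count as a plain integer rather than computing it from a dataset (the `observed_value` result field is illustrative):

```python
def expect_table_row_count_to_be_between(row_count, min_value=0, max_value=None):
    """Illustrative sketch: inclusive bounds; None disables that bound."""
    success = True
    if min_value is not None and row_count < min_value:
        success = False
    if max_value is not None and row_count > max_value:
        success = False
    return {"success": success, "result": {"observed_value": row_count}}
```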
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_column_values_to_be_of_type | def expect_column_values_to_be_of_type(
self,
column,
type_,
mostly=None,
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""Expect each column entry to be a specified data type.
expect_column_values_to_be_of_type is a :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>`.
Args:
column (str): \
The column name.
type\_ (str): \
A string representing the data type that each column should have as entries.
For example, "double integer" refers to an integer with double precision.
Keyword Args:
mostly (None or a float between 0 and 1): \
Return `"success": True` if at least mostly percent of values match the expectation. \
For more detail, see :ref:`mostly`.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
Warning:
expect_column_values_to_be_of_type is slated for major changes in future versions of great_expectations.
As of v0.3, great_expectations is exclusively based on pandas, which handles typing in its own peculiar way.
Future versions of great_expectations will allow for Datasets in SQL, spark, etc.
When we make that change, we expect some breaking changes in parts of the codebase that are based strongly on pandas notions of typing.
See also:
expect_column_values_to_be_in_type_list
"""
raise NotImplementedError | python | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L409-L462 | train | 216,902 |
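As the Warning notes, type handling here is backend-specific. Purely to illustrate how the `mostly` threshold interacts with a per-value type check, here is a sketch over plain Python values; the `type_map` is an assumed, illustrative subset and is not how the library resolves type names:

```python
def expect_column_values_to_be_of_type(values, type_, mostly=None):
    """Illustrative sketch: fraction of non-null values of the named type
    must meet the mostly threshold (default 1.0, i.e. all of them)."""
    # Assumed mapping from type-name strings to Python types.
    type_map = {"int": int, "float": float, "str": str, "bool": bool}
    expected = type_map[type_]
    nonnull = [v for v in values if v is not None]
    # Exact-type check, so True/False do not count as "int".
    matches = [type(v) is expected for v in nonnull]
    fraction = sum(matches) / len(nonnull) if nonnull else 1.0
    threshold = mostly if mostly is not None else 1.0
    return {"success": fraction >= threshold,
            "result": {"unexpected_count": matches.count(False)}}
```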
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_column_values_to_be_in_type_list | def expect_column_values_to_be_in_type_list(
self,
column,
type_list,
mostly=None,
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""Expect each column entry to match a list of specified data types.
expect_column_values_to_be_in_type_list is a :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>`.
Args:
column (str): \
The column name.
type_list (list of str): \
A list of strings representing the data type that each column should have as entries.
For example, "double integer" refers to an integer with double precision.
Keyword Args:
mostly (None or a float between 0 and 1): \
Return `"success": True` if at least mostly percent of values match the expectation. \
For more detail, see :ref:`mostly`.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
Warning:
expect_column_values_to_be_in_type_list is slated for major changes in future versions of great_expectations.
As of v0.3, great_expectations is exclusively based on pandas, which handles typing in its own peculiar way.
Future versions of great_expectations will allow for Datasets in SQL, spark, etc.
When we make that change, we expect some breaking changes in parts of the codebase that are based strongly on pandas notions of typing.
See also:
expect_column_values_to_be_of_type
"""
raise NotImplementedError | python | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L464-L517 | train | 216,903 |
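The difference from `expect_column_values_to_be_of_type` is that each value may match *any* type in the list. A sketch of that union semantics under the same illustrative, assumed type-name mapping:

```python
def expect_column_values_to_be_in_type_list(values, type_list, mostly=None):
    """Illustrative sketch: each non-null value must match one of the
    named types; mostly relaxes the required fraction of matches."""
    type_map = {"int": int, "float": float, "str": str, "bool": bool}  # assumed subset
    allowed = tuple(type_map[t] for t in type_list)
    nonnull = [v for v in values if v is not None]
    matches = [type(v) in allowed for v in nonnull]
    fraction = sum(matches) / len(nonnull) if nonnull else 1.0
    return {"success": fraction >= (mostly if mostly is not None else 1.0)}
```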
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_column_values_to_be_in_set | def expect_column_values_to_be_in_set(self,
column,
value_set,
mostly=None,
parse_strings_as_datetimes=None,
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""Expect each column value to be in a given set.
For example:
::
# my_df.my_col = [1,2,2,3,3,3]
>>> my_df.expect_column_values_to_be_in_set(
"my_col",
[2,3]
)
{
"success": false
"result": {
"unexpected_count": 1
"unexpected_percent": 0.16666666666666666,
"unexpected_percent_nonmissing": 0.16666666666666666,
"partial_unexpected_list": [
1
],
},
}
expect_column_values_to_be_in_set is a :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>`.
Args:
column (str): \
The column name.
value_set (set-like): \
A set of objects used for comparison.
Keyword Args:
mostly (None or a float between 0 and 1): \
Return `"success": True` if at least mostly percent of values match the expectation. \
For more detail, see :ref:`mostly`.
parse_strings_as_datetimes (boolean or None) : If True values provided in value_set will be parsed as \
datetimes before making comparisons.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
See Also:
expect_column_values_to_not_be_in_set
"""
raise NotImplementedError | python | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L521-L589 | train | 216,904 |
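The worked example in the docstring ([1,2,2,3,3,3] against {2,3}) can be reproduced with a small sketch of the membership semantics. This is not the library's implementation; the result fields mirror the names shown in the example above, and the percent is computed over non-null values:

```python
def expect_column_values_to_be_in_set(values, value_set, mostly=None):
    """Illustrative sketch: flag non-null values outside value_set."""
    nonnull = [v for v in values if v is not None]
    unexpected = [v for v in nonnull if v not in value_set]
    fraction_unexpected = len(unexpected) / len(nonnull) if nonnull else 0.0
    threshold = mostly if mostly is not None else 1.0
    return {
        "success": (1.0 - fraction_unexpected) >= threshold,
        "result": {
            "unexpected_count": len(unexpected),
            "unexpected_percent": fraction_unexpected,
            # Distinct offenders only, capped, roughly like the partial list above.
            "partial_unexpected_list": sorted(set(unexpected), key=repr)[:20],
        },
    }
```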
repo: great-expectations/great_expectations
path: great_expectations/dataset/dataset.py
func_name: Dataset.expect_column_values_to_be_decreasing
language: python
sha: 08385c40529d4f14a1c46916788aecc47f33ee9d
url: https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L776-L830
partition: train
idx: 216,905
code:

def expect_column_values_to_be_decreasing(self,
                                          column,
                                          strictly=None,
                                          parse_strings_as_datetimes=None,
                                          mostly=None,
                                          result_format=None, include_config=False,
                                          catch_exceptions=None, meta=None
                                          ):
    """Expect column values to be decreasing.

    By default, this expectation only works for numeric or datetime data.
    When `parse_strings_as_datetimes=True`, it can also parse strings to datetimes.

    If `strictly=True`, then this expectation is only satisfied if each consecutive value
    is strictly decreasing--equal values are treated as failures.

    expect_column_values_to_be_decreasing is a :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>`.

    Args:
        column (str): \
            The column name.

    Keyword Args:
        strictly (Boolean or None): \
            If True, values must be strictly less than previous values
        parse_strings_as_datetimes (boolean or None): \
            If True, parse all non-null column values as datetimes before making comparisons
        mostly (None or a float between 0 and 1): \
            Return `"success": True` if at least mostly percent of values match the expectation. \
            For more detail, see :ref:`mostly`.

    Other Parameters:
        result_format (str or None): \
            Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
            For more detail, see :ref:`result_format <result_format>`.
        include_config (boolean): \
            If True, then include the expectation config as part of the result object. \
            For more detail, see :ref:`include_config`.
        catch_exceptions (boolean or None): \
            If True, then catch exceptions and include them as part of the result object. \
            For more detail, see :ref:`catch_exceptions`.
        meta (dict or None): \
            A JSON-serializable dictionary (nesting allowed) that will be included in the output without \
            modification. For more detail, see :ref:`meta`.

    Returns:
        A JSON-serializable expectation result object.

        Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
        :ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.

    See Also:
        expect_column_values_to_be_increasing
    """
    raise NotImplementedError
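The decreasing check above reduces to a pairwise comparison between each value and its predecessor. A minimal sketch of that logic (not the great_expectations implementation; the function name is invented, and `mostly`/datetime parsing are omitted):

```python
def values_decreasing(values, strictly=False):
    """True when each value is <= its predecessor (< when strictly=True)."""
    pairs = zip(values, values[1:])  # consecutive (earlier, later) pairs
    if strictly:
        # Equal consecutive values count as failures.
        return all(later < earlier for earlier, later in pairs)
    return all(later <= earlier for earlier, later in pairs)
```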
repo: great-expectations/great_expectations
path: great_expectations/dataset/dataset.py
func_name: Dataset.expect_column_values_to_match_regex_list
language: python
sha: 08385c40529d4f14a1c46916788aecc47f33ee9d
url: https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L1035-L1086
partition: train
idx: 216,906
code:

def expect_column_values_to_match_regex_list(self,
                                             column,
                                             regex_list,
                                             match_on="any",
                                             mostly=None,
                                             result_format=None, include_config=False,
                                             catch_exceptions=None, meta=None
                                             ):
    """Expect the column entries to be strings that can be matched to either any of or all of a
    list of regular expressions. Matches can be anywhere in the string.

    expect_column_values_to_match_regex_list is a :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>`.

    Args:
        column (str): \
            The column name.
        regex_list (list): \
            The list of regular expressions which the column entries should match

    Keyword Args:
        match_on (string): \
            "any" or "all".
            Use "any" if the value should match at least one regular expression in the list.
            Use "all" if it should match each regular expression in the list.
        mostly (None or a float between 0 and 1): \
            Return `"success": True` if at least mostly percent of values match the expectation. \
            For more detail, see :ref:`mostly`.

    Other Parameters:
        result_format (str or None): \
            Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
            For more detail, see :ref:`result_format <result_format>`.
        include_config (boolean): \
            If True, then include the expectation config as part of the result object. \
            For more detail, see :ref:`include_config`.
        catch_exceptions (boolean or None): \
            If True, then catch exceptions and include them as part of the result object. \
            For more detail, see :ref:`catch_exceptions`.
        meta (dict or None): \
            A JSON-serializable dictionary (nesting allowed) that will be included in the output without \
            modification. For more detail, see :ref:`meta`.

    Returns:
        A JSON-serializable expectation result object.

        Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
        :ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.

    See Also:
        expect_column_values_to_match_regex
        expect_column_values_to_not_match_regex
    """
    raise NotImplementedError
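The `match_on` keyword above switches between at-least-one and every-pattern semantics, with matches allowed anywhere in the string (i.e. `re.search`, not `re.match`). A per-value sketch of that behavior, with an invented function name and no `mostly` handling:

```python
import re

def matches_regex_list(value, regex_list, match_on="any"):
    """'any': at least one pattern is found somewhere in the string;
    'all': every pattern must be found."""
    found = [re.search(pattern, value) is not None for pattern in regex_list]
    if match_on == "any":
        return any(found)
    if match_on == "all":
        return all(found)
    raise ValueError("match_on must be 'any' or 'all'")
```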
repo: great-expectations/great_expectations
path: great_expectations/dataset/dataset.py
func_name: Dataset.expect_column_values_to_not_match_regex_list
language: python
sha: 08385c40529d4f14a1c46916788aecc47f33ee9d
url: https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L1088-L1131
partition: train
idx: 216,907
code:

def expect_column_values_to_not_match_regex_list(self, column, regex_list,
                                                 mostly=None,
                                                 result_format=None, include_config=False,
                                                 catch_exceptions=None, meta=None):
    """Expect the column entries to be strings that do not match any of a list of regular
    expressions. Matches can be anywhere in the string.

    expect_column_values_to_not_match_regex_list is a :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>`.

    Args:
        column (str): \
            The column name.
        regex_list (list): \
            The list of regular expressions which the column entries should not match

    Keyword Args:
        mostly (None or a float between 0 and 1): \
            Return `"success": True` if at least mostly percent of values match the expectation. \
            For more detail, see :ref:`mostly`.

    Other Parameters:
        result_format (str or None): \
            Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
            For more detail, see :ref:`result_format <result_format>`.
        include_config (boolean): \
            If True, then include the expectation config as part of the result object. \
            For more detail, see :ref:`include_config`.
        catch_exceptions (boolean or None): \
            If True, then catch exceptions and include them as part of the result object. \
            For more detail, see :ref:`catch_exceptions`.
        meta (dict or None): \
            A JSON-serializable dictionary (nesting allowed) that will be included in the output without \
            modification. For more detail, see :ref:`meta`.

    Returns:
        A JSON-serializable expectation result object.

        Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
        :ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.

    See Also:
        expect_column_values_to_match_regex_list
    """
    raise NotImplementedError
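This expectation is the negation of the previous one: a value passes only when no pattern in the list matches it anywhere. A per-value sketch (invented name, `mostly` omitted):

```python
import re

def matches_no_regex(value, regex_list):
    """True only when none of the patterns match anywhere in the string."""
    return not any(re.search(pattern, value) for pattern in regex_list)
```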
repo: great-expectations/great_expectations
path: great_expectations/dataset/dataset.py
func_name: Dataset.expect_column_values_to_match_strftime_format
language: python
sha: 08385c40529d4f14a1c46916788aecc47f33ee9d
url: https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L1135-L1177
partition: train
idx: 216,908
code:

def expect_column_values_to_match_strftime_format(self,
                                                  column,
                                                  strftime_format,
                                                  mostly=None,
                                                  result_format=None, include_config=False,
                                                  catch_exceptions=None, meta=None
                                                  ):
    """Expect column entries to be strings representing a date or time with a given format.

    expect_column_values_to_match_strftime_format is a :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>`.

    Args:
        column (str): \
            The column name.
        strftime_format (str): \
            A strftime format string to use for matching

    Keyword Args:
        mostly (None or a float between 0 and 1): \
            Return `"success": True` if at least mostly percent of values match the expectation. \
            For more detail, see :ref:`mostly`.

    Other Parameters:
        result_format (str or None): \
            Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
            For more detail, see :ref:`result_format <result_format>`.
        include_config (boolean): \
            If True, then include the expectation config as part of the result object. \
            For more detail, see :ref:`include_config`.
        catch_exceptions (boolean or None): \
            If True, then catch exceptions and include them as part of the result object. \
            For more detail, see :ref:`catch_exceptions`.
        meta (dict or None): \
            A JSON-serializable dictionary (nesting allowed) that will be included in the output without \
            modification. For more detail, see :ref:`meta`.

    Returns:
        A JSON-serializable expectation result object.

        Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
        :ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
    """
    raise NotImplementedError
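Matching a strftime format is naturally expressed as "does `datetime.strptime` accept the string with this format?" -- parse failures raise `ValueError`. A per-value sketch of that check (invented name, `mostly` omitted):

```python
from datetime import datetime

def matches_strftime_format(value, strftime_format):
    """True when strptime can parse the string with the given format."""
    try:
        datetime.strptime(value, strftime_format)
        return True
    except (ValueError, TypeError):
        # ValueError: string does not match the format; TypeError: non-string input.
        return False
```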
repo: great-expectations/great_expectations
path: great_expectations/dataset/dataset.py
func_name: Dataset.expect_column_values_to_be_dateutil_parseable
language: python
sha: 08385c40529d4f14a1c46916788aecc47f33ee9d
url: https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L1179-L1217
partition: train
idx: 216,909
code:

def expect_column_values_to_be_dateutil_parseable(self,
                                                  column,
                                                  mostly=None,
                                                  result_format=None, include_config=False,
                                                  catch_exceptions=None, meta=None
                                                  ):
    """Expect column entries to be parseable using dateutil.

    expect_column_values_to_be_dateutil_parseable is a :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>`.

    Args:
        column (str): \
            The column name.

    Keyword Args:
        mostly (None or a float between 0 and 1): \
            Return `"success": True` if at least mostly percent of values match the expectation. \
            For more detail, see :ref:`mostly`.

    Other Parameters:
        result_format (str or None): \
            Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
            For more detail, see :ref:`result_format <result_format>`.
        include_config (boolean): \
            If True, then include the expectation config as part of the result object. \
            For more detail, see :ref:`include_config`.
        catch_exceptions (boolean or None): \
            If True, then catch exceptions and include them as part of the result object. \
            For more detail, see :ref:`catch_exceptions`.
        meta (dict or None): \
            A JSON-serializable dictionary (nesting allowed) that will be included in the output without \
            modification. For more detail, see :ref:`meta`.

    Returns:
        A JSON-serializable expectation result object.

        Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
        :ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
    """
    raise NotImplementedError
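"Parseable" here again means "the parser accepts the string without raising". The real expectation relies on the third-party `dateutil.parser.parse`; to keep this sketch self-contained, the stdlib `datetime.fromisoformat` is injected as a stand-in parser (a much stricter parser than dateutil's):

```python
from datetime import datetime

def is_parseable(value, parse=datetime.fromisoformat):
    """True when the injected parser accepts the value without raising."""
    try:
        parse(value)
        return True
    except (TypeError, ValueError, OverflowError):
        return False
```

With dateutil installed, passing `parse=dateutil.parser.parse` would recover the documented behavior.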
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_column_values_to_match_json_schema | def expect_column_values_to_match_json_schema(self,
column,
json_schema,
mostly=None,
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""Expect column entries to be JSON objects matching a given JSON schema.
expect_column_values_to_match_json_schema is a :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>`.
Args:
column (str): \
The column name.
Keyword Args:
mostly (None or a float between 0 and 1): \
Return `"success": True` if at least mostly percent of values match the expectation. \
For more detail, see :ref:`mostly`.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
See Also:
expect_column_values_to_be_json_parseable
The JSON-schema docs at: http://json-schema.org/
"""
raise NotImplementedError | python | def expect_column_values_to_match_json_schema(self,
column,
json_schema,
mostly=None,
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""Expect column entries to be JSON objects matching a given JSON schema.
expect_column_values_to_match_json_schema is a :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>`.
Args:
column (str): \
The column name.
Keyword Args:
mostly (None or a float between 0 and 1): \
Return `"success": True` if at least mostly percent of values match the expectation. \
For more detail, see :ref:`mostly`.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
See Also:
expect_column_values_to_be_json_parseable
The JSON-schema docs at: http://json-schema.org/
"""
raise NotImplementedError | [
"def",
"expect_column_values_to_match_json_schema",
"(",
"self",
",",
"column",
",",
"json_schema",
",",
"mostly",
"=",
"None",
",",
"result_format",
"=",
"None",
",",
"include_config",
"=",
"False",
",",
"catch_exceptions",
"=",
"None",
",",
"meta",
"=",
"None"... | Expect column entries to be JSON objects matching a given JSON schema.
expect_column_values_to_match_json_schema is a :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>`.
Args:
column (str): \
The column name.
Keyword Args:
mostly (None or a float between 0 and 1): \
Return `"success": True` if at least `mostly` fraction of values match the expectation. \
For more detail, see :ref:`mostly`.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
See Also:
expect_column_values_to_be_json_parseable
The JSON-schema docs at: http://json-schema.org/ | [
"Expect",
"column",
"entries",
"to",
"be",
"JSON",
"objects",
"matching",
"a",
"given",
"JSON",
"schema",
"."
] | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L1262-L1306 | train | 216,910 |
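The `mostly` semantics in the docstring above can be sketched without pandas or a JSON-schema library. This is a minimal illustration, not the actual great_expectations implementation; the `is_valid` predicate and the `parses_with_age` schema check are hypothetical stand-ins for real JSON-schema validation (e.g. via the `jsonschema` package):

```python
import json

def match_json_schema_mostly(values, is_valid, mostly=None):
    # column_map_expectation sketch: apply a per-value predicate and
    # succeed if at least `mostly` fraction of non-null values pass.
    nonnull = [v for v in values if v is not None]
    successes = sum(1 for v in nonnull if is_valid(v))
    fraction = successes / len(nonnull) if nonnull else 1.0
    threshold = 1.0 if mostly is None else mostly
    return {"success": fraction >= threshold,
            "result": {"unexpected_count": len(nonnull) - successes}}

def parses_with_age(v):
    # Hypothetical schema: the value must parse to a JSON object
    # whose "age" key holds an integer.
    try:
        obj = json.loads(v)
        return isinstance(obj, dict) and isinstance(obj.get("age"), int)
    except (TypeError, ValueError):
        return False
```

With `mostly=0.6`, a column where two of three non-null entries satisfy the schema (2/3 ≈ 0.67) still yields `"success": True`, matching the documented threshold behavior.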
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_column_most_common_value_to_be_in_set | def expect_column_most_common_value_to_be_in_set(self,
column,
value_set,
ties_okay=None,
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""Expect the most common value to be within the designated value set
expect_column_most_common_value_to_be_in_set is a :func:`column_aggregate_expectation <great_expectations.data_asset.dataset.Dataset.column_aggregate_expectation>`.
Args:
column (str): \
The column name
value_set (set-like): \
A list of potential values to match
Keyword Args:
ties_okay (boolean or None): \
If True, then the expectation will still succeed if values outside the designated set are as common as (but not more common than) the designated values
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
Notes:
These fields in the result object are customized for this expectation:
::
{
"observed_value": (list) The most common values in the column
}
`observed_value` contains a list of the most common values.
Often, this will just be a single element. But if there's a tie for most common among multiple values,
`observed_value` will contain a single copy of each most common value.
"""
raise NotImplementedError | python | def expect_column_most_common_value_to_be_in_set(self,
column,
value_set,
ties_okay=None,
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""Expect the most common value to be within the designated value set
expect_column_most_common_value_to_be_in_set is a :func:`column_aggregate_expectation <great_expectations.data_asset.dataset.Dataset.column_aggregate_expectation>`.
Args:
column (str): \
The column name
value_set (set-like): \
A list of potential values to match
Keyword Args:
ties_okay (boolean or None): \
If True, then the expectation will still succeed if values outside the designated set are as common as (but not more common than) the designated values
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
Notes:
These fields in the result object are customized for this expectation:
::
{
"observed_value": (list) The most common values in the column
}
`observed_value` contains a list of the most common values.
Often, this will just be a single element. But if there's a tie for most common among multiple values,
`observed_value` will contain a single copy of each most common value.
"""
raise NotImplementedError | [
"def",
"expect_column_most_common_value_to_be_in_set",
"(",
"self",
",",
"column",
",",
"value_set",
",",
"ties_okay",
"=",
"None",
",",
"result_format",
"=",
"None",
",",
"include_config",
"=",
"False",
",",
"catch_exceptions",
"=",
"None",
",",
"meta",
"=",
"N... | Expect the most common value to be within the designated value set
expect_column_most_common_value_to_be_in_set is a :func:`column_aggregate_expectation <great_expectations.data_asset.dataset.Dataset.column_aggregate_expectation>`.
Args:
column (str): \
The column name
value_set (set-like): \
A list of potential values to match
Keyword Args:
ties_okay (boolean or None): \
If True, then the expectation will still succeed if values outside the designated set are as common as (but not more common than) the designated values
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
Notes:
These fields in the result object are customized for this expectation:
::
{
"observed_value": (list) The most common values in the column
}
`observed_value` contains a list of the most common values.
Often, this will just be a single element. But if there's a tie for most common among multiple values,
`observed_value` will contain a single copy of each most common value. | [
"Expect",
"the",
"most",
"common",
"value",
"to",
"be",
"within",
"the",
"designated",
"value",
"set"
] | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L1666-L1719 | train | 216,911 |
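The tie handling described above (an `observed_value` holding one copy of each most-common value) can be sketched with `collections.Counter`. This is one plausible reading of the `ties_okay` semantics, not the actual great_expectations implementation:

```python
from collections import Counter

def most_common_in_set(values, value_set, ties_okay=False):
    # Find every value tied for the highest count; with ties_okay,
    # succeed when at least one designated value is among the modes,
    # otherwise require every mode to be in the set.
    counts = Counter(v for v in values if v is not None)
    if not counts:
        return {"success": False, "observed_value": []}
    top = max(counts.values())
    modes = sorted(v for v, c in counts.items() if c == top)
    designated = set(value_set)
    if ties_okay:
        success = any(m in designated for m in modes)
    else:
        success = all(m in designated for m in modes)
    return {"success": success, "observed_value": modes}
```

Note `observed_value` is always a list: a single element in the common case, and one copy of each tied value otherwise, as the docstring's Notes section describes.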
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_column_min_to_be_between | def expect_column_min_to_be_between(self,
column,
min_value=None,
max_value=None,
parse_strings_as_datetimes=None,
output_strftime_format=None,
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""Expect the column minimum to be between a min and max value
expect_column_min_to_be_between is a :func:`column_aggregate_expectation <great_expectations.data_asset.dataset.Dataset.column_aggregate_expectation>`.
Args:
column (str): \
The column name
min_value (comparable type or None): \
The minimum value allowed for the column minimum.
max_value (comparable type or None): \
The maximum value allowed for the column minimum.
Keyword Args:
parse_strings_as_datetimes (Boolean or None): \
If True, parse min_value, max_value, and all non-null column values to datetimes before making comparisons.
output_strftime_format (str or None): \
A valid strftime format for datetime output. Only used if parse_strings_as_datetimes=True.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
Notes:
These fields in the result object are customized for this expectation:
::
{
"observed_value": (list) The actual column min
}
* min_value and max_value are both inclusive.
* If min_value is None, then max_value is treated as an upper bound
* If max_value is None, then min_value is treated as a lower bound
"""
raise NotImplementedError | python | def expect_column_min_to_be_between(self,
column,
min_value=None,
max_value=None,
parse_strings_as_datetimes=None,
output_strftime_format=None,
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""Expect the column minimum to be between a min and max value
expect_column_min_to_be_between is a :func:`column_aggregate_expectation <great_expectations.data_asset.dataset.Dataset.column_aggregate_expectation>`.
Args:
column (str): \
The column name
min_value (comparable type or None): \
The minimum value allowed for the column minimum.
max_value (comparable type or None): \
The maximum value allowed for the column minimum.
Keyword Args:
parse_strings_as_datetimes (Boolean or None): \
If True, parse min_value, max_value, and all non-null column values to datetimes before making comparisons.
output_strftime_format (str or None): \
A valid strftime format for datetime output. Only used if parse_strings_as_datetimes=True.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
Notes:
These fields in the result object are customized for this expectation:
::
{
"observed_value": (list) The actual column min
}
* min_value and max_value are both inclusive.
* If min_value is None, then max_value is treated as an upper bound
* If max_value is None, then min_value is treated as a lower bound
"""
raise NotImplementedError | [
"def",
"expect_column_min_to_be_between",
"(",
"self",
",",
"column",
",",
"min_value",
"=",
"None",
",",
"max_value",
"=",
"None",
",",
"parse_strings_as_datetimes",
"=",
"None",
",",
"output_strftime_format",
"=",
"None",
",",
"result_format",
"=",
"None",
",",
Expect the column minimum to be between a min and max value
expect_column_min_to_be_between is a :func:`column_aggregate_expectation <great_expectations.data_asset.dataset.Dataset.column_aggregate_expectation>`.
Args:
column (str): \
The column name
min_value (comparable type or None): \
The minimum value allowed for the column minimum.
max_value (comparable type or None): \
The maximum value allowed for the column minimum.
Keyword Args:
parse_strings_as_datetimes (Boolean or None): \
If True, parse min_value, max_value, and all non-null column values to datetimes before making comparisons.
output_strftime_format (str or None): \
A valid strftime format for datetime output. Only used if parse_strings_as_datetimes=True.
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
Notes:
These fields in the result object are customized for this expectation:
::
{
"observed_value": (list) The actual column min
}
* min_value and max_value are both inclusive.
* If min_value is None, then max_value is treated as an upper bound
* If max_value is None, then min_value is treated as a lower bound | [
"Expect",
"the",
"column",
"minimum",
"to",
"be",
"between",
"a",
"min",
"and",
"max",
"value"
] | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L1775-L1835 | train | 216,912 |
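The inclusive-bounds and `parse_strings_as_datetimes` behavior documented above can be sketched with the standard library alone. This is an illustration under assumptions, not the actual great_expectations implementation; the fixed `datetime_format` is an assumption (the real expectation parses dates more flexibly):

```python
from datetime import datetime

def column_min_between(values, min_value=None, max_value=None,
                       parse_strings_as_datetimes=False,
                       datetime_format="%Y-%m-%d"):
    # Compare the observed column minimum against inclusive bounds;
    # a None bound is treated as "unbounded on that side".
    nonnull = [v for v in values if v is not None]
    if parse_strings_as_datetimes:
        nonnull = [datetime.strptime(v, datetime_format) for v in nonnull]
        if min_value is not None:
            min_value = datetime.strptime(min_value, datetime_format)
        if max_value is not None:
            max_value = datetime.strptime(max_value, datetime_format)
    observed = min(nonnull)
    success = ((min_value is None or observed >= min_value) and
               (max_value is None or observed <= max_value))
    return {"success": success, "observed_value": observed}
```

Leaving `min_value=None` turns `max_value` into a pure upper bound (and vice versa), matching the docstring's Notes.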
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_column_pair_values_to_be_equal | def expect_column_pair_values_to_be_equal(self,
column_A,
column_B,
ignore_row_if="both_values_are_missing",
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""
Expect the values in column A to be the same as column B.
Args:
column_A (str): The first column name
column_B (str): The second column name
Keyword Args:
ignore_row_if (str): "both_values_are_missing", "either_value_is_missing", "neither"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
"""
raise NotImplementedError | python | def expect_column_pair_values_to_be_equal(self,
column_A,
column_B,
ignore_row_if="both_values_are_missing",
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""
Expect the values in column A to be the same as column B.
Args:
column_A (str): The first column name
column_B (str): The second column name
Keyword Args:
ignore_row_if (str): "both_values_are_missing", "either_value_is_missing", "neither"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
"""
raise NotImplementedError | [
"def",
"expect_column_pair_values_to_be_equal",
"(",
"self",
",",
"column_A",
",",
"column_B",
",",
"ignore_row_if",
"=",
"\"both_values_are_missing\"",
",",
"result_format",
"=",
"None",
",",
"include_config",
"=",
"False",
",",
"catch_exceptions",
"=",
"None",
",",
... | Expect the values in column A to be the same as column B.
Args:
column_A (str): The first column name
column_B (str): The second column name
Keyword Args:
ignore_row_if (str): "both_values_are_missing", "either_value_is_missing", "neither"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`. | [
"Expect",
"the",
"values",
"in",
"column",
"A",
"to",
"be",
"the",
"same",
"as",
"column",
"B",
"."
] | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L2165-L2202 | train | 216,913 |
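The `ignore_row_if` filtering described above can be sketched as a row-wise pass over zipped columns. A minimal, pandas-free illustration of the semantics, not the actual great_expectations implementation:

```python
def pair_values_equal(values_A, values_B,
                      ignore_row_if="both_values_are_missing"):
    # Drop rows according to ignore_row_if, then require A == B
    # on every remaining row.
    kept = []
    for a, b in zip(values_A, values_B):
        if ignore_row_if == "both_values_are_missing" and a is None and b is None:
            continue
        if ignore_row_if == "either_value_is_missing" and (a is None or b is None):
            continue
        kept.append((a, b))
    unexpected = [(a, b) for a, b in kept if a != b]
    return {"success": not unexpected,
            "result": {"unexpected_count": len(unexpected)}}
```

With the default `"both_values_are_missing"`, a row that is missing on only one side is kept and counts as a mismatch; `"either_value_is_missing"` drops it instead.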
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_column_pair_values_A_to_be_greater_than_B | def expect_column_pair_values_A_to_be_greater_than_B(self,
column_A,
column_B,
or_equal=None,
parse_strings_as_datetimes=None,
allow_cross_type_comparisons=None,
ignore_row_if="both_values_are_missing",
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""
Expect values in column A to be greater than column B.
Args:
column_A (str): The first column name
column_B (str): The second column name
or_equal (boolean or None): If True, then values can be equal, not strictly greater
Keyword Args:
allow_cross_type_comparisons (boolean or None): If True, allow comparisons between types (e.g. integer and\
string). Otherwise, attempting such comparisons will raise an exception.
ignore_row_if (str): "both_values_are_missing", "either_value_is_missing", "neither"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
"""
raise NotImplementedError | python | def expect_column_pair_values_A_to_be_greater_than_B(self,
column_A,
column_B,
or_equal=None,
parse_strings_as_datetimes=None,
allow_cross_type_comparisons=None,
ignore_row_if="both_values_are_missing",
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""
Expect values in column A to be greater than column B.
Args:
column_A (str): The first column name
column_B (str): The second column name
or_equal (boolean or None): If True, then values can be equal, not strictly greater
Keyword Args:
allow_cross_type_comparisons (boolean or None): If True, allow comparisons between types (e.g. integer and\
string). Otherwise, attempting such comparisons will raise an exception.
ignore_row_if (str): "both_values_are_missing", "either_value_is_missing", "neither"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
"""
raise NotImplementedError | [
"def",
"expect_column_pair_values_A_to_be_greater_than_B",
"(",
"self",
",",
"column_A",
",",
"column_B",
",",
"or_equal",
"=",
"None",
",",
"parse_strings_as_datetimes",
"=",
"None",
",",
"allow_cross_type_comparisons",
"=",
"None",
",",
"ignore_row_if",
"=",
"\"both_v... | Expect values in column A to be greater than column B.
Args:
column_A (str): The first column name
column_B (str): The second column name
or_equal (boolean or None): If True, then values can be equal, not strictly greater
Keyword Args:
allow_cross_type_comparisons (boolean or None): If True, allow comparisons between types (e.g. integer and\
string). Otherwise, attempting such comparisons will raise an exception.
ignore_row_if (str): "both_values_are_missing", "either_value_is_missing", "neither"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`. | [
"Expect",
"values",
"in",
"column",
"A",
"to",
"be",
"greater",
"than",
"column",
"B",
"."
] | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L2204-L2249 | train | 216,914 |
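The `or_equal` switch documented above maps naturally onto `operator.gt` vs `operator.ge`. A minimal sketch of the semantics (not the actual great_expectations implementation; here a missing value in a kept row simply counts as unexpected):

```python
import operator

def pair_A_greater_than_B(values_A, values_B, or_equal=False,
                          ignore_row_if="both_values_are_missing"):
    # Row-wise comparison: strictly greater by default, >= when or_equal.
    cmp = operator.ge if or_equal else operator.gt
    total = unexpected = 0
    for a, b in zip(values_A, values_B):
        if ignore_row_if == "both_values_are_missing" and a is None and b is None:
            continue
        if ignore_row_if == "either_value_is_missing" and (a is None or b is None):
            continue
        total += 1
        if a is None or b is None or not cmp(a, b):
            unexpected += 1
    return {"success": unexpected == 0,
            "result": {"element_count": total, "unexpected_count": unexpected}}
```

A tied row such as `(5, 5)` fails the default strict comparison but passes once `or_equal=True`.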
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_column_pair_values_to_be_in_set | def expect_column_pair_values_to_be_in_set(self,
column_A,
column_B,
value_pairs_set,
ignore_row_if="both_values_are_missing",
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""
Expect paired values from columns A and B to belong to a set of valid pairs.
Args:
column_A (str): The first column name
column_B (str): The second column name
value_pairs_set (list of tuples): All the valid pairs to be matched
Keyword Args:
ignore_row_if (str): "both_values_are_missing", "either_value_is_missing", "never"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
"""
raise NotImplementedError | python | def expect_column_pair_values_to_be_in_set(self,
column_A,
column_B,
value_pairs_set,
ignore_row_if="both_values_are_missing",
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""
Expect paired values from columns A and B to belong to a set of valid pairs.
Args:
column_A (str): The first column name
column_B (str): The second column name
value_pairs_set (list of tuples): All the valid pairs to be matched
Keyword Args:
ignore_row_if (str): "both_values_are_missing", "either_value_is_missing", "never"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
"""
raise NotImplementedError | [
"def",
"expect_column_pair_values_to_be_in_set",
"(",
"self",
",",
"column_A",
",",
"column_B",
",",
"value_pairs_set",
",",
"ignore_row_if",
"=",
"\"both_values_are_missing\"",
",",
"result_format",
"=",
"None",
",",
"include_config",
"=",
"False",
",",
"catch_exceptio... | Expect paired values from columns A and B to belong to a set of valid pairs.
Args:
column_A (str): The first column name
column_B (str): The second column name
value_pairs_set (list of tuples): All the valid pairs to be matched
Keyword Args:
ignore_row_if (str): "both_values_are_missing", "either_value_is_missing", "never"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`. | [
"Expect",
"paired",
"values",
"from",
"columns",
"A",
"and",
"B",
"to",
"belong",
"to",
"a",
"set",
"of",
"valid",
"pairs",
"."
] | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L2251-L2290 | train | 216,915 |
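The valid-pairs check above amounts to set membership on zipped rows. A minimal sketch of the semantics with hypothetical country/currency pairs, not the actual great_expectations implementation:

```python
def pair_values_in_set(values_A, values_B, value_pairs_set,
                       ignore_row_if="both_values_are_missing"):
    # Each kept (A, B) row must appear in value_pairs_set.
    allowed = set(value_pairs_set)
    kept = [(a, b) for a, b in zip(values_A, values_B)
            if not (ignore_row_if == "both_values_are_missing"
                    and a is None and b is None)]
    unexpected = [p for p in kept if p not in allowed]
    return {"success": not unexpected,
            "result": {"unexpected_list": unexpected}}
```

Converting `value_pairs_set` to a `set` up front keeps the per-row membership test O(1) instead of scanning the list for every row.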
great-expectations/great_expectations | great_expectations/dataset/dataset.py | Dataset.expect_multicolumn_values_to_be_unique | def expect_multicolumn_values_to_be_unique(self,
column_list,
ignore_row_if="all_values_are_missing",
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""
Expect the values for each row to be unique across the columns listed.
Args:
column_list (tuple or list): The first column name
Keyword Args:
ignore_row_if (str): "all_values_are_missing", "any_value_is_missing", "never"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
"""
raise NotImplementedError | python | def expect_multicolumn_values_to_be_unique(self,
column_list,
ignore_row_if="all_values_are_missing",
result_format=None, include_config=False, catch_exceptions=None, meta=None
):
"""
Expect the values for each row to be unique across the columns listed.
Args:
column_list (tuple or list): The column names to evaluate together
Keyword Args:
ignore_row_if (str): "all_values_are_missing", "any_value_is_missing", "never"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`.
"""
raise NotImplementedError | [
"def",
"expect_multicolumn_values_to_be_unique",
"(",
"self",
",",
"column_list",
",",
"ignore_row_if",
"=",
"\"all_values_are_missing\"",
",",
"result_format",
"=",
"None",
",",
"include_config",
"=",
"False",
",",
"catch_exceptions",
"=",
"None",
",",
"meta",
"=",
... | Expect the values for each row to be unique across the columns listed.
Args:
column_list (tuple or list): The column names to evaluate together
Keyword Args:
ignore_row_if (str): "all_values_are_missing", "any_value_is_missing", "never"
Other Parameters:
result_format (str or None): \
Which output mode to use: `BOOLEAN_ONLY`, `BASIC`, `COMPLETE`, or `SUMMARY`.
For more detail, see :ref:`result_format <result_format>`.
include_config (boolean): \
If True, then include the expectation config as part of the result object. \
For more detail, see :ref:`include_config`.
catch_exceptions (boolean or None): \
If True, then catch exceptions and include them as part of the result object. \
For more detail, see :ref:`catch_exceptions`.
meta (dict or None): \
A JSON-serializable dictionary (nesting allowed) that will be included in the output without modification. \
For more detail, see :ref:`meta`.
Returns:
A JSON-serializable expectation result object.
Exact fields vary depending on the values passed to :ref:`result_format <result_format>` and
:ref:`include_config`, :ref:`catch_exceptions`, and :ref:`meta`. | [
"Expect",
"the",
"values",
"for",
"each",
"row",
"to",
"be",
"unique",
"across",
"the",
"columns",
"listed",
"."
] | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/dataset.py#L2294-L2329 | train | 216,916 |
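The Dataset-level `expect_multicolumn_values_to_be_unique` above raises `NotImplementedError`; concrete backends supply the row-wise logic. A hedged stdlib sketch of what "unique across the columns listed" means per row, with `None` standing in for missing values and only the two missing-value modes from the docstring handled (the function name and return shape are illustrative assumptions):

```python
def multicolumn_values_unique(rows, ignore_row_if="all_values_are_missing"):
    """Return per-row success flags: True when the row's values are all
    distinct. Rows skipped by ignore_row_if are omitted from the result."""
    results = []
    for row in rows:
        if ignore_row_if == "all_values_are_missing" and all(v is None for v in row):
            continue  # the whole row is missing: ignore it entirely
        if ignore_row_if == "any_value_is_missing" and any(v is None for v in row):
            continue  # at least one value missing: ignore the row
        results.append(len(set(row)) == len(row))  # uniqueness across columns
    return results

rows = [(1, 2, 3), (1, 1, 2), (None, None, None)]
print(multicolumn_values_unique(rows))  # [True, False]
```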
great-expectations/great_expectations | great_expectations/dataset/pandas_dataset.py | MetaPandasDataset.column_map_expectation | def column_map_expectation(cls, func):
"""Constructs an expectation using column-map semantics.
The MetaPandasDataset implementation replaces the "column" parameter supplied by the user with a pandas Series
object containing the actual column from the relevant pandas dataframe. This simplifies the expectation's implementation
logic while preserving the standard Dataset signature and expected behavior.
See :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>` \
for full documentation of this function.
"""
if PY3:
argspec = inspect.getfullargspec(func)[0][1:]
else:
argspec = inspect.getargspec(func)[0][1:]
@cls.expectation(argspec)
@wraps(func)
def inner_wrapper(self, column, mostly=None, result_format=None, *args, **kwargs):
if result_format is None:
result_format = self.default_expectation_args["result_format"]
result_format = parse_result_format(result_format)
# FIXME temporary fix for missing/ignored value
ignore_values = [None, np.nan]
if func.__name__ in ['expect_column_values_to_not_be_null', 'expect_column_values_to_be_null']:
ignore_values = []
result_format['partial_unexpected_count'] = 0 # Optimization to avoid meaningless computation for these expectations
series = self[column]
# FIXME rename to mapped_ignore_values?
if len(ignore_values) == 0:
boolean_mapped_null_values = np.array(
[False for value in series])
else:
boolean_mapped_null_values = np.array([True if (value in ignore_values) or (pd.isnull(value)) else False
for value in series])
element_count = int(len(series))
# FIXME rename nonnull to non_ignored?
nonnull_values = series[boolean_mapped_null_values == False]
nonnull_count = int((boolean_mapped_null_values == False).sum())
boolean_mapped_success_values = func(
self, nonnull_values, *args, **kwargs)
success_count = np.count_nonzero(boolean_mapped_success_values)
unexpected_list = list(
nonnull_values[boolean_mapped_success_values == False])
unexpected_index_list = list(
nonnull_values[boolean_mapped_success_values == False].index)
success, percent_success = self._calc_map_expectation_success(
success_count, nonnull_count, mostly)
return_obj = self._format_map_output(
result_format, success,
element_count, nonnull_count,
len(unexpected_list),
unexpected_list, unexpected_index_list
)
# FIXME Temp fix for result format
if func.__name__ in ['expect_column_values_to_not_be_null', 'expect_column_values_to_be_null']:
del return_obj['result']['unexpected_percent_nonmissing']
try:
del return_obj['result']['partial_unexpected_counts']
except KeyError:
pass
return return_obj
inner_wrapper.__name__ = func.__name__
inner_wrapper.__doc__ = func.__doc__
return inner_wrapper | python | def column_map_expectation(cls, func):
"""Constructs an expectation using column-map semantics.
The MetaPandasDataset implementation replaces the "column" parameter supplied by the user with a pandas Series
object containing the actual column from the relevant pandas dataframe. This simplifies the expectation's implementation
logic while preserving the standard Dataset signature and expected behavior.
See :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>` \
for full documentation of this function.
"""
if PY3:
argspec = inspect.getfullargspec(func)[0][1:]
else:
argspec = inspect.getargspec(func)[0][1:]
@cls.expectation(argspec)
@wraps(func)
def inner_wrapper(self, column, mostly=None, result_format=None, *args, **kwargs):
if result_format is None:
result_format = self.default_expectation_args["result_format"]
result_format = parse_result_format(result_format)
# FIXME temporary fix for missing/ignored value
ignore_values = [None, np.nan]
if func.__name__ in ['expect_column_values_to_not_be_null', 'expect_column_values_to_be_null']:
ignore_values = []
result_format['partial_unexpected_count'] = 0 # Optimization to avoid meaningless computation for these expectations
series = self[column]
# FIXME rename to mapped_ignore_values?
if len(ignore_values) == 0:
boolean_mapped_null_values = np.array(
[False for value in series])
else:
boolean_mapped_null_values = np.array([True if (value in ignore_values) or (pd.isnull(value)) else False
for value in series])
element_count = int(len(series))
# FIXME rename nonnull to non_ignored?
nonnull_values = series[boolean_mapped_null_values == False]
nonnull_count = int((boolean_mapped_null_values == False).sum())
boolean_mapped_success_values = func(
self, nonnull_values, *args, **kwargs)
success_count = np.count_nonzero(boolean_mapped_success_values)
unexpected_list = list(
nonnull_values[boolean_mapped_success_values == False])
unexpected_index_list = list(
nonnull_values[boolean_mapped_success_values == False].index)
success, percent_success = self._calc_map_expectation_success(
success_count, nonnull_count, mostly)
return_obj = self._format_map_output(
result_format, success,
element_count, nonnull_count,
len(unexpected_list),
unexpected_list, unexpected_index_list
)
# FIXME Temp fix for result format
if func.__name__ in ['expect_column_values_to_not_be_null', 'expect_column_values_to_be_null']:
del return_obj['result']['unexpected_percent_nonmissing']
try:
del return_obj['result']['partial_unexpected_counts']
except KeyError:
pass
return return_obj
inner_wrapper.__name__ = func.__name__
inner_wrapper.__doc__ = func.__doc__
return inner_wrapper | [
"def",
"column_map_expectation",
"(",
"cls",
",",
"func",
")",
":",
"if",
"PY3",
":",
"argspec",
"=",
"inspect",
".",
"getfullargspec",
"(",
"func",
")",
"[",
"0",
"]",
"[",
"1",
":",
"]",
"else",
":",
"argspec",
"=",
"inspect",
".",
"getargspec",
"(... | Constructs an expectation using column-map semantics.
The MetaPandasDataset implementation replaces the "column" parameter supplied by the user with a pandas Series
object containing the actual column from the relevant pandas dataframe. This simplifies the expectation's implementation
logic while preserving the standard Dataset signature and expected behavior.
See :func:`column_map_expectation <great_expectations.data_asset.dataset.Dataset.column_map_expectation>` \
for full documentation of this function. | [
"Constructs",
"an",
"expectation",
"using",
"column",
"-",
"map",
"semantics",
"."
] | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/pandas_dataset.py#L43-L122 | train | 216,917 |
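The `column_map_expectation` decorator above factors out null masking, success counting, and the `mostly` threshold so each expectation only maps values to booleans. A stripped-down, pandas-free sketch of the same pattern — the names and the result fields are simplified assumptions, not the library's exact output contract:

```python
def column_map_expectation(func):
    """Wrap a per-values boolean function into an expectation that
    ignores nulls and applies a 'mostly' success threshold."""
    def wrapper(values, mostly=None, **kwargs):
        nonnull = [v for v in values if v is not None]   # null filtering
        flags = func(nonnull, **kwargs)                  # element-wise check
        success_count = sum(flags)
        unexpected = [v for v, ok in zip(nonnull, flags) if not ok]
        if len(nonnull) == 0:
            success = True                               # vacuous success
        elif mostly is not None:
            success = success_count / len(nonnull) >= mostly
        else:
            success = success_count == len(nonnull)
        return {"success": success,
                "result": {"element_count": len(values),
                           "unexpected_list": unexpected}}
    return wrapper

@column_map_expectation
def values_positive(values):
    return [v > 0 for v in values]

out = values_positive([1, -2, 3, None, 4], mostly=0.7)
print(out["success"], out["result"]["unexpected_list"])  # True [-2]
```

The decorated function never sees nulls, which mirrors how the real implementation hands only non-ignored values to `func`.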
great-expectations/great_expectations | great_expectations/dataset/pandas_dataset.py | MetaPandasDataset.column_pair_map_expectation | def column_pair_map_expectation(cls, func):
"""
The column_pair_map_expectation decorator handles boilerplate issues surrounding the common pattern of evaluating
truthiness of some condition on a per row basis across a pair of columns.
"""
if PY3:
argspec = inspect.getfullargspec(func)[0][1:]
else:
argspec = inspect.getargspec(func)[0][1:]
@cls.expectation(argspec)
@wraps(func)
def inner_wrapper(self, column_A, column_B, mostly=None, ignore_row_if="both_values_are_missing", result_format=None, *args, **kwargs):
if result_format is None:
result_format = self.default_expectation_args["result_format"]
series_A = self[column_A]
series_B = self[column_B]
if ignore_row_if == "both_values_are_missing":
boolean_mapped_null_values = series_A.isnull() & series_B.isnull()
elif ignore_row_if == "either_value_is_missing":
boolean_mapped_null_values = series_A.isnull() | series_B.isnull()
elif ignore_row_if == "never":
boolean_mapped_null_values = series_A.map(lambda x: False)
else:
raise ValueError(
"Unknown value of ignore_row_if: %s", (ignore_row_if,))
assert len(series_A) == len(
series_B), "Series A and B must be the same length"
# This next bit only works if series_A and _B are the same length
element_count = int(len(series_A))
nonnull_count = (boolean_mapped_null_values == False).sum()
nonnull_values_A = series_A[boolean_mapped_null_values == False]
nonnull_values_B = series_B[boolean_mapped_null_values == False]
nonnull_values = [value_pair for value_pair in zip(
list(nonnull_values_A),
list(nonnull_values_B)
)]
boolean_mapped_success_values = func(
self, nonnull_values_A, nonnull_values_B, *args, **kwargs)
success_count = boolean_mapped_success_values.sum()
unexpected_list = [value_pair for value_pair in zip(
list(series_A[(boolean_mapped_success_values == False) & (
boolean_mapped_null_values == False)]),
list(series_B[(boolean_mapped_success_values == False) & (
boolean_mapped_null_values == False)])
)]
unexpected_index_list = list(series_A[(boolean_mapped_success_values == False) & (
boolean_mapped_null_values == False)].index)
success, percent_success = self._calc_map_expectation_success(
success_count, nonnull_count, mostly)
return_obj = self._format_map_output(
result_format, success,
element_count, nonnull_count,
len(unexpected_list),
unexpected_list, unexpected_index_list
)
return return_obj
inner_wrapper.__name__ = func.__name__
inner_wrapper.__doc__ = func.__doc__
return inner_wrapper | python | def column_pair_map_expectation(cls, func):
"""
The column_pair_map_expectation decorator handles boilerplate issues surrounding the common pattern of evaluating
truthiness of some condition on a per row basis across a pair of columns.
"""
if PY3:
argspec = inspect.getfullargspec(func)[0][1:]
else:
argspec = inspect.getargspec(func)[0][1:]
@cls.expectation(argspec)
@wraps(func)
def inner_wrapper(self, column_A, column_B, mostly=None, ignore_row_if="both_values_are_missing", result_format=None, *args, **kwargs):
if result_format is None:
result_format = self.default_expectation_args["result_format"]
series_A = self[column_A]
series_B = self[column_B]
if ignore_row_if == "both_values_are_missing":
boolean_mapped_null_values = series_A.isnull() & series_B.isnull()
elif ignore_row_if == "either_value_is_missing":
boolean_mapped_null_values = series_A.isnull() | series_B.isnull()
elif ignore_row_if == "never":
boolean_mapped_null_values = series_A.map(lambda x: False)
else:
raise ValueError(
"Unknown value of ignore_row_if: %s", (ignore_row_if,))
assert len(series_A) == len(
series_B), "Series A and B must be the same length"
# This next bit only works if series_A and _B are the same length
element_count = int(len(series_A))
nonnull_count = (boolean_mapped_null_values == False).sum()
nonnull_values_A = series_A[boolean_mapped_null_values == False]
nonnull_values_B = series_B[boolean_mapped_null_values == False]
nonnull_values = [value_pair for value_pair in zip(
list(nonnull_values_A),
list(nonnull_values_B)
)]
boolean_mapped_success_values = func(
self, nonnull_values_A, nonnull_values_B, *args, **kwargs)
success_count = boolean_mapped_success_values.sum()
unexpected_list = [value_pair for value_pair in zip(
list(series_A[(boolean_mapped_success_values == False) & (
boolean_mapped_null_values == False)]),
list(series_B[(boolean_mapped_success_values == False) & (
boolean_mapped_null_values == False)])
)]
unexpected_index_list = list(series_A[(boolean_mapped_success_values == False) & (
boolean_mapped_null_values == False)].index)
success, percent_success = self._calc_map_expectation_success(
success_count, nonnull_count, mostly)
return_obj = self._format_map_output(
result_format, success,
element_count, nonnull_count,
len(unexpected_list),
unexpected_list, unexpected_index_list
)
return return_obj
inner_wrapper.__name__ = func.__name__
inner_wrapper.__doc__ = func.__doc__
return inner_wrapper | [
"def",
"column_pair_map_expectation",
"(",
"cls",
",",
"func",
")",
":",
"if",
"PY3",
":",
"argspec",
"=",
"inspect",
".",
"getfullargspec",
"(",
"func",
")",
"[",
"0",
"]",
"[",
"1",
":",
"]",
"else",
":",
"argspec",
"=",
"inspect",
".",
"getargspec",... | The column_pair_map_expectation decorator handles boilerplate issues surrounding the common pattern of evaluating
truthiness of some condition on a per row basis across a pair of columns. | [
"The",
"column_pair_map_expectation",
"decorator",
"handles",
"boilerplate",
"issues",
"surrounding",
"the",
"common",
"pattern",
"of",
"evaluating",
"truthiness",
"of",
"some",
"condition",
"on",
"a",
"per",
"row",
"basis",
"across",
"a",
"pair",
"of",
"columns",
... | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/pandas_dataset.py#L125-L196 | train | 216,918 |
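The pair-map decorator above begins by computing a skip mask from `ignore_row_if` before any expectation logic runs. A stdlib sketch of just that masking step, with `None` standing in for pandas nulls (the helper name is an illustrative assumption):

```python
def pair_skip_mask(a_values, b_values, ignore_row_if):
    """Compute which row indices are skipped before evaluating a
    column-pair expectation, mirroring the three ignore_row_if modes."""
    mask = []
    for a, b in zip(a_values, b_values):
        if ignore_row_if == "both_values_are_missing":
            mask.append(a is None and b is None)
        elif ignore_row_if == "either_value_is_missing":
            mask.append(a is None or b is None)
        elif ignore_row_if == "never":
            mask.append(False)
        else:
            raise ValueError("Unknown value of ignore_row_if: %s" % ignore_row_if)
    return mask

a = [1, None, None]
b = [2, 3, None]
print(pair_skip_mask(a, b, "both_values_are_missing"))  # [False, False, True]
print(pair_skip_mask(a, b, "either_value_is_missing"))  # [False, True, True]
```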
great-expectations/great_expectations | great_expectations/dataset/pandas_dataset.py | MetaPandasDataset.multicolumn_map_expectation | def multicolumn_map_expectation(cls, func):
"""
The multicolumn_map_expectation decorator handles boilerplate issues surrounding the common pattern of
evaluating truthiness of some condition on a per row basis across a set of columns.
"""
if PY3:
argspec = inspect.getfullargspec(func)[0][1:]
else:
argspec = inspect.getargspec(func)[0][1:]
@cls.expectation(argspec)
@wraps(func)
def inner_wrapper(self, column_list, mostly=None, ignore_row_if="all_values_are_missing",
result_format=None, *args, **kwargs):
if result_format is None:
result_format = self.default_expectation_args["result_format"]
test_df = self[column_list]
if ignore_row_if == "all_values_are_missing":
boolean_mapped_skip_values = test_df.isnull().all(axis=1)
elif ignore_row_if == "any_value_is_missing":
boolean_mapped_skip_values = test_df.isnull().any(axis=1)
elif ignore_row_if == "never":
boolean_mapped_skip_values = pd.Series([False] * len(test_df))
else:
raise ValueError(
"Unknown value of ignore_row_if: %s", (ignore_row_if,))
boolean_mapped_success_values = func(
self, test_df[boolean_mapped_skip_values == False], *args, **kwargs)
success_count = boolean_mapped_success_values.sum()
nonnull_count = (~boolean_mapped_skip_values).sum()
element_count = len(test_df)
unexpected_list = test_df[(boolean_mapped_skip_values == False) & (boolean_mapped_success_values == False)]
unexpected_index_list = list(unexpected_list.index)
success, percent_success = self._calc_map_expectation_success(
success_count, nonnull_count, mostly)
return_obj = self._format_map_output(
result_format, success,
element_count, nonnull_count,
len(unexpected_list),
unexpected_list.to_dict(orient='records'), unexpected_index_list
)
return return_obj
inner_wrapper.__name__ = func.__name__
inner_wrapper.__doc__ = func.__doc__
return inner_wrapper | python | def multicolumn_map_expectation(cls, func):
"""
The multicolumn_map_expectation decorator handles boilerplate issues surrounding the common pattern of
evaluating truthiness of some condition on a per row basis across a set of columns.
"""
if PY3:
argspec = inspect.getfullargspec(func)[0][1:]
else:
argspec = inspect.getargspec(func)[0][1:]
@cls.expectation(argspec)
@wraps(func)
def inner_wrapper(self, column_list, mostly=None, ignore_row_if="all_values_are_missing",
result_format=None, *args, **kwargs):
if result_format is None:
result_format = self.default_expectation_args["result_format"]
test_df = self[column_list]
if ignore_row_if == "all_values_are_missing":
boolean_mapped_skip_values = test_df.isnull().all(axis=1)
elif ignore_row_if == "any_value_is_missing":
boolean_mapped_skip_values = test_df.isnull().any(axis=1)
elif ignore_row_if == "never":
boolean_mapped_skip_values = pd.Series([False] * len(test_df))
else:
raise ValueError(
"Unknown value of ignore_row_if: %s", (ignore_row_if,))
boolean_mapped_success_values = func(
self, test_df[boolean_mapped_skip_values == False], *args, **kwargs)
success_count = boolean_mapped_success_values.sum()
nonnull_count = (~boolean_mapped_skip_values).sum()
element_count = len(test_df)
unexpected_list = test_df[(boolean_mapped_skip_values == False) & (boolean_mapped_success_values == False)]
unexpected_index_list = list(unexpected_list.index)
success, percent_success = self._calc_map_expectation_success(
success_count, nonnull_count, mostly)
return_obj = self._format_map_output(
result_format, success,
element_count, nonnull_count,
len(unexpected_list),
unexpected_list.to_dict(orient='records'), unexpected_index_list
)
return return_obj
inner_wrapper.__name__ = func.__name__
inner_wrapper.__doc__ = func.__doc__
return inner_wrapper | [
"def",
"multicolumn_map_expectation",
"(",
"cls",
",",
"func",
")",
":",
"if",
"PY3",
":",
"argspec",
"=",
"inspect",
".",
"getfullargspec",
"(",
"func",
")",
"[",
"0",
"]",
"[",
"1",
":",
"]",
"else",
":",
"argspec",
"=",
"inspect",
".",
"getargspec",... | The multicolumn_map_expectation decorator handles boilerplate issues surrounding the common pattern of
evaluating truthiness of some condition on a per row basis across a set of columns. | [
"The",
"multicolumn_map_expectation",
"decorator",
"handles",
"boilerplate",
"issues",
"surrounding",
"the",
"common",
"pattern",
"of",
"evaluating",
"truthiness",
"of",
"some",
"condition",
"on",
"a",
"per",
"row",
"basis",
"across",
"a",
"set",
"of",
"columns",
... | 08385c40529d4f14a1c46916788aecc47f33ee9d | https://github.com/great-expectations/great_expectations/blob/08385c40529d4f14a1c46916788aecc47f33ee9d/great_expectations/dataset/pandas_dataset.py#L199-L252 | train | 216,919 |
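All three map decorators above end by calling the private helper `_calc_map_expectation_success`. Its source is not shown here, so the following is a hedged sketch of its *assumed* semantics based on how the call sites use it — success is the fraction of non-ignored rows that passed, compared against `mostly` when supplied:

```python
def calc_map_expectation_success(success_count, nonnull_count, mostly):
    """Assumed semantics of _calc_map_expectation_success (not the
    library's actual source): return (success, percent_success)."""
    if nonnull_count == 0:
        return True, None  # vacuous success: nothing to evaluate
    percent = success_count / float(nonnull_count)
    if mostly is not None:
        return percent >= mostly, percent  # threshold mode
    return success_count == nonnull_count, percent  # strict mode

print(calc_map_expectation_success(9, 10, 0.8))   # (True, 0.9)
print(calc_map_expectation_success(9, 10, None))  # (False, 0.9)
```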
splunk/splunk-sdk-python | splunklib/searchcommands/search_command.py | dispatch | def dispatch(command_class, argv=sys.argv, input_file=sys.stdin, output_file=sys.stdout, module_name=None):
""" Instantiates and executes a search command class
This function implements a `conditional script stanza <https://docs.python.org/2/library/__main__.html>`_ based on the value of
:code:`module_name`::
if module_name is None or module_name == '__main__':
# execute command
Call this function at module scope with :code:`module_name=__name__`, if you would like your module to act as either
a reusable module or a standalone program. Otherwise, if you wish this function to unconditionally instantiate and
execute :code:`command_class`, pass :const:`None` as the value of :code:`module_name`.
:param command_class: Search command class to instantiate and execute.
:type command_class: type
:param argv: List of arguments to the command.
:type argv: list or tuple
:param input_file: File from which the command will read data.
:type input_file: :code:`file`
:param output_file: File to which the command will write data.
:type output_file: :code:`file`
:param module_name: Name of the module calling :code:`dispatch` or :const:`None`.
:type module_name: :code:`basestring`
:returns: :const:`None`
**Example**
.. code-block:: python
:linenos:
#!/usr/bin/env python
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators
@Configuration()
class SomeStreamingCommand(StreamingCommand):
...
def stream(records):
...
dispatch(SomeStreamingCommand, module_name=__name__)
Dispatches the :code:`SomeStreamingCommand`, if and only if :code:`__name__` is equal to :code:`'__main__'`.
**Example**
.. code-block:: python
:linenos:
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators
@Configuration()
class SomeStreamingCommand(StreamingCommand):
...
def stream(records):
...
dispatch(SomeStreamingCommand)
Unconditionally dispatches :code:`SomeStreamingCommand`.
"""
assert issubclass(command_class, SearchCommand)
if module_name is None or module_name == '__main__':
command_class().process(argv, input_file, output_file) | python | def dispatch(command_class, argv=sys.argv, input_file=sys.stdin, output_file=sys.stdout, module_name=None):
""" Instantiates and executes a search command class
This function implements a `conditional script stanza <https://docs.python.org/2/library/__main__.html>`_ based on the value of
:code:`module_name`::
if module_name is None or module_name == '__main__':
# execute command
Call this function at module scope with :code:`module_name=__name__`, if you would like your module to act as either
a reusable module or a standalone program. Otherwise, if you wish this function to unconditionally instantiate and
execute :code:`command_class`, pass :const:`None` as the value of :code:`module_name`.
:param command_class: Search command class to instantiate and execute.
:type command_class: type
:param argv: List of arguments to the command.
:type argv: list or tuple
:param input_file: File from which the command will read data.
:type input_file: :code:`file`
:param output_file: File to which the command will write data.
:type output_file: :code:`file`
:param module_name: Name of the module calling :code:`dispatch` or :const:`None`.
:type module_name: :code:`basestring`
:returns: :const:`None`
**Example**
.. code-block:: python
:linenos:
#!/usr/bin/env python
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators
@Configuration()
class SomeStreamingCommand(StreamingCommand):
...
def stream(records):
...
dispatch(SomeStreamingCommand, module_name=__name__)
Dispatches the :code:`SomeStreamingCommand`, if and only if :code:`__name__` is equal to :code:`'__main__'`.
**Example**
.. code-block:: python
:linenos:
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators
@Configuration()
class SomeStreamingCommand(StreamingCommand):
...
def stream(records):
...
dispatch(SomeStreamingCommand)
Unconditionally dispatches :code:`SomeStreamingCommand`.
"""
assert issubclass(command_class, SearchCommand)
if module_name is None or module_name == '__main__':
command_class().process(argv, input_file, output_file) | [
"def",
"dispatch",
"(",
"command_class",
",",
"argv",
"=",
"sys",
".",
"argv",
",",
"input_file",
"=",
"sys",
".",
"stdin",
",",
"output_file",
"=",
"sys",
".",
"stdout",
",",
"module_name",
"=",
"None",
")",
":",
"assert",
"issubclass",
"(",
"command_cl... | Instantiates and executes a search command class
This function implements a `conditional script stanza <https://docs.python.org/2/library/__main__.html>`_ based on the value of
:code:`module_name`::
if module_name is None or module_name == '__main__':
# execute command
Call this function at module scope with :code:`module_name=__name__`, if you would like your module to act as either
a reusable module or a standalone program. Otherwise, if you wish this function to unconditionally instantiate and
execute :code:`command_class`, pass :const:`None` as the value of :code:`module_name`.
:param command_class: Search command class to instantiate and execute.
:type command_class: type
:param argv: List of arguments to the command.
:type argv: list or tuple
:param input_file: File from which the command will read data.
:type input_file: :code:`file`
:param output_file: File to which the command will write data.
:type output_file: :code:`file`
:param module_name: Name of the module calling :code:`dispatch` or :const:`None`.
:type module_name: :code:`basestring`
:returns: :const:`None`
**Example**
.. code-block:: python
:linenos:
#!/usr/bin/env python
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators
@Configuration()
class SomeStreamingCommand(StreamingCommand):
...
def stream(records):
...
dispatch(SomeStreamingCommand, module_name=__name__)
Dispatches the :code:`SomeStreamingCommand`, if and only if :code:`__name__` is equal to :code:`'__main__'`.
**Example**
.. code-block:: python
:linenos:
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators
@Configuration()
class SomeStreamingCommand(StreamingCommand):
...
def stream(records):
...
dispatch(SomeStreamingCommand)
Unconditionally dispatches :code:`SomeStreamingCommand`. | [
"Instantiates",
"and",
"executes",
"a",
"search",
"command",
"class"
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/searchcommands/search_command.py#L1056-L1116 | train | 216,920 |
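The conditional script stanza that `dispatch` implements is a two-line guard, as the docstring above shows. A self-contained sketch of that control flow, using a hypothetical stand-in command class (the real function requires a `SearchCommand` subclass and real file handles):

```python
class FakeCommand:
    """Stand-in for a SearchCommand subclass (illustrative only)."""
    def process(self, argv, ifile, ofile):
        ofile.append("processed %d args" % len(argv))

def dispatch(command_class, argv, ifile, ofile, module_name=None):
    # Execute only when called as a script, or unconditionally when
    # module_name is None -- the "conditional script stanza".
    if module_name is None or module_name == '__main__':
        command_class().process(argv, ifile, ofile)

out = []
dispatch(FakeCommand, ["cmd"], None, out, module_name="mymodule")
print(out)  # [] -- nothing runs when the module is merely imported
dispatch(FakeCommand, ["cmd"], None, out, module_name="__main__")
print(out)  # ['processed 1 args']
```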
splunk/splunk-sdk-python | splunklib/searchcommands/search_command.py | SearchCommand.search_results_info | def search_results_info(self):
""" Returns the search results info for this command invocation.
The search results info object is created from the search results info file associated with the command
invocation.
:return: Search results info, or :const:`None` if the search results info file associated with the command
invocation is inaccessible.
:rtype: SearchResultsInfo or NoneType
"""
if self._search_results_info is not None:
return self._search_results_info
if self._protocol_version == 1:
try:
path = self._input_header['infoPath']
except KeyError:
return None
else:
assert self._protocol_version == 2
try:
dispatch_dir = self._metadata.searchinfo.dispatch_dir
except AttributeError:
return None
path = os.path.join(dispatch_dir, 'info.csv')
try:
with io.open(path, 'r') as f:
reader = csv.reader(f, dialect=CsvDialect)
fields = next(reader)
values = next(reader)
except IOError as error:
if error.errno == 2:
self.logger.error('Search results info file {} does not exist.'.format(json_encode_string(path)))
return
raise
def convert_field(field):
return (field[1:] if field[0] == '_' else field).replace('.', '_')
decode = MetadataDecoder().decode
def convert_value(value):
try:
return decode(value) if len(value) > 0 else value
except ValueError:
return value
info = ObjectView(dict(imap(lambda f_v: (convert_field(f_v[0]), convert_value(f_v[1])), izip(fields, values))))
try:
count_map = info.countMap
except AttributeError:
pass
else:
count_map = count_map.split(';')
n = len(count_map)
info.countMap = dict(izip(islice(count_map, 0, n, 2), islice(count_map, 1, n, 2)))
try:
msg_type = info.msgType
msg_text = info.msg
except AttributeError:
pass
else:
messages = ifilter(lambda t_m: t_m[0] or t_m[1], izip(msg_type.split('\n'), msg_text.split('\n')))
info.msg = [Message(message) for message in messages]
del info.msgType
try:
info.vix_families = ElementTree.fromstring(info.vix_families)
except AttributeError:
pass
self._search_results_info = info
return info | python | def search_results_info(self):
""" Returns the search results info for this command invocation.
The search results info object is created from the search results info file associated with the command
invocation.
:return: Search results info, or :const:`None` if the search results info file associated with the command
invocation is inaccessible.
:rtype: SearchResultsInfo or NoneType
"""
if self._search_results_info is not None:
return self._search_results_info
if self._protocol_version == 1:
try:
path = self._input_header['infoPath']
except KeyError:
return None
else:
assert self._protocol_version == 2
try:
dispatch_dir = self._metadata.searchinfo.dispatch_dir
except AttributeError:
return None
path = os.path.join(dispatch_dir, 'info.csv')
try:
with io.open(path, 'r') as f:
reader = csv.reader(f, dialect=CsvDialect)
fields = next(reader)
values = next(reader)
except IOError as error:
if error.errno == 2:
self.logger.error('Search results info file {} does not exist.'.format(json_encode_string(path)))
return
raise
def convert_field(field):
return (field[1:] if field[0] == '_' else field).replace('.', '_')
decode = MetadataDecoder().decode
def convert_value(value):
try:
return decode(value) if len(value) > 0 else value
except ValueError:
return value
info = ObjectView(dict(imap(lambda f_v: (convert_field(f_v[0]), convert_value(f_v[1])), izip(fields, values))))
try:
count_map = info.countMap
except AttributeError:
pass
else:
count_map = count_map.split(';')
n = len(count_map)
info.countMap = dict(izip(islice(count_map, 0, n, 2), islice(count_map, 1, n, 2)))
try:
msg_type = info.msgType
msg_text = info.msg
except AttributeError:
pass
else:
messages = ifilter(lambda t_m: t_m[0] or t_m[1], izip(msg_type.split('\n'), msg_text.split('\n')))
info.msg = [Message(message) for message in messages]
del info.msgType
try:
info.vix_families = ElementTree.fromstring(info.vix_families)
except AttributeError:
pass
self._search_results_info = info
return info | [
"def",
"search_results_info",
"(",
"self",
")",
":",
"if",
"self",
".",
"_search_results_info",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_search_results_info",
"if",
"self",
".",
"_protocol_version",
"==",
"1",
":",
"try",
":",
"path",
"=",
"self",
... | Returns the search results info for this command invocation.
The search results info object is created from the search results info file associated with the command
invocation.
:return: Search results info:const:`None`, if the search results info file associated with the command
invocation is inaccessible.
:rtype: SearchResultsInfo or NoneType | [
"Returns",
"the",
"search",
"results",
"info",
"for",
"this",
"command",
"invocation",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/searchcommands/search_command.py#L252-L330 | train | 216,921 |
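The field/value massaging in `search_results_info` above — stripping a leading underscore, replacing dots, and pairing the flat `countMap` list into a dict — can be sketched standalone. The helper names here are mine, not part of splunklib:

```python
from itertools import islice

def convert_field(field):
    # '_raw' -> 'raw', 'search.count' -> 'search_count', mirroring how
    # info.csv column names become attribute names on the info object.
    return (field[1:] if field[0] == '_' else field).replace('.', '_')

def pair_count_map(raw):
    # countMap arrives as one ';'-separated string of alternating keys
    # and values; zip the even and odd positions into a dict.
    parts = raw.split(';')
    n = len(parts)
    return dict(zip(islice(parts, 0, n, 2), islice(parts, 1, n, 2)))
```

The `islice` step-2 slicing is the same pairing trick the SDK applies, just with Python 3's built-in `zip` instead of `izip`.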
splunk/splunk-sdk-python | splunklib/searchcommands/search_command.py | SearchCommand.process | def process(self, argv=sys.argv, ifile=sys.stdin, ofile=sys.stdout):
""" Process data.
:param argv: Command line arguments.
:type argv: list or tuple
:param ifile: Input data file.
:type ifile: file
:param ofile: Output data file.
:type ofile: file
:return: :const:`None`
:rtype: NoneType
"""
if len(argv) > 1:
self._process_protocol_v1(argv, ifile, ofile)
else:
self._process_protocol_v2(argv, ifile, ofile) | python | def process(self, argv=sys.argv, ifile=sys.stdin, ofile=sys.stdout):
""" Process data.
:param argv: Command line arguments.
:type argv: list or tuple
:param ifile: Input data file.
:type ifile: file
:param ofile: Output data file.
:type ofile: file
:return: :const:`None`
:rtype: NoneType
"""
if len(argv) > 1:
self._process_protocol_v1(argv, ifile, ofile)
else:
self._process_protocol_v2(argv, ifile, ofile) | [
"def",
"process",
"(",
"self",
",",
"argv",
"=",
"sys",
".",
"argv",
",",
"ifile",
"=",
"sys",
".",
"stdin",
",",
"ofile",
"=",
"sys",
".",
"stdout",
")",
":",
"if",
"len",
"(",
"argv",
")",
">",
"1",
":",
"self",
".",
"_process_protocol_v1",
"("... | Process data.
:param argv: Command line arguments.
:type argv: list or tuple
:param ifile: Input data file.
:type ifile: file
:param ofile: Output data file.
:type ofile: file
:return: :const:`None`
:rtype: NoneType | [
"Process",
"data",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/searchcommands/search_command.py#L415-L434 | train | 216,922 |
splunk/splunk-sdk-python | splunklib/searchcommands/search_command.py | SearchCommand._execute | def _execute(self, ifile, process):
""" Default processing loop
:param ifile: Input file object.
:type ifile: file
:param process: Bound method to call in processing loop.
:type process: instancemethod
:return: :const:`None`.
:rtype: NoneType
"""
self._record_writer.write_records(process(self._records(ifile)))
self.finish() | python | def _execute(self, ifile, process):
""" Default processing loop
:param ifile: Input file object.
:type ifile: file
:param process: Bound method to call in processing loop.
:type process: instancemethod
:return: :const:`None`.
:rtype: NoneType
"""
self._record_writer.write_records(process(self._records(ifile)))
self.finish() | [
"def",
"_execute",
"(",
"self",
",",
"ifile",
",",
"process",
")",
":",
"self",
".",
"_record_writer",
".",
"write_records",
"(",
"process",
"(",
"self",
".",
"_records",
"(",
"ifile",
")",
")",
")",
"self",
".",
"finish",
"(",
")"
] | Default processing loop
:param ifile: Input file object.
:type ifile: file
:param process: Bound method to call in processing loop.
:type process: instancemethod
:return: :const:`None`.
:rtype: NoneType | [
"Default",
"processing",
"loop"
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/searchcommands/search_command.py#L835-L849 | train | 216,923 |
splunk/splunk-sdk-python | utils/__init__.py | parse | def parse(argv, rules=None, config=None, **kwargs):
"""Parse the given arg vector with the default Splunk command rules."""
parser_ = parser(rules, **kwargs)
if config is not None: parser_.loadrc(config)
return parser_.parse(argv).result | python | def parse(argv, rules=None, config=None, **kwargs):
"""Parse the given arg vector with the default Splunk command rules."""
parser_ = parser(rules, **kwargs)
if config is not None: parser_.loadrc(config)
return parser_.parse(argv).result | [
"def",
"parse",
"(",
"argv",
",",
"rules",
"=",
"None",
",",
"config",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"parser_",
"=",
"parser",
"(",
"rules",
",",
"*",
"*",
"kwargs",
")",
"if",
"config",
"is",
"not",
"None",
":",
"parser_",
".",... | Parse the given arg vector with the default Splunk command rules. | [
"Parse",
"the",
"given",
"arg",
"vector",
"with",
"the",
"default",
"Splunk",
"command",
"rules",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/utils/__init__.py#L99-L103 | train | 216,924 |
splunk/splunk-sdk-python | utils/__init__.py | parser | def parser(rules=None, **kwargs):
"""Instantiate a parser with the default Splunk command rules."""
rules = RULES_SPLUNK if rules is None else dict(RULES_SPLUNK, **rules)
return Parser(rules, **kwargs) | python | def parser(rules=None, **kwargs):
"""Instantiate a parser with the default Splunk command rules."""
rules = RULES_SPLUNK if rules is None else dict(RULES_SPLUNK, **rules)
return Parser(rules, **kwargs) | [
"def",
"parser",
"(",
"rules",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"rules",
"=",
"RULES_SPLUNK",
"if",
"rules",
"is",
"None",
"else",
"dict",
"(",
"RULES_SPLUNK",
",",
"*",
"*",
"rules",
")",
"return",
"Parser",
"(",
"rules",
",",
"*",
... | Instantiate a parser with the default Splunk command rules. | [
"Instantiate",
"a",
"parser",
"with",
"the",
"default",
"Splunk",
"command",
"rules",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/utils/__init__.py#L105-L108 | train | 216,925 |
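The `dict(RULES_SPLUNK, **rules)` idiom in `parser` above merges caller-supplied rules over the built-in defaults, with the caller winning on key collisions. A minimal sketch with made-up rule entries standing in for the real `RULES_SPLUNK` table:

```python
# Hypothetical stand-in for RULES_SPLUNK; the real table lives in
# splunklib's utils package.
DEFAULT_RULES = {
    'host': {'flags': ['--host'], 'default': 'localhost'},
    'port': {'flags': ['--port'], 'default': '8089'},
}

def merged_rules(rules=None):
    # Keyword expansion of the caller's dict makes its entries override
    # the defaults, while untouched defaults survive.
    return DEFAULT_RULES if rules is None else dict(DEFAULT_RULES, **rules)
```

Note that when `rules is None` the default table is returned as-is, so callers should treat it as read-only.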
splunk/splunk-sdk-python | utils/cmdopts.py | cmdline | def cmdline(argv, rules=None, config=None, **kwargs):
"""Simplified cmdopts interface that does not default any parsing rules
and that does not allow compounding calls to the parser."""
parser = Parser(rules, **kwargs)
if config is not None: parser.loadrc(config)
return parser.parse(argv).result | python | def cmdline(argv, rules=None, config=None, **kwargs):
"""Simplified cmdopts interface that does not default any parsing rules
and that does not allow compounding calls to the parser."""
parser = Parser(rules, **kwargs)
if config is not None: parser.loadrc(config)
return parser.parse(argv).result | [
"def",
"cmdline",
"(",
"argv",
",",
"rules",
"=",
"None",
",",
"config",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"parser",
"=",
"Parser",
"(",
"rules",
",",
"*",
"*",
"kwargs",
")",
"if",
"config",
"is",
"not",
"None",
":",
"parser",
".",... | Simplified cmdopts interface that does not default any parsing rules
and that does not allow compounding calls to the parser. | [
"Simplified",
"cmdopts",
"interface",
"that",
"does",
"not",
"default",
"any",
"parsing",
"rules",
"and",
"that",
"does",
"not",
"allow",
"compounding",
"calls",
"to",
"the",
"parser",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/utils/cmdopts.py#L113-L118 | train | 216,926 |
splunk/splunk-sdk-python | utils/cmdopts.py | Parser.init | def init(self, rules):
"""Initialize the parser with the given command rules."""
# Initialize the option parser
for dest in rules.keys():
rule = rules[dest]
# Assign defaults ourselves here, instead of in the option parser
# itself in order to allow for multiple calls to parse (dont want
# subsequent calls to override previous values with default vals).
if 'default' in rule:
self.result['kwargs'][dest] = rule['default']
flags = rule['flags']
kwargs = { 'action': rule.get('action', "store") }
# NOTE: Don't provision the parser with defaults here, per above.
for key in ['callback', 'help', 'metavar', 'type']:
if key in rule: kwargs[key] = rule[key]
self.add_option(*flags, dest=dest, **kwargs)
# Remember the dest vars that we see, so that we can merge results
self.dests.add(dest) | python | def init(self, rules):
"""Initialize the parser with the given command rules."""
# Initialize the option parser
for dest in rules.keys():
rule = rules[dest]
# Assign defaults ourselves here, instead of in the option parser
# itself in order to allow for multiple calls to parse (dont want
# subsequent calls to override previous values with default vals).
if 'default' in rule:
self.result['kwargs'][dest] = rule['default']
flags = rule['flags']
kwargs = { 'action': rule.get('action', "store") }
# NOTE: Don't provision the parser with defaults here, per above.
for key in ['callback', 'help', 'metavar', 'type']:
if key in rule: kwargs[key] = rule[key]
self.add_option(*flags, dest=dest, **kwargs)
# Remember the dest vars that we see, so that we can merge results
self.dests.add(dest) | [
"def",
"init",
"(",
"self",
",",
"rules",
")",
":",
"# Initialize the option parser",
"for",
"dest",
"in",
"rules",
".",
"keys",
"(",
")",
":",
"rule",
"=",
"rules",
"[",
"dest",
"]",
"# Assign defaults ourselves here, instead of in the option parser",
"# itself in ... | Initialize the parser with the given command rules. | [
"Initialize",
"the",
"parser",
"with",
"the",
"given",
"command",
"rules",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/utils/cmdopts.py#L49-L69 | train | 216,927 |
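`Parser.init` drives `optparse` from a rules table while recording defaults outside the parser, so a later merge step can tell "unset" apart from "defaulted". A stripped-down, standalone sketch of that pattern (a plain function, not the SDK's `Parser` subclass):

```python
from optparse import OptionParser

def build_parser(rules):
    # Build an OptionParser from a rules table. Defaults are kept in a
    # separate dict rather than given to optparse, so an option the user
    # never passed comes back as None instead of its default value.
    parser = OptionParser(add_help_option=False)
    defaults = {}
    for dest, rule in rules.items():
        if 'default' in rule:
            defaults[dest] = rule['default']
        kwargs = {'action': rule.get('action', 'store')}
        for key in ('help', 'metavar', 'type'):
            if key in rule:
                kwargs[key] = rule[key]
        parser.add_option(*rule['flags'], dest=dest, **kwargs)
    return parser, defaults
```

Keeping defaults out of the parser is what lets the SDK call `parse` repeatedly without later calls clobbering earlier values.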
splunk/splunk-sdk-python | utils/cmdopts.py | Parser.loadif | def loadif(self, filepath):
"""Load the given filepath if it exists, otherwise ignore."""
if path.isfile(filepath): self.load(filepath)
return self | python | def loadif(self, filepath):
"""Load the given filepath if it exists, otherwise ignore."""
if path.isfile(filepath): self.load(filepath)
return self | [
"def",
"loadif",
"(",
"self",
",",
"filepath",
")",
":",
"if",
"path",
".",
"isfile",
"(",
"filepath",
")",
":",
"self",
".",
"load",
"(",
"filepath",
")",
"return",
"self"
] | Load the given filepath if it exists, otherwise ignore. | [
"Load",
"the",
"given",
"filepath",
"if",
"it",
"exists",
"otherwise",
"ignore",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/utils/cmdopts.py#L88-L91 | train | 216,928 |
splunk/splunk-sdk-python | utils/cmdopts.py | Parser.parse | def parse(self, argv):
"""Parse the given argument vector."""
kwargs, args = self.parse_args(argv)
self.result['args'] += args
# Annoying that parse_args doesn't just return a dict
for dest in self.dests:
value = getattr(kwargs, dest)
if value is not None:
self.result['kwargs'][dest] = value
return self | python | def parse(self, argv):
"""Parse the given argument vector."""
kwargs, args = self.parse_args(argv)
self.result['args'] += args
# Annoying that parse_args doesn't just return a dict
for dest in self.dests:
value = getattr(kwargs, dest)
if value is not None:
self.result['kwargs'][dest] = value
return self | [
"def",
"parse",
"(",
"self",
",",
"argv",
")",
":",
"kwargs",
",",
"args",
"=",
"self",
".",
"parse_args",
"(",
"argv",
")",
"self",
".",
"result",
"[",
"'args'",
"]",
"+=",
"args",
"# Annoying that parse_args doesn't just return a dict",
"for",
"dest",
"in"... | Parse the given argument vector. | [
"Parse",
"the",
"given",
"argument",
"vector",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/utils/cmdopts.py#L98-L107 | train | 216,929 |
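The merge step in `Parser.parse` above copies only options the user actually supplied (non-`None`), which is what makes repeated `parse` calls accumulate instead of resetting earlier values. Isolated as a standalone helper (the function name is mine):

```python
from types import SimpleNamespace

def merge_parsed(result_kwargs, dests, options):
    # optparse leaves unspecified options as None; skipping None keeps
    # values from earlier parse calls (or recorded defaults) intact.
    for dest in dests:
        value = getattr(options, dest)
        if value is not None:
            result_kwargs[dest] = value
    return result_kwargs
```

A `SimpleNamespace` stands in here for the `Values` object that `parse_args` returns.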
splunk/splunk-sdk-python | examples/genevents.py | feed_index | def feed_index(service, opts):
"""Feed the named index in a specific manner."""
indexname = opts.args[0]
itype = opts.kwargs['ingest']
# get index handle
try:
index = service.indexes[indexname]
except KeyError:
print("Index %s not found" % indexname)
return
if itype in ["stream", "submit"]:
stream = index.attach()
else:
# create a tcp input if one doesn't exist
input_host = opts.kwargs.get("inputhost", SPLUNK_HOST)
input_port = int(opts.kwargs.get("inputport", SPLUNK_PORT))
input_name = "tcp:%s" % (input_port)
if input_name not in service.inputs.list():
service.inputs.create("tcp", input_port, index=indexname)
# connect to socket
ingest = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ingest.connect((input_host, input_port))
count = 0
lastevent = ""
try:
for i in range(0, 10):
for j in range(0, 5000):
lastevent = "%s: event bunch %d, number %d\n" % \
(datetime.datetime.now().isoformat(), i, j)
if itype == "stream":
stream.write(lastevent + "\n")
elif itype == "submit":
index.submit(lastevent + "\n")
else:
ingest.send(lastevent + "\n")
count = count + 1
print("submitted %d events, sleeping 1 second" % count)
time.sleep(1)
except KeyboardInterrupt:
print("^C detected, last event written:")
print(lastevent) | python | def feed_index(service, opts):
"""Feed the named index in a specific manner."""
indexname = opts.args[0]
itype = opts.kwargs['ingest']
# get index handle
try:
index = service.indexes[indexname]
except KeyError:
print("Index %s not found" % indexname)
return
if itype in ["stream", "submit"]:
stream = index.attach()
else:
# create a tcp input if one doesn't exist
input_host = opts.kwargs.get("inputhost", SPLUNK_HOST)
input_port = int(opts.kwargs.get("inputport", SPLUNK_PORT))
input_name = "tcp:%s" % (input_port)
if input_name not in service.inputs.list():
service.inputs.create("tcp", input_port, index=indexname)
# connect to socket
ingest = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ingest.connect((input_host, input_port))
count = 0
lastevent = ""
try:
for i in range(0, 10):
for j in range(0, 5000):
lastevent = "%s: event bunch %d, number %d\n" % \
(datetime.datetime.now().isoformat(), i, j)
if itype == "stream":
stream.write(lastevent + "\n")
elif itype == "submit":
index.submit(lastevent + "\n")
else:
ingest.send(lastevent + "\n")
count = count + 1
print("submitted %d events, sleeping 1 second" % count)
time.sleep(1)
except KeyboardInterrupt:
print("^C detected, last event written:")
print(lastevent) | [
"def",
"feed_index",
"(",
"service",
",",
"opts",
")",
":",
"indexname",
"=",
"opts",
".",
"args",
"[",
"0",
"]",
"itype",
"=",
"opts",
".",
"kwargs",
"[",
"'ingest'",
"]",
"# get index handle",
"try",
":",
"index",
"=",
"service",
".",
"indexes",
"[",... | Feed the named index in a specific manner. | [
"Feed",
"the",
"named",
"index",
"in",
"a",
"specific",
"manner",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/genevents.py#L58-L106 | train | 216,930 |
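The events generated in the loop above follow a simple timestamped line format; pulling that out makes it easy to verify (the helper name is mine, not the example's):

```python
import datetime

def make_event(bunch, number, now=None):
    # One generated event line: ISO-8601 timestamp plus the two loop
    # counters, terminated by a newline.
    if now is None:
        now = datetime.datetime.now()
    return "%s: event bunch %d, number %d\n" % (now.isoformat(), bunch, number)
```

Note that `feed_index` appends another `"\n"` when writing even though `lastevent` already ends with one, so each ingested event is followed by a blank line.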
splunk/splunk-sdk-python | splunklib/modularinput/utils.py | xml_compare | def xml_compare(expected, found):
"""Checks equality of two ``ElementTree`` objects.
:param expected: An ``ElementTree`` object.
:param found: An ``ElementTree`` object.
:return: ``Boolean``, whether the two objects are equal.
"""
# if comparing the same ET object
if expected == found:
return True
# compare element attributes, ignoring order
if set(expected.items()) != set(found.items()):
return False
# check for equal number of children
expected_children = list(expected)
found_children = list(found)
if len(expected_children) != len(found_children):
return False
# compare children
if not all([xml_compare(a, b) for a, b in zip(expected_children, found_children)]):
return False
# compare elements, if there is no text node, return True
if (expected.text is None or expected.text.strip() == "") \
and (found.text is None or found.text.strip() == ""):
return True
else:
return expected.tag == found.tag and expected.text == found.text \
and expected.attrib == found.attrib | python | def xml_compare(expected, found):
"""Checks equality of two ``ElementTree`` objects.
:param expected: An ``ElementTree`` object.
:param found: An ``ElementTree`` object.
:return: ``Boolean``, whether the two objects are equal.
"""
# if comparing the same ET object
if expected == found:
return True
# compare element attributes, ignoring order
if set(expected.items()) != set(found.items()):
return False
# check for equal number of children
expected_children = list(expected)
found_children = list(found)
if len(expected_children) != len(found_children):
return False
# compare children
if not all([xml_compare(a, b) for a, b in zip(expected_children, found_children)]):
return False
# compare elements, if there is no text node, return True
if (expected.text is None or expected.text.strip() == "") \
and (found.text is None or found.text.strip() == ""):
return True
else:
return expected.tag == found.tag and expected.text == found.text \
and expected.attrib == found.attrib | [
"def",
"xml_compare",
"(",
"expected",
",",
"found",
")",
":",
"# if comparing the same ET object",
"if",
"expected",
"==",
"found",
":",
"return",
"True",
"# compare element attributes, ignoring order",
"if",
"set",
"(",
"expected",
".",
"items",
"(",
")",
")",
"... | Checks equality of two ``ElementTree`` objects.
:param expected: An ``ElementTree`` object.
:param found: An ``ElementTree`` object.
:return: ``Boolean``, whether the two objects are equal. | [
"Checks",
"equality",
"of",
"two",
"ElementTree",
"objects",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/modularinput/utils.py#L19-L51 | train | 216,931 |
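A self-contained version of the comparison above can be exercised directly against `xml.etree.ElementTree`. This sketch is simplified and slightly stricter than the original: when both text nodes are empty it still requires matching tags, whereas `xml_compare` returns `True` unconditionally in that branch:

```python
import xml.etree.ElementTree as ET

def xml_equal(expected, found):
    # Attributes must match as a set, so attribute order is ignored.
    if set(expected.items()) != set(found.items()):
        return False
    expected_children, found_children = list(expected), list(found)
    if len(expected_children) != len(found_children):
        return False
    # Children are compared pairwise, in document order.
    if not all(xml_equal(a, b) for a, b in zip(expected_children, found_children)):
        return False
    # With no meaningful text on either side, only the tag matters.
    if not (expected.text or '').strip() and not (found.text or '').strip():
        return expected.tag == found.tag
    return expected.tag == found.tag and expected.text == found.text
```

The `(expected.text or '').strip()` idiom folds together the original's separate `is None` and empty-string checks.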
splunk/splunk-sdk-python | examples/job.py | cmdline | def cmdline(argv, flags):
"""A cmdopts wrapper that takes a list of flags and builds the
corresponding cmdopts rules to match those flags."""
rules = dict([(flag, {'flags': ["--%s" % flag]}) for flag in flags])
return parse(argv, rules) | python | def cmdline(argv, flags):
"""A cmdopts wrapper that takes a list of flags and builds the
corresponding cmdopts rules to match those flags."""
rules = dict([(flag, {'flags': ["--%s" % flag]}) for flag in flags])
return parse(argv, rules) | [
"def",
"cmdline",
"(",
"argv",
",",
"flags",
")",
":",
"rules",
"=",
"dict",
"(",
"[",
"(",
"flag",
",",
"{",
"'flags'",
":",
"[",
"\"--%s\"",
"%",
"flag",
"]",
"}",
")",
"for",
"flag",
"in",
"flags",
"]",
")",
"return",
"parse",
"(",
"argv",
"... | A cmdopts wrapper that takes a list of flags and builds the
corresponding cmdopts rules to match those flags. | [
"A",
"cmdopts",
"wrapper",
"that",
"takes",
"a",
"list",
"of",
"flags",
"and",
"builds",
"the",
"corresponding",
"cmdopts",
"rules",
"to",
"match",
"those",
"flags",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/job.py#L107-L111 | train | 216,932 |
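The one-liner in `cmdline` above expands a flat list of flag names into the rules dict that `parse` expects. As a standalone helper (name is mine):

```python
def rules_from_flags(flags):
    # Each bare flag name becomes a '--name' long option with no
    # default, no type, and the implicit 'store' action.
    return dict((flag, {'flags': ["--%s" % flag]}) for flag in flags)
```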
splunk/splunk-sdk-python | examples/job.py | output | def output(stream):
"""Write the contents of the given stream to stdout."""
while True:
content = stream.read(1024)
if len(content) == 0: break
sys.stdout.write(content) | python | def output(stream):
"""Write the contents of the given stream to stdout."""
while True:
content = stream.read(1024)
if len(content) == 0: break
sys.stdout.write(content) | [
"def",
"output",
"(",
"stream",
")",
":",
"while",
"True",
":",
"content",
"=",
"stream",
".",
"read",
"(",
"1024",
")",
"if",
"len",
"(",
"content",
")",
"==",
"0",
":",
"break",
"sys",
".",
"stdout",
".",
"write",
"(",
"content",
")"
] | Write the contents of the given stream to stdout. | [
"Write",
"the",
"contents",
"of",
"the",
"given",
"stream",
"to",
"stdout",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/job.py#L113-L118 | train | 216,933 |
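`output` above is a fixed-size chunked copy loop. Generalized to take any destination stream, it can be tested without touching stdout:

```python
import io

def copy_stream(src, dst, bufsize=1024):
    # Read in bufsize chunks until the source is exhausted; an empty
    # read signals end-of-stream for both text and byte streams.
    while True:
        chunk = src.read(bufsize)
        if len(chunk) == 0:
            break
        dst.write(chunk)
```

`output(stream)` is then just `copy_stream(stream, sys.stdout)` with the 1024-byte buffer hard-coded.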
splunk/splunk-sdk-python | examples/job.py | Program.create | def create(self, argv):
"""Create a search job."""
opts = cmdline(argv, FLAGS_CREATE)
if len(opts.args) != 1:
error("Command requires a search expression", 2)
query = opts.args[0]
job = self.service.jobs.create(opts.args[0], **opts.kwargs)
print(job.sid) | python | def create(self, argv):
"""Create a search job."""
opts = cmdline(argv, FLAGS_CREATE)
if len(opts.args) != 1:
error("Command requires a search expression", 2)
query = opts.args[0]
job = self.service.jobs.create(opts.args[0], **opts.kwargs)
print(job.sid) | [
"def",
"create",
"(",
"self",
",",
"argv",
")",
":",
"opts",
"=",
"cmdline",
"(",
"argv",
",",
"FLAGS_CREATE",
")",
"if",
"len",
"(",
"opts",
".",
"args",
")",
"!=",
"1",
":",
"error",
"(",
"\"Command requires a search expression\"",
",",
"2",
")",
"qu... | Create a search job. | [
"Create",
"a",
"search",
"job",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/job.py#L127-L134 | train | 216,934 |
splunk/splunk-sdk-python | examples/job.py | Program.events | def events(self, argv):
"""Retrieve events for the specified search jobs."""
opts = cmdline(argv, FLAGS_EVENTS)
self.foreach(opts.args, lambda job:
output(job.events(**opts.kwargs))) | python | def events(self, argv):
"""Retrieve events for the specified search jobs."""
opts = cmdline(argv, FLAGS_EVENTS)
self.foreach(opts.args, lambda job:
output(job.events(**opts.kwargs))) | [
"def",
"events",
"(",
"self",
",",
"argv",
")",
":",
"opts",
"=",
"cmdline",
"(",
"argv",
",",
"FLAGS_EVENTS",
")",
"self",
".",
"foreach",
"(",
"opts",
".",
"args",
",",
"lambda",
"job",
":",
"output",
"(",
"job",
".",
"events",
"(",
"*",
"*",
"... | Retrieve events for the specified search jobs. | [
"Retrieve",
"events",
"for",
"the",
"specified",
"search",
"jobs",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/job.py#L136-L140 | train | 216,935 |
splunk/splunk-sdk-python | examples/job.py | Program.foreach | def foreach(self, argv, func):
"""Apply the function to each job specified in the argument vector."""
if len(argv) == 0:
error("Command requires a search specifier.", 2)
for item in argv:
job = self.lookup(item)
if job is None:
error("Search job '%s' does not exist" % item, 2)
func(job) | python | def foreach(self, argv, func):
"""Apply the function to each job specified in the argument vector."""
if len(argv) == 0:
error("Command requires a search specifier.", 2)
for item in argv:
job = self.lookup(item)
if job is None:
error("Search job '%s' does not exist" % item, 2)
func(job) | [
"def",
"foreach",
"(",
"self",
",",
"argv",
",",
"func",
")",
":",
"if",
"len",
"(",
"argv",
")",
"==",
"0",
":",
"error",
"(",
"\"Command requires a search specifier.\"",
",",
"2",
")",
"for",
"item",
"in",
"argv",
":",
"job",
"=",
"self",
".",
"loo... | Apply the function to each job specified in the argument vector. | [
"Apply",
"the",
"function",
"to",
"each",
"job",
"specified",
"in",
"the",
"argument",
"vector",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/job.py#L146-L154 | train | 216,936 |
splunk/splunk-sdk-python | examples/job.py | Program.list | def list(self, argv):
"""List all current search jobs if no jobs specified, otherwise
list the properties of the specified jobs."""
def read(job):
for key in sorted(job.content.keys()):
# Ignore some fields that make the output hard to read and
# that are available via other commands.
if key in ["performance"]: continue
print("%s: %s" % (key, job.content[key]))
if len(argv) == 0:
index = 0
for job in self.service.jobs:
print("@%d : %s" % (index, job.sid))
index += 1
return
self.foreach(argv, read) | python | def list(self, argv):
"""List all current search jobs if no jobs specified, otherwise
list the properties of the specified jobs."""
def read(job):
for key in sorted(job.content.keys()):
# Ignore some fields that make the output hard to read and
# that are available via other commands.
if key in ["performance"]: continue
print("%s: %s" % (key, job.content[key]))
if len(argv) == 0:
index = 0
for job in self.service.jobs:
print("@%d : %s" % (index, job.sid))
index += 1
return
self.foreach(argv, read) | [
"def",
"list",
"(",
"self",
",",
"argv",
")",
":",
"def",
"read",
"(",
"job",
")",
":",
"for",
"key",
"in",
"sorted",
"(",
"job",
".",
"content",
".",
"keys",
"(",
")",
")",
":",
"# Ignore some fields that make the output hard to read and",
"# that are avail... | List all current search jobs if no jobs specified, otherwise
list the properties of the specified jobs. | [
"List",
"all",
"current",
"search",
"jobs",
"if",
"no",
"jobs",
"specified",
"otherwise",
"list",
"the",
"properties",
"of",
"the",
"specified",
"jobs",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/job.py#L156-L174 | train | 216,937 |
splunk/splunk-sdk-python | examples/job.py | Program.preview | def preview(self, argv):
"""Retrieve the preview for the specified search jobs."""
opts = cmdline(argv, FLAGS_RESULTS)
self.foreach(opts.args, lambda job:
output(job.preview(**opts.kwargs))) | python | def preview(self, argv):
"""Retrieve the preview for the specified search jobs."""
opts = cmdline(argv, FLAGS_RESULTS)
self.foreach(opts.args, lambda job:
output(job.preview(**opts.kwargs))) | [
"def",
"preview",
"(",
"self",
",",
"argv",
")",
":",
"opts",
"=",
"cmdline",
"(",
"argv",
",",
"FLAGS_RESULTS",
")",
"self",
".",
"foreach",
"(",
"opts",
".",
"args",
",",
"lambda",
"job",
":",
"output",
"(",
"job",
".",
"preview",
"(",
"*",
"*",
... | Retrieve the preview for the specified search jobs. | [
"Retrieve",
"the",
"preview",
"for",
"the",
"specified",
"search",
"jobs",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/job.py#L176-L180 | train | 216,938 |
splunk/splunk-sdk-python | examples/job.py | Program.run | def run(self, argv):
"""Dispatch the given command."""
command = argv[0]
handlers = {
'cancel': self.cancel,
'create': self.create,
'events': self.events,
'finalize': self.finalize,
'list': self.list,
'pause': self.pause,
'preview': self.preview,
'results': self.results,
'searchlog': self.searchlog,
'summary': self.summary,
'perf': self.perf,
'timeline': self.timeline,
'touch': self.touch,
'unpause': self.unpause,
}
handler = handlers.get(command, None)
if handler is None:
error("Unrecognized command: %s" % command, 2)
handler(argv[1:]) | python | def run(self, argv):
"""Dispatch the given command."""
command = argv[0]
handlers = {
'cancel': self.cancel,
'create': self.create,
'events': self.events,
'finalize': self.finalize,
'list': self.list,
'pause': self.pause,
'preview': self.preview,
'results': self.results,
'searchlog': self.searchlog,
'summary': self.summary,
'perf': self.perf,
'timeline': self.timeline,
'touch': self.touch,
'unpause': self.unpause,
}
handler = handlers.get(command, None)
if handler is None:
error("Unrecognized command: %s" % command, 2)
handler(argv[1:]) | [
"def",
"run",
"(",
"self",
",",
"argv",
")",
":",
"command",
"=",
"argv",
"[",
"0",
"]",
"handlers",
"=",
"{",
"'cancel'",
":",
"self",
".",
"cancel",
",",
"'create'",
":",
"self",
".",
"create",
",",
"'events'",
":",
"self",
".",
"events",
",",
... | Dispatch the given command. | [
"Dispatch",
"the",
"given",
"command",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/job.py#L209-L231 | train | 216,939 |
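`Program.run` above is a plain dict-based command dispatcher. The same shape, reduced to a testable standalone sketch (the handlers here are stand-ins, and the unknown-command error is raised rather than printed):

```python
def dispatch(command, argv, handlers):
    # Look up the handler by command name; unknown names are an error.
    handler = handlers.get(command)
    if handler is None:
        raise KeyError("Unrecognized command: %s" % command)
    return handler(argv)
```

Using a dict instead of an if/elif chain keeps adding a subcommand down to one line in the table.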
splunk/splunk-sdk-python | examples/job.py | Program.searchlog | def searchlog(self, argv):
"""Retrieve the searchlog for the specified search jobs."""
opts = cmdline(argv, FLAGS_SEARCHLOG)
self.foreach(opts.args, lambda job:
output(job.searchlog(**opts.kwargs))) | python | def searchlog(self, argv):
"""Retrieve the searchlog for the specified search jobs."""
opts = cmdline(argv, FLAGS_SEARCHLOG)
self.foreach(opts.args, lambda job:
output(job.searchlog(**opts.kwargs))) | [
"def",
"searchlog",
"(",
"self",
",",
"argv",
")",
":",
"opts",
"=",
"cmdline",
"(",
"argv",
",",
"FLAGS_SEARCHLOG",
")",
"self",
".",
"foreach",
"(",
"opts",
".",
"args",
",",
"lambda",
"job",
":",
"output",
"(",
"job",
".",
"searchlog",
"(",
"*",
... | Retrieve the searchlog for the specified search jobs. | [
"Retrieve",
"the",
"searchlog",
"for",
"the",
"specified",
"search",
"jobs",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/job.py#L233-L237 | train | 216,940 |
splunk/splunk-sdk-python | examples/handlers/handler_certs.py | handler | def handler(ca_file=None):
"""Returns an HTTP request handler configured with the given ca_file."""
def request(url, message, **kwargs):
scheme, host, port, path = spliturl(url)
if scheme != "https":
ValueError("unsupported scheme: %s" % scheme)
connection = HTTPSConnection(host, port, ca_file)
try:
body = message.get('body', "")
headers = dict(message.get('headers', []))
connection.request(message['method'], path, body, headers)
response = connection.getresponse()
finally:
connection.close()
return {
'status': response.status,
'reason': response.reason,
'headers': response.getheaders(),
'body': BytesIO(response.read())
}
return request | python | def handler(ca_file=None):
"""Returns an HTTP request handler configured with the given ca_file."""
def request(url, message, **kwargs):
scheme, host, port, path = spliturl(url)
if scheme != "https":
ValueError("unsupported scheme: %s" % scheme)
connection = HTTPSConnection(host, port, ca_file)
try:
body = message.get('body', "")
headers = dict(message.get('headers', []))
connection.request(message['method'], path, body, headers)
response = connection.getresponse()
finally:
connection.close()
return {
'status': response.status,
'reason': response.reason,
'headers': response.getheaders(),
'body': BytesIO(response.read())
}
return request | [
"def",
"handler",
"(",
"ca_file",
"=",
"None",
")",
":",
"def",
"request",
"(",
"url",
",",
"message",
",",
"*",
"*",
"kwargs",
")",
":",
"scheme",
",",
"host",
",",
"port",
",",
"path",
"=",
"spliturl",
"(",
"url",
")",
"if",
"scheme",
"!=",
"\"... | Returns an HTTP request handler configured with the given ca_file. | [
"Returns",
"an",
"HTTP",
"request",
"handler",
"configured",
"with",
"the",
"given",
"ca_file",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/handlers/handler_certs.py#L90-L115 | train | 216,941 |
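The `spliturl` helper used above (defined elsewhere in the example) breaks a URL into scheme, host, port, and path. A rough equivalent on top of the standard library — the port-defaulting behavior here is my assumption, not necessarily what the example's helper does:

```python
from urllib.parse import urlsplit

def split_url(url):
    parts = urlsplit(url)
    # Fall back to the scheme's conventional port when none is given.
    port = parts.port or (443 if parts.scheme == 'https' else 80)
    return parts.scheme, parts.hostname, port, parts.path or '/'
```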
splunk/splunk-sdk-python | examples/index.py | Program.list | def list(self, argv):
"""List available indexes if no names provided, otherwise list the
properties of the named indexes."""
def read(index):
print(index.name)
for key in sorted(index.content.keys()):
value = index.content[key]
print(" %s: %s" % (key, value))
if len(argv) == 0:
for index in self.service.indexes:
count = index['totalEventCount']
print("%s (%s)" % (index.name, count))
else:
self.foreach(argv, read) | python | def list(self, argv):
"""List available indexes if no names provided, otherwise list the
properties of the named indexes."""
def read(index):
print(index.name)
for key in sorted(index.content.keys()):
value = index.content[key]
print(" %s: %s" % (key, value))
if len(argv) == 0:
for index in self.service.indexes:
count = index['totalEventCount']
print("%s (%s)" % (index.name, count))
else:
self.foreach(argv, read) | [
"def",
"list",
"(",
"self",
",",
"argv",
")",
":",
"def",
"read",
"(",
"index",
")",
":",
"print",
"(",
"index",
".",
"name",
")",
"for",
"key",
"in",
"sorted",
"(",
"index",
".",
"content",
".",
"keys",
"(",
")",
")",
":",
"value",
"=",
"index... | List available indexes if no names provided, otherwise list the
properties of the named indexes. | [
"List",
"available",
"indexes",
"if",
"no",
"names",
"provided",
"otherwise",
"list",
"the",
"properties",
"of",
"the",
"named",
"indexes",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/index.py#L101-L116 | train | 216,942 |
splunk/splunk-sdk-python | examples/index.py | Program.foreach | def foreach(self, argv, func):
"""Apply the function to each index named in the argument vector."""
opts = cmdline(argv)
if len(opts.args) == 0:
error("Command requires an index name", 2)
for name in opts.args:
if name not in self.service.indexes:
error("Index '%s' does not exist" % name, 2)
index = self.service.indexes[name]
func(index) | python | def foreach(self, argv, func):
"""Apply the function to each index named in the argument vector."""
opts = cmdline(argv)
if len(opts.args) == 0:
error("Command requires an index name", 2)
for name in opts.args:
if name not in self.service.indexes:
error("Index '%s' does not exist" % name, 2)
index = self.service.indexes[name]
func(index) | [
"def",
"foreach",
"(",
"self",
",",
"argv",
",",
"func",
")",
":",
"opts",
"=",
"cmdline",
"(",
"argv",
")",
"if",
"len",
"(",
"opts",
".",
"args",
")",
"==",
"0",
":",
"error",
"(",
"\"Command requires an index name\"",
",",
"2",
")",
"for",
"name",... | Apply the function to each index named in the argument vector. | [
"Apply",
"the",
"function",
"to",
"each",
"index",
"named",
"in",
"the",
"argument",
"vector",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/index.py#L134-L143 | train | 216,943 |
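A self-contained sketch of the `foreach` pattern above (validate every name, then apply the function); the `known` mapping here is a hypothetical stand-in for `self.service.indexes`:

```python
def foreach(names, known, func):
    # Mirrors Program.foreach: fail fast on an empty argument list or an
    # unknown name, otherwise apply func to each resolved item.
    if not names:
        raise ValueError("Command requires an index name")
    results = []
    for name in names:
        if name not in known:
            raise KeyError("Index '%s' does not exist" % name)
        results.append(func(known[name]))
    return results

known = {"main": 42, "history": 7}
doubled = foreach(["main", "history"], known, lambda count: count * 2)
```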
splunk/splunk-sdk-python | examples/index.py | Program.update | def update(self, argv):
"""Update an index according to the given argument vector."""
if len(argv) == 0:
error("Command requires an index name", 2)
name = argv[0]
if name not in self.service.indexes:
error("Index '%s' does not exist" % name, 2)
index = self.service.indexes[name]
# Read index metadata and construct command line parser rules that
# correspond to each editable field.
# Request editable fields
fields = self.service.indexes.itemmeta().fields.optional
# Build parser rules
rules = dict([(field, {'flags': ["--%s" % field]}) for field in fields])
# Parse the argument vector
opts = cmdline(argv, rules)
# Execute the edit request
index.update(**opts.kwargs) | python | def update(self, argv):
"""Update an index according to the given argument vector."""
if len(argv) == 0:
error("Command requires an index name", 2)
name = argv[0]
if name not in self.service.indexes:
error("Index '%s' does not exist" % name, 2)
index = self.service.indexes[name]
# Read index metadata and construct command line parser rules that
# correspond to each editable field.
# Request editable fields
fields = self.service.indexes.itemmeta().fields.optional
# Build parser rules
rules = dict([(field, {'flags': ["--%s" % field]}) for field in fields])
# Parse the argument vector
opts = cmdline(argv, rules)
# Execute the edit request
index.update(**opts.kwargs) | [
"def",
"update",
"(",
"self",
",",
"argv",
")",
":",
"if",
"len",
"(",
"argv",
")",
"==",
"0",
":",
"error",
"(",
"\"Command requires an index name\"",
",",
"2",
")",
"name",
"=",
"argv",
"[",
"0",
"]",
"if",
"name",
"not",
"in",
"self",
".",
"serv... | Update an index according to the given argument vector. | [
"Update",
"an",
"index",
"according",
"to",
"the",
"given",
"argument",
"vector",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/index.py#L145-L169 | train | 216,944 |
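The rule-building step in `Program.update` — one `--<field>` flag per editable field — can be reproduced on its own. The field names below are illustrative, not a real `itemmeta()` response:

```python
def build_rules(fields):
    # Mirrors Program.update: one command-line rule with a "--<field>"
    # flag for each editable field reported by the server.
    return dict((field, {"flags": ["--%s" % field]}) for field in fields)

rules = build_rules(["maxTotalDataSizeMB", "frozenTimePeriodInSecs"])
```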
splunk/splunk-sdk-python | splunklib/modularinput/argument.py | Argument.add_to_document | def add_to_document(self, parent):
"""Adds an ``Argument`` object to this ElementTree document.
Adds an <arg> subelement to the parent element, typically <args>
and sets up its subelements with their respective text.
:param parent: An ``ET.Element`` to be the parent of a new <arg> subelement
:returns: An ``ET.Element`` object representing this argument.
"""
arg = ET.SubElement(parent, "arg")
arg.set("name", self.name)
if self.title is not None:
ET.SubElement(arg, "title").text = self.title
if self.description is not None:
ET.SubElement(arg, "description").text = self.description
if self.validation is not None:
ET.SubElement(arg, "validation").text = self.validation
# add all other subelements to this Argument, represented by (tag, text)
subelements = [
("data_type", self.data_type),
("required_on_edit", self.required_on_edit),
("required_on_create", self.required_on_create)
]
for name, value in subelements:
ET.SubElement(arg, name).text = str(value).lower()
return arg | python | def add_to_document(self, parent):
"""Adds an ``Argument`` object to this ElementTree document.
Adds an <arg> subelement to the parent element, typically <args>
and sets up its subelements with their respective text.
:param parent: An ``ET.Element`` to be the parent of a new <arg> subelement
:returns: An ``ET.Element`` object representing this argument.
"""
arg = ET.SubElement(parent, "arg")
arg.set("name", self.name)
if self.title is not None:
ET.SubElement(arg, "title").text = self.title
if self.description is not None:
ET.SubElement(arg, "description").text = self.description
if self.validation is not None:
ET.SubElement(arg, "validation").text = self.validation
# add all other subelements to this Argument, represented by (tag, text)
subelements = [
("data_type", self.data_type),
("required_on_edit", self.required_on_edit),
("required_on_create", self.required_on_create)
]
for name, value in subelements:
ET.SubElement(arg, name).text = str(value).lower()
return arg | [
"def",
"add_to_document",
"(",
"self",
",",
"parent",
")",
":",
"arg",
"=",
"ET",
".",
"SubElement",
"(",
"parent",
",",
"\"arg\"",
")",
"arg",
".",
"set",
"(",
"\"name\"",
",",
"self",
".",
"name",
")",
"if",
"self",
".",
"title",
"is",
"not",
"No... | Adds an ``Argument`` object to this ElementTree document.
Adds an <arg> subelement to the parent element, typically <args>
and sets up its subelements with their respective text.
:param parent: An ``ET.Element`` to be the parent of a new <arg> subelement
:returns: An ``ET.Element`` object representing this argument. | [
"Adds",
"an",
"Argument",
"object",
"to",
"this",
"ElementTree",
"document",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/modularinput/argument.py#L72-L103 | train | 216,945 |
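A trimmed, runnable sketch of the `add_to_document` pattern above, using only the standard-library `xml.etree.ElementTree`; the argument name and title are hypothetical:

```python
import xml.etree.ElementTree as ET

def add_argument(parent, name, title=None, data_type="string",
                 required_on_create=False):
    # Mirrors Argument.add_to_document: an <arg name="..."> element with
    # an optional <title> and lowercased text for the other subelements.
    arg = ET.SubElement(parent, "arg")
    arg.set("name", name)
    if title is not None:
        ET.SubElement(arg, "title").text = title
    ET.SubElement(arg, "data_type").text = str(data_type).lower()
    ET.SubElement(arg, "required_on_create").text = str(required_on_create).lower()
    return arg

args = ET.Element("args")
add_argument(args, "interval", title="Polling interval", required_on_create=True)
xml = ET.tostring(args).decode("ascii")
```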
splunk/splunk-sdk-python | examples/export/export.py | get_event_start | def get_event_start(event_buffer, event_format):
""" dispatch event start method based on event format type """
if event_format == "csv":
return get_csv_event_start(event_buffer)
elif event_format == "xml":
return get_xml_event_start(event_buffer)
else:
return get_json_event_start(event_buffer) | python | def get_event_start(event_buffer, event_format):
""" dispatch event start method based on event format type """
if event_format == "csv":
return get_csv_event_start(event_buffer)
elif event_format == "xml":
return get_xml_event_start(event_buffer)
else:
return get_json_event_start(event_buffer) | [
"def",
"get_event_start",
"(",
"event_buffer",
",",
"event_format",
")",
":",
"if",
"event_format",
"==",
"\"csv\"",
":",
"return",
"get_csv_event_start",
"(",
"event_buffer",
")",
"elif",
"event_format",
"==",
"\"xml\"",
":",
"return",
"get_xml_event_start",
"(",
... | dispatch event start method based on event format type | [
"dispatch",
"event",
"start",
"method",
"based",
"on",
"event",
"format",
"type"
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/export/export.py#L217-L225 | train | 216,946 |
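The `if/elif` dispatch in `get_event_start` can equally be table-driven; a sketch in which the handler bodies are placeholders, not the real CSV/XML/JSON parsers:

```python
def parse_csv(buf):
    return ("csv", buf)

def parse_xml(buf):
    return ("xml", buf)

def parse_json(buf):
    return ("json", buf)

# Table-driven alternative to the if/elif chain, with json as the
# fall-through default just as in the original.
_DISPATCH = {"csv": parse_csv, "xml": parse_xml}

def get_event_start(event_buffer, event_format):
    return _DISPATCH.get(event_format, parse_json)(event_buffer)

fmt, _buf = get_event_start("a,b", "csv")
```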
splunk/splunk-sdk-python | examples/export/export.py | recover | def recover(options):
""" recover from an existing export run. We do this by
finding the last time change between events, truncate the file
and restart from there """
event_format = options.kwargs['omode']
buffer_size = 64*1024
fpd = open(options.kwargs['output'], "r+")
fpd.seek(0, 2) # seek to end
fptr = max(fpd.tell() - buffer_size, 0)
fptr_eof = 0
while (fptr > 0):
fpd.seek(fptr)
event_buffer = fpd.read(buffer_size)
(event_start, next_event_start, last_time) = \
get_event_start(event_buffer, event_format)
if (event_start != -1):
fptr_eof = event_start + fptr
break
fptr = fptr - buffer_size
if fptr < 0:
# didn't find a valid event, so start over
fptr_eof = 0
last_time = 0
# truncate file here
fpd.truncate(fptr_eof)
fpd.seek(fptr_eof)
fpd.write("\n")
fpd.close()
return last_time | python | def recover(options):
""" recover from an existing export run. We do this by
finding the last time change between events, truncate the file
and restart from there """
event_format = options.kwargs['omode']
buffer_size = 64*1024
fpd = open(options.kwargs['output'], "r+")
fpd.seek(0, 2) # seek to end
fptr = max(fpd.tell() - buffer_size, 0)
fptr_eof = 0
while (fptr > 0):
fpd.seek(fptr)
event_buffer = fpd.read(buffer_size)
(event_start, next_event_start, last_time) = \
get_event_start(event_buffer, event_format)
if (event_start != -1):
fptr_eof = event_start + fptr
break
fptr = fptr - buffer_size
if fptr < 0:
# didn't find a valid event, so start over
fptr_eof = 0
last_time = 0
# truncate file here
fpd.truncate(fptr_eof)
fpd.seek(fptr_eof)
fpd.write("\n")
fpd.close()
return last_time | [
"def",
"recover",
"(",
"options",
")",
":",
"event_format",
"=",
"options",
".",
"kwargs",
"[",
"'omode'",
"]",
"buffer_size",
"=",
"64",
"*",
"1024",
"fpd",
"=",
"open",
"(",
"options",
".",
"kwargs",
"[",
"'output'",
"]",
",",
"\"r+\"",
")",
"fpd",
... | recover from an existing export run. We do this by
finding the last time change between events, truncate the file
and restart from there | [
"recover",
"from",
"an",
"existing",
"export",
"run",
".",
"We",
"do",
"this",
"by",
"finding",
"the",
"last",
"time",
"change",
"between",
"events",
"truncate",
"the",
"file",
"and",
"restart",
"from",
"there"
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/export/export.py#L227-L261 | train | 216,947 |
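The recovery strategy above — scan backward in fixed-size chunks, find the last complete record, truncate — can be sketched against a plain text file. Here a `|` record marker stands in for the format-specific event-start detection:

```python
import os
import tempfile

def truncate_after_last_marker(path, marker, buffer_size=8):
    # Sketch of the recovery scan: walk backward through the file in
    # fixed-size chunks, locate the last complete record marker, and
    # truncate everything after it.
    with open(path, "r+") as fpd:
        fpd.seek(0, 2)                      # seek to end
        size = fpd.tell()
        fptr = max(size - buffer_size, 0)
        while True:
            fpd.seek(fptr)
            # read slightly past the chunk so a marker split across the
            # chunk boundary is still seen
            chunk = fpd.read(buffer_size + len(marker))
            pos = chunk.rfind(marker)
            if pos != -1:
                end = fptr + pos + len(marker)
                fpd.truncate(end)
                return end
            if fptr == 0:
                fpd.truncate(0)             # no valid record: start over
                return 0
            fptr = max(fptr - buffer_size, 0)

path = os.path.join(tempfile.mkdtemp(), "events.log")
with open(path, "w") as f:
    f.write("event1|event2|eve")            # the last record is incomplete
end = truncate_after_last_marker(path, "|")
with open(path) as f:
    recovered = f.read()
```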
splunk/splunk-sdk-python | examples/export/export.py | cleanup_tail | def cleanup_tail(options):
""" cleanup the tail of a recovery """
if options.kwargs['omode'] == "csv":
options.kwargs['fd'].write("\n")
elif options.kwargs['omode'] == "xml":
options.kwargs['fd'].write("\n</results>\n")
else:
options.kwargs['fd'].write("\n]\n") | python | def cleanup_tail(options):
""" cleanup the tail of a recovery """
if options.kwargs['omode'] == "csv":
options.kwargs['fd'].write("\n")
elif options.kwargs['omode'] == "xml":
options.kwargs['fd'].write("\n</results>\n")
else:
options.kwargs['fd'].write("\n]\n") | [
"def",
"cleanup_tail",
"(",
"options",
")",
":",
"if",
"options",
".",
"kwargs",
"[",
"'omode'",
"]",
"==",
"\"csv\"",
":",
"options",
".",
"kwargs",
"[",
"'fd'",
"]",
".",
"write",
"(",
"\"\\n\"",
")",
"elif",
"options",
".",
"kwargs",
"[",
"'omode'",... | cleanup the tail of a recovery | [
"cleanup",
"the",
"tail",
"of",
"a",
"recovery"
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/export/export.py#L263-L271 | train | 216,948 |
splunk/splunk-sdk-python | splunklib/modularinput/scheme.py | Scheme.to_xml | def to_xml(self):
"""Creates an ``ET.Element`` representing self, then returns it.
:returns root, an ``ET.Element`` representing this scheme.
"""
root = ET.Element("scheme")
ET.SubElement(root, "title").text = self.title
# add a description subelement if it's defined
if self.description is not None:
ET.SubElement(root, "description").text = self.description
# add all other subelements to this Scheme, represented by (tag, text)
subelements = [
("use_external_validation", self.use_external_validation),
("use_single_instance", self.use_single_instance),
("streaming_mode", self.streaming_mode)
]
for name, value in subelements:
ET.SubElement(root, name).text = str(value).lower()
endpoint = ET.SubElement(root, "endpoint")
args = ET.SubElement(endpoint, "args")
# add arguments as subelements to the <args> element
for arg in self.arguments:
arg.add_to_document(args)
return root | python | def to_xml(self):
"""Creates an ``ET.Element`` representing self, then returns it.
:returns root, an ``ET.Element`` representing this scheme.
"""
root = ET.Element("scheme")
ET.SubElement(root, "title").text = self.title
# add a description subelement if it's defined
if self.description is not None:
ET.SubElement(root, "description").text = self.description
# add all other subelements to this Scheme, represented by (tag, text)
subelements = [
("use_external_validation", self.use_external_validation),
("use_single_instance", self.use_single_instance),
("streaming_mode", self.streaming_mode)
]
for name, value in subelements:
ET.SubElement(root, name).text = str(value).lower()
endpoint = ET.SubElement(root, "endpoint")
args = ET.SubElement(endpoint, "args")
# add arguments as subelements to the <args> element
for arg in self.arguments:
arg.add_to_document(args)
return root | [
"def",
"to_xml",
"(",
"self",
")",
":",
"root",
"=",
"ET",
".",
"Element",
"(",
"\"scheme\"",
")",
"ET",
".",
"SubElement",
"(",
"root",
",",
"\"title\"",
")",
".",
"text",
"=",
"self",
".",
"title",
"# add a description subelement if it's defined",
"if",
... | Creates an ``ET.Element`` representing self, then returns it.
:returns root, an ``ET.Element`` representing this scheme. | [
"Creates",
"an",
"ET",
".",
"Element",
"representing",
"self",
"then",
"returns",
"it",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/modularinput/scheme.py#L55-L85 | train | 216,949 |
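The `to_xml` serialization above (booleans emitted as lowercase text under a fixed element layout, plus an `endpoint/args` container) can be reproduced with the standard library alone; `scheme_to_xml` is a trimmed sketch, not the SDK class:

```python
import xml.etree.ElementTree as ET

def scheme_to_xml(title, use_single_instance=True, streaming_mode="xml"):
    # Trimmed mirror of Scheme.to_xml: a fixed element layout with
    # subelement text lowercased, plus the endpoint/args container.
    root = ET.Element("scheme")
    ET.SubElement(root, "title").text = title
    for name, value in [("use_single_instance", use_single_instance),
                        ("streaming_mode", streaming_mode)]:
        ET.SubElement(root, name).text = str(value).lower()
    ET.SubElement(ET.SubElement(root, "endpoint"), "args")
    return root

root = scheme_to_xml("demo input")
```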
splunk/splunk-sdk-python | splunklib/modularinput/event_writer.py | EventWriter.write_xml_document | def write_xml_document(self, document):
"""Writes a string representation of an
``ElementTree`` object to the output stream.
:param document: An ``ElementTree`` object.
"""
self._out.write(ET.tostring(document))
self._out.flush() | python | def write_xml_document(self, document):
"""Writes a string representation of an
``ElementTree`` object to the output stream.
:param document: An ``ElementTree`` object.
"""
self._out.write(ET.tostring(document))
self._out.flush() | [
"def",
"write_xml_document",
"(",
"self",
",",
"document",
")",
":",
"self",
".",
"_out",
".",
"write",
"(",
"ET",
".",
"tostring",
"(",
"document",
")",
")",
"self",
".",
"_out",
".",
"flush",
"(",
")"
] | Writes a string representation of an
``ElementTree`` object to the output stream.
:param document: An ``ElementTree`` object. | [
"Writes",
"a",
"string",
"representation",
"of",
"an",
"ElementTree",
"object",
"to",
"the",
"output",
"stream",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/modularinput/event_writer.py#L74-L81 | train | 216,950 |
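One detail worth noting about `write_xml_document`: `ET.tostring` returns `bytes` by default, so the output stream must accept bytes. A sketch against an in-memory binary buffer:

```python
import io
import xml.etree.ElementTree as ET

def write_xml_document(out, document):
    # ET.tostring returns bytes by default, so `out` must be a binary
    # stream (e.g. sys.stdout.buffer rather than sys.stdout).
    out.write(ET.tostring(document))
    out.flush()

buf = io.BytesIO()
write_xml_document(buf, ET.Element("done"))
payload = buf.getvalue()
```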
splunk/splunk-sdk-python | examples/conf.py | Program.create | def create(self, opts):
"""Create a conf stanza."""
argv = opts.args
count = len(argv)
# unflagged arguments are conf, stanza, key. In this order
# however, we must have a conf and stanza.
cpres = True if count > 0 else False
spres = True if count > 1 else False
kpres = True if count > 2 else False
if kpres:
kvpair = argv[2].split("=")
if len(kvpair) != 2:
error("Creating a k/v pair requires key and value", 2)
else:
key, value = kvpair
if not cpres and not spres:
error("Conf name and stanza name is required for create", 2)
name = argv[0]
stan = argv[1]
conf = self.service.confs[name]
if not kpres:
# create stanza
conf.create(stan)
return
# create key/value pair under existing stanza
stanza = conf[stan]
stanza.submit({key: value}) | python | def create(self, opts):
"""Create a conf stanza."""
argv = opts.args
count = len(argv)
# unflagged arguments are conf, stanza, key. In this order
# however, we must have a conf and stanza.
cpres = True if count > 0 else False
spres = True if count > 1 else False
kpres = True if count > 2 else False
if kpres:
kvpair = argv[2].split("=")
if len(kvpair) != 2:
error("Creating a k/v pair requires key and value", 2)
else:
key, value = kvpair
if not cpres and not spres:
error("Conf name and stanza name is required for create", 2)
name = argv[0]
stan = argv[1]
conf = self.service.confs[name]
if not kpres:
# create stanza
conf.create(stan)
return
# create key/value pair under existing stanza
stanza = conf[stan]
stanza.submit({key: value}) | [
"def",
"create",
"(",
"self",
",",
"opts",
")",
":",
"argv",
"=",
"opts",
".",
"args",
"count",
"=",
"len",
"(",
"argv",
")",
"# unflagged arguments are conf, stanza, key. In this order",
"# however, we must have a conf and stanza.",
"cpres",
"=",
"True",
"if",
"cou... | Create a conf stanza. | [
"Create",
"a",
"conf",
"stanza",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/conf.py#L39-L72 | train | 216,951 |
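The key/value parsing step in `Program.create` can be isolated; note that `split("=")` rejects values that themselves contain `=`, matching the original's length check:

```python
def parse_kvpair(arg):
    # Mirrors the create path: exactly one '=' separating key and value;
    # anything else (no '=', or an extra '=') is rejected.
    kvpair = arg.split("=")
    if len(kvpair) != 2:
        raise ValueError("Creating a k/v pair requires key and value")
    key, value = kvpair
    return key, value

key, value = parse_kvpair("maxDataSize=auto")
```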
splunk/splunk-sdk-python | examples/conf.py | Program.delete | def delete(self, opts):
"""Delete a conf stanza."""
argv = opts.args
count = len(argv)
# unflagged arguments are conf, stanza, key. In this order
# however, we must have a conf and stanza.
cpres = True if count > 0 else False
spres = True if count > 1 else False
kpres = True if count > 2 else False
if not cpres:
error("Conf name is required for delete", 2)
if not cpres and not spres:
error("Conf name and stanza name is required for delete", 2)
if kpres:
error("Cannot delete individual keys from a stanza", 2)
name = argv[0]
stan = argv[1]
conf = self.service.confs[name]
conf.delete(stan) | python | def delete(self, opts):
"""Delete a conf stanza."""
argv = opts.args
count = len(argv)
# unflagged arguments are conf, stanza, key. In this order
# however, we must have a conf and stanza.
cpres = True if count > 0 else False
spres = True if count > 1 else False
kpres = True if count > 2 else False
if not cpres:
error("Conf name is required for delete", 2)
if not cpres and not spres:
error("Conf name and stanza name is required for delete", 2)
if kpres:
error("Cannot delete individual keys from a stanza", 2)
name = argv[0]
stan = argv[1]
conf = self.service.confs[name]
conf.delete(stan) | [
"def",
"delete",
"(",
"self",
",",
"opts",
")",
":",
"argv",
"=",
"opts",
".",
"args",
"count",
"=",
"len",
"(",
"argv",
")",
"# unflagged arguments are conf, stanza, key. In this order",
"# however, we must have a conf and stanza.",
"cpres",
"=",
"True",
"if",
"cou... | Delete a conf stanza. | [
"Delete",
"a",
"conf",
"stanza",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/conf.py#L75-L99 | train | 216,952 |
splunk/splunk-sdk-python | examples/conf.py | Program.list | def list(self, opts):
"""List all confs or if a conf is given, all the stanzas in it."""
argv = opts.args
count = len(argv)
# unflagged arguments are conf, stanza, key. In this order
# but all are optional
cpres = True if count > 0 else False
spres = True if count > 1 else False
kpres = True if count > 2 else False
if not cpres:
# List out the available confs
for conf in self.service.confs:
print(conf.name)
else:
# Print out detail on the requested conf
# check for optional stanza, or key requested (or all)
name = argv[0]
conf = self.service.confs[name]
for stanza in conf:
if (spres and argv[1] == stanza.name) or not spres:
print("[%s]" % stanza.name)
for key, value in six.iteritems(stanza.content):
if (kpres and argv[2] == key) or not kpres:
print("%s = %s" % (key, value))
print() | python | def list(self, opts):
"""List all confs or if a conf is given, all the stanzas in it."""
argv = opts.args
count = len(argv)
# unflagged arguments are conf, stanza, key. In this order
# but all are optional
cpres = True if count > 0 else False
spres = True if count > 1 else False
kpres = True if count > 2 else False
if not cpres:
# List out the available confs
for conf in self.service.confs:
print(conf.name)
else:
# Print out detail on the requested conf
# check for optional stanza, or key requested (or all)
name = argv[0]
conf = self.service.confs[name]
for stanza in conf:
if (spres and argv[1] == stanza.name) or not spres:
print("[%s]" % stanza.name)
for key, value in six.iteritems(stanza.content):
if (kpres and argv[2] == key) or not kpres:
print("%s = %s" % (key, value))
print() | [
"def",
"list",
"(",
"self",
",",
"opts",
")",
":",
"argv",
"=",
"opts",
".",
"args",
"count",
"=",
"len",
"(",
"argv",
")",
"# unflagged arguments are conf, stanza, key. In this order",
"# but all are optional",
"cpres",
"=",
"True",
"if",
"count",
">",
"0",
"... | List all confs or if a conf is given, all the stanzas in it. | [
"List",
"all",
"confs",
"or",
"if",
"a",
"conf",
"is",
"given",
"all",
"the",
"stanzas",
"in",
"it",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/conf.py#L101-L129 | train | 216,953 |
splunk/splunk-sdk-python | examples/analytics/bottle.py | tob | def tob(data, enc='utf8'):
""" Convert anything to bytes """
return data.encode(enc) if isinstance(data, six.text_type) else bytes(data) | python | def tob(data, enc='utf8'):
""" Convert anything to bytes """
return data.encode(enc) if isinstance(data, six.text_type) else bytes(data) | [
"def",
"tob",
"(",
"data",
",",
"enc",
"=",
"'utf8'",
")",
":",
"return",
"data",
".",
"encode",
"(",
"enc",
")",
"if",
"isinstance",
"(",
"data",
",",
"six",
".",
"text_type",
")",
"else",
"bytes",
"(",
"data",
")"
] | Convert anything to bytes | [
"Convert",
"anything",
"to",
"bytes"
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L91-L93 | train | 216,954 |
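A Python 3 rendering of the `tob` helper above, with a plain `str` check replacing `six.text_type` (an assumption that only Python 3 support is needed):

```python
def tob(data, enc="utf8"):
    # Python 3 rendering of the helper: text is encoded, anything else
    # (bytes, bytearray, iterables of ints) is passed through bytes().
    return data.encode(enc) if isinstance(data, str) else bytes(data)

b1 = tob("caf\u00e9")
b2 = tob(b"raw")
```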
splunk/splunk-sdk-python | examples/analytics/bottle.py | make_default_app_wrapper | def make_default_app_wrapper(name):
''' Return a callable that relays calls to the current default app. '''
@functools.wraps(getattr(Bottle, name))
def wrapper(*a, **ka):
return getattr(app(), name)(*a, **ka)
return wrapper | python | def make_default_app_wrapper(name):
''' Return a callable that relays calls to the current default app. '''
@functools.wraps(getattr(Bottle, name))
def wrapper(*a, **ka):
return getattr(app(), name)(*a, **ka)
return wrapper | [
"def",
"make_default_app_wrapper",
"(",
"name",
")",
":",
"@",
"functools",
".",
"wraps",
"(",
"getattr",
"(",
"Bottle",
",",
"name",
")",
")",
"def",
"wrapper",
"(",
"*",
"a",
",",
"*",
"*",
"ka",
")",
":",
"return",
"getattr",
"(",
"app",
"(",
")... | Return a callable that relays calls to the current default app. | [
"Return",
"a",
"callable",
"that",
"relays",
"calls",
"to",
"the",
"current",
"default",
"app",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L1644-L1649 | train | 216,955 |
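The proxy produced by `make_default_app_wrapper` resolves `app()` at call time and, via `functools.wraps`, keeps the wrapped method's metadata. A self-contained sketch with a hypothetical `App` class standing in for `Bottle`:

```python
import functools

class App:
    # Hypothetical stand-in for Bottle with one relayed method.
    def route(self, path):
        """Register a route."""
        return "routed %s" % path

_current = App()

def app():
    return _current

def make_wrapper(name):
    # Late-binding proxy: app() is resolved at call time, and
    # functools.wraps copies the wrapped method's metadata.
    @functools.wraps(getattr(App, name))
    def wrapper(*a, **ka):
        return getattr(app(), name)(*a, **ka)
    return wrapper

route = make_wrapper("route")
```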
splunk/splunk-sdk-python | examples/analytics/bottle.py | load_app | def load_app(target):
""" Load a bottle application based on a target string and return the
application object.
If the target is an import path (e.g. package.module), the application
stack is used to isolate the routes defined in that module.
If the target contains a colon (e.g. package.module:myapp) the
module variable specified after the colon is returned instead.
"""
tmp = app.push() # Create a new "default application"
rv = _load(target) # Import the target module
app.remove(tmp) # Remove the temporary added default application
return rv if isinstance(rv, Bottle) else tmp | python | def load_app(target):
""" Load a bottle application based on a target string and return the
application object.
If the target is an import path (e.g. package.module), the application
stack is used to isolate the routes defined in that module.
If the target contains a colon (e.g. package.module:myapp) the
module variable specified after the colon is returned instead.
"""
tmp = app.push() # Create a new "default application"
rv = _load(target) # Import the target module
app.remove(tmp) # Remove the temporary added default application
return rv if isinstance(rv, Bottle) else tmp | [
"def",
"load_app",
"(",
"target",
")",
":",
"tmp",
"=",
"app",
".",
"push",
"(",
")",
"# Create a new \"default application\"",
"rv",
"=",
"_load",
"(",
"target",
")",
"# Import the target module",
"app",
".",
"remove",
"(",
"tmp",
")",
"# Remove the temporary a... | Load a bottle application based on a target string and return the
application object.
If the target is an import path (e.g. package.module), the application
stack is used to isolate the routes defined in that module.
If the target contains a colon (e.g. package.module:myapp) the
module variable specified after the colon is returned instead. | [
"Load",
"a",
"bottle",
"application",
"based",
"on",
"a",
"target",
"string",
"and",
"return",
"the",
"application",
"object",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L1935-L1947 | train | 216,956 |
splunk/splunk-sdk-python | examples/analytics/bottle.py | run | def run(app=None, server='wsgiref', host='127.0.0.1', port=8080,
interval=1, reloader=False, quiet=False, **kargs):
""" Start a server instance. This method blocks until the server terminates.
:param app: WSGI application or target string supported by
:func:`load_app`. (default: :func:`default_app`)
:param server: Server adapter to use. See :data:`server_names` keys
for valid names or pass a :class:`ServerAdapter` subclass.
(default: `wsgiref`)
    :param host: Server address to bind to. Pass ``0.0.0.0`` to listen on
all interfaces including the external one. (default: 127.0.0.1)
:param port: Server port to bind to. Values below 1024 require root
privileges. (default: 8080)
:param reloader: Start auto-reloading server? (default: False)
:param interval: Auto-reloader interval in seconds (default: 1)
:param quiet: Suppress output to stdout and stderr? (default: False)
:param options: Options passed to the server adapter.
"""
app = app or default_app()
if isinstance(app, six.string_types):
app = load_app(app)
if isinstance(server, six.string_types):
server = server_names.get(server)
if isinstance(server, type):
server = server(host=host, port=port, **kargs)
if not isinstance(server, ServerAdapter):
raise RuntimeError("Server must be a subclass of ServerAdapter")
server.quiet = server.quiet or quiet
if not server.quiet and not os.environ.get('BOTTLE_CHILD'):
print("Bottle server starting up (using %s)..." % repr(server))
print("Listening on http://%s:%d/" % (server.host, server.port))
print("Use Ctrl-C to quit.")
print()
try:
if reloader:
interval = min(interval, 1)
if os.environ.get('BOTTLE_CHILD'):
_reloader_child(server, app, interval)
else:
_reloader_observer(server, app, interval)
else:
server.run(app)
except KeyboardInterrupt:
pass
if not server.quiet and not os.environ.get('BOTTLE_CHILD'):
print("Shutting down...") | python | def run(app=None, server='wsgiref', host='127.0.0.1', port=8080,
interval=1, reloader=False, quiet=False, **kargs):
""" Start a server instance. This method blocks until the server terminates.
:param app: WSGI application or target string supported by
:func:`load_app`. (default: :func:`default_app`)
:param server: Server adapter to use. See :data:`server_names` keys
for valid names or pass a :class:`ServerAdapter` subclass.
(default: `wsgiref`)
    :param host: Server address to bind to. Pass ``0.0.0.0`` to listen on
all interfaces including the external one. (default: 127.0.0.1)
:param port: Server port to bind to. Values below 1024 require root
privileges. (default: 8080)
:param reloader: Start auto-reloading server? (default: False)
:param interval: Auto-reloader interval in seconds (default: 1)
:param quiet: Suppress output to stdout and stderr? (default: False)
:param options: Options passed to the server adapter.
"""
app = app or default_app()
if isinstance(app, six.string_types):
app = load_app(app)
if isinstance(server, six.string_types):
server = server_names.get(server)
if isinstance(server, type):
server = server(host=host, port=port, **kargs)
if not isinstance(server, ServerAdapter):
raise RuntimeError("Server must be a subclass of ServerAdapter")
server.quiet = server.quiet or quiet
if not server.quiet and not os.environ.get('BOTTLE_CHILD'):
print("Bottle server starting up (using %s)..." % repr(server))
print("Listening on http://%s:%d/" % (server.host, server.port))
print("Use Ctrl-C to quit.")
print()
try:
if reloader:
interval = min(interval, 1)
if os.environ.get('BOTTLE_CHILD'):
_reloader_child(server, app, interval)
else:
_reloader_observer(server, app, interval)
else:
server.run(app)
except KeyboardInterrupt:
pass
if not server.quiet and not os.environ.get('BOTTLE_CHILD'):
print("Shutting down...") | [
"def",
"run",
"(",
"app",
"=",
"None",
",",
"server",
"=",
"'wsgiref'",
",",
"host",
"=",
"'127.0.0.1'",
",",
"port",
"=",
"8080",
",",
"interval",
"=",
"1",
",",
"reloader",
"=",
"False",
",",
"quiet",
"=",
"False",
",",
"*",
"*",
"kargs",
")",
... | Start a server instance. This method blocks until the server terminates.
:param app: WSGI application or target string supported by
:func:`load_app`. (default: :func:`default_app`)
:param server: Server adapter to use. See :data:`server_names` keys
for valid names or pass a :class:`ServerAdapter` subclass.
(default: `wsgiref`)
:param host: Server address to bind to. Pass ``0.0.0.0`` to listen on
all interfaces including the external one. (default: 127.0.0.1)
:param port: Server port to bind to. Values below 1024 require root
privileges. (default: 8080)
:param reloader: Start auto-reloading server? (default: False)
:param interval: Auto-reloader interval in seconds (default: 1)
:param quiet: Suppress output to stdout and stderr? (default: False)
:param options: Options passed to the server adapter. | [
"Start",
"a",
"server",
"instance",
".",
"This",
"method",
"blocks",
"until",
"the",
"server",
"terminates",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L1950-L1995 | train | 216,957 |
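The server-resolution chain inside `run` (name → class → instance, validated last) can be tested in isolation; the classes below are minimal stand-ins for bottle's adapters:

```python
class ServerAdapter:
    quiet = False
    def __init__(self, host="127.0.0.1", port=8080, **options):
        self.host, self.port, self.options = host, port, options

class WSGIRefServer(ServerAdapter):
    pass

server_names = {"wsgiref": WSGIRefServer}

def resolve_server(server, host, port, **kargs):
    # Mirrors run(): a name resolves to a class, a class is instantiated,
    # and the result must be a ServerAdapter instance.
    if isinstance(server, str):
        server = server_names.get(server)
    if isinstance(server, type):
        server = server(host=host, port=port, **kargs)
    if not isinstance(server, ServerAdapter):
        raise RuntimeError("Server must be a subclass of ServerAdapter")
    return server

srv = resolve_server("wsgiref", host="0.0.0.0", port=9000)
```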
splunk/splunk-sdk-python | examples/analytics/bottle.py | Router.build | def build(self, _name, *anon, **args):
''' Return a string that matches a named route. Use keyword arguments
to fill out named wildcards. Remaining arguments are appended as a
query string. Raises RouteBuildError or KeyError.'''
if _name not in self.named:
raise RouteBuildError("No route with that name.", _name)
rule, pairs = self.named[_name]
if not pairs:
token = self.syntax.split(rule)
parts = [p.replace('\\:',':') for p in token[::3]]
names = token[1::3]
if len(parts) > len(names): names.append(None)
pairs = list(zip(parts, names))
self.named[_name] = (rule, pairs)
try:
anon = list(anon)
url = [s if k is None
else s+str(args.pop(k)) if k else s+str(anon.pop())
for s, k in pairs]
except IndexError:
msg = "Not enough arguments to fill out anonymous wildcards."
raise RouteBuildError(msg)
except KeyError as e:
raise RouteBuildError(*e.args)
if args: url += ['?', urlencode(args)]
return ''.join(url) | python | def build(self, _name, *anon, **args):
''' Return a string that matches a named route. Use keyword arguments
to fill out named wildcards. Remaining arguments are appended as a
query string. Raises RouteBuildError or KeyError.'''
if _name not in self.named:
raise RouteBuildError("No route with that name.", _name)
rule, pairs = self.named[_name]
if not pairs:
token = self.syntax.split(rule)
parts = [p.replace('\\:',':') for p in token[::3]]
names = token[1::3]
if len(parts) > len(names): names.append(None)
pairs = list(zip(parts, names))
self.named[_name] = (rule, pairs)
try:
anon = list(anon)
url = [s if k is None
else s+str(args.pop(k)) if k else s+str(anon.pop())
for s, k in pairs]
except IndexError:
msg = "Not enough arguments to fill out anonymous wildcards."
raise RouteBuildError(msg)
except KeyError as e:
raise RouteBuildError(*e.args)
if args: url += ['?', urlencode(args)]
return ''.join(url) | [
"def",
"build",
"(",
"self",
",",
"_name",
",",
"*",
"anon",
",",
"*",
"*",
"args",
")",
":",
"if",
"_name",
"not",
"in",
"self",
".",
"named",
":",
"raise",
"RouteBuildError",
"(",
"\"No route with that name.\"",
",",
"_name",
")",
"rule",
",",
"pairs... | Return a string that matches a named route. Use keyword arguments
to fill out named wildcards. Remaining arguments are appended as a
query string. Raises RouteBuildError or KeyError. | [
"Return",
"a",
"string",
"that",
"matches",
"a",
"named",
"route",
".",
"Use",
"keyword",
"arguments",
"to",
"fill",
"out",
"named",
"wildcards",
".",
"Remaining",
"arguments",
"are",
"appended",
"as",
"a",
"query",
"string",
".",
"Raises",
"RouteBuildError",
... | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L278-L304 | train | 216,958 |
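The record above documents `Router.build`, which reassembles a URL from a named route's precomputed (static-text, wildcard-name) pairs and turns leftover keyword arguments into a query string. A minimal sketch of that pair-filling step (the pair structure here is illustrative, not Bottle's actual internals):

```python
from urllib.parse import urlencode

def build_url(pairs, args):
    """Join (static, wildcard-name) pairs into a URL.

    pairs: list of (prefix, name-or-None); args: values for named wildcards.
    Leftover args become a query string, mirroring Router.build above.
    """
    args = dict(args)
    url = [s if k is None else s + str(args.pop(k)) for s, k in pairs]
    if args:
        url += ['?', urlencode(args)]
    return ''.join(url)

# '/user/' followed by an 'id' wildcard, then a static '/posts' part
pairs = [('/user/', 'id'), ('/posts', None)]
print(build_url(pairs, {'id': 42}))            # /user/42/posts
print(build_url(pairs, {'id': 7, 'page': 2}))  # /user/7/posts?page=2
```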
splunk/splunk-sdk-python | examples/analytics/bottle.py | Router._match_path | def _match_path(self, environ):
''' Optimized PATH_INFO matcher. '''
path = environ['PATH_INFO'] or '/'
# Assume we are in a warm state. Search compiled rules first.
match = self.static.get(path)
if match: return match, {}
for combined, rules in self.dynamic:
match = combined.match(path)
if not match: continue
gpat, match = rules[match.lastindex - 1]
return match, gpat.match(path).groupdict() if gpat else {}
# Lazy-check if we are really in a warm state. If yes, stop here.
if self.static or self.dynamic or not self.routes: return None, {}
# Cold state: We have not compiled any rules yet. Do so and try again.
if not environ.get('wsgi.run_once'):
self._compile()
return self._match_path(environ)
# For run_once (CGI) environments, don't compile. Just check one by one.
epath = path.replace(':','\\:') # Turn path into its own static rule.
match = self.routes.get(epath) # This returns static rule only.
if match: return match, {}
for rule in self.rules:
#: Skip static routes to reduce re.compile() calls.
if rule.count(':') < rule.count('\\:'): continue
match = self._compile_pattern(rule).match(path)
if match: return self.routes[rule], match.groupdict()
return None, {} | python | def _match_path(self, environ):
''' Optimized PATH_INFO matcher. '''
path = environ['PATH_INFO'] or '/'
# Assume we are in a warm state. Search compiled rules first.
match = self.static.get(path)
if match: return match, {}
for combined, rules in self.dynamic:
match = combined.match(path)
if not match: continue
gpat, match = rules[match.lastindex - 1]
return match, gpat.match(path).groupdict() if gpat else {}
# Lazy-check if we are really in a warm state. If yes, stop here.
if self.static or self.dynamic or not self.routes: return None, {}
# Cold state: We have not compiled any rules yet. Do so and try again.
if not environ.get('wsgi.run_once'):
self._compile()
return self._match_path(environ)
# For run_once (CGI) environments, don't compile. Just check one by one.
epath = path.replace(':','\\:') # Turn path into its own static rule.
match = self.routes.get(epath) # This returns static rule only.
if match: return match, {}
for rule in self.rules:
#: Skip static routes to reduce re.compile() calls.
if rule.count(':') < rule.count('\\:'): continue
match = self._compile_pattern(rule).match(path)
if match: return self.routes[rule], match.groupdict()
return None, {} | [
"def",
"_match_path",
"(",
"self",
",",
"environ",
")",
":",
"path",
"=",
"environ",
"[",
"'PATH_INFO'",
"]",
"or",
"'/'",
"# Assume we are in a warm state. Search compiled rules first.",
"match",
"=",
"self",
".",
"static",
".",
"get",
"(",
"path",
")",
"if",
... | Optimized PATH_INFO matcher. | [
"Optimized",
"PATH_INFO",
"matcher",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L324-L350 | train | 216,959 |
splunk/splunk-sdk-python | examples/analytics/bottle.py | Router._compile | def _compile(self):
''' Prepare static and dynamic search structures. '''
self.static = {}
self.dynamic = []
def fpat_sub(m):
return m.group(0) if len(m.group(1)) % 2 else m.group(1) + '(?:'
for rule in self.rules:
target = self.routes[rule]
if not self.syntax.search(rule):
self.static[rule.replace('\\:',':')] = target
continue
gpat = self._compile_pattern(rule)
fpat = re.sub(r'(\\*)(\(\?P<[^>]*>|\((?!\?))', fpat_sub, gpat.pattern)
gpat = gpat if gpat.groupindex else None
try:
combined = '%s|(%s)' % (self.dynamic[-1][0].pattern, fpat)
self.dynamic[-1] = (re.compile(combined), self.dynamic[-1][1])
self.dynamic[-1][1].append((gpat, target))
except (AssertionError, IndexError) as e: # AssertionError: Too many groups
self.dynamic.append((re.compile('(^%s$)'%fpat),
[(gpat, target)]))
except re.error as e:
raise RouteSyntaxError("Could not add Route: %s (%s)" % (rule, e)) | python | def _compile(self):
''' Prepare static and dynamic search structures. '''
self.static = {}
self.dynamic = []
def fpat_sub(m):
return m.group(0) if len(m.group(1)) % 2 else m.group(1) + '(?:'
for rule in self.rules:
target = self.routes[rule]
if not self.syntax.search(rule):
self.static[rule.replace('\\:',':')] = target
continue
gpat = self._compile_pattern(rule)
fpat = re.sub(r'(\\*)(\(\?P<[^>]*>|\((?!\?))', fpat_sub, gpat.pattern)
gpat = gpat if gpat.groupindex else None
try:
combined = '%s|(%s)' % (self.dynamic[-1][0].pattern, fpat)
self.dynamic[-1] = (re.compile(combined), self.dynamic[-1][1])
self.dynamic[-1][1].append((gpat, target))
except (AssertionError, IndexError) as e: # AssertionError: Too many groups
self.dynamic.append((re.compile('(^%s$)'%fpat),
[(gpat, target)]))
except re.error as e:
raise RouteSyntaxError("Could not add Route: %s (%s)" % (rule, e)) | [
"def",
"_compile",
"(",
"self",
")",
":",
"self",
".",
"static",
"=",
"{",
"}",
"self",
".",
"dynamic",
"=",
"[",
"]",
"def",
"fpat_sub",
"(",
"m",
")",
":",
"return",
"m",
".",
"group",
"(",
"0",
")",
"if",
"len",
"(",
"m",
".",
"group",
"("... | Prepare static and dynamic search structures. | [
"Prepare",
"static",
"and",
"dynamic",
"search",
"structures",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L352-L374 | train | 216,960 |
splunk/splunk-sdk-python | examples/analytics/bottle.py | Router._compile_pattern | def _compile_pattern(self, rule):
''' Return a regular expression with named groups for each wildcard. '''
out = ''
for i, part in enumerate(self.syntax.split(rule)):
if i%3 == 0: out += re.escape(part.replace('\\:',':'))
elif i%3 == 1: out += '(?P<%s>' % part if part else '(?:'
else: out += '%s)' % (part or '[^/]+')
return re.compile('^%s$'%out) | python | def _compile_pattern(self, rule):
''' Return a regular expression with named groups for each wildcard. '''
out = ''
for i, part in enumerate(self.syntax.split(rule)):
if i%3 == 0: out += re.escape(part.replace('\\:',':'))
elif i%3 == 1: out += '(?P<%s>' % part if part else '(?:'
else: out += '%s)' % (part or '[^/]+')
return re.compile('^%s$'%out) | [
"def",
"_compile_pattern",
"(",
"self",
",",
"rule",
")",
":",
"out",
"=",
"''",
"for",
"i",
",",
"part",
"in",
"enumerate",
"(",
"self",
".",
"syntax",
".",
"split",
"(",
"rule",
")",
")",
":",
"if",
"i",
"%",
"3",
"==",
"0",
":",
"out",
"+=",... | Return a regular expression with named groups for each wildcard. | [
"Return",
"a",
"regular",
"expression",
"with",
"named",
"groups",
"for",
"each",
"wildcard",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L376-L383 | train | 216,961 |
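`_compile_pattern` above walks `syntax.split(rule)` in strides of three (static text, wildcard name, optional custom regex) and emits a named group per wildcard. A simplified version that handles only plain `:name` wildcards, so the split yields alternating static/name parts in strides of two:

```python
import re

def compile_rule(rule):
    """Compile '/user/:id' style rules into an anchored regex.

    Even-indexed split elements are static text; odd-indexed ones are
    wildcard names that become named groups matching one path segment.
    """
    out = ''
    for i, part in enumerate(re.split(r':(\w+)', rule)):
        if i % 2 == 0:
            out += re.escape(part)         # static text
        else:
            out += '(?P<%s>[^/]+)' % part  # named wildcard
    return re.compile('^%s$' % out)

m = compile_rule('/user/:id/posts').match('/user/42/posts')
print(m.groupdict())  # {'id': '42'}
```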
splunk/splunk-sdk-python | examples/analytics/bottle.py | Bottle.mount | def mount(self, app, prefix, **options):
''' Mount an application to a specific URL prefix. The prefix is added
to SCRIPT_NAME and removed from PATH_INFO before the sub-application
is called.
:param app: an instance of :class:`Bottle`.
:param prefix: path prefix used as a mount-point.
All other parameters are passed to the underlying :meth:`route` call.
'''
if not isinstance(app, Bottle):
raise TypeError('Only Bottle instances are supported for now.')
prefix = '/'.join([_f for _f in prefix.split('/') if _f])
if not prefix:
raise TypeError('Empty prefix. Perhaps you want a merge()?')
for other in self.mounts:
if other.startswith(prefix):
raise TypeError('Conflict with existing mount: %s' % other)
path_depth = prefix.count('/') + 1
options.setdefault('method', 'ANY')
options.setdefault('skip', True)
self.mounts[prefix] = app
@self.route('/%s/:#.*#' % prefix, **options)
def mountpoint():
request.path_shift(path_depth)
return app._handle(request.environ) | python | def mount(self, app, prefix, **options):
''' Mount an application to a specific URL prefix. The prefix is added
to SCRIPT_NAME and removed from PATH_INFO before the sub-application
is called.
:param app: an instance of :class:`Bottle`.
:param prefix: path prefix used as a mount-point.
All other parameters are passed to the underlying :meth:`route` call.
'''
if not isinstance(app, Bottle):
raise TypeError('Only Bottle instances are supported for now.')
prefix = '/'.join([_f for _f in prefix.split('/') if _f])
if not prefix:
raise TypeError('Empty prefix. Perhaps you want a merge()?')
for other in self.mounts:
if other.startswith(prefix):
raise TypeError('Conflict with existing mount: %s' % other)
path_depth = prefix.count('/') + 1
options.setdefault('method', 'ANY')
options.setdefault('skip', True)
self.mounts[prefix] = app
@self.route('/%s/:#.*#' % prefix, **options)
def mountpoint():
request.path_shift(path_depth)
return app._handle(request.environ) | [
"def",
"mount",
"(",
"self",
",",
"app",
",",
"prefix",
",",
"*",
"*",
"options",
")",
":",
"if",
"not",
"isinstance",
"(",
"app",
",",
"Bottle",
")",
":",
"raise",
"TypeError",
"(",
"'Only Bottle instances are supported for now.'",
")",
"prefix",
"=",
"'/... | Mount an application to a specific URL prefix. The prefix is added
to SCRIPT_NAME and removed from PATH_INFO before the sub-application
is called.
:param app: an instance of :class:`Bottle`.
:param prefix: path prefix used as a mount-point.
All other parameters are passed to the underlying :meth:`route` call. | [
"Mount",
"an",
"application",
"to",
"a",
"specific",
"URL",
"prefix",
".",
"The",
"prefix",
"is",
"added",
"to",
"SCRIPT_NAME",
"and",
"removed",
"from",
"PATH_INFO",
"before",
"the",
"sub",
"-",
"application",
"is",
"called",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L424-L449 | train | 216,962 |
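Mounting works by shifting the prefix's path segments from `PATH_INFO` onto `SCRIPT_NAME` before the sub-application handles the request. A standalone sketch of that shift (not Bottle's exact `path_shift` implementation):

```python
def path_shift(script_name, path_info, shift=1):
    """Move `shift` leading segments of PATH_INFO onto SCRIPT_NAME."""
    segments = [p for p in path_info.split('/') if p]
    moved, rest = segments[:shift], segments[shift:]
    script_name = '/' + '/'.join([p for p in script_name.split('/') if p] + moved)
    path_info = '/' + '/'.join(rest)
    return script_name, path_info

# Mounting an app at '/api': one segment moves from PATH_INFO to SCRIPT_NAME.
print(path_shift('', '/api/users/7'))  # ('/api', '/users/7')
```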
splunk/splunk-sdk-python | examples/analytics/bottle.py | Bottle.uninstall | def uninstall(self, plugin):
''' Uninstall plugins. Pass an instance to remove a specific plugin.
Pass a type object to remove all plugins that match that type.
Subclasses are not removed. Pass a string to remove all plugins with
a matching ``name`` attribute. Pass ``True`` to remove all plugins.
The list of affected plugins is returned. '''
removed, remove = [], plugin
for i, plugin in list(enumerate(self.plugins))[::-1]:
if remove is True or remove is plugin or remove is type(plugin) \
or getattr(plugin, 'name', True) == remove:
removed.append(plugin)
del self.plugins[i]
if hasattr(plugin, 'close'): plugin.close()
if removed: self.reset()
return removed | python | def uninstall(self, plugin):
''' Uninstall plugins. Pass an instance to remove a specific plugin.
Pass a type object to remove all plugins that match that type.
Subclasses are not removed. Pass a string to remove all plugins with
a matching ``name`` attribute. Pass ``True`` to remove all plugins.
The list of affected plugins is returned. '''
removed, remove = [], plugin
for i, plugin in list(enumerate(self.plugins))[::-1]:
if remove is True or remove is plugin or remove is type(plugin) \
or getattr(plugin, 'name', True) == remove:
removed.append(plugin)
del self.plugins[i]
if hasattr(plugin, 'close'): plugin.close()
if removed: self.reset()
return removed | [
"def",
"uninstall",
"(",
"self",
",",
"plugin",
")",
":",
"removed",
",",
"remove",
"=",
"[",
"]",
",",
"plugin",
"for",
"i",
",",
"plugin",
"in",
"list",
"(",
"enumerate",
"(",
"self",
".",
"plugins",
")",
")",
"[",
":",
":",
"-",
"1",
"]",
":... | Uninstall plugins. Pass an instance to remove a specific plugin.
Pass a type object to remove all plugins that match that type.
Subclasses are not removed. Pass a string to remove all plugins with
a matching ``name`` attribute. Pass ``True`` to remove all plugins.
The list of affected plugins is returned. | [
"Uninstall",
"plugins",
".",
"Pass",
"an",
"instance",
"to",
"remove",
"a",
"specific",
"plugin",
".",
"Pass",
"a",
"type",
"object",
"to",
"remove",
"all",
"plugins",
"that",
"match",
"that",
"type",
".",
"Subclasses",
"are",
"not",
"removed",
".",
"Pass"... | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L467-L481 | train | 216,963 |
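`uninstall` accepts four selector forms: `True` for everything, a specific instance, an exact type, or a `name` string. The selection predicate can be isolated and exercised on its own (a sketch, not the actual Bottle code); note the exact `type()` identity check is why subclasses are not removed:

```python
def matches(plugin, remove):
    """True if `plugin` should be removed for selector `remove`.

    Selector forms mirror Bottle.uninstall: True (all), the instance
    itself, its exact type, or its `name` attribute.
    """
    return (remove is True
            or remove is plugin
            or remove is type(plugin)
            or getattr(plugin, 'name', True) == remove)

class P:
    name = 'auth'

p = P()
print(matches(p, True))    # True
print(matches(p, P))       # True
print(matches(p, 'auth'))  # True
print(matches(p, 'json'))  # False
```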
splunk/splunk-sdk-python | examples/analytics/bottle.py | Bottle.close | def close(self):
''' Close the application and all installed plugins. '''
for plugin in self.plugins:
if hasattr(plugin, 'close'): plugin.close()
self.stopped = True | python | def close(self):
''' Close the application and all installed plugins. '''
for plugin in self.plugins:
if hasattr(plugin, 'close'): plugin.close()
self.stopped = True | [
"def",
"close",
"(",
"self",
")",
":",
"for",
"plugin",
"in",
"self",
".",
"plugins",
":",
"if",
"hasattr",
"(",
"plugin",
",",
"'close'",
")",
":",
"plugin",
".",
"close",
"(",
")",
"self",
".",
"stopped",
"=",
"True"
] | Close the application and all installed plugins. | [
"Close",
"the",
"application",
"and",
"all",
"installed",
"plugins",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L493-L497 | train | 216,964 |
splunk/splunk-sdk-python | examples/analytics/bottle.py | Bottle._build_callback | def _build_callback(self, config):
''' Apply plugins to a route and return a new callable. '''
wrapped = config['callback']
plugins = self.plugins + config['apply']
skip = config['skip']
try:
for plugin in reversed(plugins):
if True in skip: break
if plugin in skip or type(plugin) in skip: continue
if getattr(plugin, 'name', True) in skip: continue
if hasattr(plugin, 'apply'):
wrapped = plugin.apply(wrapped, config)
else:
wrapped = plugin(wrapped)
if not wrapped: break
functools.update_wrapper(wrapped, config['callback'])
return wrapped
except RouteReset: # A plugin may have changed the config dict inplace.
return self._build_callback(config) | python | def _build_callback(self, config):
''' Apply plugins to a route and return a new callable. '''
wrapped = config['callback']
plugins = self.plugins + config['apply']
skip = config['skip']
try:
for plugin in reversed(plugins):
if True in skip: break
if plugin in skip or type(plugin) in skip: continue
if getattr(plugin, 'name', True) in skip: continue
if hasattr(plugin, 'apply'):
wrapped = plugin.apply(wrapped, config)
else:
wrapped = plugin(wrapped)
if not wrapped: break
functools.update_wrapper(wrapped, config['callback'])
return wrapped
except RouteReset: # A plugin may have changed the config dict inplace.
return self._build_callback(config) | [
"def",
"_build_callback",
"(",
"self",
",",
"config",
")",
":",
"wrapped",
"=",
"config",
"[",
"'callback'",
"]",
"plugins",
"=",
"self",
".",
"plugins",
"+",
"config",
"[",
"'apply'",
"]",
"skip",
"=",
"config",
"[",
"'skip'",
"]",
"try",
":",
"for",
... | Apply plugins to a route and return a new callable. | [
"Apply",
"plugins",
"to",
"a",
"route",
"and",
"return",
"a",
"new",
"callable",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L520-L538 | train | 216,965 |
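`_build_callback` folds the plugin list over the route callback, iterating in reverse so the first installed plugin ends up as the outermost wrapper. A minimal decorator-chain sketch under the same convention, with a simplified skip check:

```python
def apply_plugins(callback, plugins, skip=()):
    """Wrap `callback` with each plugin; reversed order makes the
    first-installed plugin the outermost wrapper."""
    wrapped = callback
    for plugin in reversed(plugins):
        if plugin in skip:
            continue
        wrapped = plugin(wrapped)
    return wrapped

def shout(f):
    return lambda: f().upper()

def exclaim(f):
    return lambda: f() + '!'

# exclaim applies first (innermost), shout wraps the result.
handler = apply_plugins(lambda: 'hello', [shout, exclaim])
print(handler())  # HELLO!
```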
splunk/splunk-sdk-python | examples/analytics/bottle.py | Bottle.hook | def hook(self, name):
""" Return a decorator that attaches a callback to a hook. """
def wrapper(func):
self.hooks.add(name, func)
return func
return wrapper | python | def hook(self, name):
""" Return a decorator that attaches a callback to a hook. """
def wrapper(func):
self.hooks.add(name, func)
return func
return wrapper | [
"def",
"hook",
"(",
"self",
",",
"name",
")",
":",
"def",
"wrapper",
"(",
"func",
")",
":",
"self",
".",
"hooks",
".",
"add",
"(",
"name",
",",
"func",
")",
"return",
"func",
"return",
"wrapper"
] | Return a decorator that attaches a callback to a hook. | [
"Return",
"a",
"decorator",
"that",
"attaches",
"a",
"callback",
"to",
"a",
"hook",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L624-L629 | train | 216,966 |
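The `hook` method returns a decorator that registers the function and hands it back unchanged, so the decorated name still refers to the original callable. The same pattern in isolation, using a plain module-level registry:

```python
hooks = {}

def hook(name):
    """Decorator factory: register `func` under `name`, return it unchanged."""
    def wrapper(func):
        hooks.setdefault(name, []).append(func)
        return func
    return wrapper

@hook('before_request')
def audit():
    return 'audited'

print(audit())  # 'audited' -- still directly callable
print(hooks['before_request'][0] is audit)  # True
```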
splunk/splunk-sdk-python | examples/analytics/bottle.py | Request.bind | def bind(self, environ):
""" Bind a new WSGI environment.
This is done automatically for the global `bottle.request`
instance on every request.
"""
self.environ = environ
# These attributes are used anyway, so it is ok to compute them here
self.path = '/' + environ.get('PATH_INFO', '/').lstrip('/')
self.method = environ.get('REQUEST_METHOD', 'GET').upper() | python | def bind(self, environ):
""" Bind a new WSGI environment.
This is done automatically for the global `bottle.request`
instance on every request.
"""
self.environ = environ
# These attributes are used anyway, so it is ok to compute them here
self.path = '/' + environ.get('PATH_INFO', '/').lstrip('/')
self.method = environ.get('REQUEST_METHOD', 'GET').upper() | [
"def",
"bind",
"(",
"self",
",",
"environ",
")",
":",
"self",
".",
"environ",
"=",
"environ",
"# These attributes are used anyway, so it is ok to compute them here",
"self",
".",
"path",
"=",
"'/'",
"+",
"environ",
".",
"get",
"(",
"'PATH_INFO'",
",",
"'/'",
")"... | Bind a new WSGI environment.
This is done automatically for the global `bottle.request`
instance on every request. | [
"Bind",
"a",
"new",
"WSGI",
"environment",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L787-L796 | train | 216,967 |
splunk/splunk-sdk-python | examples/analytics/bottle.py | Request._body | def _body(self):
""" The HTTP request body as a seekable file-like object.
This property returns a copy of the `wsgi.input` stream and should
be used instead of `environ['wsgi.input']`.
"""
maxread = max(0, self.content_length)
stream = self.environ['wsgi.input']
body = BytesIO() if maxread < MEMFILE_MAX else TemporaryFile(mode='w+b')
while maxread > 0:
part = stream.read(min(maxread, MEMFILE_MAX))
if not part: break
body.write(part)
maxread -= len(part)
self.environ['wsgi.input'] = body
body.seek(0)
return body | python | def _body(self):
""" The HTTP request body as a seekable file-like object.
This property returns a copy of the `wsgi.input` stream and should
be used instead of `environ['wsgi.input']`.
"""
maxread = max(0, self.content_length)
stream = self.environ['wsgi.input']
body = BytesIO() if maxread < MEMFILE_MAX else TemporaryFile(mode='w+b')
while maxread > 0:
part = stream.read(min(maxread, MEMFILE_MAX))
if not part: break
body.write(part)
maxread -= len(part)
self.environ['wsgi.input'] = body
body.seek(0)
return body | [
"def",
"_body",
"(",
"self",
")",
":",
"maxread",
"=",
"max",
"(",
"0",
",",
"self",
".",
"content_length",
")",
"stream",
"=",
"self",
".",
"environ",
"[",
"'wsgi.input'",
"]",
"body",
"=",
"BytesIO",
"(",
")",
"if",
"maxread",
"<",
"MEMFILE_MAX",
"... | The HTTP request body as a seekable file-like object.
This property returns a copy of the `wsgi.input` stream and should
be used instead of `environ['wsgi.input']`. | [
"The",
"HTTP",
"request",
"body",
"as",
"a",
"seekable",
"file",
"-",
"like",
"object",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L953-L969 | train | 216,968 |
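`_body` buffers the WSGI input stream up to `content_length` bytes in chunks capped at `MEMFILE_MAX`, then swaps the seekable copy back into the environ so the body can be read more than once. The capped chunked read on its own (buffer choice simplified to memory only, no temp-file spill):

```python
import io

MEMFILE_MAX = 1024 * 100

def buffer_body(stream, content_length):
    """Copy at most `content_length` bytes of `stream` into a seekable buffer."""
    maxread = max(0, content_length)
    body = io.BytesIO()
    while maxread > 0:
        part = stream.read(min(maxread, MEMFILE_MAX))
        if not part:
            break
        body.write(part)
        maxread -= len(part)
    body.seek(0)
    return body

body = buffer_body(io.BytesIO(b'hello world'), 5)
print(body.read())  # b'hello'
```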
splunk/splunk-sdk-python | examples/analytics/bottle.py | Response.bind | def bind(self):
""" Resets the Response object to its factory defaults. """
self._COOKIES = None
self.status = 200
self.headers = HeaderDict()
self.content_type = 'text/html; charset=UTF-8' | python | def bind(self):
""" Resets the Response object to its factory defaults. """
self._COOKIES = None
self.status = 200
self.headers = HeaderDict()
self.content_type = 'text/html; charset=UTF-8' | [
"def",
"bind",
"(",
"self",
")",
":",
"self",
".",
"_COOKIES",
"=",
"None",
"self",
".",
"status",
"=",
"200",
"self",
".",
"headers",
"=",
"HeaderDict",
"(",
")",
"self",
".",
"content_type",
"=",
"'text/html; charset=UTF-8'"
] | Resets the Response object to its factory defaults. | [
"Resets",
"the",
"Response",
"object",
"to",
"its",
"factory",
"defaults",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L1022-L1027 | train | 216,969 |
splunk/splunk-sdk-python | examples/analytics/bottle.py | Response.delete_cookie | def delete_cookie(self, key, **kwargs):
''' Delete a cookie. Be sure to use the same `domain` and `path`
parameters as used to create the cookie. '''
kwargs['max_age'] = -1
kwargs['expires'] = 0
self.set_cookie(key, '', **kwargs) | python | def delete_cookie(self, key, **kwargs):
''' Delete a cookie. Be sure to use the same `domain` and `path`
parameters as used to create the cookie. '''
kwargs['max_age'] = -1
kwargs['expires'] = 0
self.set_cookie(key, '', **kwargs) | [
"def",
"delete_cookie",
"(",
"self",
",",
"key",
",",
"*",
"*",
"kwargs",
")",
":",
"kwargs",
"[",
"'max_age'",
"]",
"=",
"-",
"1",
"kwargs",
"[",
"'expires'",
"]",
"=",
"0",
"self",
".",
"set_cookie",
"(",
"key",
",",
"''",
",",
"*",
"*",
"kwarg... | Delete a cookie. Be sure to use the same `domain` and `path`
parameters as used to create the cookie. | [
"Delete",
"a",
"cookie",
".",
"Be",
"sure",
"to",
"use",
"the",
"same",
"domain",
"and",
"path",
"parameters",
"as",
"used",
"to",
"create",
"the",
"cookie",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L1110-L1115 | train | 216,970 |
splunk/splunk-sdk-python | examples/analytics/bottle.py | HooksPlugin.add | def add(self, name, func):
''' Attach a callback to a hook. '''
if name not in self.hooks:
raise ValueError("Unknown hook name %s" % name)
was_empty = self._empty()
self.hooks[name].append(func)
if self.app and was_empty and not self._empty(): self.app.reset() | python | def add(self, name, func):
''' Attach a callback to a hook. '''
if name not in self.hooks:
raise ValueError("Unknown hook name %s" % name)
was_empty = self._empty()
self.hooks[name].append(func)
if self.app and was_empty and not self._empty(): self.app.reset() | [
"def",
"add",
"(",
"self",
",",
"name",
",",
"func",
")",
":",
"if",
"name",
"not",
"in",
"self",
".",
"hooks",
":",
"raise",
"ValueError",
"(",
"\"Unknown hook name %s\"",
"%",
"name",
")",
"was_empty",
"=",
"self",
".",
"_empty",
"(",
")",
"self",
... | Attach a callback to a hook. | [
"Attach",
"a",
"callback",
"to",
"a",
"hook",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L1170-L1176 | train | 216,971 |
splunk/splunk-sdk-python | examples/analytics/bottle.py | BaseTemplate.global_config | def global_config(cls, key, *args):
''' This reads or sets the global settings stored in class.settings. '''
if args:
cls.settings[key] = args[0]
else:
return cls.settings[key] | python | def global_config(cls, key, *args):
''' This reads or sets the global settings stored in class.settings. '''
if args:
cls.settings[key] = args[0]
else:
return cls.settings[key] | [
"def",
"global_config",
"(",
"cls",
",",
"key",
",",
"*",
"args",
")",
":",
"if",
"args",
":",
"cls",
".",
"settings",
"[",
"key",
"]",
"=",
"args",
"[",
"0",
"]",
"else",
":",
"return",
"cls",
".",
"settings",
"[",
"key",
"]"
] | This reads or sets the global settings stored in class.settings. | [
"This",
"reads",
"or",
"sets",
"the",
"global",
"settings",
"stored",
"in",
"class",
".",
"settings",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/analytics/bottle.py#L2137-L2142 | train | 216,972 |
splunk/splunk-sdk-python | splunklib/modularinput/script.py | Script.service | def service(self):
""" Returns a Splunk service object for this script invocation.
The service object is created from the Splunkd URI and session key
passed to the command invocation on the modular input stream. It is
available as soon as the :code:`Script.stream_events` method is
called.
:return: :class:splunklib.client.Service. A value of None is returned,
if you call this method before the :code:`Script.stream_events` method
is called.
"""
if self._service is not None:
return self._service
if self._input_definition is None:
return None
splunkd_uri = self._input_definition.metadata["server_uri"]
session_key = self._input_definition.metadata["session_key"]
splunkd = urlsplit(splunkd_uri, allow_fragments=False)
self._service = Service(
scheme=splunkd.scheme,
host=splunkd.hostname,
port=splunkd.port,
token=session_key,
)
return self._service | python | def service(self):
""" Returns a Splunk service object for this script invocation.
The service object is created from the Splunkd URI and session key
passed to the command invocation on the modular input stream. It is
available as soon as the :code:`Script.stream_events` method is
called.
:return: :class:splunklib.client.Service. A value of None is returned,
if you call this method before the :code:`Script.stream_events` method
is called.
"""
if self._service is not None:
return self._service
if self._input_definition is None:
return None
splunkd_uri = self._input_definition.metadata["server_uri"]
session_key = self._input_definition.metadata["session_key"]
splunkd = urlsplit(splunkd_uri, allow_fragments=False)
self._service = Service(
scheme=splunkd.scheme,
host=splunkd.hostname,
port=splunkd.port,
token=session_key,
)
return self._service | [
"def",
"service",
"(",
"self",
")",
":",
"if",
"self",
".",
"_service",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_service",
"if",
"self",
".",
"_input_definition",
"is",
"None",
":",
"return",
"None",
"splunkd_uri",
"=",
"self",
".",
"_input_def... | Returns a Splunk service object for this script invocation.
The service object is created from the Splunkd URI and session key
passed to the command invocation on the modular input stream. It is
available as soon as the :code:`Script.stream_events` method is
called.
:return: :class:splunklib.client.Service. A value of None is returned,
if you call this method before the :code:`Script.stream_events` method
is called. | [
"Returns",
"a",
"Splunk",
"service",
"object",
"for",
"this",
"script",
"invocation",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/modularinput/script.py#L113-L144 | train | 216,973 |
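`Script.service` derives its connection parameters by splitting the `server_uri` passed on the modular input stream. The stdlib call it relies on can be exercised directly (the hostname below is made up for illustration):

```python
from urllib.parse import urlsplit

def connection_params(splunkd_uri, session_key):
    """Break a splunkd URI into the kwargs a Service constructor expects."""
    parts = urlsplit(splunkd_uri, allow_fragments=False)
    return {
        'scheme': parts.scheme,
        'host': parts.hostname,
        'port': parts.port,
        'token': session_key,
    }

print(connection_params('https://splunkd.example.com:8089', 'SESSION-KEY'))
# {'scheme': 'https', 'host': 'splunkd.example.com', 'port': 8089, 'token': 'SESSION-KEY'}
```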
splunk/splunk-sdk-python | examples/random_numbers/random_numbers.py | MyScript.validate_input | def validate_input(self, validation_definition):
"""In this example we are using external validation to verify that min is
less than max. If validate_input does not raise an Exception, the input is
assumed to be valid. Otherwise it prints the exception as an error message
when telling splunkd that the configuration is invalid.
When using external validation, after splunkd calls the modular input with
--scheme to get a scheme, it calls it again with --validate-arguments for
each instance of the modular input in its configuration files, feeding XML
on stdin to the modular input to do validation. It is called the same way
whenever a modular input's configuration is edited.
:param validation_definition: a ValidationDefinition object
"""
# Get the parameters from the ValidationDefinition object,
# then typecast the values as floats
minimum = float(validation_definition.parameters["min"])
maximum = float(validation_definition.parameters["max"])
if minimum >= maximum:
            raise ValueError("min must be less than max; found min=%f, max=%f" % (minimum, maximum))
"""In this example we are using external validation to verify that min is
less than max. If validate_input does not raise an Exception, the input is
assumed to be valid. Otherwise it prints the exception as an error message
when telling splunkd that the configuration is invalid.
When using external validation, after splunkd calls the modular input with
--scheme to get a scheme, it calls it again with --validate-arguments for
each instance of the modular input in its configuration files, feeding XML
on stdin to the modular input to do validation. It is called the same way
whenever a modular input's configuration is edited.
:param validation_definition: a ValidationDefinition object
"""
# Get the parameters from the ValidationDefinition object,
# then typecast the values as floats
minimum = float(validation_definition.parameters["min"])
maximum = float(validation_definition.parameters["max"])
if minimum >= maximum:
            raise ValueError("min must be less than max; found min=%f, max=%f" % (minimum, maximum))
"def",
"validate_input",
"(",
"self",
",",
"validation_definition",
")",
":",
"# Get the parameters from the ValidationDefinition object,",
"# then typecast the values as floats",
"minimum",
"=",
"float",
"(",
"validation_definition",
".",
"parameters",
"[",
"\"min\"",
"]",
"... | In this example we are using external validation to verify that min is
less than max. If validate_input does not raise an Exception, the input is
assumed to be valid. Otherwise it prints the exception as an error message
when telling splunkd that the configuration is invalid.
When using external validation, after splunkd calls the modular input with
--scheme to get a scheme, it calls it again with --validate-arguments for
each instance of the modular input in its configuration files, feeding XML
on stdin to the modular input to do validation. It is called the same way
whenever a modular input's configuration is edited.
:param validation_definition: a ValidationDefinition object | [
"In",
"this",
"example",
"we",
"are",
"using",
"external",
"validation",
"to",
"verify",
"that",
"min",
"is",
"less",
"than",
"max",
".",
"If",
"validate_input",
"does",
"not",
"raise",
"an",
"Exception",
"the",
"input",
"is",
"assumed",
"to",
"be",
"valid... | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/examples/random_numbers/random_numbers.py#L73-L93 | train | 216,974 |
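The record above also illustrates a `%`-formatting pitfall: `"... %f, max=%f" % minimum, maximum` binds `%` before the comma, so the format string receives only one argument and fails at runtime. A self-contained version of the min/max check with the arguments correctly wrapped in a tuple:

```python
def validate_range(params):
    """Raise ValueError unless params['min'] < params['max']."""
    minimum = float(params['min'])
    maximum = float(params['max'])
    if minimum >= maximum:
        raise ValueError(
            'min must be less than max; found min=%f, max=%f' % (minimum, maximum))

validate_range({'min': '1.5', 'max': '9'})  # ok, returns None
try:
    validate_range({'min': '9', 'max': '1'})
except ValueError as e:
    print(e)  # min must be less than max; found min=9.000000, max=1.000000
```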
splunk/splunk-sdk-python | splunklib/modularinput/event.py | Event.write_to | python

def write_to(self, stream):
    """Write an XML representation of self, an ``Event`` object, to the given stream.

    The ``Event`` object will only be written if its data field is defined,
    otherwise a ``ValueError`` is raised.

    :param stream: stream to write XML to.
    """
    if self.data is None:
        raise ValueError("Events must have at least the data field set to be written to XML.")

    event = ET.Element("event")
    if self.stanza is not None:
        event.set("stanza", self.stanza)
    event.set("unbroken", str(int(self.unbroken)))

    # if a time isn't set, let Splunk guess by not creating a <time> element
    if self.time is not None:
        ET.SubElement(event, "time").text = str(self.time)

    # add all other subelements to this Event, represented by (tag, text)
    subelements = [
        ("source", self.source),
        ("sourcetype", self.sourceType),
        ("index", self.index),
        ("host", self.host),
        ("data", self.data)
    ]
    for node, value in subelements:
        if value is not None:
            ET.SubElement(event, node).text = value

    if self.done:
        ET.SubElement(event, "done")

    stream.write(ET.tostring(event))
    stream.flush()

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/modularinput/event.py#L72-L108 | train | 216,975
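The (tag, text) loop in `write_to` is a generally useful ElementTree pattern: build a list of candidate subelements and emit only the ones that are set. A minimal standalone sketch, with plain function arguments instead of the `Event` object's fields (the argument names are illustrative assumptions):

```python
import xml.etree.ElementTree as ET

# Minimal sketch of the write_to pattern above: build an <event> element
# from plain arguments, skipping any field that is None, exactly as the
# (tag, text) subelements loop does.
def event_to_xml(data, source=None, host=None, stanza=None):
    if data is None:
        raise ValueError("Events must have at least the data field set.")
    event = ET.Element("event")
    if stanza is not None:
        event.set("stanza", stanza)
    for tag, text in [("source", source), ("host", host), ("data", data)]:
        if text is not None:
            ET.SubElement(event, tag).text = text
    return ET.tostring(event)  # bytes in Python 3
```

Because unset fields never produce an element, the consumer (splunkd, in the real case) can apply its own defaults for anything missing.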
splunk/splunk-sdk-python | splunklib/client.py | Service.capabilities | python

def capabilities(self):
    """Returns the list of system capabilities.

    :return: A ``list`` of capabilities.
    """
    response = self.get(PATH_CAPABILITIES)
    return _load_atom(response, MATCH_ENTRY_CONTENT).capabilities

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L423-L429 | train | 216,976
splunk/splunk-sdk-python | splunklib/client.py | Service.modular_input_kinds | python

def modular_input_kinds(self):
    """Returns the collection of the modular input kinds on this Splunk instance.

    :return: A :class:`ReadOnlyCollection` of :class:`ModularInputKind` entities.
    """
    if self.splunk_version >= (5,):
        return ReadOnlyCollection(self, PATH_MODULAR_INPUTS, item=ModularInputKind)
    else:
        raise IllegalOperationException("Modular inputs are not supported before Splunk version 5.")

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L506-L514 | train | 216,977
splunk/splunk-sdk-python | splunklib/client.py | Service.restart_required | python

def restart_required(self):
    """Indicates whether splunkd is in a state that requires a restart.

    :return: A ``boolean`` that indicates whether a restart is required.
    """
    response = self.get("messages").body.read()
    messages = data.load(response)['feed']
    if 'entry' not in messages:
        result = False
    else:
        if isinstance(messages['entry'], dict):
            titles = [messages['entry']['title']]
        else:
            titles = [x['title'] for x in messages['entry']]
        result = 'restart_required' in titles
    return result

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L580-L596 | train | 216,978
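The title-scanning logic in `restart_required` can be isolated from the HTTP round trip. The sketch below assumes the feed has already been parsed into plain dicts (the SDK's `data.load` actually returns attribute-style records, so dict access here is a simplification); it preserves the quirk that a feed with a single entry holds a dict rather than a one-element list:

```python
# Sketch of the restart_required logic above, applied to an
# already-parsed feed. A single entry arrives as a dict, multiple
# entries as a list, so both shapes are normalized to a title list.
def needs_restart(feed):
    if 'entry' not in feed:
        return False
    entries = feed['entry']
    if isinstance(entries, dict):
        titles = [entries['title']]
    else:
        titles = [e['title'] for e in entries]
    return 'restart_required' in titles
```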
splunk/splunk-sdk-python | splunklib/client.py | Service.splunk_version | python

def splunk_version(self):
    """Returns the version of the splunkd instance this object is attached
    to.

    The version is returned as a tuple of the version components as
    integers (for example, `(4,3,3)` or `(5,)`).

    :return: A ``tuple`` of ``integers``.
    """
    if self._splunk_version is None:
        self._splunk_version = tuple([int(p) for p in self.info['version'].split('.')])
    return self._splunk_version

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L656-L667 | train | 216,979
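`splunk_version` relies on tuples of ints comparing element by element, which is what makes checks like `self.splunk_version >= (5,)` in `modular_input_kinds` correct. A sketch of just the parsing step, with a test showing why comparing the raw version strings would be wrong:

```python
# The version-tuple trick used by splunk_version above: parsing "X.Y.Z"
# into a tuple of ints gives correct ordering, whereas comparing the
# raw strings orders "10" before "9".
def parse_version(version_string):
    return tuple(int(p) for p in version_string.split('.'))
```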
splunk/splunk-sdk-python | splunklib/client.py | Endpoint.get | python

def get(self, path_segment="", owner=None, app=None, sharing=None, **query):
    """Performs a GET operation on the path segment relative to this endpoint.

    This method is named to match the HTTP method. This method makes at least
    one roundtrip to the server, one additional round trip for
    each 303 status returned, plus at most two additional round trips if
    the ``autologin`` field of :func:`connect` is set to ``True``.

    If *owner*, *app*, and *sharing* are omitted, this method takes a
    default namespace from the :class:`Service` object for this :class:`Endpoint`.
    All other keyword arguments are included in the URL as query parameters.

    :raises AuthenticationError: Raised when the ``Service`` is not logged in.
    :raises HTTPError: Raised when an error in the request occurs.
    :param path_segment: A path segment relative to this endpoint.
    :type path_segment: ``string``
    :param owner: The owner context of the namespace (optional).
    :type owner: ``string``
    :param app: The app context of the namespace (optional).
    :type app: ``string``
    :param sharing: The sharing mode for the namespace (optional).
    :type sharing: "global", "system", "app", or "user"
    :param query: All other keyword arguments, which are used as query
        parameters.
    :type query: ``string``
    :return: The response from the server.
    :rtype: ``dict`` with keys ``body``, ``headers``, ``reason``, and ``status``

    **Example**::

        import splunklib.client
        s = client.service(...)
        apps = s.apps
        apps.get() == \\
            {'body': ...a response reader object...,
             'headers': [('content-length', '26208'),
                         ('expires', 'Fri, 30 Oct 1998 00:00:00 GMT'),
                         ('server', 'Splunkd'),
                         ('connection', 'close'),
                         ('cache-control', 'no-store, max-age=0, must-revalidate, no-cache'),
                         ('date', 'Fri, 11 May 2012 16:30:35 GMT'),
                         ('content-type', 'text/xml; charset=utf-8')],
             'reason': 'OK',
             'status': 200}
        apps.get('nonexistant/path') # raises HTTPError
        s.logout()
        apps.get() # raises AuthenticationError
    """
    # self.path to the Endpoint is relative in the SDK, so passing
    # owner, app, sharing, etc. along will produce the correct
    # namespace in the final request.
    if path_segment.startswith('/'):
        path = path_segment
    else:
        path = self.service._abspath(self.path + path_segment, owner=owner,
                                     app=app, sharing=sharing)
    # ^-- This was "%s%s" % (self.path, path_segment).
    #     That doesn't work, because self.path may be UrlEncoded.
    return self.service.get(path,
                            owner=owner, app=app, sharing=sharing,
                            **query)

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L697-L759 | train | 216,980
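The only branching in `get` before it delegates to the service is the absolute-versus-relative path dispatch. A sketch of that branch alone, with `_abspath`'s namespace handling elided so the relative case is plain concatenation (an intentional simplification):

```python
# Sketch of the path dispatch at the bottom of Endpoint.get: a segment
# starting with '/' is taken as an absolute path and used verbatim;
# anything else is appended to the endpoint's own path. The real code
# routes the relative case through _abspath for namespace handling.
def resolve_path(endpoint_path, path_segment):
    if path_segment.startswith('/'):
        return path_segment
    return endpoint_path + path_segment
```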
splunk/splunk-sdk-python | splunklib/client.py | Entity._run_action | python

def _run_action(self, path_segment, **kwargs):
    """Run a method and return the content Record from the returned XML.

    A method is a relative path from an Entity that is not itself
    an Entity. _run_action assumes that the returned XML is an
    Atom field containing one Entry, and the contents of Entry is
    what should be the return value. This is right in enough cases
    to make this method useful.
    """
    response = self.get(path_segment, **kwargs)
    data = self._load_atom_entry(response)
    rec = _parse_atom_entry(data)
    return rec.content

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L958-L970 | train | 216,981
splunk/splunk-sdk-python | splunklib/client.py | Entity._proper_namespace | python

def _proper_namespace(self, owner=None, app=None, sharing=None):
    """Produce a namespace sans wildcards for use in entity requests.

    This method tries to fill in the fields of the namespace which are `None`
    or wildcard (`'-'`) from the entity's namespace. If that fails, it uses
    the service's namespace.

    :param owner:
    :param app:
    :param sharing:
    :return:
    """
    if owner is None and app is None and sharing is None:  # No namespace provided
        if self._state is not None and 'access' in self._state:
            return (self._state.access.owner,
                    self._state.access.app,
                    self._state.access.sharing)
        else:
            return (self.service.namespace['owner'],
                    self.service.namespace['app'],
                    self.service.namespace['sharing'])
    else:
        return (owner, app, sharing)

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L972-L994 | train | 216,982
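The fallback order in `_proper_namespace` can be sketched with plain dicts: explicit arguments win outright, otherwise the entity's own access record is used, then the service-wide namespace. (The real code reads attribute-style records, so the dict access and parameter names below are assumptions for illustration.)

```python
# Sketch of _proper_namespace's fallback order. entity_access stands in
# for self._state.access (or None when the entity has no access record);
# service_ns stands in for self.service.namespace.
def proper_namespace(owner, app, sharing, entity_access, service_ns):
    if owner is None and app is None and sharing is None:  # no namespace provided
        src = entity_access if entity_access is not None else service_ns
        return (src['owner'], src['app'], src['sharing'])
    return (owner, app, sharing)
```

Note that, like the original, the fallback only triggers when all three arguments are omitted; a partially specified namespace is passed through untouched.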
splunk/splunk-sdk-python | splunklib/client.py | Entity.refresh | python

def refresh(self, state=None):
    """Refreshes the state of this entity.

    If *state* is provided, load it as the new state for this
    entity. Otherwise, make a roundtrip to the server (by calling
    the :meth:`read` method of ``self``) to fetch an updated state,
    plus at most two additional round trips if
    the ``autologin`` field of :func:`connect` is set to ``True``.

    :param state: Entity-specific arguments (optional).
    :type state: ``dict``

    :raises EntityDeletedException: Raised if the entity no longer exists on
        the server.

    **Example**::

        import splunklib.client as client
        s = client.connect(...)
        search = s.apps['search']
        search.refresh()
    """
    if state is not None:
        self._state = state
    else:
        self._state = self.read(self.get())
    return self

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L1008-L1033 | train | 216,983
splunk/splunk-sdk-python | splunklib/client.py | Entity.disable | python

def disable(self):
    """Disables the entity at this endpoint."""
    self.post("disable")
    if self.service.restart_required:
        self.service.restart(120)
    return self

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L1052-L1057 | train | 216,984
splunk/splunk-sdk-python | splunklib/client.py | ReadOnlyCollection._entity_path | python

def _entity_path(self, state):
    """Calculate the path to an entity to be returned.

    *state* should be the dictionary returned by
    :func:`_parse_atom_entry`. :func:`_entity_path` extracts the
    link to this entity from *state*, and strips all the namespace
    prefixes from it to leave only the relative path of the entity
    itself, sans namespace.

    :rtype: ``string``
    :return: an absolute path
    """
    # This has been factored out so that it can be easily
    # overloaded by Configurations, which has to switch its
    # entities' endpoints from its own properties/ to configs/.
    raw_path = urllib.parse.unquote(state.links.alternate)
    if 'servicesNS/' in raw_path:
        return _trailing(raw_path, 'servicesNS/', '/', '/')
    elif 'services/' in raw_path:
        return _trailing(raw_path, 'services/')
    else:
        return raw_path

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L1291-L1312 | train | 216,985
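`_trailing` is defined elsewhere in client.py, so the sketch below substitutes its own splitting logic to show the same idea: under `servicesNS/` the two namespace components (owner and app) that follow the prefix are dropped as well, while under `services/` only the prefix itself is stripped. The helper name and splitting strategy are assumptions, not the SDK's implementation.

```python
# Prefix-stripping sketch of _entity_path above. servicesNS/ paths carry
# owner/app namespace components that must also be discarded; services/
# paths do not. servicesNS/ is checked first, as in the original.
def strip_namespace(raw_path):
    if 'servicesNS/' in raw_path:
        rest = raw_path.split('servicesNS/', 1)[1]
        return rest.split('/', 2)[2]  # drop owner and app components
    if 'services/' in raw_path:
        return raw_path.split('services/', 1)[1]
    return raw_path
```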
splunk/splunk-sdk-python | splunklib/client.py | ReadOnlyCollection.itemmeta | python

def itemmeta(self):
    """Returns metadata for members of the collection.

    Makes a single roundtrip to the server, plus two more at most if
    the ``autologin`` field of :func:`connect` is set to ``True``.

    :return: A :class:`splunklib.data.Record` object containing the metadata.

    **Example**::

        import splunklib.client as client
        import pprint
        s = client.connect(...)
        pprint.pprint(s.apps.itemmeta())
        {'access': {'app': 'search',
                    'can_change_perms': '1',
                    'can_list': '1',
                    'can_share_app': '1',
                    'can_share_global': '1',
                    'can_share_user': '1',
                    'can_write': '1',
                    'modifiable': '1',
                    'owner': 'admin',
                    'perms': {'read': ['*'], 'write': ['admin']},
                    'removable': '0',
                    'sharing': 'user'},
         'fields': {'optional': ['author',
                                 'configured',
                                 'description',
                                 'label',
                                 'manageable',
                                 'template',
                                 'visible'],
                    'required': ['name'], 'wildcard': []}}
    """
    response = self.get("_new")
    content = _load_atom(response, MATCH_ENTRY_CONTENT)
    return _parse_atom_metadata(content)

a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L1351-L1388 | train | 216,986
splunk/splunk-sdk-python | splunklib/client.py | ReadOnlyCollection.iter | python

def iter(self, offset=0, count=None, pagesize=None, **kwargs):
    """Iterates over the collection.

    This method is equivalent to the :meth:`list` method, but
    it returns an iterator and can load a certain number of entities at a
    time from the server.

    :param offset: The index of the first entity to return (optional).
    :type offset: ``integer``
    :param count: The maximum number of entities to return (optional).
    :type count: ``integer``
    :param pagesize: The number of entities to load (optional).
    :type pagesize: ``integer``
    :param kwargs: Additional arguments (optional):

        - "search" (``string``): The search query to filter responses.
        - "sort_dir" (``string``): The direction to sort returned items:
          "asc" or "desc".
        - "sort_key" (``string``): The field to use for sorting (optional).
        - "sort_mode" (``string``): The collating sequence for sorting
          returned items: "auto", "alpha", "alpha_case", or "num".

    :type kwargs: ``dict``

    **Example**::

        import splunklib.client as client
        s = client.connect(...)
        for saved_search in s.saved_searches.iter(pagesize=10):
            # Loads 10 saved searches at a time from the
            # server.
            ...
    """
    assert pagesize is None or pagesize > 0
    if count is None:
        count = self.null_count
    fetched = 0
    while count == self.null_count or fetched < count:
        response = self.get(count=pagesize or count, offset=offset, **kwargs)
        items = self._load_list(response)
        N = len(items)
        fetched += N
        for item in items:
            yield item
        if pagesize is None or N < pagesize:
            break
        offset += N
        logging.debug("pagesize=%d, fetched=%d, offset=%d, N=%d, kwargs=%s", pagesize, fetched, offset, N, kwargs)
- "sort_mode" (``string``): The collating sequence for sorting
returned items: "auto", "alpha", "alpha_case", or "num".
:type kwargs: ``dict``
**Example**::
import splunklib.client as client
s = client.connect(...)
for saved_search in s.saved_searches.iter(pagesize=10):
# Loads 10 saved searches at a time from the
# server.
... | [
"Iterates",
"over",
"the",
"collection",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L1390-L1440 | train | 216,987 |
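The `iter` entry above implements offset/pagesize paging: request `pagesize` items, advance `offset` by the number actually returned, and stop on a short page. A minimal, server-free sketch of the same loop — the `fetch_page` callback is a stand-in for `self.get` plus `_load_list`, not part of splunklib:

```python
def paged_iter(fetch_page, offset=0, count=None, pagesize=None):
    """Yield items from fetch_page(offset, limit) one page at a time.

    count=None means "no maximum", standing in for the null_count
    sentinel used by the SDK's version of this loop.
    """
    assert pagesize is None or pagesize > 0
    fetched = 0
    while count is None or fetched < count:
        items = fetch_page(offset, pagesize or count)
        n = len(items)
        fetched += n
        for item in items:
            yield item
        if pagesize is None or n < pagesize:
            break  # one-shot request, or a short (final) page
        offset += n
```

As in the original, when both `count` and `pagesize` are given the final page may overshoot `count` if the two do not divide evenly.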
splunk/splunk-sdk-python | splunklib/client.py | ReadOnlyCollection.list | def list(self, count=None, **kwargs):
"""Retrieves a list of entities in this collection.
The entire collection is loaded at once and is returned as a list. This
function makes a single roundtrip to the server, plus at most two more if
the ``autologin`` field of :func:`connect` is set to ``True``.
There is no caching--every call makes at least one round trip.
:param count: The maximum number of entities to return (optional).
:type count: ``integer``
:param kwargs: Additional arguments (optional):
- "offset" (``integer``): The offset of the first item to return.
- "search" (``string``): The search query to filter responses.
- "sort_dir" (``string``): The direction to sort returned items:
"asc" or "desc".
- "sort_key" (``string``): The field to use for sorting (optional).
- "sort_mode" (``string``): The collating sequence for sorting
returned items: "auto", "alpha", "alpha_case", or "num".
:type kwargs: ``dict``
:return: A ``list`` of entities.
"""
# response = self.get(count=count, **kwargs)
# return self._load_list(response)
return list(self.iter(count=count, **kwargs)) | python | def list(self, count=None, **kwargs):
"""Retrieves a list of entities in this collection.
The entire collection is loaded at once and is returned as a list. This
function makes a single roundtrip to the server, plus at most two more if
the ``autologin`` field of :func:`connect` is set to ``True``.
There is no caching--every call makes at least one round trip.
:param count: The maximum number of entities to return (optional).
:type count: ``integer``
:param kwargs: Additional arguments (optional):
- "offset" (``integer``): The offset of the first item to return.
- "search" (``string``): The search query to filter responses.
- "sort_dir" (``string``): The direction to sort returned items:
"asc" or "desc".
- "sort_key" (``string``): The field to use for sorting (optional).
- "sort_mode" (``string``): The collating sequence for sorting
returned items: "auto", "alpha", "alpha_case", or "num".
:type kwargs: ``dict``
:return: A ``list`` of entities.
"""
# response = self.get(count=count, **kwargs)
# return self._load_list(response)
return list(self.iter(count=count, **kwargs)) | [
"def",
"list",
"(",
"self",
",",
"count",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"# response = self.get(count=count, **kwargs)",
"# return self._load_list(response)",
"return",
"list",
"(",
"self",
".",
"iter",
"(",
"count",
"=",
"count",
",",
"*",
"*... | Retrieves a list of entities in this collection.
The entire collection is loaded at once and is returned as a list. This
function makes a single roundtrip to the server, plus at most two more if
the ``autologin`` field of :func:`connect` is set to ``True``.
There is no caching--every call makes at least one round trip.
:param count: The maximum number of entities to return (optional).
:type count: ``integer``
:param kwargs: Additional arguments (optional):
- "offset" (``integer``): The offset of the first item to return.
- "search" (``string``): The search query to filter responses.
- "sort_dir" (``string``): The direction to sort returned items:
"asc" or "desc".
- "sort_key" (``string``): The field to use for sorting (optional).
- "sort_mode" (``string``): The collating sequence for sorting
returned items: "auto", "alpha", "alpha_case", or "num".
:type kwargs: ``dict``
:return: A ``list`` of entities. | [
"Retrieves",
"a",
"list",
"of",
"entities",
"in",
"this",
"collection",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L1443-L1472 | train | 216,988 |
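`list` above is deliberately just `list(self.iter(...))`: the generator is drained eagerly so the caller gets a plain list in one pass. The difference is the usual lazy-vs-eager one, illustrated here with plain Python (no splunklib involved):

```python
def counted(calls):
    """A generator that records each item as it is produced."""
    for i in range(3):
        calls.append(i)
        yield i

calls = []
gen = counted(calls)
assert calls == []           # creating the generator fetches nothing
eager = list(gen)            # list() drains the iterator in one go
assert eager == [0, 1, 2]
assert calls == [0, 1, 2]
```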
splunk/splunk-sdk-python | splunklib/client.py | Collection.delete | def delete(self, name, **params):
"""Deletes a specified entity from the collection.
:param name: The name of the entity to delete.
:type name: ``string``
:return: The collection.
:rtype: ``self``
This method is implemented for consistency with the REST API's DELETE
method.
If there is no *name* entity on the server, a ``KeyError`` is
thrown. This function always makes a roundtrip to the server.
**Example**::
import splunklib.client as client
c = client.connect(...)
saved_searches = c.saved_searches
saved_searches.create('my_saved_search',
'search * | head 1')
assert 'my_saved_search' in saved_searches
saved_searches.delete('my_saved_search')
assert 'my_saved_search' not in saved_searches
"""
name = UrlEncoded(name, encode_slash=True)
if 'namespace' in params:
namespace = params.pop('namespace')
params['owner'] = namespace.owner
params['app'] = namespace.app
params['sharing'] = namespace.sharing
try:
self.service.delete(_path(self.path, name), **params)
except HTTPError as he:
# An HTTPError with status code 404 means that the entity
# has already been deleted, and we reraise it as a
# KeyError.
if he.status == 404:
raise KeyError("No such entity %s" % name)
else:
raise
return self | python | def delete(self, name, **params):
"""Deletes a specified entity from the collection.
:param name: The name of the entity to delete.
:type name: ``string``
:return: The collection.
:rtype: ``self``
This method is implemented for consistency with the REST API's DELETE
method.
If there is no *name* entity on the server, a ``KeyError`` is
thrown. This function always makes a roundtrip to the server.
**Example**::
import splunklib.client as client
c = client.connect(...)
saved_searches = c.saved_searches
saved_searches.create('my_saved_search',
'search * | head 1')
assert 'my_saved_search' in saved_searches
saved_searches.delete('my_saved_search')
assert 'my_saved_search' not in saved_searches
"""
name = UrlEncoded(name, encode_slash=True)
if 'namespace' in params:
namespace = params.pop('namespace')
params['owner'] = namespace.owner
params['app'] = namespace.app
params['sharing'] = namespace.sharing
try:
self.service.delete(_path(self.path, name), **params)
except HTTPError as he:
# An HTTPError with status code 404 means that the entity
# has already been deleted, and we reraise it as a
# KeyError.
if he.status == 404:
raise KeyError("No such entity %s" % name)
else:
raise
return self | [
"def",
"delete",
"(",
"self",
",",
"name",
",",
"*",
"*",
"params",
")",
":",
"name",
"=",
"UrlEncoded",
"(",
"name",
",",
"encode_slash",
"=",
"True",
")",
"if",
"'namespace'",
"in",
"params",
":",
"namespace",
"=",
"params",
".",
"pop",
"(",
"'name... | Deletes a specified entity from the collection.
:param name: The name of the entity to delete.
:type name: ``string``
:return: The collection.
:rtype: ``self``
This method is implemented for consistency with the REST API's DELETE
method.
If there is no *name* entity on the server, a ``KeyError`` is
thrown. This function always makes a roundtrip to the server.
**Example**::
import splunklib.client as client
c = client.connect(...)
saved_searches = c.saved_searches
saved_searches.create('my_saved_search',
'search * | head 1')
assert 'my_saved_search' in saved_searches
saved_searches.delete('my_saved_search')
assert 'my_saved_search' not in saved_searches | [
"Deletes",
"a",
"specified",
"entity",
"from",
"the",
"collection",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L1572-L1613 | train | 216,989 |
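`Collection.delete` maps an HTTP 404 onto `KeyError` so the REST collection behaves like a Python mapping. The translation pattern in isolation, with a stand-in `HTTPError` (the real splunklib class carries status, headers, and a body):

```python
class HTTPError(Exception):
    """Stand-in for splunklib's HTTPError; only `status` matters here."""
    def __init__(self, status):
        super().__init__("HTTP error %d" % status)
        self.status = status

def delete_entity(do_delete, name):
    """Call do_delete(name), translating a 404 into a KeyError."""
    try:
        do_delete(name)
    except HTTPError as he:
        if he.status == 404:
            # Already gone on the server: surface it as a mapping miss.
            raise KeyError("No such entity %s" % name)
        raise
```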
splunk/splunk-sdk-python | splunklib/client.py | Collection.get | def get(self, name="", owner=None, app=None, sharing=None, **query):
"""Performs a GET request to the server on the collection.
If *owner*, *app*, and *sharing* are omitted, this method takes a
default namespace from the :class:`Service` object for this :class:`Endpoint`.
All other keyword arguments are included in the URL as query parameters.
:raises AuthenticationError: Raised when the ``Service`` is not logged in.
:raises HTTPError: Raised when an error in the request occurs.
        :param name: The name of the entity to retrieve, relative to the collection.
        :type name: ``string``
:param owner: The owner context of the namespace (optional).
:type owner: ``string``
:param app: The app context of the namespace (optional).
:type app: ``string``
:param sharing: The sharing mode for the namespace (optional).
:type sharing: "global", "system", "app", or "user"
:param query: All other keyword arguments, which are used as query
parameters.
:type query: ``string``
:return: The response from the server.
:rtype: ``dict`` with keys ``body``, ``headers``, ``reason``,
and ``status``
Example:
import splunklib.client
s = client.service(...)
saved_searches = s.saved_searches
saved_searches.get("my/saved/search") == \\
{'body': ...a response reader object...,
'headers': [('content-length', '26208'),
('expires', 'Fri, 30 Oct 1998 00:00:00 GMT'),
('server', 'Splunkd'),
('connection', 'close'),
('cache-control', 'no-store, max-age=0, must-revalidate, no-cache'),
('date', 'Fri, 11 May 2012 16:30:35 GMT'),
('content-type', 'text/xml; charset=utf-8')],
'reason': 'OK',
'status': 200}
            saved_searches.get('nonexistent/search') # raises HTTPError
s.logout()
saved_searches.get() # raises AuthenticationError
"""
name = UrlEncoded(name, encode_slash=True)
return super(Collection, self).get(name, owner, app, sharing, **query) | python | def get(self, name="", owner=None, app=None, sharing=None, **query):
"""Performs a GET request to the server on the collection.
If *owner*, *app*, and *sharing* are omitted, this method takes a
default namespace from the :class:`Service` object for this :class:`Endpoint`.
All other keyword arguments are included in the URL as query parameters.
:raises AuthenticationError: Raised when the ``Service`` is not logged in.
:raises HTTPError: Raised when an error in the request occurs.
        :param name: The name of the entity to retrieve, relative to the collection.
        :type name: ``string``
:param owner: The owner context of the namespace (optional).
:type owner: ``string``
:param app: The app context of the namespace (optional).
:type app: ``string``
:param sharing: The sharing mode for the namespace (optional).
:type sharing: "global", "system", "app", or "user"
:param query: All other keyword arguments, which are used as query
parameters.
:type query: ``string``
:return: The response from the server.
:rtype: ``dict`` with keys ``body``, ``headers``, ``reason``,
and ``status``
Example:
import splunklib.client
s = client.service(...)
saved_searches = s.saved_searches
saved_searches.get("my/saved/search") == \\
{'body': ...a response reader object...,
'headers': [('content-length', '26208'),
('expires', 'Fri, 30 Oct 1998 00:00:00 GMT'),
('server', 'Splunkd'),
('connection', 'close'),
('cache-control', 'no-store, max-age=0, must-revalidate, no-cache'),
('date', 'Fri, 11 May 2012 16:30:35 GMT'),
('content-type', 'text/xml; charset=utf-8')],
'reason': 'OK',
'status': 200}
            saved_searches.get('nonexistent/search') # raises HTTPError
s.logout()
saved_searches.get() # raises AuthenticationError
"""
name = UrlEncoded(name, encode_slash=True)
return super(Collection, self).get(name, owner, app, sharing, **query) | [
"def",
"get",
"(",
"self",
",",
"name",
"=",
"\"\"",
",",
"owner",
"=",
"None",
",",
"app",
"=",
"None",
",",
"sharing",
"=",
"None",
",",
"*",
"*",
"query",
")",
":",
"name",
"=",
"UrlEncoded",
"(",
"name",
",",
"encode_slash",
"=",
"True",
")",... | Performs a GET request to the server on the collection.
If *owner*, *app*, and *sharing* are omitted, this method takes a
default namespace from the :class:`Service` object for this :class:`Endpoint`.
All other keyword arguments are included in the URL as query parameters.
:raises AuthenticationError: Raised when the ``Service`` is not logged in.
:raises HTTPError: Raised when an error in the request occurs.
        :param name: The name of the entity to retrieve, relative to the collection.
        :type name: ``string``
:param owner: The owner context of the namespace (optional).
:type owner: ``string``
:param app: The app context of the namespace (optional).
:type app: ``string``
:param sharing: The sharing mode for the namespace (optional).
:type sharing: "global", "system", "app", or "user"
:param query: All other keyword arguments, which are used as query
parameters.
:type query: ``string``
:return: The response from the server.
:rtype: ``dict`` with keys ``body``, ``headers``, ``reason``,
and ``status``
Example:
import splunklib.client
s = client.service(...)
saved_searches = s.saved_searches
saved_searches.get("my/saved/search") == \\
{'body': ...a response reader object...,
'headers': [('content-length', '26208'),
('expires', 'Fri, 30 Oct 1998 00:00:00 GMT'),
('server', 'Splunkd'),
('connection', 'close'),
('cache-control', 'no-store, max-age=0, must-revalidate, no-cache'),
('date', 'Fri, 11 May 2012 16:30:35 GMT'),
('content-type', 'text/xml; charset=utf-8')],
'reason': 'OK',
'status': 200}
            saved_searches.get('nonexistent/search') # raises HTTPError
s.logout()
saved_searches.get() # raises AuthenticationError | [
"Performs",
"a",
"GET",
"request",
"to",
"the",
"server",
"on",
"the",
"collection",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L1615-L1661 | train | 216,990 |
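`Collection.get` wraps the name in `UrlEncoded(name, encode_slash=True)` so a name like `my/saved/search` is addressed as one entity rather than three nested path segments. With only the stdlib, the essential step looks like this (`UrlEncoded` itself also guards against double-encoding, which this sketch ignores):

```python
from urllib.parse import quote

def encode_entity_name(name):
    # safe='' makes quote() percent-encode '/' as well, keeping the
    # whole name inside a single URL path segment.
    return quote(name, safe='')

assert encode_entity_name("my/saved/search") == "my%2Fsaved%2Fsearch"
```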
splunk/splunk-sdk-python | splunklib/client.py | Stanza.submit | def submit(self, stanza):
"""Adds keys to the current configuration stanza as a
dictionary of key-value pairs.
:param stanza: A dictionary of key-value pairs for the stanza.
:type stanza: ``dict``
:return: The :class:`Stanza` object.
"""
body = _encode(**stanza)
self.service.post(self.path, body=body)
return self | python | def submit(self, stanza):
"""Adds keys to the current configuration stanza as a
dictionary of key-value pairs.
:param stanza: A dictionary of key-value pairs for the stanza.
:type stanza: ``dict``
:return: The :class:`Stanza` object.
"""
body = _encode(**stanza)
self.service.post(self.path, body=body)
return self | [
"def",
"submit",
"(",
"self",
",",
"stanza",
")",
":",
"body",
"=",
"_encode",
"(",
"*",
"*",
"stanza",
")",
"self",
".",
"service",
".",
"post",
"(",
"self",
".",
"path",
",",
"body",
"=",
"body",
")",
"return",
"self"
] | Adds keys to the current configuration stanza as a
dictionary of key-value pairs.
:param stanza: A dictionary of key-value pairs for the stanza.
:type stanza: ``dict``
:return: The :class:`Stanza` object. | [
"Adds",
"keys",
"to",
"the",
"current",
"configuration",
"stanza",
"as",
"a",
"dictionary",
"of",
"key",
"-",
"value",
"pairs",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L1757-L1767 | train | 216,991 |
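`Stanza.submit` serializes the key-value dict into a form-encoded POST body via the SDK's private `_encode` helper. The stdlib equivalent of that step is `urllib.parse.urlencode` — the real helper may differ in detail (for instance, in how it treats list values):

```python
from urllib.parse import urlencode

stanza = {"maxDataSize": "auto", "compressRawdata": "true"}
# Sort for a deterministic body; the server accepts any pair order.
body = urlencode(sorted(stanza.items()))
assert body == "compressRawdata=true&maxDataSize=auto"
```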
splunk/splunk-sdk-python | splunklib/client.py | Indexes.delete | def delete(self, name):
""" Deletes a given index.
**Note**: This method is only supported in Splunk 5.0 and later.
:param name: The name of the index to delete.
:type name: ``string``
"""
if self.service.splunk_version >= (5,):
Collection.delete(self, name)
else:
raise IllegalOperationException("Deleting indexes via the REST API is "
"not supported before Splunk version 5.") | python | def delete(self, name):
""" Deletes a given index.
**Note**: This method is only supported in Splunk 5.0 and later.
:param name: The name of the index to delete.
:type name: ``string``
"""
if self.service.splunk_version >= (5,):
Collection.delete(self, name)
else:
raise IllegalOperationException("Deleting indexes via the REST API is "
"not supported before Splunk version 5.") | [
"def",
"delete",
"(",
"self",
",",
"name",
")",
":",
"if",
"self",
".",
"service",
".",
"splunk_version",
">=",
"(",
"5",
",",
")",
":",
"Collection",
".",
"delete",
"(",
"self",
",",
"name",
")",
"else",
":",
"raise",
"IllegalOperationException",
"(",... | Deletes a given index.
**Note**: This method is only supported in Splunk 5.0 and later.
:param name: The name of the index to delete.
:type name: ``string`` | [
"Deletes",
"a",
"given",
"index",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L1913-L1925 | train | 216,992 |
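`Indexes.delete` gates the operation on `self.service.splunk_version >= (5,)`. Python compares version tuples element-wise, which is why the short tuple `(5,)` works against a full `(6, 2, 1)`. A sketch of the same guard (the exception type here is generic, standing in for `IllegalOperationException`):

```python
def check_min_version(current, minimum, operation):
    """Raise if `current` predates `minimum`; both are version tuples."""
    if current < minimum:
        raise RuntimeError(
            "%s is not supported before Splunk %s"
            % (operation, ".".join(map(str, minimum))))

check_min_version((6, 2, 1), (5,), "index deletion")  # fine: (6,2,1) >= (5,)
```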
splunk/splunk-sdk-python | splunklib/client.py | Index.attached_socket | def attached_socket(self, *args, **kwargs):
"""Opens a raw socket in a ``with`` block to write data to Splunk.
The arguments are identical to those for :meth:`attach`. The socket is
automatically closed at the end of the ``with`` block, even if an
exception is raised in the block.
:param host: The host value for events written to the stream.
:type host: ``string``
:param source: The source value for events written to the stream.
:type source: ``string``
:param sourcetype: The sourcetype value for events written to the
stream.
:type sourcetype: ``string``
:returns: Nothing.
**Example**::
import splunklib.client as client
s = client.connect(...)
index = s.indexes['some_index']
with index.attached_socket(sourcetype='test') as sock:
sock.send('Test event\\r\\n')
"""
try:
sock = self.attach(*args, **kwargs)
yield sock
finally:
sock.shutdown(socket.SHUT_RDWR)
sock.close() | python | def attached_socket(self, *args, **kwargs):
"""Opens a raw socket in a ``with`` block to write data to Splunk.
The arguments are identical to those for :meth:`attach`. The socket is
automatically closed at the end of the ``with`` block, even if an
exception is raised in the block.
:param host: The host value for events written to the stream.
:type host: ``string``
:param source: The source value for events written to the stream.
:type source: ``string``
:param sourcetype: The sourcetype value for events written to the
stream.
:type sourcetype: ``string``
:returns: Nothing.
**Example**::
import splunklib.client as client
s = client.connect(...)
index = s.indexes['some_index']
with index.attached_socket(sourcetype='test') as sock:
sock.send('Test event\\r\\n')
"""
try:
sock = self.attach(*args, **kwargs)
yield sock
finally:
sock.shutdown(socket.SHUT_RDWR)
sock.close() | [
"def",
"attached_socket",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"try",
":",
"sock",
"=",
"self",
".",
"attach",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"yield",
"sock",
"finally",
":",
"sock",
".",
"shutdown",
... | Opens a raw socket in a ``with`` block to write data to Splunk.
The arguments are identical to those for :meth:`attach`. The socket is
automatically closed at the end of the ``with`` block, even if an
exception is raised in the block.
:param host: The host value for events written to the stream.
:type host: ``string``
:param source: The source value for events written to the stream.
:type source: ``string``
:param sourcetype: The sourcetype value for events written to the
stream.
:type sourcetype: ``string``
:returns: Nothing.
**Example**::
import splunklib.client as client
s = client.connect(...)
index = s.indexes['some_index']
with index.attached_socket(sourcetype='test') as sock:
sock.send('Test event\\r\\n') | [
"Opens",
"a",
"raw",
"socket",
"in",
"a",
"with",
"block",
"to",
"write",
"data",
"to",
"Splunk",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L1977-L2008 | train | 216,993 |
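`attached_socket` is the standard resource-guard shape: acquire, `yield`, close in `finally`. One caveat in the version above: if `self.attach(...)` itself raises, the `finally` block touches a `sock` that was never bound. The sketch below acquires before entering the `try` to avoid that, and uses a fake socket so it runs without a server:

```python
from contextlib import contextmanager

class FakeSocket:
    """Minimal stand-in for a real socket; tracks close() only."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

@contextmanager
def attached(make_sock):
    sock = make_sock()   # acquire first: a failure here skips the finally
    try:
        yield sock
    finally:
        sock.close()     # runs on normal exit *and* on exceptions
```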
splunk/splunk-sdk-python | splunklib/client.py | Index.clean | def clean(self, timeout=60):
"""Deletes the contents of the index.
This method blocks until the index is empty, because it needs to restore
values at the end of the operation.
:param timeout: The time-out period for the operation, in seconds (the
default is 60).
:type timeout: ``integer``
:return: The :class:`Index`.
"""
self.refresh()
tds = self['maxTotalDataSizeMB']
ftp = self['frozenTimePeriodInSecs']
was_disabled_initially = self.disabled
try:
if (not was_disabled_initially and \
self.service.splunk_version < (5,)):
# Need to disable the index first on Splunk 4.x,
# but it doesn't work to disable it on 5.0.
self.disable()
self.update(maxTotalDataSizeMB=1, frozenTimePeriodInSecs=1)
self.roll_hot_buckets()
# Wait until event count goes to 0.
start = datetime.now()
diff = timedelta(seconds=timeout)
while self.content.totalEventCount != '0' and datetime.now() < start+diff:
sleep(1)
self.refresh()
if self.content.totalEventCount != '0':
raise OperationError("Cleaning index %s took longer than %s seconds; timing out." % (self.name, timeout))
finally:
# Restore original values
self.update(maxTotalDataSizeMB=tds, frozenTimePeriodInSecs=ftp)
if (not was_disabled_initially and \
self.service.splunk_version < (5,)):
# Re-enable the index if it was originally enabled and we messed with it.
self.enable()
return self | python | def clean(self, timeout=60):
"""Deletes the contents of the index.
This method blocks until the index is empty, because it needs to restore
values at the end of the operation.
:param timeout: The time-out period for the operation, in seconds (the
default is 60).
:type timeout: ``integer``
:return: The :class:`Index`.
"""
self.refresh()
tds = self['maxTotalDataSizeMB']
ftp = self['frozenTimePeriodInSecs']
was_disabled_initially = self.disabled
try:
if (not was_disabled_initially and \
self.service.splunk_version < (5,)):
# Need to disable the index first on Splunk 4.x,
# but it doesn't work to disable it on 5.0.
self.disable()
self.update(maxTotalDataSizeMB=1, frozenTimePeriodInSecs=1)
self.roll_hot_buckets()
# Wait until event count goes to 0.
start = datetime.now()
diff = timedelta(seconds=timeout)
while self.content.totalEventCount != '0' and datetime.now() < start+diff:
sleep(1)
self.refresh()
if self.content.totalEventCount != '0':
raise OperationError("Cleaning index %s took longer than %s seconds; timing out." % (self.name, timeout))
finally:
# Restore original values
self.update(maxTotalDataSizeMB=tds, frozenTimePeriodInSecs=ftp)
if (not was_disabled_initially and \
self.service.splunk_version < (5,)):
# Re-enable the index if it was originally enabled and we messed with it.
self.enable()
return self | [
"def",
"clean",
"(",
"self",
",",
"timeout",
"=",
"60",
")",
":",
"self",
".",
"refresh",
"(",
")",
"tds",
"=",
"self",
"[",
"'maxTotalDataSizeMB'",
"]",
"ftp",
"=",
"self",
"[",
"'frozenTimePeriodInSecs'",
"]",
"was_disabled_initially",
"=",
"self",
".",
... | Deletes the contents of the index.
This method blocks until the index is empty, because it needs to restore
values at the end of the operation.
:param timeout: The time-out period for the operation, in seconds (the
default is 60).
:type timeout: ``integer``
:return: The :class:`Index`. | [
"Deletes",
"the",
"contents",
"of",
"the",
"index",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L2010-L2053 | train | 216,994 |
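`clean` polls `totalEventCount` once a second until it reaches zero or the deadline passes, restoring the original settings in a `finally`. The poll-until-deadline core, extracted so it is reusable and testable on its own (the interval is shortened in the test so it runs quickly):

```python
from datetime import datetime, timedelta
from time import sleep

def wait_until(predicate, timeout=60, interval=1.0):
    """Poll predicate() until it returns true or `timeout` seconds pass.

    Returns the predicate's final truth value, so callers can raise
    their own timeout error (as clean() does) when it is still false.
    """
    deadline = datetime.now() + timedelta(seconds=timeout)
    while not predicate():
        if datetime.now() >= deadline:
            return False
        sleep(interval)
    return True
```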
splunk/splunk-sdk-python | splunklib/client.py | Index.submit | def submit(self, event, host=None, source=None, sourcetype=None):
"""Submits a single event to the index using ``HTTP POST``.
:param event: The event to submit.
:type event: ``string``
:param `host`: The host value of the event.
:type host: ``string``
:param `source`: The source value of the event.
:type source: ``string``
:param `sourcetype`: The sourcetype value of the event.
:type sourcetype: ``string``
:return: The :class:`Index`.
"""
args = { 'index': self.name }
if host is not None: args['host'] = host
if source is not None: args['source'] = source
if sourcetype is not None: args['sourcetype'] = sourcetype
# The reason we use service.request directly rather than POST
# is that we are not sending a POST request encoded using
# x-www-form-urlencoded (as we do not have a key=value body),
# because we aren't really sending a "form".
self.service.post(PATH_RECEIVERS_SIMPLE, body=event, **args)
return self | python | def submit(self, event, host=None, source=None, sourcetype=None):
"""Submits a single event to the index using ``HTTP POST``.
:param event: The event to submit.
:type event: ``string``
:param `host`: The host value of the event.
:type host: ``string``
:param `source`: The source value of the event.
:type source: ``string``
:param `sourcetype`: The sourcetype value of the event.
:type sourcetype: ``string``
:return: The :class:`Index`.
"""
args = { 'index': self.name }
if host is not None: args['host'] = host
if source is not None: args['source'] = source
if sourcetype is not None: args['sourcetype'] = sourcetype
# The reason we use service.request directly rather than POST
# is that we are not sending a POST request encoded using
# x-www-form-urlencoded (as we do not have a key=value body),
# because we aren't really sending a "form".
self.service.post(PATH_RECEIVERS_SIMPLE, body=event, **args)
return self | [
"def",
"submit",
"(",
"self",
",",
"event",
",",
"host",
"=",
"None",
",",
"source",
"=",
"None",
",",
"sourcetype",
"=",
"None",
")",
":",
"args",
"=",
"{",
"'index'",
":",
"self",
".",
"name",
"}",
"if",
"host",
"is",
"not",
"None",
":",
"args"... | Submits a single event to the index using ``HTTP POST``.
:param event: The event to submit.
:type event: ``string``
:param `host`: The host value of the event.
:type host: ``string``
:param `source`: The source value of the event.
:type source: ``string``
:param `sourcetype`: The sourcetype value of the event.
:type sourcetype: ``string``
:return: The :class:`Index`. | [
"Submits",
"a",
"single",
"event",
"to",
"the",
"index",
"using",
"HTTP",
"POST",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L2063-L2087 | train | 216,995 |
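`Index.submit` builds the query args from only the metadata the caller actually supplied, so omitted fields fall back to server-side defaults. The same None-filtering step on its own:

```python
def event_args(index, host=None, source=None, sourcetype=None):
    """Return Index.submit-style query args, skipping unset metadata."""
    args = {"index": index}
    optional = {"host": host, "source": source, "sourcetype": sourcetype}
    args.update({k: v for k, v in optional.items() if v is not None})
    return args
```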
splunk/splunk-sdk-python | splunklib/client.py | Index.upload | def upload(self, filename, **kwargs):
"""Uploads a file for immediate indexing.
**Note**: The file must be locally accessible from the server.
:param filename: The name of the file to upload. The file can be a
plain, compressed, or archived file.
:type filename: ``string``
:param kwargs: Additional arguments (optional). For more about the
available parameters, see `Index parameters <http://dev.splunk.com/view/SP-CAAAEE6#indexparams>`_ on Splunk Developer Portal.
:type kwargs: ``dict``
:return: The :class:`Index`.
"""
kwargs['index'] = self.name
path = 'data/inputs/oneshot'
self.service.post(path, name=filename, **kwargs)
return self | python | def upload(self, filename, **kwargs):
"""Uploads a file for immediate indexing.
**Note**: The file must be locally accessible from the server.
:param filename: The name of the file to upload. The file can be a
plain, compressed, or archived file.
:type filename: ``string``
:param kwargs: Additional arguments (optional). For more about the
available parameters, see `Index parameters <http://dev.splunk.com/view/SP-CAAAEE6#indexparams>`_ on Splunk Developer Portal.
:type kwargs: ``dict``
:return: The :class:`Index`.
"""
kwargs['index'] = self.name
path = 'data/inputs/oneshot'
self.service.post(path, name=filename, **kwargs)
return self | [
"def",
"upload",
"(",
"self",
",",
"filename",
",",
"*",
"*",
"kwargs",
")",
":",
"kwargs",
"[",
"'index'",
"]",
"=",
"self",
".",
"name",
"path",
"=",
"'data/inputs/oneshot'",
"self",
".",
"service",
".",
"post",
"(",
"path",
",",
"name",
"=",
"file... | Uploads a file for immediate indexing.
**Note**: The file must be locally accessible from the server.
:param filename: The name of the file to upload. The file can be a
plain, compressed, or archived file.
:type filename: ``string``
:param kwargs: Additional arguments (optional). For more about the
available parameters, see `Index parameters <http://dev.splunk.com/view/SP-CAAAEE6#indexparams>`_ on Splunk Developer Portal.
:type kwargs: ``dict``
:return: The :class:`Index`. | [
"Uploads",
"a",
"file",
"for",
"immediate",
"indexing",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L2090-L2107 | train | 216,996 |
splunk/splunk-sdk-python | splunklib/client.py | Input.update | def update(self, **kwargs):
"""Updates the server with any changes you've made to the current input
along with any additional arguments you specify.
:param kwargs: Additional arguments (optional). For more about the
available parameters, see `Input parameters <http://dev.splunk.com/view/SP-CAAAEE6#inputparams>`_ on Splunk Developer Portal.
:type kwargs: ``dict``
:return: The input this method was called on.
:rtype: class:`Input`
"""
# UDP and TCP inputs require special handling due to their restrictToHost
# field. For all other inputs kinds, we can dispatch to the superclass method.
if self.kind not in ['tcp', 'splunktcp', 'tcp/raw', 'tcp/cooked', 'udp']:
return super(Input, self).update(**kwargs)
else:
# The behavior of restrictToHost is inconsistent across input kinds and versions of Splunk.
# In Splunk 4.x, the name of the entity is only the port, independent of the value of
# restrictToHost. In Splunk 5.0 this changed so the name will be of the form <restrictToHost>:<port>.
# In 5.0 and 5.0.1, if you don't supply the restrictToHost value on every update, it will
# remove the host restriction from the input. As of 5.0.2 you simply can't change restrictToHost
# on an existing input.
# The logic to handle all these cases:
# - Throw an exception if the user tries to set restrictToHost on an existing input
# for *any* version of Splunk.
# - Set the existing restrictToHost value on the update args internally so we don't
# cause it to change in Splunk 5.0 and 5.0.1.
to_update = kwargs.copy()
if 'restrictToHost' in kwargs:
raise IllegalOperationException("Cannot set restrictToHost on an existing input with the SDK.")
elif 'restrictToHost' in self._state.content and self.kind != 'udp':
to_update['restrictToHost'] = self._state.content['restrictToHost']
# Do the actual update operation.
return super(Input, self).update(**to_update) | python | def update(self, **kwargs):
"""Updates the server with any changes you've made to the current input
along with any additional arguments you specify.
:param kwargs: Additional arguments (optional). For more about the
available parameters, see `Input parameters <http://dev.splunk.com/view/SP-CAAAEE6#inputparams>`_ on Splunk Developer Portal.
:type kwargs: ``dict``
:return: The input this method was called on.
:rtype: :class:`Input`
"""
# UDP and TCP inputs require special handling due to their restrictToHost
# field. For all other input kinds, we can dispatch to the superclass method.
if self.kind not in ['tcp', 'splunktcp', 'tcp/raw', 'tcp/cooked', 'udp']:
return super(Input, self).update(**kwargs)
else:
# The behavior of restrictToHost is inconsistent across input kinds and versions of Splunk.
# In Splunk 4.x, the name of the entity is only the port, independent of the value of
# restrictToHost. In Splunk 5.0 this changed so the name will be of the form <restrictToHost>:<port>.
# In 5.0 and 5.0.1, if you don't supply the restrictToHost value on every update, it will
# remove the host restriction from the input. As of 5.0.2 you simply can't change restrictToHost
# on an existing input.
# The logic to handle all these cases:
# - Throw an exception if the user tries to set restrictToHost on an existing input
# for *any* version of Splunk.
# - Set the existing restrictToHost value on the update args internally so we don't
# cause it to change in Splunk 5.0 and 5.0.1.
to_update = kwargs.copy()
if 'restrictToHost' in kwargs:
raise IllegalOperationException("Cannot set restrictToHost on an existing input with the SDK.")
elif 'restrictToHost' in self._state.content and self.kind != 'udp':
to_update['restrictToHost'] = self._state.content['restrictToHost']
# Do the actual update operation.
return super(Input, self).update(**to_update) | [
"def",
"update",
"(",
"self",
",",
"*",
"*",
"kwargs",
")",
":",
"# UDP and TCP inputs require special handling due to their restrictToHost",
"# field. For all other inputs kinds, we can dispatch to the superclass method.",
"if",
"self",
".",
"kind",
"not",
"in",
"[",
"'tcp'",
... | Updates the server with any changes you've made to the current input
along with any additional arguments you specify.
:param kwargs: Additional arguments (optional). For more about the
available parameters, see `Input parameters <http://dev.splunk.com/view/SP-CAAAEE6#inputparams>`_ on Splunk Developer Portal.
:type kwargs: ``dict``
:return: The input this method was called on.
:rtype: :class:`Input` | [
"Updates",
"the",
"server",
"with",
"any",
"changes",
"you",
"ve",
"made",
"to",
"the",
"current",
"input",
"along",
"with",
"any",
"additional",
"arguments",
"you",
"specify",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L2137-L2173 | train | 216,997 |
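The version-dependent `restrictToHost` handling in `Input.update` above can be sketched as a small standalone function. This is illustrative only: `IllegalOperationException` is a local stand-in for the SDK class, and `current_content` stands in for the entity's `_state.content` dict.

```python
# Kinds whose entity names can embed a restrictToHost prefix.
RESTRICTED_KINDS = {'tcp', 'splunktcp', 'tcp/raw', 'tcp/cooked', 'udp'}

class IllegalOperationException(Exception):
    """Local stand-in for splunklib's exception of the same name."""

def prepare_update_args(kind, current_content, **kwargs):
    """Return the argument dict that would actually be posted for an
    input update, mirroring the guard logic in Input.update."""
    if kind not in RESTRICTED_KINDS:
        # Non-TCP/UDP kinds need no special handling.
        return kwargs
    to_update = dict(kwargs)
    if 'restrictToHost' in kwargs:
        # Changing restrictToHost on an existing input is never allowed.
        raise IllegalOperationException(
            "Cannot set restrictToHost on an existing input with the SDK.")
    elif 'restrictToHost' in current_content and kind != 'udp':
        # Re-send the existing value so Splunk 5.0/5.0.1 doesn't drop it.
        to_update['restrictToHost'] = current_content['restrictToHost']
    return to_update
```

The UDP exclusion in the `elif` mirrors the original code: UDP inputs keep their name-only form, so the value is not re-sent for them.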
splunk/splunk-sdk-python | splunklib/client.py | Inputs.create | def create(self, name, kind, **kwargs):
"""Creates an input of a specific kind in this collection, with any
arguments you specify.
:param `name`: The input name.
:type name: ``string``
:param `kind`: The kind of input:
- "ad": Active Directory
- "monitor": Files and directories
- "registry": Windows Registry
- "script": Scripts
- "splunktcp": TCP, processed
- "tcp": TCP, unprocessed
- "udp": UDP
- "win-event-log-collections": Windows event log
- "win-perfmon": Performance monitoring
- "win-wmi-collections": WMI
:type kind: ``string``
:param `kwargs`: Additional arguments (optional). For more about the
available parameters, see `Input parameters <http://dev.splunk.com/view/SP-CAAAEE6#inputparams>`_ on Splunk Developer Portal.
:type kwargs: ``dict``
:return: The new :class:`Input`.
"""
kindpath = self.kindpath(kind)
self.post(kindpath, name=name, **kwargs)
# If we created an input with restrictToHost set, then
# its path will be <restrictToHost>:<name>, not just <name>,
# and we have to adjust accordingly.
# Url encodes the name of the entity.
name = UrlEncoded(name, encode_slash=True)
path = _path(
self.path + kindpath,
'%s:%s' % (kwargs['restrictToHost'], name) \
if 'restrictToHost' in kwargs else name
)
return Input(self.service, path, kind) | python | def create(self, name, kind, **kwargs):
"""Creates an input of a specific kind in this collection, with any
arguments you specify.
:param `name`: The input name.
:type name: ``string``
:param `kind`: The kind of input:
- "ad": Active Directory
- "monitor": Files and directories
- "registry": Windows Registry
- "script": Scripts
- "splunktcp": TCP, processed
- "tcp": TCP, unprocessed
- "udp": UDP
- "win-event-log-collections": Windows event log
- "win-perfmon": Performance monitoring
- "win-wmi-collections": WMI
:type kind: ``string``
:param `kwargs`: Additional arguments (optional). For more about the
available parameters, see `Input parameters <http://dev.splunk.com/view/SP-CAAAEE6#inputparams>`_ on Splunk Developer Portal.
:type kwargs: ``dict``
:return: The new :class:`Input`.
"""
kindpath = self.kindpath(kind)
self.post(kindpath, name=name, **kwargs)
# If we created an input with restrictToHost set, then
# its path will be <restrictToHost>:<name>, not just <name>,
# and we have to adjust accordingly.
# Url encodes the name of the entity.
name = UrlEncoded(name, encode_slash=True)
path = _path(
self.path + kindpath,
'%s:%s' % (kwargs['restrictToHost'], name) \
if 'restrictToHost' in kwargs else name
)
return Input(self.service, path, kind) | [
"def",
"create",
"(",
"self",
",",
"name",
",",
"kind",
",",
"*",
"*",
"kwargs",
")",
":",
"kindpath",
"=",
"self",
".",
"kindpath",
"(",
"kind",
")",
"self",
".",
"post",
"(",
"kindpath",
",",
"name",
"=",
"name",
",",
"*",
"*",
"kwargs",
")",
... | Creates an input of a specific kind in this collection, with any
arguments you specify.
:param `name`: The input name.
:type name: ``string``
:param `kind`: The kind of input:
- "ad": Active Directory
- "monitor": Files and directories
- "registry": Windows Registry
- "script": Scripts
- "splunktcp": TCP, processed
- "tcp": TCP, unprocessed
- "udp": UDP
- "win-event-log-collections": Windows event log
- "win-perfmon": Performance monitoring
- "win-wmi-collections": WMI
:type kind: ``string``
:param `kwargs`: Additional arguments (optional). For more about the
available parameters, see `Input parameters <http://dev.splunk.com/view/SP-CAAAEE6#inputparams>`_ on Splunk Developer Portal.
:type kwargs: ``dict``
:return: The new :class:`Input`. | [
"Creates",
"an",
"input",
"of",
"a",
"specific",
"kind",
"in",
"this",
"collection",
"with",
"any",
"arguments",
"you",
"specify",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L2264-L2314 | train | 216,998 |
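The path adjustment in `Inputs.create` above — prefixing the entity name with `<restrictToHost>:` when that argument is given — can be sketched as a plain function. `urllib.parse.quote` stands in for the SDK's `UrlEncoded(name, encode_slash=True)`, and the path strings are illustrative, not SDK objects.

```python
from urllib.parse import quote

def input_entity_path(base_path, kindpath, name, **kwargs):
    """Sketch of how Inputs.create derives the new entity's path."""
    # Slashes in names (e.g. monitor paths) must be percent-encoded.
    encoded = quote(name, safe='')
    if 'restrictToHost' in kwargs:
        # Host-restricted inputs are named <restrictToHost>:<name>.
        entity = '%s:%s' % (kwargs['restrictToHost'], encoded)
    else:
        entity = encoded
    return '%s%s/%s' % (base_path, kindpath, entity)
```

For example, creating a host-restricted raw TCP input on port 9999 would yield a path ending in `myhost:9999` rather than just `9999`.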
splunk/splunk-sdk-python | splunklib/client.py | Inputs.delete | def delete(self, name, kind=None):
"""Removes an input from the collection.
:param `kind`: The kind of input:
- "ad": Active Directory
- "monitor": Files and directories
- "registry": Windows Registry
- "script": Scripts
- "splunktcp": TCP, processed
- "tcp": TCP, unprocessed
- "udp": UDP
- "win-event-log-collections": Windows event log
- "win-perfmon": Performance monitoring
- "win-wmi-collections": WMI
:type kind: ``string``
:param name: The name of the input to remove.
:type name: ``string``
:return: The :class:`Inputs` collection.
"""
if kind is None:
self.service.delete(self[name].path)
else:
self.service.delete(self[name, kind].path)
return self | python | def delete(self, name, kind=None):
"""Removes an input from the collection.
:param `kind`: The kind of input:
- "ad": Active Directory
- "monitor": Files and directories
- "registry": Windows Registry
- "script": Scripts
- "splunktcp": TCP, processed
- "tcp": TCP, unprocessed
- "udp": UDP
- "win-event-log-collections": Windows event log
- "win-perfmon": Performance monitoring
- "win-wmi-collections": WMI
:type kind: ``string``
:param name: The name of the input to remove.
:type name: ``string``
:return: The :class:`Inputs` collection.
"""
if kind is None:
self.service.delete(self[name].path)
else:
self.service.delete(self[name, kind].path)
return self | [
"def",
"delete",
"(",
"self",
",",
"name",
",",
"kind",
"=",
"None",
")",
":",
"if",
"kind",
"is",
"None",
":",
"self",
".",
"service",
".",
"delete",
"(",
"self",
"[",
"name",
"]",
".",
"path",
")",
"else",
":",
"self",
".",
"service",
".",
"d... | Removes an input from the collection.
:param `kind`: The kind of input:
- "ad": Active Directory
- "monitor": Files and directories
- "registry": Windows Registry
- "script": Scripts
- "splunktcp": TCP, processed
- "tcp": TCP, unprocessed
- "udp": UDP
- "win-event-log-collections": Windows event log
- "win-perfmon": Performance monitoring
- "win-wmi-collections": WMI
:type kind: ``string``
:param name: The name of the input to remove.
:type name: ``string``
:return: The :class:`Inputs` collection. | [
"Removes",
"an",
"input",
"from",
"the",
"collection",
"."
] | a245a4eeb93b3621730418008e31715912bcdcd8 | https://github.com/splunk/splunk-sdk-python/blob/a245a4eeb93b3621730418008e31715912bcdcd8/splunklib/client.py#L2316-L2351 | train | 216,999 |
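`Inputs.delete` above accepts an optional `kind` because the same name (for instance, a port number) can exist under several input kinds at once, so a bare-name lookup can be ambiguous. The sketch below models that disambiguation with a plain dict keyed by `(name, kind)` standing in for the SDK collection; it is illustrative, not the SDK's actual lookup code.

```python
def resolve_input_path(inputs, name, kind=None):
    """Return the path for the input matching name (and kind, if given).

    `inputs` maps (name, kind) tuples to path strings. Raises KeyError
    when nothing matches and ValueError when the bare name is ambiguous.
    """
    matches = [(n, k) for (n, k) in inputs
               if n == name and (kind is None or k == kind)]
    if not matches:
        raise KeyError(name)
    if len(matches) > 1:
        # e.g. port 9999 configured as both a tcp and a udp input.
        raise ValueError("Ambiguous input name %r; pass kind" % name)
    return inputs[matches[0]]
```

With both a TCP and a UDP input on port 9999, deleting by name alone would fail, while passing `kind='udp'` selects exactly one.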