Dataset Viewer
| column | type | min | max |
|---|---|---|---|
| instance_id | string (length) | 14 | 45 |
| pull_number | int64 | 74 | 29.2k |
| repo | string (length) | 10 | 39 |
| version | string (37 classes) | | |
| base_commit | string (length) | 40 | 40 |
| created_at | string (date) | 2015-03-20 20:39:55 | 2025-01-02 13:53:18 |
| patch | string (length) | 525 | 15.9k |
| test_patch | string (length) | 606 | 17.8k |
| non_py_patch | string (length) | 0 | 7.21k |
| new_components | list (length) | 1 | 3 |
| FAIL_TO_PASS | list (length) | 1 | 202 |
| PASS_TO_PASS | list (length) | 0 | 910 |
| problem_statement | string (length) | 907 | 23.2k |
| hints_text | string (26 classes) | | |
| environment_setup_commit | string (length) | 40 | 40 |
instance_id: PyThaiNLP__pythainlp-1054
pull_number: 1054
repo: PyThaiNLP/pythainlp
version: null
base_commit: 2252dee57bd7be9503242fa734bf0abc48c5ddf1
created_at: 2025-01-02T13:53:18Z
patch:
diff --git a/docs/api/lm.rst b/docs/api/lm.rst
index 471282fd3..063aecb2d 100644
--- a/docs/api/lm.rst
+++ b/docs/api/lm.rst
@@ -6,4 +6,5 @@ pythainlp.lm
 Modules
 -------
+.. autofunction:: calculate_ngram_counts
 .. autofunction:: remove_repeated_ngrams
\ No newline at end of file
diff --git a/pythainlp/lm/__init__.py b/pythainlp/lm/__init__.py
index f3e43e801..9fe31c161 100644
--- a/pythainlp/lm/__init__.py
+++ b/pythainlp/lm/__init__.py
@@ -3,6 +3,9 @@
 # SPDX-FileType: SOURCE
 # SPDX-License-Identifier: Apache-2.0
 
-__all__ = ["remove_repeated_ngrams"]
+__all__ = [
+    "calculate_ngram_counts",
+    "remove_repeated_ngrams"
+]
 
-from pythainlp.lm.text_util import remove_repeated_ngrams
+from pythainlp.lm.text_util import calculate_ngram_counts, remove_repeated_ngrams
diff --git a/pythainlp/lm/text_util.py b/pythainlp/lm/text_util.py
index 668ded3c5..0d3181d2a 100644
--- a/pythainlp/lm/text_util.py
+++ b/pythainlp/lm/text_util.py
@@ -4,7 +4,32 @@
 # SPDX-License-Identifier: Apache-2.0
 # ruff: noqa: C901
 
-from typing import List
+from typing import List, Tuple, Dict
+
+
+def calculate_ngram_counts(
+        list_words: List[str],
+        n_min: int = 2,
+        n_max: int = 4) -> Dict[Tuple[str], int]:
+    """
+    Calculates the counts of n-grams in the list words for the specified range.
+
+    :param List[str] list_words: List of string
+    :param int n_min: The minimum n-gram size (default: 2).
+    :param int n_max: The maximum n-gram size (default: 4).
+
+    :return: A dictionary where keys are n-grams and values are their counts.
+    :rtype: Dict[Tuple[str], int]
+    """
+
+    ngram_counts = {}
+
+    for n in range(n_min, n_max + 1):
+        for i in range(len(list_words) - n + 1):
+            ngram = tuple(list_words[i:i + n])
+            ngram_counts[ngram] = ngram_counts.get(ngram, 0) + 1
+
+    return ngram_counts
 
 
 def remove_repeated_ngrams(string_list: List[str], n: int = 2) -> List[str]:
test_patch:
diff --git a/tests/core/test_lm.py b/tests/core/test_lm.py
index 5d25cc124..9da213d31 100644
--- a/tests/core/test_lm.py
+++ b/tests/core/test_lm.py
@@ -5,10 +5,23 @@
 
 import unittest
 
-from pythainlp.lm import remove_repeated_ngrams
+from pythainlp.lm import calculate_ngram_counts, remove_repeated_ngrams
 
 
 class LMTestCase(unittest.TestCase):
+    def test_calculate_ngram_counts(self):
+        self.assertEqual(
+            calculate_ngram_counts(['1', '2', '3', '4']),
+            {
+                ('1', '2'): 1,
+                ('2', '3'): 1,
+                ('3', '4'): 1,
+                ('1', '2', '3'): 1,
+                ('2', '3', '4'): 1,
+                ('1', '2', '3', '4'): 1
+            }
+        )
+
     def test_remove_repeated_ngrams(self):
         texts = ['ΰΉ€ΰΈ­ΰΈ²', 'ΰΉ€ΰΈ­ΰΈ²', 'แบบ', 'แบบ', 'แบบ', 'ΰΉ„ΰΈ«ΰΈ™']
         self.assertEqual(
non_py_patch:
diff --git a/docs/api/lm.rst b/docs/api/lm.rst
index 471282fd3..063aecb2d 100644
--- a/docs/api/lm.rst
+++ b/docs/api/lm.rst
@@ -6,4 +6,5 @@ pythainlp.lm
 Modules
 -------
+.. autofunction:: calculate_ngram_counts
 .. autofunction:: remove_repeated_ngrams
\ No newline at end of file
[ { "components": [ { "doc": "Calculates the counts of n-grams in the list words for the specified range.\n\n:param List[str] list_words: List of string\n:param int n_min: The minimum n-gram size (default: 2).\n:param int n_max: The maximum n-gram size (default: 4).\n\n:return: A dictionary where keys are n-grams and values are their counts.\n:rtype: Dict[Tuple[str], int]", "lines": [ 10, 32 ], "name": "calculate_ngram_counts", "signature": "def calculate_ngram_counts( list_words: List[str], n_min: int = 2, n_max: int = 4) -> Dict[Tuple[str], int]:", "type": "function" } ], "file": "pythainlp/lm/text_util.py" } ]
[ "tests/core/test_lm.py::LMTestCase::test_calculate_ngram_counts", "tests/core/test_lm.py::LMTestCase::test_remove_repeated_ngrams" ]
[]
problem_statement:
This is a feature request which requires a new feature to add in the code repository.

<<NEW FEATURE REQUEST>>
<request>
Add pythainlp.lm.calculate_ngram_counts

Calculates the counts of n-grams in the list words for the specified range.

```
>>> from pythainlp.lm import calculate_ngram_counts
>>> a=["1","2","3","4"]
>>> calculate_ngram_counts(a)
{('1', '2'): 1, ('2', '3'): 1, ('3', '4'): 1, ('1', '2', '3'): 1, ('2', '3', '4'): 1, ('1', '2', '3', '4'): 1}
>>>
```

----------
</request>

There are several new functions or classes that need to be implemented, using the definitions below:

<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in pythainlp/lm/text_util.py]
(definition of calculate_ngram_counts:)
def calculate_ngram_counts(
        list_words: List[str],
        n_min: int = 2,
        n_max: int = 4) -> Dict[Tuple[str], int]:
    """Calculates the counts of n-grams in the list words for the specified range.

    :param List[str] list_words: List of string
    :param int n_min: The minimum n-gram size (default: 2).
    :param int n_max: The maximum n-gram size (default: 4).

    :return: A dictionary where keys are n-grams and values are their counts.
    :rtype: Dict[Tuple[str], int]"""
[end of new definitions in pythainlp/lm/text_util.py]
</definitions>

Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>>
environment_setup_commit: 2252dee57bd7be9503242fa734bf0abc48c5ddf1
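The function added by this row's patch is self-contained, so its behavior can be checked outside the repository. The sketch below restates the sliding-window count from the patch as standalone code (no pythainlp import needed) and reproduces the dict the row's test expects:

```python
from typing import Dict, List, Tuple


def calculate_ngram_counts(
    list_words: List[str], n_min: int = 2, n_max: int = 4
) -> Dict[Tuple[str, ...], int]:
    """Count n-grams of every size from n_min to n_max with a sliding window."""
    ngram_counts: Dict[Tuple[str, ...], int] = {}
    for n in range(n_min, n_max + 1):
        # Slide a window of width n across the word list.
        for i in range(len(list_words) - n + 1):
            ngram = tuple(list_words[i:i + n])
            ngram_counts[ngram] = ngram_counts.get(ngram, 0) + 1
    return ngram_counts


print(calculate_ngram_counts(["1", "2", "3", "4"]))
# → {('1', '2'): 1, ('2', '3'): 1, ('3', '4'): 1, ('1', '2', '3'): 1, ('2', '3', '4'): 1, ('1', '2', '3', '4'): 1}
```

For four words this yields three bigrams, two trigrams, and one 4-gram, each seen once, matching the doctest in the feature request.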
instance_id: tobymao__sqlglot-4537
pull_number: 4537
repo: tobymao/sqlglot
version: null
base_commit: 6992c1855f343a5d0120a3b4c993d8c406dd29ba
created_at: 2024-12-19T12:43:38Z
patch:
diff --git a/sqlglot/dialects/tsql.py b/sqlglot/dialects/tsql.py
index 1acc5ca8a7..7aa7a0e22b 100644
--- a/sqlglot/dialects/tsql.py
+++ b/sqlglot/dialects/tsql.py
@@ -370,6 +370,16 @@ def _timestrtotime_sql(self: TSQL.Generator, expression: exp.TimeStrToTime):
     return sql
 
 
+def _build_datetrunc(args: t.List) -> exp.TimestampTrunc:
+    unit = seq_get(args, 0)
+    this = seq_get(args, 1)
+
+    if this and this.is_string:
+        this = exp.cast(this, exp.DataType.Type.DATETIME2)
+
+    return exp.TimestampTrunc(this=this, unit=unit)
+
+
 class TSQL(Dialect):
     SUPPORTS_SEMI_ANTI_JOIN = False
     LOG_BASE_FIRST = False
@@ -570,6 +580,7 @@ class Parser(parser.Parser):
             "SUSER_SNAME": exp.CurrentUser.from_arg_list,
             "SYSTEM_USER": exp.CurrentUser.from_arg_list,
             "TIMEFROMPARTS": _build_timefromparts,
+            "DATETRUNC": _build_datetrunc,
         }
 
         JOIN_HINTS = {"LOOP", "HASH", "MERGE", "REMOTE"}
@@ -936,6 +947,7 @@ class Generator(generator.Generator):
             exp.Trim: trim_sql,
             exp.TsOrDsAdd: date_delta_sql("DATEADD", cast=True),
             exp.TsOrDsDiff: date_delta_sql("DATEDIFF"),
+            exp.TimestampTrunc: lambda self, e: self.func("DATETRUNC", e.unit, e.this),
         }
 
         TRANSFORMS.pop(exp.ReturnsProperty)
test_patch:
diff --git a/tests/dialects/test_tsql.py b/tests/dialects/test_tsql.py
index e8cd69648b..4c61780d76 100644
--- a/tests/dialects/test_tsql.py
+++ b/tests/dialects/test_tsql.py
@@ -2090,3 +2090,27 @@ def test_next_value_for(self):
                 "oracle": "SELECT NEXT VALUE FOR db.schema.sequence_name",
             },
         )
+
+    # string literals in the DATETRUNC are casted as DATETIME2
+    def test_datetrunc(self):
+        self.validate_all(
+            "SELECT DATETRUNC(month, 'foo')",
+            write={
+                "duckdb": "SELECT DATE_TRUNC('MONTH', CAST('foo' AS TIMESTAMP))",
+                "tsql": "SELECT DATETRUNC(MONTH, CAST('foo' AS DATETIME2))",
+            },
+        )
+        self.validate_all(
+            "SELECT DATETRUNC(month, foo)",
+            write={
+                "duckdb": "SELECT DATE_TRUNC('MONTH', foo)",
+                "tsql": "SELECT DATETRUNC(MONTH, foo)",
+            },
+        )
+        self.validate_all(
+            "SELECT DATETRUNC(year, CAST('foo1' AS date))",
+            write={
+                "duckdb": "SELECT DATE_TRUNC('YEAR', CAST('foo1' AS DATE))",
+                "tsql": "SELECT DATETRUNC(YEAR, CAST('foo1' AS DATE))",
+            },
+        )
[ { "components": [ { "doc": "", "lines": [ 373, 380 ], "name": "_build_datetrunc", "signature": "def _build_datetrunc(args: t.List) -> exp.TimestampTrunc:", "type": "function" } ], "file": "sqlglot/dialects/tsql.py" } ]
[ "tests/dialects/test_tsql.py::TestTSQL::test_datetrunc" ]
[ "tests/dialects/test_tsql.py::TestTSQL::test_add_date", "tests/dialects/test_tsql.py::TestTSQL::test_charindex", "tests/dialects/test_tsql.py::TestTSQL::test_commit", "tests/dialects/test_tsql.py::TestTSQL::test_convert", "tests/dialects/test_tsql.py::TestTSQL::test_count", "tests/dialects/test_tsql.py::TestTSQL::test_current_user", "tests/dialects/test_tsql.py::TestTSQL::test_date_diff", "tests/dialects/test_tsql.py::TestTSQL::test_datefromparts", "tests/dialects/test_tsql.py::TestTSQL::test_datename", "tests/dialects/test_tsql.py::TestTSQL::test_datepart", "tests/dialects/test_tsql.py::TestTSQL::test_ddl", "tests/dialects/test_tsql.py::TestTSQL::test_declare", "tests/dialects/test_tsql.py::TestTSQL::test_eomonth", "tests/dialects/test_tsql.py::TestTSQL::test_format", "tests/dialects/test_tsql.py::TestTSQL::test_fullproc", "tests/dialects/test_tsql.py::TestTSQL::test_grant", "tests/dialects/test_tsql.py::TestTSQL::test_hints", "tests/dialects/test_tsql.py::TestTSQL::test_identifier_prefixes", "tests/dialects/test_tsql.py::TestTSQL::test_insert_cte", "tests/dialects/test_tsql.py::TestTSQL::test_isnull", "tests/dialects/test_tsql.py::TestTSQL::test_json", "tests/dialects/test_tsql.py::TestTSQL::test_lateral_subquery", "tests/dialects/test_tsql.py::TestTSQL::test_lateral_table_valued_function", "tests/dialects/test_tsql.py::TestTSQL::test_len", "tests/dialects/test_tsql.py::TestTSQL::test_next_value_for", "tests/dialects/test_tsql.py::TestTSQL::test_openjson", "tests/dialects/test_tsql.py::TestTSQL::test_option", "tests/dialects/test_tsql.py::TestTSQL::test_parsename", "tests/dialects/test_tsql.py::TestTSQL::test_procedure_keywords", "tests/dialects/test_tsql.py::TestTSQL::test_qualify_derived_table_outputs", "tests/dialects/test_tsql.py::TestTSQL::test_replicate", "tests/dialects/test_tsql.py::TestTSQL::test_rollback", "tests/dialects/test_tsql.py::TestTSQL::test_scope_resolution_op", "tests/dialects/test_tsql.py::TestTSQL::test_set", 
"tests/dialects/test_tsql.py::TestTSQL::test_string", "tests/dialects/test_tsql.py::TestTSQL::test_system_time", "tests/dialects/test_tsql.py::TestTSQL::test_temporal_table", "tests/dialects/test_tsql.py::TestTSQL::test_top", "tests/dialects/test_tsql.py::TestTSQL::test_transaction", "tests/dialects/test_tsql.py::TestTSQL::test_tsql", "tests/dialects/test_tsql.py::TestTSQL::test_types", "tests/dialects/test_tsql.py::TestTSQL::test_types_bin", "tests/dialects/test_tsql.py::TestTSQL::test_types_date", "tests/dialects/test_tsql.py::TestTSQL::test_types_decimals", "tests/dialects/test_tsql.py::TestTSQL::test_types_ints", "tests/dialects/test_tsql.py::TestTSQL::test_types_string", "tests/dialects/test_tsql.py::TestTSQL::test_udf" ]
problem_statement:
This is a feature request which requires a new feature to add in the code repository.

<<NEW FEATURE REQUEST>>
<request>
feat(tsql): add support for DATETRUNC #4531

Fixes https://github.com/tobymao/sqlglot/issues/4531

This PR adds support for the `DATETRUNC` function of `T-SQL`. In cases where the `date` parameter is a `string literal`, a cast is applied to produce a `DATETIME2` expression. Per `T-SQL` semantics, all `string literals` should resolve into `DATETIME2`. Dialects like `duckdb` do not support `string literals` directly as dates, so the cast is mandatory.

**Docs**
[T-SQL DATETRUNC](https://learn.microsoft.com/en-us/sql/t-sql/functions/datetrunc-transact-sql?view=sql-server-ver16)

----------
</request>

There are several new functions or classes that need to be implemented, using the definitions below:

<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in sqlglot/dialects/tsql.py]
(definition of _build_datetrunc:)
def _build_datetrunc(args: t.List) -> exp.TimestampTrunc:
[end of new definitions in sqlglot/dialects/tsql.py]
</definitions>

Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>>
environment_setup_commit: ceb42fabad60312699e4b15936aeebac00e22e4d
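The heart of this row's patch is a single parse-time rule: if the second argument of `DATETRUNC` is a string literal, wrap it in a cast to `DATETIME2` before generating SQL; otherwise pass it through. As a rough illustration of that rule only, here is a toy string-building model (plain Python, not the real sqlglot expression API; the function name and the `is_string_literal` flag are invented for this sketch):

```python
def datetrunc_to_tsql(unit: str, value: str, is_string_literal: bool) -> str:
    """Toy model of the builder's rule: string-literal dates get a CAST to
    DATETIME2; identifiers and already-typed expressions pass through."""
    if is_string_literal:
        value = f"CAST('{value}' AS DATETIME2)"
    return f"DATETRUNC({unit.upper()}, {value})"


print(datetrunc_to_tsql("month", "foo", True))   # DATETRUNC(MONTH, CAST('foo' AS DATETIME2))
print(datetrunc_to_tsql("month", "foo", False))  # DATETRUNC(MONTH, foo)
```

The two printed lines mirror the first two cases in the row's `test_datetrunc`: only the literal variant gains the cast.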
instance_id: tobymao__sqlglot-4486
pull_number: 4486
repo: tobymao/sqlglot
version: null
base_commit: 2655d7c11d677cf47f33ac62fbfb86f4117ffd75
created_at: 2024-12-09T09:14:24Z
patch:
diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py
index e0d392b0cd..93731f3049 100644
--- a/sqlglot/dialects/snowflake.py
+++ b/sqlglot/dialects/snowflake.py
@@ -107,6 +107,13 @@ def _builder(args: t.List) -> E:
     return _builder
 
 
+def _build_bitor(args: t.List) -> exp.BitwiseOr | exp.Anonymous:
+    if len(args) == 3:
+        return exp.Anonymous(this="BITOR", expressions=args)
+
+    return binary_from_function(exp.BitwiseOr)(args)
+
+
 # https://docs.snowflake.com/en/sql-reference/functions/div0
 def _build_if_from_div0(args: t.List) -> exp.If:
     lhs = exp._wrap(seq_get(args, 0), exp.Binary)
@@ -393,6 +400,8 @@ class Parser(parser.Parser):
         ),
         "BITXOR": binary_from_function(exp.BitwiseXor),
         "BIT_XOR": binary_from_function(exp.BitwiseXor),
+        "BITOR": _build_bitor,
+        "BIT_OR": _build_bitor,
         "BOOLXOR": binary_from_function(exp.Xor),
         "DATE": _build_datetime("DATE", exp.DataType.Type.DATE),
         "DATE_TRUNC": _date_trunc_to_time,
@@ -869,6 +878,7 @@ class Generator(generator.Generator):
             "CONVERT_TIMEZONE", e.args.get("zone"), e.this
         ),
         exp.BitwiseXor: rename_func("BITXOR"),
+        exp.BitwiseOr: rename_func("BITOR"),
         exp.Create: transforms.preprocess([_flatten_structured_types_unless_iceberg]),
         exp.DateAdd: date_delta_sql("DATEADD"),
         exp.DateDiff: date_delta_sql("DATEDIFF"),
test_patch:
diff --git a/tests/dialects/test_snowflake.py b/tests/dialects/test_snowflake.py
index 4eb97235da..f70023b9ea 100644
--- a/tests/dialects/test_snowflake.py
+++ b/tests/dialects/test_snowflake.py
@@ -976,6 +976,12 @@ def test_snowflake(self):
                 "snowflake": "EDITDISTANCE(col1, col2, 3)",
             },
         )
+        self.validate_identity("SELECT BITOR(a, b) FROM table")
+
+        self.validate_identity("SELECT BIT_OR(a, b) FROM table", "SELECT BITOR(a, b) FROM table")
+
+        # Test BITOR with three arguments, padding on the left
+        self.validate_identity("SELECT BITOR(a, b, 'LEFT') FROM table_name")
 
     def test_null_treatment(self):
         self.validate_all(
[ { "components": [ { "doc": "", "lines": [ 110, 114 ], "name": "_build_bitor", "signature": "def _build_bitor(args: t.List) -> exp.BitwiseOr | exp.Anonymous:", "type": "function" } ], "file": "sqlglot/dialects/snowflake.py" } ]
[ "tests/dialects/test_snowflake.py::TestSnowflake::test_snowflake" ]
[ "tests/dialects/test_snowflake.py::TestSnowflake::test_alter_set_unset", "tests/dialects/test_snowflake.py::TestSnowflake::test_copy", "tests/dialects/test_snowflake.py::TestSnowflake::test_ddl", "tests/dialects/test_snowflake.py::TestSnowflake::test_describe_table", "tests/dialects/test_snowflake.py::TestSnowflake::test_flatten", "tests/dialects/test_snowflake.py::TestSnowflake::test_from_changes", "tests/dialects/test_snowflake.py::TestSnowflake::test_grant", "tests/dialects/test_snowflake.py::TestSnowflake::test_historical_data", "tests/dialects/test_snowflake.py::TestSnowflake::test_match_recognize", "tests/dialects/test_snowflake.py::TestSnowflake::test_minus", "tests/dialects/test_snowflake.py::TestSnowflake::test_null_treatment", "tests/dialects/test_snowflake.py::TestSnowflake::test_parse_like_any", "tests/dialects/test_snowflake.py::TestSnowflake::test_querying_semi_structured_data", "tests/dialects/test_snowflake.py::TestSnowflake::test_regexp_replace", "tests/dialects/test_snowflake.py::TestSnowflake::test_regexp_substr", "tests/dialects/test_snowflake.py::TestSnowflake::test_sample", "tests/dialects/test_snowflake.py::TestSnowflake::test_semi_structured_types", "tests/dialects/test_snowflake.py::TestSnowflake::test_show_columns", "tests/dialects/test_snowflake.py::TestSnowflake::test_show_imported_keys", "tests/dialects/test_snowflake.py::TestSnowflake::test_show_objects", "tests/dialects/test_snowflake.py::TestSnowflake::test_show_primary_keys", "tests/dialects/test_snowflake.py::TestSnowflake::test_show_schemas", "tests/dialects/test_snowflake.py::TestSnowflake::test_show_sequences", "tests/dialects/test_snowflake.py::TestSnowflake::test_show_tables", "tests/dialects/test_snowflake.py::TestSnowflake::test_show_unique_keys", "tests/dialects/test_snowflake.py::TestSnowflake::test_show_users", "tests/dialects/test_snowflake.py::TestSnowflake::test_show_views", "tests/dialects/test_snowflake.py::TestSnowflake::test_staged_files", 
"tests/dialects/test_snowflake.py::TestSnowflake::test_storage_integration", "tests/dialects/test_snowflake.py::TestSnowflake::test_stored_procedures", "tests/dialects/test_snowflake.py::TestSnowflake::test_swap", "tests/dialects/test_snowflake.py::TestSnowflake::test_table_literal", "tests/dialects/test_snowflake.py::TestSnowflake::test_timestamps", "tests/dialects/test_snowflake.py::TestSnowflake::test_try_cast", "tests/dialects/test_snowflake.py::TestSnowflake::test_user_defined_functions", "tests/dialects/test_snowflake.py::TestSnowflake::test_values", "tests/dialects/test_snowflake.py::TestSnowflake::test_window_function_arg" ]
problem_statement:
This is a feature request which requires a new feature to add in the code repository.

<<NEW FEATURE REQUEST>>
<request>
feat(snowflake): Transpile support for bitor/bit_or snowflake function

Useful in scenarios involving binary data manipulation, such as managing permission sets or feature flags.

BITOR Function: The BITOR function computes the bitwise OR between two numeric or binary expressions. This operation compares each bit of the inputs and returns a result where each bit is set to 1 if at least one of the corresponding bits of the operands is 1; otherwise, it is set to 0.

Example: Suppose you have a table user_permissions that stores user IDs along with two permission sets, perm_set1 and perm_set2, represented as integers:

```
CREATE OR REPLACE TABLE user_permissions (
    user_id INTEGER,
    perm_set1 INTEGER,
    perm_set2 INTEGER
);

INSERT INTO user_permissions (user_id, perm_set1, perm_set2)
VALUES
    (1, 2, 4),  -- Binary: perm_set1 = 010, perm_set2 = 100
    (2, 1, 2);  -- Binary: perm_set1 = 001, perm_set2 = 010
```

To determine the combined permissions for each user, you can use the BITOR function:

```
SELECT user_id, BITOR(perm_set1, perm_set2) AS combined_permissions
FROM user_permissions;
```

This query produces:

```
+---------+----------------------+
| user_id | combined_permissions |
+---------+----------------------+
| 1       | 6                    |  -- Binary: 110
| 2       | 3                    |  -- Binary: 011
+---------+----------------------+
```

At **Walmart**, we use `bitor` and `bit_or` in a lot of places. I have observed that this support is not there in `sqlglot`, and hence I have added the code and test case for this. Kindly check it once.

----------
</request>

There are several new functions or classes that need to be implemented, using the definitions below:

<<NEW DEFINITIONS>>
There are several new functions or classes that need to be implemented, using the definitions below:
<definitions>
[start of new definitions in sqlglot/dialects/snowflake.py]
(definition of _build_bitor:)
def _build_bitor(args: t.List) -> exp.BitwiseOr | exp.Anonymous:
[end of new definitions in sqlglot/dialects/snowflake.py]
</definitions>

Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>>
environment_setup_commit: ceb42fabad60312699e4b15936aeebac00e22e4d
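The SQL semantics this request describes are plain integer bitwise OR, so the worked permissions example from the problem statement can be checked directly with Python's `|` operator:

```python
# Each row: (user_id, perm_set1, perm_set2), mirroring the example table.
rows = [
    (1, 2, 4),  # 010 | 100
    (2, 1, 2),  # 001 | 010
]

# BITOR(perm_set1, perm_set2) per user, done with Python's bitwise OR.
combined = [(user_id, p1 | p2) for user_id, p1, p2 in rows]
print(combined)  # → [(1, 6), (2, 3)]
```

The results match the expected query output in the request: 010 | 100 = 110 (6) and 001 | 010 = 011 (3). The three-argument `BITOR(a, b, 'LEFT')` form adds a padding-side argument for binary operands and is kept as an opaque function call by the patch rather than mapped to this operator.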
instance_id: googleapis__python-aiplatform-4748
pull_number: 4748
repo: googleapis/python-aiplatform
version: null
base_commit: 91d837ece0c02c381cda9fbe9c0c839f9986f182
created_at: 2024-12-05T14:26:24Z
diff --git a/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py b/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py index f2bcb4ad3a..31417cbe58 100644 --- a/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py +++ b/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py @@ -25,7 +25,9 @@ def vector_search_find_neighbors( deployed_index_id: str, queries: List[List[float]], num_neighbors: int, -) -> None: +) -> List[ + List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor] +]: """Query the vector search index. Args: @@ -38,6 +40,9 @@ def vector_search_find_neighbors( queries (List[List[float]]): Required. A list of queries. Each query is a list of floats, representing a single embedding. num_neighbors (int): Required. The number of neighbors to return. + + Returns: + List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query. """ # Initialize the Vertex AI client aiplatform.init(project=project, location=location) @@ -48,12 +53,47 @@ def vector_search_find_neighbors( ) # Query the index endpoint for the nearest neighbors. - resp = my_index_endpoint.find_neighbors( + return my_index_endpoint.find_neighbors( deployed_index_id=deployed_index_id, queries=queries, num_neighbors=num_neighbors, ) - print(resp) + + +# [END aiplatform_sdk_vector_search_find_neighbors_sample] + + +# [START aiplatform_sdk_vector_search_find_neighbors_hybrid_sample] +def vector_search_find_neighbors_hybrid_queries( + project: str, + location: str, + index_endpoint_name: str, + deployed_index_id: str, + num_neighbors: int, +) -> List[ + List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor] +]: + """Query the vector search index using example hybrid queries. + + Args: + project (str): Required. Project ID + location (str): Required. The region name + index_endpoint_name (str): Required. 
Index endpoint to run the query + against. + deployed_index_id (str): Required. The ID of the DeployedIndex to run + the queries against. + num_neighbors (int): Required. The number of neighbors to return. + + Returns: + List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query. + """ + # Initialize the Vertex AI client + aiplatform.init(project=project, location=location) + + # Create the index endpoint instance from an existing endpoint. + my_index_endpoint = aiplatform.MatchingEngineIndexEndpoint( + index_endpoint_name=index_endpoint_name + ) # Query hybrid datapoints, sparse-only datapoints, and dense-only datapoints. hybrid_queries = [ @@ -77,13 +117,79 @@ def vector_search_find_neighbors( ), ] - hybrid_resp = my_index_endpoint.find_neighbors( - deployed_index_id=deployed_index_id, - queries=hybrid_queries, - num_neighbors=num_neighbors,) - print(hybrid_resp) + return my_index_endpoint.find_neighbors( + deployed_index_id=deployed_index_id, + queries=hybrid_queries, + num_neighbors=num_neighbors, + ) -# [END aiplatform_sdk_vector_search_find_neighbors_sample] + +# [END aiplatform_sdk_vector_search_find_neighbors_hybrid_sample] + + +# [START aiplatform_sdk_vector_search_find_neighbors_filtering_crowding_sample] +def vector_search_find_neighbors_filtering_crowding( + project: str, + location: str, + index_endpoint_name: str, + deployed_index_id: str, + queries: List[List[float]], + num_neighbors: int, + filter: List[aiplatform.matching_engine.matching_engine_index_endpoint.Namespace], + numeric_filter: List[ + aiplatform.matching_engine.matching_engine_index_endpoint.NumericNamespace + ], + per_crowding_attribute_neighbor_count: int, +) -> List[ + List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor] +]: + """Query the vector search index with filtering and crowding. + + Args: + project (str): Required. Project ID + location (str): Required. 
The region name + index_endpoint_name (str): Required. Index endpoint to run the query + against. + deployed_index_id (str): Required. The ID of the DeployedIndex to run + the queries against. + queries (List[List[float]]): Required. A list of queries. Each query is + a list of floats, representing a single embedding. + num_neighbors (int): Required. The number of neighbors to return. + filter (List[Namespace]): Required. A list of Namespaces for filtering + the matching results. For example, + [Namespace("color", ["red"], []), Namespace("shape", [], ["square"])] + will match datapoints that satisfy "red color" but not include + datapoints with "square shape". + numeric_filter (List[NumericNamespace]): Required. A list of + NumericNamespaces for filtering the matching results. For example, + [NumericNamespace(name="cost", value_int=5, op="GREATER")] will limit + the matching results to datapoints with cost greater than 5. + per_crowding_attribute_neighbor_count (int): Required. The maximum + number of returned matches with the same crowding tag. + + Returns: + List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query. + """ + # Initialize the Vertex AI client + aiplatform.init(project=project, location=location) + + # Create the index endpoint instance from an existing endpoint. + my_index_endpoint = aiplatform.MatchingEngineIndexEndpoint( + index_endpoint_name=index_endpoint_name + ) + + # Query the index endpoint for the nearest neighbors. 
+ return my_index_endpoint.find_neighbors( + deployed_index_id=deployed_index_id, + queries=queries, + num_neighbors=num_neighbors, + filter=filter, + numeric_filter=numeric_filter, + per_crowding_attribute_neighbor_count=per_crowding_attribute_neighbor_count, + ) + + +# [END aiplatform_sdk_vector_search_find_neighbors_filtering_crowding_sample] # [START aiplatform_sdk_vector_search_find_neighbors_jwt_sample] @@ -95,7 +201,9 @@ def vector_search_find_neighbors_jwt( queries: List[List[float]], num_neighbors: int, signed_jwt: str, -) -> List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]]: +) -> List[ + List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor] +]: """Query the vector search index. Args: @@ -132,4 +240,5 @@ def vector_search_find_neighbors_jwt( ) return resp + # [END aiplatform_sdk_vector_search_find_neighbors_jwt_sample]
diff --git a/samples/model-builder/test_constants.py b/samples/model-builder/test_constants.py index 3a7d3ed3c3..5930d8394d 100644 --- a/samples/model-builder/test_constants.py +++ b/samples/model-builder/test_constants.py @@ -382,14 +382,18 @@ # Vector Search VECTOR_SEARCH_INDEX = "123" VECTOR_SEARCH_INDEX_DATAPOINTS = [ - aiplatform.compat.types.index_v1beta1.IndexDatapoint(datapoint_id="datapoint_id_1", feature_vector=[0.1, 0.2]), - aiplatform.compat.types.index_v1beta1.IndexDatapoint(datapoint_id="datapoint_id_2", feature_vector=[0.3, 0.4]), + aiplatform.compat.types.index_v1beta1.IndexDatapoint( + datapoint_id="datapoint_id_1", feature_vector=[0.1, 0.2] + ), + aiplatform.compat.types.index_v1beta1.IndexDatapoint( + datapoint_id="datapoint_id_2", feature_vector=[0.3, 0.4] + ), ] VECTOR_SEARCH_INDEX_DATAPOINT_IDS = ["datapoint_id_1", "datapoint_id_2"] VECTOR_SEARCH_INDEX_ENDPOINT = "456" VECTOR_SEARCH_DEPLOYED_INDEX_ID = "789" -VECTOR_SERACH_INDEX_QUERIES = [[0.1]] -VECTOR_SERACH_INDEX_HYBRID_QUERIES = [ +VECTOR_SEARCH_INDEX_QUERIES = [[0.1]] +VECTOR_SEARCH_INDEX_HYBRID_QUERIES = [ aiplatform.matching_engine.matching_engine_index_endpoint.HybridQuery( dense_embedding=[1, 2, 3], sparse_embedding_dimensions=[10, 20, 30], @@ -409,6 +413,20 @@ dense_embedding=[1, 2, 3] ), ] +VECTOR_SEARCH_FILTER = [ + aiplatform.matching_engine.matching_engine_index_endpoint.Namespace( + "color", ["red"], [] + ), + aiplatform.matching_engine.matching_engine_index_endpoint.Namespace( + "shape", [], ["squared"] + ), +] +VECTOR_SEARCH_NUMERIC_FILTER = [ + aiplatform.matching_engine.matching_engine_index_endpoint.NumericNamespace( + name="cost", value_int=5, op="GREATER" + ) +] +VECTOR_SEARCH_PER_CROWDING_ATTRIBUTE_NEIGHBOR_COUNT = 5 VECTOR_SEARCH_INDEX_DISPLAY_NAME = "my-vector-search-index" VECTOR_SEARCH_INDEX_DESCRIPTION = "test description" VECTOR_SEARCH_INDEX_LABELS = {"my_key": "my_value"} diff --git a/samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py 
b/samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py index 30f1f5711d..35c6dfe217 100644 --- a/samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py +++ b/samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py @@ -12,8 +12,6 @@ # See the License for the specific language governing permissions and # limitations under the License. -from unittest.mock import call - import test_constants as constants from vector_search import vector_search_find_neighbors_sample @@ -26,8 +24,8 @@ def test_vector_search_find_neighbors_sample( location=constants.LOCATION, index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT, deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, - queries=constants.VECTOR_SERACH_INDEX_QUERIES, - num_neighbors=10 + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, + num_neighbors=10, ) # Check client initialization @@ -37,23 +35,79 @@ def test_vector_search_find_neighbors_sample( # Check index endpoint initialization with right index endpoint name mock_index_endpoint_init.assert_called_with( - index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT) + index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT + ) # Check index_endpoint.find_neighbors is called with right params. 
- mock_index_endpoint_find_neighbors.assert_has_calls( - [ - call( - deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, - queries=constants.VECTOR_SERACH_INDEX_QUERIES, - num_neighbors=10, - ), - call( - deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, - queries=constants.VECTOR_SERACH_INDEX_HYBRID_QUERIES, - num_neighbors=10, - ), - ], - any_order=False, + mock_index_endpoint_find_neighbors.assert_called_with( + deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, + num_neighbors=10, + ) + + +def test_vector_search_find_neighbors_hybrid_sample( + mock_sdk_init, mock_index_endpoint_init, mock_index_endpoint_find_neighbors +): + vector_search_find_neighbors_sample.vector_search_find_neighbors_hybrid_queries( + project=constants.PROJECT, + location=constants.LOCATION, + index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT, + deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, + num_neighbors=10, + ) + + # Check client initialization + mock_sdk_init.assert_called_with( + project=constants.PROJECT, location=constants.LOCATION + ) + + # Check index endpoint initialization with right index endpoint name + mock_index_endpoint_init.assert_called_with( + index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT + ) + + # Check index_endpoint.find_neighbors is called with right params. 
+ mock_index_endpoint_find_neighbors.assert_called_with( + deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, + queries=constants.VECTOR_SEARCH_INDEX_HYBRID_QUERIES, + num_neighbors=10, + ) + + +def test_vector_search_find_neighbors_filtering_crowding_sample( + mock_sdk_init, mock_index_endpoint_init, mock_index_endpoint_find_neighbors +): + vector_search_find_neighbors_sample.vector_search_find_neighbors_filtering_crowding( + project=constants.PROJECT, + location=constants.LOCATION, + index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT, + deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, + num_neighbors=10, + filter=constants.VECTOR_SEARCH_FILTER, + numeric_filter=constants.VECTOR_SEARCH_NUMERIC_FILTER, + per_crowding_attribute_neighbor_count=constants.VECTOR_SEARCH_PER_CROWDING_ATTRIBUTE_NEIGHBOR_COUNT, + ) + + # Check client initialization + mock_sdk_init.assert_called_with( + project=constants.PROJECT, location=constants.LOCATION + ) + + # Check index endpoint initialization with right index endpoint name + mock_index_endpoint_init.assert_called_with( + index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT + ) + + # Check index_endpoint.find_neighbors is called with right params. 
+ mock_index_endpoint_find_neighbors.assert_called_with( + deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, + num_neighbors=10, + filter=constants.VECTOR_SEARCH_FILTER, + numeric_filter=constants.VECTOR_SEARCH_NUMERIC_FILTER, + per_crowding_attribute_neighbor_count=constants.VECTOR_SEARCH_PER_CROWDING_ATTRIBUTE_NEIGHBOR_COUNT, ) @@ -65,7 +119,7 @@ def test_vector_search_find_neighbors_jwt_sample( location=constants.LOCATION, index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT, deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, - queries=constants.VECTOR_SERACH_INDEX_QUERIES, + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, num_neighbors=10, signed_jwt=constants.VECTOR_SEARCH_PRIVATE_ENDPOINT_SIGNED_JWT, ) @@ -77,12 +131,13 @@ def test_vector_search_find_neighbors_jwt_sample( # Check index endpoint initialization with right index endpoint name mock_index_endpoint_init.assert_called_with( - index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT) + index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT + ) # Check index_endpoint.find_neighbors is called with right params. 
mock_index_endpoint_find_neighbors.assert_called_with( deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, - queries=constants.VECTOR_SERACH_INDEX_QUERIES, + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, num_neighbors=10, signed_jwt=constants.VECTOR_SEARCH_PRIVATE_ENDPOINT_SIGNED_JWT, ) diff --git a/samples/model-builder/vector_search/vector_search_match_sample_test.py b/samples/model-builder/vector_search/vector_search_match_sample_test.py index 5d1c5faa64..081e334ef4 100644 --- a/samples/model-builder/vector_search/vector_search_match_sample_test.py +++ b/samples/model-builder/vector_search/vector_search_match_sample_test.py @@ -36,7 +36,8 @@ def test_vector_search_match_hybrid_queries_sample( # Check index endpoint initialization with right index endpoint name mock_index_endpoint_init.assert_called_with( - index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT) + index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT + ) # Check index_endpoint.match is called with right params. mock_index_endpoint_match.assert_called_with( @@ -54,7 +55,7 @@ def test_vector_search_match_jwt_sample( location=constants.LOCATION, index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT, deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, - queries=constants.VECTOR_SERACH_INDEX_QUERIES, + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, num_neighbors=10, signed_jwt=constants.VECTOR_SEARCH_PRIVATE_ENDPOINT_SIGNED_JWT, ) @@ -66,12 +67,13 @@ def test_vector_search_match_jwt_sample( # Check index endpoint initialization with right index endpoint name mock_index_endpoint_init.assert_called_with( - index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT) + index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT + ) # Check index_endpoint.match is called with right params. 
mock_index_endpoint_match.assert_called_with( deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, - queries=constants.VECTOR_SERACH_INDEX_QUERIES, + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, num_neighbors=10, signed_jwt=constants.VECTOR_SEARCH_PRIVATE_ENDPOINT_SIGNED_JWT, ) @@ -81,14 +83,14 @@ def test_vector_search_match_psc_manual_sample( mock_sdk_init, mock_index_endpoint, mock_index_endpoint_init, - mock_index_endpoint_match + mock_index_endpoint_match, ): vector_search_match_sample.vector_search_match_psc_manual( project=constants.PROJECT, location=constants.LOCATION, index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT, deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, - queries=constants.VECTOR_SERACH_INDEX_QUERIES, + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, num_neighbors=10, ip_address=constants.VECTOR_SEARCH_PSC_MANUAL_IP_ADDRESS, ) @@ -100,7 +102,8 @@ def test_vector_search_match_psc_manual_sample( # Check index endpoint initialization with right index endpoint name mock_index_endpoint_init.assert_called_with( - index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT) + index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT + ) # Check index endpoint PSC IP address is set assert mock_index_endpoint.private_service_connect_ip_address == ( @@ -110,7 +113,7 @@ def test_vector_search_match_psc_manual_sample( # Check index_endpoint.match is called with right params. 
mock_index_endpoint_match.assert_called_with( deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, - queries=constants.VECTOR_SERACH_INDEX_QUERIES, + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, num_neighbors=10, ) @@ -123,7 +126,7 @@ def test_vector_search_match_psc_automation_sample( location=constants.LOCATION, index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT, deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, - queries=constants.VECTOR_SERACH_INDEX_QUERIES, + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, num_neighbors=10, psc_network=constants.VECTOR_SEARCH_VPC_NETWORK, ) @@ -135,12 +138,13 @@ def test_vector_search_match_psc_automation_sample( # Check index endpoint initialization with right index endpoint name mock_index_endpoint_init.assert_called_with( - index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT) + index_endpoint_name=constants.VECTOR_SEARCH_INDEX_ENDPOINT + ) # Check index_endpoint.match is called with right params. mock_index_endpoint_match.assert_called_with( deployed_index_id=constants.VECTOR_SEARCH_DEPLOYED_INDEX_ID, - queries=constants.VECTOR_SERACH_INDEX_QUERIES, + queries=constants.VECTOR_SEARCH_INDEX_QUERIES, num_neighbors=10, psc_network=constants.VECTOR_SEARCH_VPC_NETWORK, )
[ { "components": [ { "doc": "Query the vector search index using example hybrid queries.\n\nArgs:\n project (str): Required. Project ID\n location (str): Required. The region name\n index_endpoint_name (str): Required. Index endpoint to run the query\n against.\n deployed_index_id (str): Required. The ID of the DeployedIndex to run\n the queries against.\n num_neighbors (int): Required. The number of neighbors to return.\n\nReturns:\n List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query.", "lines": [ 67, 123 ], "name": "vector_search_find_neighbors_hybrid_queries", "signature": "def vector_search_find_neighbors_hybrid_queries( project: str, location: str, index_endpoint_name: str, deployed_index_id: str, num_neighbors: int, ) -> List[ List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor] ]:", "type": "function" }, { "doc": "Query the vector search index with filtering and crowding.\n\nArgs:\n project (str): Required. Project ID\n location (str): Required. The region name\n index_endpoint_name (str): Required. Index endpoint to run the query\n against.\n deployed_index_id (str): Required. The ID of the DeployedIndex to run\n the queries against.\n queries (List[List[float]]): Required. A list of queries. Each query is\n a list of floats, representing a single embedding.\n num_neighbors (int): Required. The number of neighbors to return.\n filter (List[Namespace]): Required. A list of Namespaces for filtering\n the matching results. For example,\n [Namespace(\"color\", [\"red\"], []), Namespace(\"shape\", [], [\"square\"])]\n will match datapoints that satisfy \"red color\" but not include\n datapoints with \"square shape\".\n numeric_filter (List[NumericNamespace]): Required. A list of\n NumericNamespaces for filtering the matching results. 
For example,\n [NumericNamespace(name=\"cost\", value_int=5, op=\"GREATER\")] will limit\n the matching results to datapoints with cost greater than 5.\n per_crowding_attribute_neighbor_count (int): Required. The maximum\n number of returned matches with the same crowding tag.\n\nReturns:\n List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query.", "lines": [ 131, 188 ], "name": "vector_search_find_neighbors_filtering_crowding", "signature": "def vector_search_find_neighbors_filtering_crowding( project: str, location: str, index_endpoint_name: str, deployed_index_id: str, queries: List[List[float]], num_neighbors: int, filter: List[aiplatform.matching_engine.matching_engine_index_endpoint.Namespace], numeric_filter: List[ aiplatform.matching_engine.matching_engine_index_endpoint.NumericNamespace ], per_crowding_attribute_neighbor_count: int, ) -> List[ List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor] ]:", "type": "function" } ], "file": "samples/model-builder/vector_search/vector_search_find_neighbors_sample.py" } ]
[ "samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py::test_vector_search_find_neighbors_sample", "samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py::test_vector_search_find_neighbors_hybrid_sample", "samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py::test_vector_search_find_neighbors_filtering_crowding_sample" ]
[ "samples/model-builder/vector_search/vector_search_find_neighbors_sample_test.py::test_vector_search_find_neighbors_jwt_sample", "samples/model-builder/vector_search/vector_search_match_sample_test.py::test_vector_search_match_hybrid_queries_sample", "samples/model-builder/vector_search/vector_search_match_sample_test.py::test_vector_search_match_jwt_sample", "samples/model-builder/vector_search/vector_search_match_sample_test.py::test_vector_search_match_psc_manual_sample", "samples/model-builder/vector_search/vector_search_match_sample_test.py::test_vector_search_match_psc_automation_sample" ]
This is a feature request which requires a new feature to add in the code repository. <<NEW FEATURE REQUEST>> <request> chore: Samples - Add vector search sample for filtering and crowding chore: Samples - Add vector search sample for filtering and crowding ---------- <!-- probot comment [11299897]--> Here is the summary of changes. <details> <summary>You are about to add 2 region tags.</summary> - [samples/model-builder/vector_search/vector_search_find_neighbors_sample.py:66](https://github.com/googleapis/python-aiplatform/blob/3ea43524f7961fdb29c4f75cf529852894bf1c86/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py#L66), tag `aiplatform_sdk_vector_search_find_neighbors_hybrid_sample` - [samples/model-builder/vector_search/vector_search_find_neighbors_sample.py:130](https://github.com/googleapis/python-aiplatform/blob/3ea43524f7961fdb29c4f75cf529852894bf1c86/samples/model-builder/vector_search/vector_search_find_neighbors_sample.py#L130), tag `aiplatform_sdk_vector_search_find_neighbors_filtering_crowding_sample` </details> --- This comment is generated by [snippet-bot](https://github.com/apps/snippet-bot). If you find problems with this result, please file an issue at: https://github.com/googleapis/repo-automation-bots/issues. 
To update this comment, add `snippet-bot:force-run` label or use the checkbox below: - [ ] Refresh this comment </request> There are several new functions or classes that need to be implemented, using the definitions below: <<NEW DEFINITIONS>> There are several new functions or classes that need to be implemented, using the definitions below: <definitions> [start of new definitions in samples/model-builder/vector_search/vector_search_find_neighbors_sample.py] (definition of vector_search_find_neighbors_hybrid_queries:) def vector_search_find_neighbors_hybrid_queries( project: str, location: str, index_endpoint_name: str, deployed_index_id: str, num_neighbors: int, ) -> List[ List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor] ]: """Query the vector search index using example hybrid queries. Args: project (str): Required. Project ID location (str): Required. The region name index_endpoint_name (str): Required. Index endpoint to run the query against. deployed_index_id (str): Required. The ID of the DeployedIndex to run the queries against. num_neighbors (int): Required. The number of neighbors to return. Returns: List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query.""" (definition of vector_search_find_neighbors_filtering_crowding:) def vector_search_find_neighbors_filtering_crowding( project: str, location: str, index_endpoint_name: str, deployed_index_id: str, queries: List[List[float]], num_neighbors: int, filter: List[aiplatform.matching_engine.matching_engine_index_endpoint.Namespace], numeric_filter: List[ aiplatform.matching_engine.matching_engine_index_endpoint.NumericNamespace ], per_crowding_attribute_neighbor_count: int, ) -> List[ List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor] ]: """Query the vector search index with filtering and crowding. Args: project (str): Required. Project ID location (str): Required. 
The region name index_endpoint_name (str): Required. Index endpoint to run the query against. deployed_index_id (str): Required. The ID of the DeployedIndex to run the queries against. queries (List[List[float]]): Required. A list of queries. Each query is a list of floats, representing a single embedding. num_neighbors (int): Required. The number of neighbors to return. filter (List[Namespace]): Required. A list of Namespaces for filtering the matching results. For example, [Namespace("color", ["red"], []), Namespace("shape", [], ["square"])] will match datapoints that satisfy "red color" but not include datapoints with "square shape". numeric_filter (List[NumericNamespace]): Required. A list of NumericNamespaces for filtering the matching results. For example, [NumericNamespace(name="cost", value_int=5, op="GREATER")] will limit the matching results to datapoints with cost greater than 5. per_crowding_attribute_neighbor_count (int): Required. The maximum number of returned matches with the same crowding tag. Returns: List[List[aiplatform.matching_engine.matching_engine_index_endpoint.MatchNeighbor]] - A list of nearest neighbors for each query.""" [end of new definitions in samples/model-builder/vector_search/vector_search_find_neighbors_sample.py] </definitions> Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly. <<END>>
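The `Namespace` filtering semantics described in the docstring above (match "red color" but exclude "square shape") can be illustrated with a small client-side sketch. Note this is only an illustration of the allow/deny token rules: the real filtering happens server-side in Vertex AI, and the `Namespace` dataclass here is a stand-in, not the SDK class.

```python
from dataclasses import dataclass, field

@dataclass
class Namespace:
    # Token restrict: a datapoint matches when it carries at least one
    # allow token (if any allows are given) and none of the deny tokens.
    name: str
    allow_tokens: list = field(default_factory=list)
    deny_tokens: list = field(default_factory=list)

def matches(datapoint: dict, namespaces: list) -> bool:
    """Return True if the datapoint satisfies every Namespace restrict."""
    for ns in namespaces:
        tokens = set(datapoint.get(ns.name, []))
        if ns.allow_tokens and not tokens & set(ns.allow_tokens):
            return False
        if tokens & set(ns.deny_tokens):
            return False
    return True

# Mirrors the docstring example: allow "red" color, deny "square" shape.
filters = [Namespace("color", ["red"], []), Namespace("shape", [], ["square"])]
points = [
    {"id": "a", "color": ["red"], "shape": ["circle"]},
    {"id": "b", "color": ["red"], "shape": ["square"]},   # rejected: denied shape
    {"id": "c", "color": ["blue"], "shape": ["circle"]},  # rejected: no allowed color
]
kept = [p["id"] for p in points if matches(p, filters)]
print(kept)  # -> ['a']
```

The numeric filter works analogously: `NumericNamespace(name="cost", value_int=5, op="GREATER")` keeps only datapoints whose `cost` exceeds 5, again evaluated by the service rather than the client.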
67358fa6a830eb842f6b52d09061af4a41b54af6
tobymao__sqlglot-4433
4433
tobymao/sqlglot
null
38e2e19ac3e20224dc07128994a47340aa56e635
2024-11-20T16:28:28Z
diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py index 1d2b246e5d..35ac0509cf 100644 --- a/sqlglot/dialects/snowflake.py +++ b/sqlglot/dialects/snowflake.py @@ -198,43 +198,58 @@ def _flatten_structured_type(expression: exp.DataType) -> exp.DataType: return expression -def _unnest_generate_date_array(expression: exp.Expression) -> exp.Expression: - if isinstance(expression, exp.Select): - for unnest in expression.find_all(exp.Unnest): - if ( - isinstance(unnest.parent, (exp.From, exp.Join)) - and len(unnest.expressions) == 1 - and isinstance(unnest.expressions[0], exp.GenerateDateArray) - ): - generate_date_array = unnest.expressions[0] - start = generate_date_array.args.get("start") - end = generate_date_array.args.get("end") - step = generate_date_array.args.get("step") +def _unnest_generate_date_array(unnest: exp.Unnest) -> None: + generate_date_array = unnest.expressions[0] + start = generate_date_array.args.get("start") + end = generate_date_array.args.get("end") + step = generate_date_array.args.get("step") - if not start or not end or not isinstance(step, exp.Interval) or step.name != "1": - continue + if not start or not end or not isinstance(step, exp.Interval) or step.name != "1": + return - unit = step.args.get("unit") + unit = step.args.get("unit") - unnest_alias = unnest.args.get("alias") - if unnest_alias: - unnest_alias = unnest_alias.copy() - sequence_value_name = seq_get(unnest_alias.columns, 0) or "value" - else: - sequence_value_name = "value" + unnest_alias = unnest.args.get("alias") + if unnest_alias: + unnest_alias = unnest_alias.copy() + sequence_value_name = seq_get(unnest_alias.columns, 0) or "value" + else: + sequence_value_name = "value" + + # We'll add the next sequence value to the starting date and project the result + date_add = _build_date_time_add(exp.DateAdd)( + [unit, exp.cast(sequence_value_name, "int"), exp.cast(start, "date")] + ).as_(sequence_value_name) + + # We use DATEDIFF to compute the number of 
sequence values needed + number_sequence = Snowflake.Parser.FUNCTIONS["ARRAY_GENERATE_RANGE"]( + [exp.Literal.number(0), _build_datediff([unit, start, end]) + 1] + ) + + unnest.set("expressions", [number_sequence]) + unnest.replace(exp.select(date_add).from_(unnest.copy()).subquery(unnest_alias)) - # We'll add the next sequence value to the starting date and project the result - date_add = _build_date_time_add(exp.DateAdd)( - [unit, exp.cast(sequence_value_name, "int"), exp.cast(start, "date")] - ).as_(sequence_value_name) - # We use DATEDIFF to compute the number of sequence values needed - number_sequence = Snowflake.Parser.FUNCTIONS["ARRAY_GENERATE_RANGE"]( - [exp.Literal.number(0), _build_datediff([unit, start, end]) + 1] +def _transform_generate_date_array(expression: exp.Expression) -> exp.Expression: + if isinstance(expression, exp.Select): + for generate_date_array in expression.find_all(exp.GenerateDateArray): + parent = generate_date_array.parent + + # If GENERATE_DATE_ARRAY is used directly as an array (e.g passed into ARRAY_LENGTH), the transformed Snowflake + # query is the following (it'll be unnested properly on the next iteration due to copy): + # SELECT ref(GENERATE_DATE_ARRAY(...)) -> SELECT ref((SELECT ARRAY_AGG(*) FROM UNNEST(GENERATE_DATE_ARRAY(...)))) + if not isinstance(parent, exp.Unnest): + unnest = exp.Unnest(expressions=[generate_date_array.copy()]) + generate_date_array.replace( + exp.select(exp.ArrayAgg(this=exp.Star())).from_(unnest).subquery() ) - unnest.set("expressions", [number_sequence]) - unnest.replace(exp.select(date_add).from_(unnest.copy()).subquery(unnest_alias)) + if ( + isinstance(parent, exp.Unnest) + and isinstance(parent.parent, (exp.From, exp.Join)) + and len(parent.expressions) == 1 + ): + _unnest_generate_date_array(parent) return expression @@ -893,7 +908,7 @@ class Generator(generator.Generator): transforms.eliminate_distinct_on, transforms.explode_to_unnest(), transforms.eliminate_semi_and_anti_joins, - 
_unnest_generate_date_array, + _transform_generate_date_array, ] ), exp.SafeDivide: lambda self, e: no_safe_divide_sql(self, e, "IFF"),
diff --git a/tests/dialects/test_dialect.py b/tests/dialects/test_dialect.py
index f0711fc585..c1aa054a0e 100644
--- a/tests/dialects/test_dialect.py
+++ b/tests/dialects/test_dialect.py
@@ -2854,6 +2854,13 @@ def test_generate_date_array(self):
             },
         )
 
+        self.validate_all(
+            "SELECT ARRAY_LENGTH(GENERATE_DATE_ARRAY(DATE '2020-01-01', DATE '2020-02-01', INTERVAL 1 WEEK))",
+            write={
+                "snowflake": "SELECT ARRAY_SIZE((SELECT ARRAY_AGG(*) FROM (SELECT DATEADD(WEEK, CAST(value AS INT), CAST('2020-01-01' AS DATE)) AS value FROM TABLE(FLATTEN(INPUT => ARRAY_GENERATE_RANGE(0, (DATEDIFF(WEEK, CAST('2020-01-01' AS DATE), CAST('2020-02-01' AS DATE)) + 1 - 1) + 1))) AS _u(seq, key, path, index, value, this))))",
+            },
+        )
+
     def test_set_operation_specifiers(self):
         self.validate_all(
             "SELECT 1 EXCEPT ALL SELECT 1",
[ { "components": [ { "doc": "", "lines": [ 233, 254 ], "name": "_transform_generate_date_array", "signature": "def _transform_generate_date_array(expression: exp.Expression) -> exp.Expression:", "type": "function" } ], "file": "sqlglot/dialects/snowflake.py" } ]
[ "tests/dialects/test_dialect.py::TestDialect::test_generate_date_array" ]
[ "tests/dialects/test_dialect.py::TestDialect::test_alias", "tests/dialects/test_dialect.py::TestDialect::test_array", "tests/dialects/test_dialect.py::TestDialect::test_array_any", "tests/dialects/test_dialect.py::TestDialect::test_cast", "tests/dialects/test_dialect.py::TestDialect::test_cast_to_user_defined_type", "tests/dialects/test_dialect.py::TestDialect::test_coalesce", "tests/dialects/test_dialect.py::TestDialect::test_compare_dialects", "tests/dialects/test_dialect.py::TestDialect::test_count_if", "tests/dialects/test_dialect.py::TestDialect::test_create_sequence", "tests/dialects/test_dialect.py::TestDialect::test_cross_join", "tests/dialects/test_dialect.py::TestDialect::test_ddl", "tests/dialects/test_dialect.py::TestDialect::test_decode", "tests/dialects/test_dialect.py::TestDialect::test_enum", "tests/dialects/test_dialect.py::TestDialect::test_escaped_identifier_delimiter", "tests/dialects/test_dialect.py::TestDialect::test_get_or_raise", "tests/dialects/test_dialect.py::TestDialect::test_hash_comments", "tests/dialects/test_dialect.py::TestDialect::test_heredoc_strings", "tests/dialects/test_dialect.py::TestDialect::test_if_null", "tests/dialects/test_dialect.py::TestDialect::test_json", "tests/dialects/test_dialect.py::TestDialect::test_lateral_subquery", "tests/dialects/test_dialect.py::TestDialect::test_limit", "tests/dialects/test_dialect.py::TestDialect::test_logarithm", "tests/dialects/test_dialect.py::TestDialect::test_median", "tests/dialects/test_dialect.py::TestDialect::test_merge", "tests/dialects/test_dialect.py::TestDialect::test_multiple_chained_unnest", "tests/dialects/test_dialect.py::TestDialect::test_nested_ctes", "tests/dialects/test_dialect.py::TestDialect::test_normalize", "tests/dialects/test_dialect.py::TestDialect::test_nullsafe_eq", "tests/dialects/test_dialect.py::TestDialect::test_nullsafe_neq", "tests/dialects/test_dialect.py::TestDialect::test_nvl2", "tests/dialects/test_dialect.py::TestDialect::test_operators", 
"tests/dialects/test_dialect.py::TestDialect::test_order_by", "tests/dialects/test_dialect.py::TestDialect::test_qualify", "tests/dialects/test_dialect.py::TestDialect::test_random", "tests/dialects/test_dialect.py::TestDialect::test_reserved_keywords", "tests/dialects/test_dialect.py::TestDialect::test_safediv", "tests/dialects/test_dialect.py::TestDialect::test_set_operation_specifiers", "tests/dialects/test_dialect.py::TestDialect::test_set_operators", "tests/dialects/test_dialect.py::TestDialect::test_string_functions", "tests/dialects/test_dialect.py::TestDialect::test_substring", "tests/dialects/test_dialect.py::TestDialect::test_time", "tests/dialects/test_dialect.py::TestDialect::test_transactions", "tests/dialects/test_dialect.py::TestDialect::test_trim", "tests/dialects/test_dialect.py::TestDialect::test_truncate", "tests/dialects/test_dialect.py::TestDialect::test_typeddiv", "tests/dialects/test_dialect.py::TestDialect::test_unsupported_null_ordering", "tests/dialects/test_dialect.py::TestDialect::test_uuid" ]
This is a feature request which requires a new feature to add in the code repository. <<NEW FEATURE REQUEST>> <request> feat(snowflake): Transpile non-UNNEST exp.GenerateDateArray refs Currently, transpilation of BQ's `GENERATE_DATE_ARRAY` to Snowflake is supported only if it's `UNNEST`ed (introduced by https://github.com/tobymao/sqlglot/pull/3899), e.g: 1. Supported: ```Python3 >>> import sqlglot >>> sqlglot.parse_one("SELECT * FROM UNNEST(GENERATE_DATE_ARRAY(DATE '2020-01-01', DATE '2020-02-01', INTERVAL 1 WEEK))", read="bigquery").sql("snowflake") "SELECT * FROM (SELECT DATEADD(WEEK, CAST(value AS INT), CAST('2020-01-01' AS DATE)) AS value FROM TABLE(FLATTEN(INPUT => ARRAY_GENERATE_RANGE(0, (DATEDIFF(WEEK, CAST('2020-01-01' AS DATE), CAST('2020-02-01' AS DATE)) + 1 - 1) + 1))) AS _u(seq, key, path, index, value, this))" bigquery> SELECT * FROM UNNEST(GENERATE_DATE_ARRAY(DATE '2020-01-01', DATE '2020-02-01', INTERVAL 1 WEEK)); f0_ 2020-01-01 2020-01-08 2020-01-15 2020-01-22 2020-01-29 snowflake> SELECT * FROM (SELECT DATEADD(WEEK, CAST(value AS INT), CAST('2020-01-01' AS DATE)) AS value FROM TABLE(FLATTEN(INPUT => ARRAY_GENERATE_RANGE(0, (DATEDIFF(WEEK, CAST('2020-01-01' AS DATE), CAST('2020-02-01' AS DATE)) + 1 - 1) + 1))) AS _u(seq, key, path, index, value, this)); -- 2020-01-01 2020-01-08 2020-01-15 2020-01-22 2020-01-29 ``` 2. 
Not supported: ```Python3 >>> sqlglot.parse_one("SELECT GENERATE_DATE_ARRAY(DATE '2020-01-01', DATE '2020-02-01', INTERVAL 1 WEEK)", read="bigquery").sql("snowflake") "SELECT GENERATE_DATE_ARRAY(CAST('2020-01-01' AS DATE), CAST('2020-02-01' AS DATE), INTERVAL '1 WEEK')" ``` This PR adds support for (2) by reusing (1) and aggregating it into an array, thus producing the following subquery: ```Python3 >>> sqlglot.parse_one("SELECT GENERATE_DATE_ARRAY(DATE '2020-01-01', DATE '2020-02-01', INTERVAL 1 WEEK)", read="bigquery").sql("snowflake") "SELECT (SELECT ARRAY_AGG(*) FROM (SELECT DATEADD(WEEK, CAST(value AS INT), CAST('2020-01-01' AS DATE)) AS value FROM TABLE(FLATTEN(INPUT => ARRAY_GENERATE_RANGE(0, (DATEDIFF(WEEK, CAST('2020-01-01' AS DATE), CAST('2020-02-01' AS DATE)) + 1 - 1) + 1))) AS _u(seq, key, path, index, value, this)))" bigquery> SELECT GENERATE_DATE_ARRAY(DATE '2020-01-01', DATE '2020-02-01', INTERVAL 1 WEEK); f0_ 2020-01-01 2020-01-08 2020-01-15 2020-01-22 2020-01-29 snowflake> SELECT (SELECT ARRAY_AGG(*) FROM (SELECT DATEADD(WEEK, CAST(value AS INT), CAST('2020-01-01' AS DATE)) AS value FROM TABLE(FLATTEN(INPUT => ARRAY_GENERATE_RANGE(0, (DATEDIFF(WEEK, CAST('2020-01-01' AS DATE), CAST('2020-02-01' AS DATE)) + 1 - 1) + 1))) AS _u(seq, key, path, index, value, this))); -- ["2020-01-01", "2020-01-08", "2020-01-15", "2020-01-22", "2020-01-29"] ``` ---------- </request> There are several new functions or classes that need to be implemented, using the definitions below: <<NEW DEFINITIONS>> There are several new functions or classes that need to be implemented, using the definitions below: <definitions> [start of new definitions in sqlglot/dialects/snowflake.py] (definition of _transform_generate_date_array:) def _transform_generate_date_array(expression: exp.Expression) -> exp.Expression: [end of new definitions in sqlglot/dialects/snowflake.py] </definitions> Please note that in addition to the newly added components mentioned above, you also need to make 
other code changes to ensure that the new feature can be executed properly. <<END>>
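The rewrite described in this request rests on an identity: `GENERATE_DATE_ARRAY(start, end, INTERVAL n unit)` equals adding each value of the sequence `0..DATEDIFF(unit, start, end)` onto the start date. A minimal pure-Python sketch of that equivalence (standard library only, day-based steps for simplicity — this is not sqlglot code, and BigQuery's week-interval semantics are simplified here to a 7-day step):

```python
from datetime import date, timedelta

def generate_date_array(start: date, end: date, step_days: int) -> list:
    """Naive GENERATE_DATE_ARRAY: dates from start to end inclusive."""
    out, cur = [], start
    while cur <= end:
        out.append(cur)
        cur += timedelta(days=step_days)
    return out

def via_sequence(start: date, end: date, step_days: int) -> list:
    # The Snowflake rewrite: count the steps first (the DATEDIFF part),
    # then DATEADD each sequence value 0..n onto the start date.
    n_steps = (end - start).days // step_days
    return [start + timedelta(days=i * step_days) for i in range(n_steps + 1)]

s, e = date(2020, 1, 1), date(2020, 2, 1)
assert generate_date_array(s, e, 7) == via_sequence(s, e, 7)
print(len(via_sequence(s, e, 7)))  # -> 5, matching the BigQuery output above
```

This is why the generated SQL pairs `ARRAY_GENERATE_RANGE(0, DATEDIFF(...) + 1)` with a `DATEADD` projection, and why the non-UNNEST case can simply wrap that same subquery in `ARRAY_AGG`.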
ceb42fabad60312699e4b15936aeebac00e22e4d
aws-powertools__powertools-lambda-python-5588
5588
aws-powertools/powertools-lambda-python
null
3ff5132bdf894fb49da11cfaa7cbff7412d5675c
2024-11-19T12:49:39Z
diff --git a/aws_lambda_powertools/event_handler/appsync.py b/aws_lambda_powertools/event_handler/appsync.py index 6f1cb72d067..3dbbf207859 100644 --- a/aws_lambda_powertools/event_handler/appsync.py +++ b/aws_lambda_powertools/event_handler/appsync.py @@ -53,6 +53,7 @@ def __init__(self): """ super().__init__() self.context = {} # early init as customers might add context before event resolution + self._exception_handlers: dict[type, Callable] = {} def __call__( self, @@ -142,12 +143,18 @@ def lambda_handler(event, context): self.lambda_context = context Router.lambda_context = context - if isinstance(event, list): - Router.current_batch_event = [data_model(e) for e in event] - response = self._call_batch_resolver(event=event, data_model=data_model) - else: - Router.current_event = data_model(event) - response = self._call_single_resolver(event=event, data_model=data_model) + try: + if isinstance(event, list): + Router.current_batch_event = [data_model(e) for e in event] + response = self._call_batch_resolver(event=event, data_model=data_model) + else: + Router.current_event = data_model(event) + response = self._call_single_resolver(event=event, data_model=data_model) + except Exception as exp: + response_builder = self._lookup_exception_handler(type(exp)) + if response_builder: + return response_builder(exp) + raise # We don't clear the context for coroutines because we don't have control over the event loop. # If we clean the context immediately, it might not be available when the coroutine is actually executed. @@ -470,3 +477,47 @@ def async_batch_resolver( raise_on_error=raise_on_error, aggregate=aggregate, ) + + def exception_handler(self, exc_class: type[Exception] | list[type[Exception]]): + """ + A decorator function that registers a handler for one or more exception types. + + Parameters + ---------- + exc_class (type[Exception] | list[type[Exception]]) + A single exception type or a list of exception types. 
+ + Returns + ------- + Callable: + A decorator function that registers the exception handler. + """ + + def register_exception_handler(func: Callable): + if isinstance(exc_class, list): # pragma: no cover + for exp in exc_class: + self._exception_handlers[exp] = func + else: + self._exception_handlers[exc_class] = func + return func + + return register_exception_handler + + def _lookup_exception_handler(self, exp_type: type) -> Callable | None: + """ + Looks up the registered exception handler for the given exception type or its base classes. + + Parameters + ---------- + exp_type (type): + The exception type to look up the handler for. + + Returns + ------- + Callable | None: + The registered exception handler function if found, otherwise None. + """ + for cls in exp_type.__mro__: + if cls in self._exception_handlers: + return self._exception_handlers[cls] + return None diff --git a/aws_lambda_powertools/utilities/data_classes/code_pipeline_job_event.py b/aws_lambda_powertools/utilities/data_classes/code_pipeline_job_event.py index a52e5fbc7a2..3497227ed70 100644 --- a/aws_lambda_powertools/utilities/data_classes/code_pipeline_job_event.py +++ b/aws_lambda_powertools/utilities/data_classes/code_pipeline_job_event.py @@ -311,7 +311,7 @@ def put_artifact(self, artifact_name: str, body: Any, content_type: str) -> None key = artifact.location.s3_location.key # boto3 doesn't support None to omit the parameter when using ServerSideEncryption and SSEKMSKeyId - # So we are using if/else instead. + # So we are using if/else instead. 
if self.data.encryption_key: diff --git a/docs/core/event_handler/appsync.md b/docs/core/event_handler/appsync.md index a2f29e5dba5..0c556dedfbf 100644 --- a/docs/core/event_handler/appsync.md +++ b/docs/core/event_handler/appsync.md @@ -288,6 +288,19 @@ You can use `append_context` when you want to share data between your App and Ro --8<-- "examples/event_handler_graphql/src/split_operation_append_context_module.py" ``` +### Exception handling + +You can use **`exception_handler`** decorator with any Python exception. This allows you to handle a common exception outside your resolver, for example validation errors. + +The `exception_handler` function also supports passing a list of exception types you wish to handle with one handler. + +```python hl_lines="5-7 11" title="Exception handling" +--8<-- "examples/event_handler_graphql/src/exception_handling_graphql.py" +``` + +???+ warning + This is not supported when using async single resolvers. + ### Batch processing ```mermaid diff --git a/examples/event_handler_graphql/src/exception_handling_graphql.py b/examples/event_handler_graphql/src/exception_handling_graphql.py new file mode 100644 index 00000000000..b135f75112b --- /dev/null +++ b/examples/event_handler_graphql/src/exception_handling_graphql.py @@ -0,0 +1,17 @@ +from aws_lambda_powertools.event_handler import AppSyncResolver + +app = AppSyncResolver() + + +@app.exception_handler(ValueError) +def handle_value_error(ex: ValueError): + return {"message": "error"} + + +@app.resolver(field_name="createSomething") +def create_something(): + raise ValueError("Raising an exception") + + +def lambda_handler(event, context): + return app.resolve(event, context)
diff --git a/tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py b/tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py index c594be54a5b..59c5ec08a15 100644 --- a/tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py +++ b/tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py @@ -981,3 +981,125 @@ async def get_user(event: List) -> List: # THEN the resolver must be able to return a field in the batch_current_event assert app.context == {} assert ret[0] == "powertools" + + +def test_exception_handler_with_batch_resolver_and_raise_exception(): + + # GIVEN a AppSyncResolver instance + app = AppSyncResolver() + + event = [ + { + "typeName": "Query", + "info": { + "fieldName": "listLocations", + "parentTypeName": "Post", + }, + "fieldName": "listLocations", + "arguments": {}, + "source": { + "id": "1", + }, + }, + { + "typeName": "Query", + "info": { + "fieldName": "listLocations", + "parentTypeName": "Post", + }, + "fieldName": "listLocations", + "arguments": {}, + "source": { + "id": "2", + }, + }, + { + "typeName": "Query", + "info": { + "fieldName": "listLocations", + "parentTypeName": "Post", + }, + "fieldName": "listLocations", + "arguments": {}, + "source": { + "id": [3, 4], + }, + }, + ] + + # WHEN we configure exception handler for ValueError + @app.exception_handler(ValueError) + def handle_value_error(ex: ValueError): + return {"message": "error"} + + # WHEN the sync batch resolver for the 'listLocations' field is defined with raise_on_error=True + @app.batch_resolver(field_name="listLocations", raise_on_error=True, aggregate=False) + def create_something(event: AppSyncResolverEvent) -> Optional[list]: # noqa AA03 VNE003 + raise ValueError + + # Call the implicit handler + result = app(event, {}) + + # THEN the return must be the Exception Handler error message + assert result["message"] == "error" + + 
+def test_exception_handler_with_batch_resolver_and_no_raise_exception(): + + # GIVEN a AppSyncResolver instance + app = AppSyncResolver() + + event = [ + { + "typeName": "Query", + "info": { + "fieldName": "listLocations", + "parentTypeName": "Post", + }, + "fieldName": "listLocations", + "arguments": {}, + "source": { + "id": "1", + }, + }, + { + "typeName": "Query", + "info": { + "fieldName": "listLocations", + "parentTypeName": "Post", + }, + "fieldName": "listLocations", + "arguments": {}, + "source": { + "id": "2", + }, + }, + { + "typeName": "Query", + "info": { + "fieldName": "listLocations", + "parentTypeName": "Post", + }, + "fieldName": "listLocations", + "arguments": {}, + "source": { + "id": [3, 4], + }, + }, + ] + + # WHEN we configure exception handler for ValueError + @app.exception_handler(ValueError) + def handle_value_error(ex: ValueError): + return {"message": "error"} + + # WHEN the sync batch resolver for the 'listLocations' field is defined with raise_on_error=False + @app.batch_resolver(field_name="listLocations", raise_on_error=False, aggregate=False) + def create_something(event: AppSyncResolverEvent) -> Optional[list]: # noqa AA03 VNE003 + raise ValueError + + # Call the implicit handler + result = app(event, {}) + + # THEN the return must not trigger the Exception Handler, but instead return from the resolver + assert result == [None, None, None] diff --git a/tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py b/tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py index df44793f33b..d58c966e67b 100644 --- a/tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py +++ b/tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py @@ -329,3 +329,25 @@ async def get_async(): # THEN assert asyncio.run(result) == "value" assert app.context == {} + + +def 
test_exception_handler_with_single_resolver(): + # GIVEN a AppSyncResolver instance + mock_event = load_event("appSyncDirectResolver.json") + + app = AppSyncResolver() + + # WHEN we configure exception handler for ValueError + @app.exception_handler(ValueError) + def handle_value_error(ex: ValueError): + return {"message": "error"} + + @app.resolver(field_name="createSomething") + def create_something(id: str): # noqa AA03 VNE003 + raise ValueError("Error") + + # Call the implicit handler + result = app(mock_event, {}) + + # THEN the return must be the Exception Handler error message + assert result["message"] == "error"
diff --git a/docs/core/event_handler/appsync.md b/docs/core/event_handler/appsync.md index a2f29e5dba5..0c556dedfbf 100644 --- a/docs/core/event_handler/appsync.md +++ b/docs/core/event_handler/appsync.md @@ -288,6 +288,19 @@ You can use `append_context` when you want to share data between your App and Ro --8<-- "examples/event_handler_graphql/src/split_operation_append_context_module.py" ``` +### Exception handling + +You can use **`exception_handler`** decorator with any Python exception. This allows you to handle a common exception outside your resolver, for example validation errors. + +The `exception_handler` function also supports passing a list of exception types you wish to handle with one handler. + +```python hl_lines="5-7 11" title="Exception handling" +--8<-- "examples/event_handler_graphql/src/exception_handling_graphql.py" +``` + +???+ warning + This is not supported when using async single resolvers. + ### Batch processing ```mermaid
[ { "components": [ { "doc": "A decorator function that registers a handler for one or more exception types.\n\nParameters\n----------\nexc_class (type[Exception] | list[type[Exception]])\n A single exception type or a list of exception types.\n\nReturns\n-------\nCallable:\n A decorator function that registers the exception handler.", "lines": [ 481, 504 ], "name": "AppSyncResolver.exception_handler", "signature": "def exception_handler(self, exc_class: type[Exception] | list[type[Exception]]):", "type": "function" }, { "doc": "", "lines": [ 496, 502 ], "name": "AppSyncResolver.exception_handler.register_exception_handler", "signature": "def register_exception_handler(func: Callable):", "type": "function" }, { "doc": "Looks up the registered exception handler for the given exception type or its base classes.\n\nParameters\n----------\nexp_type (type):\n The exception type to look up the handler for.\n\nReturns\n-------\nCallable | None:\n The registered exception handler function if found, otherwise None.", "lines": [ 506, 523 ], "name": "AppSyncResolver._lookup_exception_handler", "signature": "def _lookup_exception_handler(self, exp_type: type) -> Callable | None:", "type": "function" } ], "file": "aws_lambda_powertools/event_handler/appsync.py" }, { "components": [ { "doc": "", "lines": [ 7, 8 ], "name": "handle_value_error", "signature": "def handle_value_error(ex: ValueError):", "type": "function" }, { "doc": "", "lines": [ 12, 13 ], "name": "create_something", "signature": "def create_something():", "type": "function" }, { "doc": "", "lines": [ 16, 17 ], "name": "lambda_handler", "signature": "def lambda_handler(event, context):", "type": "function" } ], "file": "examples/event_handler_graphql/src/exception_handling_graphql.py" } ]
[ "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_exception_handler_with_batch_resolver_and_raise_exception", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_exception_handler_with_batch_resolver_and_no_raise_exception", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_exception_handler_with_single_resolver" ]
[ "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_with_related_events_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_with_simple_queries_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_with_raise_on_exception_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_async_resolve_batch_processing_with_raise_on_exception_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_without_exception_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_processing_without_exception_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolver_batch_with_resolver_not_found_one_at_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolver_batch_with_sync_and_async_resolver_at_same_time", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_batch_resolver_with_router", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_processing", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_and_sync_singular_processing", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_async_resolver_include_batch_resolver", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_with_simple_queries_with_aggregate", 
"tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_processing_with_simple_queries_with_aggregate", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_batch_processing_with_aggregate_and_returning_a_non_list", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_processing_with_aggregate_and_returning_a_non_list", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_sync_batch_processing_with_aggregate_and_without_return", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_resolve_async_batch_processing_with_aggregate_and_without_return", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_include_router_access_batch_current_event", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_app_access_batch_current_event", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_context_is_accessible_in_sync_batch_resolver", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_batch_resolvers.py::test_context_is_accessible_in_async_batch_resolver", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_direct_resolver", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_amplify_resolver", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_no_params", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_value_error", 
"tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_yield", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_multiple_mappings", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_async", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolve_custom_data_model", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_resolver_include_resolver", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_append_context", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_router_append_context", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_route_context_is_cleared_after_resolve", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_router_has_access_to_app_context", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_include_router_merges_context", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_include_router_access_current_event", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_app_access_current_event", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_route_context_is_not_cleared_after_resolve_async", "tests/functional/event_handler/required_dependencies/appsync/test_appsync_single_resolvers.py::test_route_context_is_manually_cleared_after_resolve_async" ]
This is a feature request which requires a new feature to add in the code repository. <<NEW FEATURE REQUEST>> <request> feat(event_handler): add exception handling mechanism for AppSyncResolver <!-- markdownlint-disable MD041 MD043 --> **Issue number:** #2184 ## Summary ### Changes This PR introduces a new feature to handle exceptions in `AppSyncResolver` using a standard error handling mechanism, similar to the one used for HTTP Resolvers. Currently, there is no built-in support for exception handling in `AppSyncResolver`, and this PR aims to address that gap. **Changes** 1. **Added exception catching function**: A new decorator `@app.exception_handler` has been implemented to catch and handle exceptions raised during the execution of AppSync resolvers. This decorator allows developers to define custom error handling logic based on the type of exception raised. 2. **Added tests**: Test cases have been added to ensure the proper functioning of the exception handling mechanism and to maintain code quality. 3. **Added documentation**: The usage and implementation details of the `@app.exception_handler` decorator have been documented to provide guidance for developers who wish to utilize this new feature. **Note** It's important to note that this exception handling mechanism is not supported when using single async resolvers. ### User experience ```python from aws_lambda_powertools.event_handler import AppSyncResolver app = AppSyncResolver() @app.exception_handler(ValueError) def handle_value_error(ex: ValueError): return {"message": "error"} @app.resolver(field_name="createSomething") def create_something(id: str): raise ValueError("Error") def lambda_handler(event, context): return app.resolve(event, context) ``` ## Checklist If your change doesn't seem to apply, please leave them unchecked. 
* [x] [Meet tenets criteria](https://docs.powertools.aws.dev/lambda/python/#tenets) * [x] I have performed a self-review of this change * [x] Changes have been tested * [x] Changes are documented * [x] PR title follows [conventional commit semantics](https://github.com/aws-powertools/powertools-lambda-python/blob/develop/.github/semantic.yml) <details> <summary>Is this a breaking change?</summary> **RFC issue number**: Checklist: * [ ] Migration process documented * [ ] Implement warnings (if it can live side by side) </details> ## Acknowledgment By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. **Disclaimer**: We value your time and bandwidth. As such, any pull requests created on non-triaged issues might not be successful. ---------- </request> There are several new functions or classes that need to be implemented, using the definitions below: <<NEW DEFINITIONS>> There are several new functions or classes that need to be implemented, using the definitions below: <definitions> [start of new definitions in aws_lambda_powertools/event_handler/appsync.py] (definition of AppSyncResolver.exception_handler:) def exception_handler(self, exc_class: type[Exception] | list[type[Exception]]): """A decorator function that registers a handler for one or more exception types. Parameters ---------- exc_class (type[Exception] | list[type[Exception]]) A single exception type or a list of exception types. Returns ------- Callable: A decorator function that registers the exception handler.""" (definition of AppSyncResolver.exception_handler.register_exception_handler:) def register_exception_handler(func: Callable): (definition of AppSyncResolver._lookup_exception_handler:) def _lookup_exception_handler(self, exp_type: type) -> Callable | None: """Looks up the registered exception handler for the given exception type or its base classes. 
Parameters ---------- exp_type (type): The exception type to look up the handler for. Returns ------- Callable | None: The registered exception handler function if found, otherwise None.""" [end of new definitions in aws_lambda_powertools/event_handler/appsync.py] [start of new definitions in examples/event_handler_graphql/src/exception_handling_graphql.py] (definition of handle_value_error:) def handle_value_error(ex: ValueError): (definition of create_something:) def create_something(): (definition of lambda_handler:) def lambda_handler(event, context): [end of new definitions in examples/event_handler_graphql/src/exception_handling_graphql.py] </definitions> Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly. <<END>>
d1a58cdd12dfcac1e9ce022fe8a29f69ea6007b4
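The `exception_handler` / `_lookup_exception_handler` definitions in the problem statement above describe a registry keyed by exception type, with lookup falling back through the exception's base classes. A minimal standalone sketch of that pattern (simplified names and a toy `resolve`; not the actual aws-lambda-powertools implementation):

```python
from typing import Callable, Optional


class Resolver:
    """Toy resolver with an exception-handler registry."""

    def __init__(self) -> None:
        # Maps exception type -> handler callable.
        self._exception_handlers = {}

    def exception_handler(self, exc_class):
        # Accept a single exception type or a list of types, as in the
        # definitions above.
        classes = exc_class if isinstance(exc_class, list) else [exc_class]

        def register_exception_handler(func: Callable) -> Callable:
            for exc in classes:
                self._exception_handlers[exc] = func
            return func

        return register_exception_handler

    def _lookup_exception_handler(self, exp_type: type) -> Optional[Callable]:
        # Walk the MRO so a handler registered for a base class also
        # catches subclasses of that exception.
        for cls in exp_type.__mro__:
            if cls in self._exception_handlers:
                return self._exception_handlers[cls]
        return None

    def resolve(self, func: Callable):
        try:
            return func()
        except Exception as exc:
            handler = self._lookup_exception_handler(type(exc))
            if handler is None:
                raise
            return handler(exc)


app = Resolver()


@app.exception_handler(ValueError)
def handle_value_error(ex: ValueError):
    return {"message": "error"}


def create_something():
    raise ValueError("Error")


print(app.resolve(create_something))  # -> {'message': 'error'}
```

Because the lookup walks the exception's MRO, registering a handler for `ValueError` also covers its subclasses, while unregistered exception types propagate unchanged.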
fairlearn__fairlearn-1436
1436
fairlearn/fairlearn
null
403da1fec74bdf2da28dc49487ccd72caa6f6976
2024-11-10T17:43:48Z
diff --git a/fairlearn/metrics/_disaggregated_result.py b/fairlearn/metrics/_disaggregated_result.py index cab28bf04..b78f0ca51 100644 --- a/fairlearn/metrics/_disaggregated_result.py +++ b/fairlearn/metrics/_disaggregated_result.py @@ -2,6 +2,8 @@ # Licensed under the MIT License. from __future__ import annotations +from __future__ import annotations + import logging from typing import Literal @@ -27,14 +29,6 @@ ) -def extract_unique_classes(data: pd.DataFrame, feature_list: list[str]) -> dict[str, np.ndarray]: - """Compute unique values in a given set of columns.""" - result = dict() - for feature in feature_list: - result[feature] = np.unique(data[feature]) - return result - - def apply_to_dataframe( data: pd.DataFrame, metric_functions: dict[str, AnnotatedMetricFunction], @@ -134,48 +128,31 @@ def apply_grouping( if not control_feature_names: if errors == "raise": try: - mf = self.by_group - if grouping_function == "min": - vals = [mf[m].min() for m in mf.columns] - else: - vals = [mf[m].max() for m in mf.columns] - - result = pd.Series(vals, index=self.by_group.columns) + result = self.by_group.agg(grouping_function, axis=0) except ValueError as ve: raise ValueError(_MF_CONTAINS_NON_SCALAR_ERROR_MESSAGE) from ve + elif errors == "coerce": - if not control_feature_names: - mf = self.by_group - # Fill in the possible min/max values, else np.nan - if grouping_function == "min": - vals = [ - mf[m].min() if np.isscalar(mf[m].values[0]) else np.nan - for m in mf.columns - ] - else: - vals = [ - mf[m].max() if np.isscalar(mf[m].values[0]) else np.nan - for m in mf.columns - ] - - result = pd.Series(vals, index=mf.columns) + # Fill in the possible min/max values, else np.nan + mf = self.by_group.apply( + lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan) + ) + result = mf.agg(grouping_function, axis=0) else: if errors == "raise": try: - if grouping_function == "min": - result = self.by_group.groupby(level=control_feature_names).min() - else: - result = 
self.by_group.groupby(level=control_feature_names).max() + result = self.by_group.groupby(level=control_feature_names).agg( + grouping_function + ) + except ValueError as ve: raise ValueError(_MF_CONTAINS_NON_SCALAR_ERROR_MESSAGE) from ve elif errors == "coerce": # Fill all impossible columns with NaN before grouping metric frame - mf = self.by_group.copy() - mf = mf.apply(lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan)) - if grouping_function == "min": - result = mf.groupby(level=control_feature_names).min() - else: - result = mf.groupby(level=control_feature_names).max() + mf = self.by_group.apply( + lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan) + ) + result = mf.groupby(level=control_feature_names).agg(grouping_function) assert isinstance(result, pd.Series) or isinstance(result, pd.DataFrame) @@ -227,10 +204,9 @@ def difference( else: raise ValueError("Unrecognised method '{0}' in difference() call".format(method)) - mf = self.by_group.copy() # Can assume errors='coerce', else error would already have been raised in .group_min # Fill all non-scalar values with NaN - mf = mf.apply(lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan)) + mf = self.by_group.apply(lambda x: x.apply(lambda y: y if np.isscalar(y) else np.nan)) if control_feature_names is None: result = (mf - subtrahend).abs().max() @@ -289,7 +265,6 @@ def ratio_sub_one(x): if errors not in _VALID_ERROR_STRING: raise ValueError(_INVALID_ERRORS_VALUE_ERROR_MESSAGE) - result = None if method == "between_groups": result = self.apply_grouping( "min", control_feature_names, errors=errors @@ -357,49 +332,61 @@ def create( DisaggregatedResult Freshly constructed instance of this class """ - # Calculate the 'overall' values - if control_feature_names is None: - overall = apply_to_dataframe(data, metric_functions=annotated_functions) - else: - temp = data.groupby(by=control_feature_names).apply( - apply_to_dataframe, - metric_functions=annotated_functions, - # See note in 
apply_to_dataframe about include_groups - include_groups=False, - ) - # If there are multiple control features, might have missing combinations - if len(control_feature_names) > 1: - cf_classes = extract_unique_classes(data, control_feature_names) - all_indices = pd.MultiIndex.from_product( - cf_classes.values(), names=cf_classes.keys() - ) + overall = DisaggregatedResult._apply_functions( + data=data, + annotated_functions=annotated_functions, + grouping_names=control_feature_names, + ) - overall = temp.reindex(index=all_indices) - else: - overall = temp + by_group = DisaggregatedResult._apply_functions( + data=data, + annotated_functions=annotated_functions, + grouping_names=(control_feature_names or []) + sensitive_feature_names, + ) + + return DisaggregatedResult(overall, by_group) + + @staticmethod + def _apply_functions( + *, + data: pd.DataFrame, + annotated_functions: dict[str, AnnotatedMetricFunction], + grouping_names: list[str] | None, + ) -> pd.Series | pd.DataFrame: + """ + Apply annotated metric functions to a DataFrame, optionally grouping by specified columns. + + Parameters + ---------- + data : pd.DataFrame + The input data on which the metric functions will be applied. + annotated_functions : dict[str, AnnotatedMetricFunction] + A dictionary where keys are metric names and values are the corresponding annotated metric + functions. + grouping_names : list[str] | None + A list of column names to group by before applying the metric functions. If None, the + functions are applied to the entire DataFrame. - # Calculate the 'by_group' values - all_grouping_names = [x for x in sensitive_feature_names] - if control_feature_names is not None: - # Note that we prepend the control feature names - all_grouping_names = control_feature_names + all_grouping_names + Returns + ------- + Series or DataFrame + A Series or DataFrame with the results of the metric functions applied. If grouping_names is provided, + the results are grouped accordingly. 
+ """ + if grouping_names is None or len(grouping_names) == 0: + return apply_to_dataframe(data, metric_functions=annotated_functions) - temp = data.groupby(all_grouping_names).apply( + temp = data.groupby(grouping_names).apply( apply_to_dataframe, metric_functions=annotated_functions, - # See note in apply_to_dataframe about include_groups include_groups=False, ) - if len(all_grouping_names) > 1: - # We might have missing combinations in the input, so expand to fill - all_classes = extract_unique_classes(data, all_grouping_names) + + if len(grouping_names) > 1: all_indices = pd.MultiIndex.from_product( - all_classes.values(), - names=all_classes.keys(), + [np.unique(data[col]) for col in grouping_names], names=grouping_names ) - by_group = temp.reindex(index=all_indices) - else: - by_group = temp + return temp.reindex(index=all_indices) - return DisaggregatedResult(overall, by_group) + return temp
diff --git a/test/unit/metrics/test_disaggregated_result.py b/test/unit/metrics/test_disaggregated_result.py index 7c1486f72..1969f95cb 100644 --- a/test/unit/metrics/test_disaggregated_result.py +++ b/test/unit/metrics/test_disaggregated_result.py @@ -1,11 +1,13 @@ # Copyright (c) Microsoft Corporation and Fairlearn contributors. # Licensed under the MIT License. + import pandas as pd import pytest import sklearn.metrics as skm from fairlearn.metrics._annotated_metric_function import AnnotatedMetricFunction +from fairlearn.metrics._base_metrics import selection_rate from fairlearn.metrics._disaggregated_result import DisaggregatedResult from .data_for_test import g_1, y_p, y_t @@ -89,3 +91,77 @@ def test_bad_ratio_errors(self): assert ( str(e0.value) == "Invalid error value specified. Valid values are ['raise', 'coerce']" ) + + +@pytest.mark.parametrize( + ["grouping_names", "expected"], + [(None, pd.Series({"selection_rate": 0.5})), ([], pd.Series({"selection_rate": 0.5}))], +) +def test_apply_functions_with_no_grouping(grouping_names, expected): + data = pd.DataFrame( + { + "y_pred": [1, 0, 1, 0, 0, 1], + "y_true": [1, 1, 0, 1, 0, 0], + "sensitive_feature": ["A", "A", "A", "B", "B", "B"], + } + ) + + annotated_functions = { + "selection_rate": AnnotatedMetricFunction(func=selection_rate, name="selection_rate") + } + + result = DisaggregatedResult._apply_functions( + data=data, annotated_functions=annotated_functions, grouping_names=grouping_names + ) + + pd.testing.assert_series_equal(result, expected) + + +@pytest.mark.parametrize( + ["grouping_names", "expected"], + [ + ( + ["sensitive_feature"], + pd.DataFrame( + {"selection_rate": [2 / 3, 1 / 3]}, + index=pd.Index(["A", "B"], name="sensitive_feature"), + ), + ), + ( + ["control_feature_1"], + pd.DataFrame( + {"selection_rate": [1 / 3, 2 / 3]}, + index=pd.Index(["X", "Y"], name="control_feature_1"), + ), + ), + ( + ["control_feature_2", "sensitive_feature"], + pd.DataFrame( + {"selection_rate": [1.0, None, 
0.5, 1 / 3]}, + index=pd.MultiIndex.from_product( + [("W", "Z"), ("A", "B")], names=["control_feature_2", "sensitive_feature"] + ), + ), + ), + ], +) +def test_apply_functions_with_grouping(grouping_names, expected): + data = pd.DataFrame( + { + "y_pred": [1, 0, 1, 0, 0, 1], + "y_true": [1, 1, 0, 1, 0, 0], + "sensitive_feature": ["A", "A", "A", "B", "B", "B"], + "control_feature_1": ["X", "X", "Y", "Y", "X", "Y"], + "control_feature_2": ["Z", "Z", "W", "Z", "Z", "Z"], + } + ) + + annotated_functions = { + "selection_rate": AnnotatedMetricFunction(func=selection_rate, name="selection_rate") + } + + result = DisaggregatedResult._apply_functions( + data=data, annotated_functions=annotated_functions, grouping_names=grouping_names + ) + + pd.testing.assert_frame_equal(result, expected)
[ { "components": [ { "doc": "Apply annotated metric functions to a DataFrame, optionally grouping by specified columns.\n\nParameters\n----------\ndata : pd.DataFrame\n The input data on which the metric functions will be applied.\nannotated_functions : dict[str, AnnotatedMetricFunction]\n A dictionary where keys are metric names and values are the corresponding annotated metric\n functions.\ngrouping_names : list[str] | None\n A list of column names to group by before applying the metric functions. If None, the\n functions are applied to the entire DataFrame.\n\nReturns\n-------\nSeries or DataFrame\n A Series or DataFrame with the results of the metric functions applied. If grouping_names is provided,\n the results are grouped accordingly.", "lines": [ 350, 392 ], "name": "DisaggregatedResult._apply_functions", "signature": "def _apply_functions( *, data: pd.DataFrame, annotated_functions: dict[str, AnnotatedMetricFunction], grouping_names: list[str] | None, ) -> pd.Series | pd.DataFrame:", "type": "function" } ], "file": "fairlearn/metrics/_disaggregated_result.py" } ]
[ "test/unit/metrics/test_disaggregated_result.py::test_apply_functions_with_no_grouping[None-expected0]", "test/unit/metrics/test_disaggregated_result.py::test_apply_functions_with_no_grouping[grouping_names1-expected1]", "test/unit/metrics/test_disaggregated_result.py::test_apply_functions_with_grouping[grouping_names0-expected0]", "test/unit/metrics/test_disaggregated_result.py::test_apply_functions_with_grouping[grouping_names1-expected1]", "test/unit/metrics/test_disaggregated_result.py::test_apply_functions_with_grouping[grouping_names2-expected2]" ]
[ "test/unit/metrics/test_disaggregated_result.py::TestErrorMessages::test_bad_grouping", "test/unit/metrics/test_disaggregated_result.py::TestErrorMessages::test_bad_difference_method", "test/unit/metrics/test_disaggregated_result.py::TestErrorMessages::test_bad_difference_errors", "test/unit/metrics/test_disaggregated_result.py::TestErrorMessages::test_bad_ratio_method", "test/unit/metrics/test_disaggregated_result.py::TestErrorMessages::test_bad_ratio_errors" ]
This is a feature request which requires a new feature to add in the code repository. <<NEW FEATURE REQUEST>> <request> refactor: simplify disaggregated result ## Description - Simplified code complexity by using pandas built-in methods. - Refactored the common logic in initializing the `overall` and `by_group` inside `.create`. - Added unit tests to cover the latter. There are some linting changes to the doc that could be merged as part of this [other PR](https://github.com/fairlearn/fairlearn/pull/1434) ## Tests <!--- Select all that apply by putting an x between the brackets: [x] --> - [ ] no new tests required - [x] new tests added - [ ] existing tests adjusted ## Documentation <!--- Select all that apply. --> - [ ] no documentation changes needed - [x] user guide added or updated - [x] API docs added or updated - [ ] example notebook added or updated ---------- </request> There are several new functions or classes that need to be implemented, using the definitions below: <<NEW DEFINITIONS>> There are several new functions or classes that need to be implemented, using the definitions below: <definitions> [start of new definitions in fairlearn/metrics/_disaggregated_result.py] (definition of DisaggregatedResult._apply_functions:) def _apply_functions( *, data: pd.DataFrame, annotated_functions: dict[str, AnnotatedMetricFunction], grouping_names: list[str] | None, ) -> pd.Series | pd.DataFrame: """Apply annotated metric functions to a DataFrame, optionally grouping by specified columns. Parameters ---------- data : pd.DataFrame The input data on which the metric functions will be applied. annotated_functions : dict[str, AnnotatedMetricFunction] A dictionary where keys are metric names and values are the corresponding annotated metric functions. grouping_names : list[str] | None A list of column names to group by before applying the metric functions. If None, the functions are applied to the entire DataFrame. 
Returns ------- Series or DataFrame A Series or DataFrame with the results of the metric functions applied. If grouping_names is provided, the results are grouped accordingly.""" [end of new definitions in fairlearn/metrics/_disaggregated_result.py] </definitions> Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly. <<END>>
403da1fec74bdf2da28dc49487ccd72caa6f6976
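The `_apply_functions` definition above applies metric functions either to the whole dataset or per group, and for multi-column groupings reindexes to the full product of observed values so missing combinations appear as NaN. A plain-Python sketch of that grouping logic (a hypothetical helper over lists of dicts, not the pandas-based fairlearn code; missing combinations are filled with `None` instead of NaN):

```python
from itertools import product


def apply_functions(rows, funcs, grouping_names=None):
    """Apply each metric function to all rows, or per group of rows."""
    if not grouping_names:
        return {name: fn(rows) for name, fn in funcs.items()}

    # Bucket rows by their grouping-column values.
    groups = {}
    for row in rows:
        key = tuple(row[col] for col in grouping_names)
        groups.setdefault(key, []).append(row)

    # Expand to the full cartesian product of observed values per column,
    # so combinations absent from the data still show up -- the pandas
    # version does this with MultiIndex.from_product plus reindex.
    levels = [sorted({row[col] for row in rows}) for col in grouping_names]
    result = {}
    for key in product(*levels):
        subset = groups.get(key)
        if subset is None:
            result[key] = {name: None for name in funcs}
        else:
            result[key] = {name: fn(subset) for name, fn in funcs.items()}
    return result


# Same shape of data as the unit test above: column names shortened.
data = [
    {"y_pred": 1, "sf": "A", "cf": "Z"},
    {"y_pred": 0, "sf": "A", "cf": "Z"},
    {"y_pred": 1, "sf": "A", "cf": "W"},
    {"y_pred": 0, "sf": "B", "cf": "Z"},
    {"y_pred": 0, "sf": "B", "cf": "Z"},
    {"y_pred": 1, "sf": "B", "cf": "Z"},
]


def selection_rate(rows):
    return sum(r["y_pred"] for r in rows) / len(rows)


out = apply_functions(data, {"selection_rate": selection_rate}, ["cf", "sf"])
# ("W", "B") never occurs in the data, so its value is filled with None.
```

With this toy data the per-group rates come out as 1.0, None, 0.5 and 1/3 — the same values the parametrized `test_apply_functions_with_grouping` case above expects for the `("control_feature_2", "sensitive_feature")` grouping.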
rytilahti__python-miio-1984
1984
rytilahti/python-miio
null
62427d2f796e603520acca3b57b29ec3e6489bca
2024-11-09T21:09:56Z
diff --git a/miio/miot_models.py b/miio/miot_models.py index 1269946d5..6f4abfe59 100644 --- a/miio/miot_models.py +++ b/miio/miot_models.py @@ -1,4 +1,5 @@ import logging +from abc import abstractmethod from datetime import timedelta from enum import Enum from typing import Any, Optional @@ -150,6 +151,11 @@ def normalized_name(self) -> str: """ return self.name.replace(":", "_").replace("-", "_") + @property + @abstractmethod + def unique_identifier(self) -> str: + """Return unique identifier.""" + class MiotAction(MiotBaseModel): """Action presentation for miot.""" @@ -176,8 +182,6 @@ def fill_from_parent(self, service: "MiotService"): def get_descriptor(self): """Create a descriptor based on the property information.""" - id_ = self.name - extras = self.extras extras["urn"] = self.urn extras["siid"] = self.siid @@ -190,12 +194,17 @@ def get_descriptor(self): inputs = [prop.get_descriptor() for prop in self.inputs] return ActionDescriptor( - id=id_, + id=self.unique_identifier, name=self.description, inputs=inputs, extras=extras, ) + @property + def unique_identifier(self) -> str: + """Return unique identifier.""" + return f"{self.normalized_name}_{self.siid}_{self.aiid}" + class Config: extra = "forbid" @@ -327,7 +336,7 @@ def _create_enum_descriptor(self) -> EnumDescriptor: raise desc = EnumDescriptor( - id=self.name, + id=self.unique_identifier, name=self.description, status_attribute=self.normalized_name, unit=self.unit, @@ -346,7 +355,7 @@ def _create_range_descriptor( if self.range is None: raise ValueError("Range is None") desc = RangeDescriptor( - id=self.name, + id=self.unique_identifier, name=self.description, status_attribute=self.normalized_name, min_value=self.range[0], @@ -363,7 +372,7 @@ def _create_range_descriptor( def _create_regular_descriptor(self) -> PropertyDescriptor: """Create boolean setting descriptor.""" return PropertyDescriptor( - id=self.name, + id=self.unique_identifier, name=self.description, status_attribute=self.normalized_name, 
type=self.format, @@ -371,6 +380,11 @@ def _create_regular_descriptor(self) -> PropertyDescriptor: access=self._miot_access_list_to_access(self.access), ) + @property + def unique_identifier(self) -> str: + """Return unique identifier.""" + return f"{self.normalized_name}_{self.siid}_{self.piid}" + class Config: extra = "forbid" @@ -381,6 +395,11 @@ class MiotEvent(MiotBaseModel): eiid: int = Field(alias="iid") arguments: Any + @property + def unique_identifier(self) -> str: + """Return unique identifier.""" + return f"{self.normalized_name}_{self.siid}_{self.eiid}" + class Config: extra = "forbid"
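The patch above stacks `@property` on top of `@abstractmethod` in `MiotBaseModel`. As a generic Python sketch of that decorator stacking (using `abc.ABC` here for illustration; `MiotBaseModel` itself is a pydantic model, and the class names below are hypothetical, not from python-miio):

```python
from abc import ABC, abstractmethod


class Base(ABC):
    # @property must be the outermost decorator; reversing the order
    # would mark the property object itself, not the getter, as abstract.
    @property
    @abstractmethod
    def unique_identifier(self) -> str:
        """Return unique identifier."""


class Concrete(Base):
    @property
    def unique_identifier(self) -> str:
        # Subclasses supply the concrete '<name>_<siid>_<id>' value.
        return "name_1_2"


print(Concrete().unique_identifier)  # name_1_2
```

Instantiating `Base` directly raises `TypeError` because the abstract property was never overridden.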
diff --git a/miio/tests/test_miot_models.py b/miio/tests/test_miot_models.py index 32afb76aa..046ad2a08 100644 --- a/miio/tests/test_miot_models.py +++ b/miio/tests/test_miot_models.py @@ -21,6 +21,7 @@ URN, MiotAccess, MiotAction, + MiotBaseModel, MiotEnumValue, MiotEvent, MiotFormat, @@ -349,3 +350,18 @@ def test_get_descriptor_enum_property(read_only, expected): def test_property_pretty_value(): """Test the pretty value conversions.""" raise NotImplementedError() + + +@pytest.mark.parametrize( + ("collection", "id_var"), + [("actions", "aiid"), ("properties", "piid"), ("events", "eiid")], +) +def test_unique_identifier(collection, id_var): + """Test unique identifier for properties, actions, and events.""" + serv = MiotService.parse_raw(DUMMY_SERVICE) + elem: MiotBaseModel = getattr(serv, collection) + first = elem[0] + assert ( + first.unique_identifier + == f"{first.normalized_name}_{serv.siid}_{getattr(first, id_var)}" + )
[ { "components": [ { "doc": "Return unique identifier.", "lines": [ 156, 157 ], "name": "MiotBaseModel.unique_identifier", "signature": "def unique_identifier(self) -> str:", "type": "function" }, { "doc": "Return unique identifier.", "lines": [ 204, 206 ], "name": "MiotAction.unique_identifier", "signature": "def unique_identifier(self) -> str:", "type": "function" }, { "doc": "Return unique identifier.", "lines": [ 384, 386 ], "name": "MiotProperty.unique_identifier", "signature": "def unique_identifier(self) -> str:", "type": "function" }, { "doc": "Return unique identifier.", "lines": [ 399, 401 ], "name": "MiotEvent.unique_identifier", "signature": "def unique_identifier(self) -> str:", "type": "function" } ], "file": "miio/miot_models.py" } ]
[ "miio/tests/test_miot_models.py::test_unique_identifier[actions-aiid]", "miio/tests/test_miot_models.py::test_unique_identifier[properties-piid]", "miio/tests/test_miot_models.py::test_unique_identifier[events-eiid]" ]
[ "miio/tests/test_miot_models.py::test_enum", "miio/tests/test_miot_models.py::test_enum_missing_description", "miio/tests/test_miot_models.py::test_format[bool-bool]", "miio/tests/test_miot_models.py::test_format[string-str]", "miio/tests/test_miot_models.py::test_format[float-float]", "miio/tests/test_miot_models.py::test_format[uint8-int]", "miio/tests/test_miot_models.py::test_format[uint16-int]", "miio/tests/test_miot_models.py::test_format[uint32-int]", "miio/tests/test_miot_models.py::test_format[int8-int]", "miio/tests/test_miot_models.py::test_format[int16-int]", "miio/tests/test_miot_models.py::test_format[int32-int]", "miio/tests/test_miot_models.py::test_action", "miio/tests/test_miot_models.py::test_action_with_nulls", "miio/tests/test_miot_models.py::test_urn[regular_urn]", "miio/tests/test_miot_models.py::test_urn[unexpected_component]", "miio/tests/test_miot_models.py::test_urn[multiple_unexpected_components]", "miio/tests/test_miot_models.py::test_service", "miio/tests/test_miot_models.py::test_service_back_references[actions]", "miio/tests/test_miot_models.py::test_service_back_references[properties]", "miio/tests/test_miot_models.py::test_service_back_references[events]", "miio/tests/test_miot_models.py::test_entity_names[actions]", "miio/tests/test_miot_models.py::test_entity_names[properties]", "miio/tests/test_miot_models.py::test_entity_names[events]", "miio/tests/test_miot_models.py::test_event", "miio/tests/test_miot_models.py::test_property", "miio/tests/test_miot_models.py::test_get_descriptor_bool_property[True-r--]", "miio/tests/test_miot_models.py::test_get_descriptor_bool_property[False-rw-]", "miio/tests/test_miot_models.py::test_get_descriptor_ranged_property[True-PropertyDescriptor]", "miio/tests/test_miot_models.py::test_get_descriptor_ranged_property[False-RangeDescriptor]", "miio/tests/test_miot_models.py::test_get_descriptor_enum_property[True-PropertyDescriptor]", 
"miio/tests/test_miot_models.py::test_get_descriptor_enum_property[False-EnumDescriptor]" ]
This is a feature request which requires a new feature to be added to the code repository.

<<NEW FEATURE REQUEST>>
<request>
Add unique_identifier property to miot properties, actions, and events

This allows descriptors to have device-unique identifiers; the format is '<normalized_name>_<siid>_<id>'. This also changes the 'id' of the descriptors to use this identifier in place of a plain name from the description.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in miio/miot_models.py]
(definition of MiotBaseModel.unique_identifier:)
def unique_identifier(self) -> str:
    """Return unique identifier."""
(definition of MiotAction.unique_identifier:)
def unique_identifier(self) -> str:
    """Return unique identifier."""
(definition of MiotProperty.unique_identifier:)
def unique_identifier(self) -> str:
    """Return unique identifier."""
(definition of MiotEvent.unique_identifier:)
def unique_identifier(self) -> str:
    """Return unique identifier."""
[end of new definitions in miio/miot_models.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>>
62427d2f796e603520acca3b57b29ec3e6489bca
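The identifier scheme requested in the record above can be sketched standalone. The helper functions here are illustrative only — in the actual patch they are properties on the pydantic models, and `iid` stands in for `aiid`/`piid`/`eiid` depending on the element type:

```python
def normalized_name(name: str) -> str:
    # Mirrors MiotBaseModel.normalized_name from the patch:
    # ':' and '-' in the miot-spec name become '_'.
    return name.replace(":", "_").replace("-", "_")


def unique_identifier(name: str, siid: int, iid: int) -> str:
    # '<normalized_name>_<siid>_<id>' — the format stated in the request.
    return f"{normalized_name(name)}_{siid}_{iid}"


print(unique_identifier("battery-level", 3, 1))  # battery_level_3_1
```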
tobymao__sqlglot-4249
4,249
tobymao/sqlglot
null
fcc05c9daa31c7a51474ec9c72ceafd682359f90
2024-10-15T19:16:53Z
diff --git a/sqlglot/dialects/oracle.py b/sqlglot/dialects/oracle.py index 81c2a4a5c3..0845258f36 100644 --- a/sqlglot/dialects/oracle.py +++ b/sqlglot/dialects/oracle.py @@ -15,6 +15,7 @@ from sqlglot.helper import seq_get from sqlglot.parser import OPTIONS_TYPE, build_coalesce from sqlglot.tokens import TokenType +from sqlglot.errors import ParseError if t.TYPE_CHECKING: from sqlglot._typing import E @@ -205,6 +206,57 @@ def _parse_json_array(self, expr_type: t.Type[E], **kwargs) -> E: ) def _parse_hint(self) -> t.Optional[exp.Hint]: + start_index = self._index + should_fallback_to_string = False + + if not self._match(TokenType.HINT): + return None + + hints = [] + + try: + for hint in iter( + lambda: self._parse_csv( + lambda: self._parse_hint_function_call() or self._parse_var(upper=True), + ), + [], + ): + hints.extend(hint) + except ParseError: + should_fallback_to_string = True + + if not self._match_pair(TokenType.STAR, TokenType.SLASH): + should_fallback_to_string = True + + if should_fallback_to_string: + self._retreat(start_index) + return self._parse_hint_fallback_to_string() + + return self.expression(exp.Hint, expressions=hints) + + def _parse_hint_function_call(self) -> t.Optional[exp.Expression]: + if not self._curr or not self._next or self._next.token_type != TokenType.L_PAREN: + return None + + this = self._curr.text + + self._advance(2) + args = self._parse_hint_args() + this = self.expression(exp.Anonymous, this=this, expressions=args) + self._match_r_paren(this) + return this + + def _parse_hint_args(self): + args = [] + result = self._parse_var() + + while result: + args.append(result) + result = self._parse_var() + + return args + + def _parse_hint_fallback_to_string(self) -> t.Optional[exp.Hint]: if self._match(TokenType.HINT): start = self._curr while self._curr and not self._match_pair(TokenType.STAR, TokenType.SLASH): @@ -271,6 +323,7 @@ class Generator(generator.Generator): LAST_DAY_SUPPORTS_DATE_PART = False SUPPORTS_SELECT_INTO = 
True TZ_TO_WITH_TIME_ZONE = True + QUERY_HINT_SEP = " " TYPE_MAPPING = { **generator.Generator.TYPE_MAPPING, @@ -370,3 +423,23 @@ def into_sql(self, expression: exp.Into) -> str: return f"{self.seg(into)} {self.sql(expression, 'this')}" return f"{self.seg(into)} {self.expressions(expression)}" + + def hint_sql(self, expression: exp.Hint) -> str: + expressions = [] + + for expression in expression.expressions: + if isinstance(expression, exp.Anonymous): + formatted_args = self._format_hint_function_args(*expression.expressions) + expressions.append(f"{self.sql(expression, 'this')}({formatted_args})") + else: + expressions.append(self.sql(expression)) + + return f" /*+ {self.expressions(sqls=expressions, sep=self.QUERY_HINT_SEP).strip()} */" + + def _format_hint_function_args(self, *args: t.Optional[str | exp.Expression]) -> str: + arg_sqls = tuple(self.sql(arg) for arg in args) + if self.pretty and self.too_wide(arg_sqls): + return self.indent( + "\n" + "\n".join(arg_sqls) + "\n", skip_first=True, skip_last=True + ) + return " ".join(arg_sqls)
diff --git a/tests/dialects/test_oracle.py b/tests/dialects/test_oracle.py index d2bbedcde5..36ce5d02e6 100644 --- a/tests/dialects/test_oracle.py +++ b/tests/dialects/test_oracle.py @@ -329,6 +329,57 @@ def test_hints(self): ) self.validate_identity("INSERT /*+ APPEND */ INTO IAP_TBL (id, col1) VALUES (2, 'test2')") self.validate_identity("INSERT /*+ APPEND_VALUES */ INTO dest_table VALUES (i, 'Value')") + self.validate_identity( + "SELECT /*+ LEADING(departments employees) USE_NL(employees) */ * FROM employees JOIN departments ON employees.department_id = departments.department_id", + """SELECT /*+ LEADING(departments employees) + USE_NL(employees) */ + * +FROM employees +JOIN departments + ON employees.department_id = departments.department_id""", + pretty=True, + ) + self.validate_identity( + "SELECT /*+ USE_NL(bbbbbbbbbbbbbbbbbbbbbbbb) LEADING(aaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbb cccccccccccccccccccccccc dddddddddddddddddddddddd) INDEX(cccccccccccccccccccccccc) */ * FROM aaaaaaaaaaaaaaaaaaaaaaaa JOIN bbbbbbbbbbbbbbbbbbbbbbbb ON aaaaaaaaaaaaaaaaaaaaaaaa.id = bbbbbbbbbbbbbbbbbbbbbbbb.a_id JOIN cccccccccccccccccccccccc ON bbbbbbbbbbbbbbbbbbbbbbbb.id = cccccccccccccccccccccccc.b_id JOIN dddddddddddddddddddddddd ON cccccccccccccccccccccccc.id = dddddddddddddddddddddddd.c_id", + ) + self.validate_identity( + "SELECT /*+ USE_NL(bbbbbbbbbbbbbbbbbbbbbbbb) LEADING(aaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbb cccccccccccccccccccccccc dddddddddddddddddddddddd) INDEX(cccccccccccccccccccccccc) */ * FROM aaaaaaaaaaaaaaaaaaaaaaaa JOIN bbbbbbbbbbbbbbbbbbbbbbbb ON aaaaaaaaaaaaaaaaaaaaaaaa.id = bbbbbbbbbbbbbbbbbbbbbbbb.a_id JOIN cccccccccccccccccccccccc ON bbbbbbbbbbbbbbbbbbbbbbbb.id = cccccccccccccccccccccccc.b_id JOIN dddddddddddddddddddddddd ON cccccccccccccccccccccccc.id = dddddddddddddddddddddddd.c_id", + """SELECT /*+ USE_NL(bbbbbbbbbbbbbbbbbbbbbbbb) + LEADING( + aaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbb + cccccccccccccccccccccccc + 
dddddddddddddddddddddddd + ) + INDEX(cccccccccccccccccccccccc) */ + * +FROM aaaaaaaaaaaaaaaaaaaaaaaa +JOIN bbbbbbbbbbbbbbbbbbbbbbbb + ON aaaaaaaaaaaaaaaaaaaaaaaa.id = bbbbbbbbbbbbbbbbbbbbbbbb.a_id +JOIN cccccccccccccccccccccccc + ON bbbbbbbbbbbbbbbbbbbbbbbb.id = cccccccccccccccccccccccc.b_id +JOIN dddddddddddddddddddddddd + ON cccccccccccccccccccccccc.id = dddddddddddddddddddddddd.c_id""", + pretty=True, + ) + # Test that parsing error with keywords like select where etc falls back + self.validate_identity( + "SELECT /*+ LEADING(departments employees) USE_NL(employees) select where group by is order by */ * FROM employees JOIN departments ON employees.department_id = departments.department_id", + """SELECT /*+ LEADING(departments employees) USE_NL(employees) select where group by is order by */ + * +FROM employees +JOIN departments + ON employees.department_id = departments.department_id""", + pretty=True, + ) + # Test that parsing error with , inside hint function falls back + self.validate_identity( + "SELECT /*+ LEADING(departments, employees) */ * FROM employees JOIN departments ON employees.department_id = departments.department_id" + ) + # Test that parsing error with keyword inside hint function falls back + self.validate_identity( + "SELECT /*+ LEADING(departments select) */ * FROM employees JOIN departments ON employees.department_id = departments.department_id" + ) def test_xml_table(self): self.validate_identity("XMLTABLE('x')")
[ { "components": [ { "doc": "", "lines": [ 237, 247 ], "name": "Oracle.Parser._parse_hint_function_call", "signature": "def _parse_hint_function_call(self) -> t.Optional[exp.Expression]:", "type": "function" }, { "doc": "", "lines": [ 249, 257 ], "name": "Oracle.Parser._parse_hint_args", "signature": "def _parse_hint_args(self):", "type": "function" }, { "doc": "", "lines": [ 259, 271 ], "name": "Oracle.Parser._parse_hint_fallback_to_string", "signature": "def _parse_hint_fallback_to_string(self) -> t.Optional[exp.Hint]:", "type": "function" }, { "doc": "", "lines": [ 427, 437 ], "name": "Oracle.Generator.hint_sql", "signature": "def hint_sql(self, expression: exp.Hint) -> str:", "type": "function" }, { "doc": "", "lines": [ 439, 445 ], "name": "Oracle.Generator._format_hint_function_args", "signature": "def _format_hint_function_args(self, *args: t.Optional[str | exp.Expression]) -> str:", "type": "function" } ], "file": "sqlglot/dialects/oracle.py" } ]
[ "tests/dialects/test_oracle.py::TestOracle::test_hints" ]
[ "tests/dialects/test_oracle.py::TestOracle::test_connect_by", "tests/dialects/test_oracle.py::TestOracle::test_grant", "tests/dialects/test_oracle.py::TestOracle::test_join_marker", "tests/dialects/test_oracle.py::TestOracle::test_json_functions", "tests/dialects/test_oracle.py::TestOracle::test_json_table", "tests/dialects/test_oracle.py::TestOracle::test_match_recognize", "tests/dialects/test_oracle.py::TestOracle::test_multitable_inserts", "tests/dialects/test_oracle.py::TestOracle::test_oracle", "tests/dialects/test_oracle.py::TestOracle::test_query_restrictions", "tests/dialects/test_oracle.py::TestOracle::test_xml_table" ]
This is a feature request which requires a new feature to be added to the code repository.

<<NEW FEATURE REQUEST>>
<request>
Feature/oracle hints

Hi @georgesittas

Please review this PR we discussed over on [Slack](https://tobiko-data.slack.com/archives/C0448SFS3PF/p1728489927717069), which adds better support for Oracle Hints.

----

The current sqlglot implementation parses Oracle hints as one big string, instead of breaking them up into Anonymous and Var expressions. This PR attempts to parse Oracle hints into Anonymous and Var expressions. In the event the user has some kind of spelling error, or uses keywords like `select`, this falls back to the original implementation instead of throwing an error or parsing into Anonymous/Var.

Oracle hints differ from other dialects in that arguments to hint functions are separated by spaces, not commas. Oracle does not, as far as I am aware from experience and the documentation, support nested hint functions. In the event that nested hints are supported or will be supported in the future, this implementation will simply fall back and parse the hint as a string.

This PR also handles the pretty printing in a way that is identical with how sqlglot does it for mysql. Please see the tests for a few examples.
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sqlglot/dialects/oracle.py]
(definition of Oracle.Parser._parse_hint_function_call:)
def _parse_hint_function_call(self) -> t.Optional[exp.Expression]:
(definition of Oracle.Parser._parse_hint_args:)
def _parse_hint_args(self):
(definition of Oracle.Parser._parse_hint_fallback_to_string:)
def _parse_hint_fallback_to_string(self) -> t.Optional[exp.Hint]:
(definition of Oracle.Generator.hint_sql:)
def hint_sql(self, expression: exp.Hint) -> str:
(definition of Oracle.Generator._format_hint_function_args:)
def _format_hint_function_args(self, *args: t.Optional[str | exp.Expression]) -> str:
[end of new definitions in sqlglot/dialects/oracle.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>>
ceb42fabad60312699e4b15936aeebac00e22e4d
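The space-separated-argument convention the PR author describes can be illustrated with a minimal standalone extractor. This is a sketch, not sqlglot's tokenizer/parser; a comma inside a hint function, such as `LEADING(departments, employees)`, is exactly the kind of input that makes the real implementation fall back to parsing the hint as a string:

```python
import re


def extract_hints(sql):
    # Pull the /*+ ... */ comment out of the statement, then split it into
    # individual hints. Oracle hint-function arguments are separated by
    # spaces, e.g. LEADING(departments employees), not by commas.
    m = re.search(r"/\*\+\s*(.*?)\s*\*/", sql, re.S)
    if not m:
        return []
    return re.findall(r"\w+(?:\([^()]*\))?", m.group(1))


sql = "SELECT /*+ LEADING(departments employees) USE_NL(employees) */ * FROM employees"
print(extract_hints(sql))  # ['LEADING(departments employees)', 'USE_NL(employees)']
```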
tobymao__sqlglot-4217
4,217
tobymao/sqlglot
null
22a16848d80a2fa6d310f99d21f7d81f90eb9440
2024-10-07T08:22:14Z
diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py index c5e40bed04..064805d5ca 100644 --- a/sqlglot/expressions.py +++ b/sqlglot/expressions.py @@ -3284,6 +3284,200 @@ class Update(Expression): "limit": False, } + def table( + self, expression: ExpOrStr, dialect: DialectType = None, copy: bool = True, **opts + ) -> Update: + """ + Set the table to update. + + Example: + >>> Update().table("my_table").set_("x = 1").sql() + 'UPDATE my_table SET x = 1' + + Args: + expression : the SQL code strings to parse. + If a `Table` instance is passed, this is used as-is. + If another `Expression` instance is passed, it will be wrapped in a `Table`. + dialect: the dialect used to parse the input expression. + copy: if `False`, modify this expression instance in-place. + opts: other options to use to parse the input expressions. + + Returns: + The modified Update expression. + """ + return _apply_builder( + expression=expression, + instance=self, + arg="this", + into=Table, + prefix=None, + dialect=dialect, + copy=copy, + **opts, + ) + + def set_( + self, + *expressions: ExpOrStr, + append: bool = True, + dialect: DialectType = None, + copy: bool = True, + **opts, + ) -> Update: + """ + Append to or set the SET expressions. + + Example: + >>> Update().table("my_table").set_("x = 1").sql() + 'UPDATE my_table SET x = 1' + + Args: + *expressions: the SQL code strings to parse. + If `Expression` instance(s) are passed, they will be used as-is. + Multiple expressions are combined with a comma. + append: if `True`, add the new expressions to any existing SET expressions. + Otherwise, this resets the expressions. + dialect: the dialect used to parse the input expressions. + copy: if `False`, modify this expression instance in-place. + opts: other options to use to parse the input expressions. 
+ """ + return _apply_list_builder( + *expressions, + instance=self, + arg="expressions", + append=append, + into=Expression, + prefix=None, + dialect=dialect, + copy=copy, + **opts, + ) + + def where( + self, + *expressions: t.Optional[ExpOrStr], + append: bool = True, + dialect: DialectType = None, + copy: bool = True, + **opts, + ) -> Select: + """ + Append to or set the WHERE expressions. + + Example: + >>> Update().table("tbl").set_("x = 1").where("x = 'a' OR x < 'b'").sql() + "UPDATE tbl SET x = 1 WHERE x = 'a' OR x < 'b'" + + Args: + *expressions: the SQL code strings to parse. + If an `Expression` instance is passed, it will be used as-is. + Multiple expressions are combined with an AND operator. + append: if `True`, AND the new expressions to any existing expression. + Otherwise, this resets the expression. + dialect: the dialect used to parse the input expressions. + copy: if `False`, modify this expression instance in-place. + opts: other options to use to parse the input expressions. + + Returns: + Select: the modified expression. + """ + return _apply_conjunction_builder( + *expressions, + instance=self, + arg="where", + append=append, + into=Where, + dialect=dialect, + copy=copy, + **opts, + ) + + def from_( + self, + expression: t.Optional[ExpOrStr] = None, + dialect: DialectType = None, + copy: bool = True, + **opts, + ) -> Update: + """ + Set the FROM expression. + + Example: + >>> Update().table("my_table").set_("x = 1").from_("baz").sql() + 'UPDATE my_table SET x = 1 FROM baz' + + Args: + expression : the SQL code strings to parse. + If a `From` instance is passed, this is used as-is. + If another `Expression` instance is passed, it will be wrapped in a `From`. + If nothing is passed in then a from is not applied to the expression + dialect: the dialect used to parse the input expression. + copy: if `False`, modify this expression instance in-place. + opts: other options to use to parse the input expressions. 
+ + Returns: + The modified Update expression. + """ + if not expression: + return maybe_copy(self, copy) + + return _apply_builder( + expression=expression, + instance=self, + arg="from", + into=From, + prefix="FROM", + dialect=dialect, + copy=copy, + **opts, + ) + + def with_( + self, + alias: ExpOrStr, + as_: ExpOrStr, + recursive: t.Optional[bool] = None, + materialized: t.Optional[bool] = None, + append: bool = True, + dialect: DialectType = None, + copy: bool = True, + **opts, + ) -> Update: + """ + Append to or set the common table expressions. + + Example: + >>> Update().table("my_table").set_("x = 1").from_("baz").with_("baz", "SELECT id FROM foo").sql() + 'WITH baz AS (SELECT id FROM foo) UPDATE my_table SET x = 1 FROM baz' + + Args: + alias: the SQL code string to parse as the table name. + If an `Expression` instance is passed, this is used as-is. + as_: the SQL code string to parse as the table expression. + If an `Expression` instance is passed, it will be used as-is. + recursive: set the RECURSIVE part of the expression. Defaults to `False`. + materialized: set the MATERIALIZED part of the expression. + append: if `True`, add to any existing expressions. + Otherwise, this resets the expressions. + dialect: the dialect used to parse the input expression. + copy: if `False`, modify this expression instance in-place. + opts: other options to use to parse the input expressions. + + Returns: + The modified expression. 
+ """ + return _apply_cte_builder( + self, + alias, + as_, + recursive=recursive, + materialized=materialized, + append=append, + dialect=dialect, + copy=copy, + **opts, + ) + class Values(UDTF): arg_types = {"expressions": True, "alias": False} @@ -6803,9 +6997,10 @@ def from_(expression: ExpOrStr, dialect: DialectType = None, **opts) -> Select: def update( table: str | Table, - properties: dict, + properties: t.Optional[dict] = None, where: t.Optional[ExpOrStr] = None, from_: t.Optional[ExpOrStr] = None, + with_: t.Optional[t.Dict[str, ExpOrStr]] = None, dialect: DialectType = None, **opts, ) -> Update: @@ -6813,14 +7008,15 @@ def update( Creates an update statement. Example: - >>> update("my_table", {"x": 1, "y": "2", "z": None}, from_="baz", where="id > 1").sql() - "UPDATE my_table SET x = 1, y = '2', z = NULL FROM baz WHERE id > 1" + >>> update("my_table", {"x": 1, "y": "2", "z": None}, from_="baz_cte", where="baz_cte.id > 1 and my_table.id = baz_cte.id", with_={"baz_cte": "SELECT id FROM foo"}).sql() + "WITH baz_cte AS (SELECT id FROM foo) UPDATE my_table SET x = 1, y = '2', z = NULL FROM baz_cte WHERE baz_cte.id > 1 AND my_table.id = baz_cte.id" Args: - *properties: dictionary of properties to set which are + properties: dictionary of properties to SET which are auto converted to sql objects eg None -> NULL where: sql conditional parsed into a WHERE statement from_: sql statement parsed into a FROM statement + with_: dictionary of CTE aliases / select statements to include in a WITH clause. dialect: the dialect used to parse the input expressions. **opts: other options to use to parse the input expressions. @@ -6828,13 +7024,14 @@ def update( Update: the syntax tree for the UPDATE statement. 
""" update_expr = Update(this=maybe_parse(table, into=Table, dialect=dialect)) - update_expr.set( - "expressions", - [ - EQ(this=maybe_parse(k, dialect=dialect, **opts), expression=convert(v)) - for k, v in properties.items() - ], - ) + if properties: + update_expr.set( + "expressions", + [ + EQ(this=maybe_parse(k, dialect=dialect, **opts), expression=convert(v)) + for k, v in properties.items() + ], + ) if from_: update_expr.set( "from", @@ -6847,6 +7044,15 @@ def update( "where", maybe_parse(where, into=Where, dialect=dialect, prefix="WHERE", **opts), ) + if with_: + cte_list = [ + CTE(this=maybe_parse(qry, dialect=dialect, **opts), alias=alias) + for alias, qry in with_.items() + ] + update_expr.set( + "with", + With(expressions=cte_list), + ) return update_expr
diff --git a/tests/test_build.py b/tests/test_build.py index 7518b72a2a..5d383ad00b 100644 --- a/tests/test_build.py +++ b/tests/test_build.py @@ -577,6 +577,36 @@ def test_build(self): lambda: exp.update("tbl", {"x": 1}, from_="tbl2 cross join tbl3"), "UPDATE tbl SET x = 1 FROM tbl2 CROSS JOIN tbl3", ), + ( + lambda: exp.update( + "my_table", + {"x": 1}, + from_="baz", + where="my_table.id = baz.id", + with_={"baz": "SELECT id FROM foo UNION SELECT id FROM bar"}, + ), + "WITH baz AS (SELECT id FROM foo UNION SELECT id FROM bar) UPDATE my_table SET x = 1 FROM baz WHERE my_table.id = baz.id", + ), + ( + lambda: exp.update("my_table").set_("x = 1"), + "UPDATE my_table SET x = 1", + ), + ( + lambda: exp.update("my_table").set_("x = 1").where("y = 2"), + "UPDATE my_table SET x = 1 WHERE y = 2", + ), + ( + lambda: exp.update("my_table").set_("a = 1").set_("b = 2"), + "UPDATE my_table SET a = 1, b = 2", + ), + ( + lambda: exp.update("my_table") + .set_("x = 1") + .where("my_table.id = baz.id") + .from_("baz") + .with_("baz", "SELECT id FROM foo"), + "WITH baz AS (SELECT id FROM foo) UPDATE my_table SET x = 1 FROM baz WHERE my_table.id = baz.id", + ), ( lambda: union("SELECT * FROM foo", "SELECT * FROM bla"), "SELECT * FROM foo UNION SELECT * FROM bla",
[ { "components": [ { "doc": "Set the table to update.\n\nExample:\n >>> Update().table(\"my_table\").set_(\"x = 1\").sql()\n 'UPDATE my_table SET x = 1'\n\nArgs:\n expression : the SQL code strings to parse.\n If a `Table` instance is passed, this is used as-is.\n If another `Expression` instance is passed, it will be wrapped in a `Table`.\n dialect: the dialect used to parse the input expression.\n copy: if `False`, modify this expression instance in-place.\n opts: other options to use to parse the input expressions.\n\nReturns:\n The modified Update expression.", "lines": [ 3293, 3322 ], "name": "Update.table", "signature": "def table( self, expression: ExpOrStr, dialect: DialectType = None, copy: bool = True, **opts ) -> Update:", "type": "function" }, { "doc": "Append to or set the SET expressions.\n\nExample:\n >>> Update().table(\"my_table\").set_(\"x = 1\").sql()\n 'UPDATE my_table SET x = 1'\n\nArgs:\n *expressions: the SQL code strings to parse.\n If `Expression` instance(s) are passed, they will be used as-is.\n Multiple expressions are combined with a comma.\n append: if `True`, add the new expressions to any existing SET expressions.\n Otherwise, this resets the expressions.\n dialect: the dialect used to parse the input expressions.\n copy: if `False`, modify this expression instance in-place.\n opts: other options to use to parse the input expressions.", "lines": [ 3325, 3359 ], "name": "Update.set_", "signature": "def set_( self, *expressions: ExpOrStr, append: bool = True, dialect: DialectType = None, copy: bool = True, **opts, ) -> Update:", "type": "function" }, { "doc": "Append to or set the WHERE expressions.\n\nExample:\n >>> Update().table(\"tbl\").set_(\"x = 1\").where(\"x = 'a' OR x < 'b'\").sql()\n \"UPDATE tbl SET x = 1 WHERE x = 'a' OR x < 'b'\"\n\nArgs:\n *expressions: the SQL code strings to parse.\n If an `Expression` instance is passed, it will be used as-is.\n Multiple expressions are combined with an AND operator.\n append: if 
`True`, AND the new expressions to any existing expression.\n Otherwise, this resets the expression.\n dialect: the dialect used to parse the input expressions.\n copy: if `False`, modify this expression instance in-place.\n opts: other options to use to parse the input expressions.\n\nReturns:\n Select: the modified expression.", "lines": [ 3362, 3398 ], "name": "Update.where", "signature": "def where( self, *expressions: t.Optional[ExpOrStr], append: bool = True, dialect: DialectType = None, copy: bool = True, **opts, ) -> Select:", "type": "function" }, { "doc": "Set the FROM expression.\n\nExample:\n >>> Update().table(\"my_table\").set_(\"x = 1\").from_(\"baz\").sql()\n 'UPDATE my_table SET x = 1 FROM baz'\n\nArgs:\n expression : the SQL code strings to parse.\n If a `From` instance is passed, this is used as-is.\n If another `Expression` instance is passed, it will be wrapped in a `From`.\n If nothing is passed in then a from is not applied to the expression\n dialect: the dialect used to parse the input expression.\n copy: if `False`, modify this expression instance in-place.\n opts: other options to use to parse the input expressions.\n\nReturns:\n The modified Update expression.", "lines": [ 3401, 3438 ], "name": "Update.from_", "signature": "def from_( self, expression: t.Optional[ExpOrStr] = None, dialect: DialectType = None, copy: bool = True, **opts, ) -> Update:", "type": "function" }, { "doc": "Append to or set the common table expressions.\n\nExample:\n >>> Update().table(\"my_table\").set_(\"x = 1\").from_(\"baz\").with_(\"baz\", \"SELECT id FROM foo\").sql()\n 'WITH baz AS (SELECT id FROM foo) UPDATE my_table SET x = 1 FROM baz'\n\nArgs:\n alias: the SQL code string to parse as the table name.\n If an `Expression` instance is passed, this is used as-is.\n as_: the SQL code string to parse as the table expression.\n If an `Expression` instance is passed, it will be used as-is.\n recursive: set the RECURSIVE part of the expression. 
Defaults to `False`.\n materialized: set the MATERIALIZED part of the expression.\n append: if `True`, add to any existing expressions.\n Otherwise, this resets the expressions.\n dialect: the dialect used to parse the input expression.\n copy: if `False`, modify this expression instance in-place.\n opts: other options to use to parse the input expressions.\n\nReturns:\n The modified expression.", "lines": [ 3441, 3484 ], "name": "Update.with_", "signature": "def with_( self, alias: ExpOrStr, as_: ExpOrStr, recursive: t.Optional[bool] = None, materialized: t.Optional[bool] = None, append: bool = True, dialect: DialectType = None, copy: bool = True, **opts, ) -> Update:", "type": "function" } ], "file": "sqlglot/expressions.py" } ]
[ "tests/test_build.py::TestBuild::test_build" ]
[]
This is a feature request which requires a new feature to be added to the code repository.

<<NEW FEATURE REQUEST>>
<request>
Feat: add builder methods to exp.Update and add with_ arg to exp.update

Improve ergonomics of UPDATEs:
- Add builder methods to the Update class so it can be constructed incrementally in the same way as a Select
- `exp.update` changes:
  - Add a `with_` arg so updates with CTEs can be created in one shot without a subsequent need to `.set(...)` the With clause
  - Make the `properties` arg optional, so the `set_` builder method can be used with an AST (`exp.update("tbl").set_(set_expr)`) instead of forcing specification as a `dict`
----------
</request>
There are several new functions or classes that need to be implemented, using the definitions below:
<<NEW DEFINITIONS>>
<definitions>
[start of new definitions in sqlglot/expressions.py]
(definition of Update.table:)
def table(
    self, expression: ExpOrStr, dialect: DialectType = None, copy: bool = True, **opts
) -> Update:
    """Set the table to update.

    Example:
        >>> Update().table("my_table").set_("x = 1").sql()
        'UPDATE my_table SET x = 1'

    Args:
        expression : the SQL code strings to parse.
            If a `Table` instance is passed, this is used as-is.
            If another `Expression` instance is passed, it will be wrapped in a `Table`.
        dialect: the dialect used to parse the input expression.
        copy: if `False`, modify this expression instance in-place.
        opts: other options to use to parse the input expressions.

    Returns:
        The modified Update expression."""
(definition of Update.set_:)
def set_(
    self,
    *expressions: ExpOrStr,
    append: bool = True,
    dialect: DialectType = None,
    copy: bool = True,
    **opts,
) -> Update:
    """Append to or set the SET expressions.

    Example:
        >>> Update().table("my_table").set_("x = 1").sql()
        'UPDATE my_table SET x = 1'

    Args:
        *expressions: the SQL code strings to parse.
            If `Expression` instance(s) are passed, they will be used as-is.
            Multiple expressions are combined with a comma.
        append: if `True`, add the new expressions to any existing SET expressions.
            Otherwise, this resets the expressions.
        dialect: the dialect used to parse the input expressions.
        copy: if `False`, modify this expression instance in-place.
        opts: other options to use to parse the input expressions."""
(definition of Update.where:)
def where(
    self,
    *expressions: t.Optional[ExpOrStr],
    append: bool = True,
    dialect: DialectType = None,
    copy: bool = True,
    **opts,
) -> Select:
    """Append to or set the WHERE expressions.

    Example:
        >>> Update().table("tbl").set_("x = 1").where("x = 'a' OR x < 'b'").sql()
        "UPDATE tbl SET x = 1 WHERE x = 'a' OR x < 'b'"

    Args:
        *expressions: the SQL code strings to parse.
            If an `Expression` instance is passed, it will be used as-is.
            Multiple expressions are combined with an AND operator.
        append: if `True`, AND the new expressions to any existing expression.
            Otherwise, this resets the expression.
        dialect: the dialect used to parse the input expressions.
        copy: if `False`, modify this expression instance in-place.
        opts: other options to use to parse the input expressions.

    Returns:
        Select: the modified expression."""
(definition of Update.from_:)
def from_(
    self,
    expression: t.Optional[ExpOrStr] = None,
    dialect: DialectType = None,
    copy: bool = True,
    **opts,
) -> Update:
    """Set the FROM expression.

    Example:
        >>> Update().table("my_table").set_("x = 1").from_("baz").sql()
        'UPDATE my_table SET x = 1 FROM baz'

    Args:
        expression : the SQL code strings to parse.
            If a `From` instance is passed, this is used as-is.
            If another `Expression` instance is passed, it will be wrapped in a `From`.
            If nothing is passed in then a from is not applied to the expression
        dialect: the dialect used to parse the input expression.
        copy: if `False`, modify this expression instance in-place.
        opts: other options to use to parse the input expressions.

    Returns:
        The modified Update expression."""
(definition of Update.with_:)
def with_(
    self,
    alias: ExpOrStr,
    as_: ExpOrStr,
    recursive: t.Optional[bool] = None,
    materialized: t.Optional[bool] = None,
    append: bool = True,
    dialect: DialectType = None,
    copy: bool = True,
    **opts,
) -> Update:
    """Append to or set the common table expressions.

    Example:
        >>> Update().table("my_table").set_("x = 1").from_("baz").with_("baz", "SELECT id FROM foo").sql()
        'WITH baz AS (SELECT id FROM foo) UPDATE my_table SET x = 1 FROM baz'

    Args:
        alias: the SQL code string to parse as the table name.
            If an `Expression` instance is passed, this is used as-is.
        as_: the SQL code string to parse as the table expression.
            If an `Expression` instance is passed, it will be used as-is.
        recursive: set the RECURSIVE part of the expression. Defaults to `False`.
        materialized: set the MATERIALIZED part of the expression.
        append: if `True`, add to any existing expressions.
            Otherwise, this resets the expressions.
        dialect: the dialect used to parse the input expression.
        copy: if `False`, modify this expression instance in-place.
        opts: other options to use to parse the input expressions.

    Returns:
        The modified expression."""
[end of new definitions in sqlglot/expressions.py]
</definitions>
Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly.
<<END>>
ceb42fabad60312699e4b15936aeebac00e22e4d
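The sqlglot record above defines fluent builder methods (`table`, `set_`, `where`, `from_`, `with_`) that assemble an UPDATE incrementally, the way `Select` already works. As an illustration of that pattern only — a hypothetical stand-in, not sqlglot's actual implementation — here is a minimal self-contained sketch of such a builder:

```python
# Minimal sketch of the fluent-builder pattern the record's definitions
# describe: each method updates internal state and returns self, so an UPDATE
# statement can be assembled step by step. Method names mirror the record
# (table, set_, where, from_, with_), but this is an illustration, not sqlglot.

class UpdateBuilder:
    def __init__(self):
        self._table = None
        self._sets = []     # "col = value" fragments, joined with commas
        self._wheres = []   # predicates, AND-ed together
        self._from = None
        self._ctes = []     # (alias, query) pairs for the WITH clause

    def table(self, name):
        self._table = name
        return self

    def set_(self, *exprs, append=True):
        self._sets = (self._sets + list(exprs)) if append else list(exprs)
        return self

    def where(self, *preds, append=True):
        self._wheres = (self._wheres + list(preds)) if append else list(preds)
        return self

    def from_(self, name):
        self._from = name
        return self

    def with_(self, alias, as_, append=True):
        cte = (alias, as_)
        self._ctes = (self._ctes + [cte]) if append else [cte]
        return self

    def sql(self):
        parts = []
        if self._ctes:
            ctes = ", ".join(f"{a} AS ({q})" for a, q in self._ctes)
            parts.append(f"WITH {ctes}")
        parts.append(f"UPDATE {self._table}")
        parts.append("SET " + ", ".join(self._sets))
        if self._from:
            parts.append(f"FROM {self._from}")
        if self._wheres:
            parts.append("WHERE " + " AND ".join(f"({p})" for p in self._wheres))
        return " ".join(parts)


print(UpdateBuilder().table("my_table").set_("x = 1").from_("baz")
      .with_("baz", "SELECT id FROM foo").sql())
# WITH baz AS (SELECT id FROM foo) UPDATE my_table SET x = 1 FROM baz
```

The chained call reproduces the `with_` docstring example from the record; real sqlglot additionally parses the string arguments into expression nodes rather than concatenating text.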
embeddings-benchmark__mteb-1256
1,256
embeddings-benchmark/mteb
null
f04279d5975f4a5c7fd5f5f284bfe14303b8f2a0
2024-09-29T12:28:20Z
diff --git a/README.md b/README.md index e35ad3bdbc..a7eb03e4f2 100644 --- a/README.md +++ b/README.md @@ -378,6 +378,7 @@ df = results_to_dataframe(results) | Documentation | | | ------------------------------ | ---------------------- | | 📋 [Tasks] | Overview of available tasks | +| 📝 [Benchmarks] | Overview of available benchmarks | | 📈 [Leaderboard] | The interactive leaderboard of the benchmark | | 🤖 [Adding a model] | Information related to how to submit a model to the leaderboard | | 👩‍🔬 [Reproducible workflows] | Information related to how to reproduce and create reproducible workflows with MTEB | @@ -387,6 +388,7 @@ df = results_to_dataframe(results) | 🌐 [MMTEB] | An open-source effort to extend MTEB to cover a broad set of languages | [Tasks]: docs/tasks.md +[Benchmarks]: docs/benchmarks.md [Contributing]: CONTRIBUTING.md [Adding a model]: docs/adding_a_model.md [Adding a dataset]: docs/adding_a_dataset.md diff --git a/docs/benchmarks.md b/docs/benchmarks.md index 9eb471d187..a5abe50215 100644 --- a/docs/benchmarks.md +++ b/docs/benchmarks.md @@ -1,5 +1,5 @@ ## Available benchmarks -The following tables give you an overview of the benchmarks in MTEB. +The following table gives you an overview of the benchmarks in MTEB. <details> diff --git a/mteb/cli.py b/mteb/cli.py index 24e99bd241..b891d381f4 100644 --- a/mteb/cli.py +++ b/mteb/cli.py @@ -30,6 +30,14 @@ mteb available_tasks --task_types Clustering # list tasks of type Clustering ``` +## Listing Available Benchmarks + +To list the available benchmarks within MTEB, use the `mteb available_benchmarks` command.
For example: + +```bash +mteb available_benchmarks # list all available benchmarks +``` + ## Creating Model Metadata @@ -144,6 +152,12 @@ def run(args: argparse.Namespace) -> None: _save_model_metadata(model, Path(args.output_folder)) +def available_benchmarks(args: argparse.Namespace) -> None: + benchmarks = mteb.get_benchmarks() + eval = mteb.MTEB(tasks=benchmarks) + eval.mteb_benchmarks() + + def available_tasks(args: argparse.Namespace) -> None: tasks = mteb.get_tasks( categories=args.categories, @@ -198,6 +212,15 @@ def add_available_tasks_parser(subparsers) -> None: parser.set_defaults(func=available_tasks) +def add_available_benchmarks_parser(subparsers) -> None: + parser = subparsers.add_parser( + "available_benchmarks", help="List the available benchmarks within MTEB" + ) + add_task_selection_args(parser) + + parser.set_defaults(func=available_benchmarks) + + def add_run_parser(subparsers) -> None: parser = subparsers.add_parser("run", help="Run a model on a set of tasks") @@ -321,6 +344,7 @@ def main(): ) add_run_parser(subparsers) add_available_tasks_parser(subparsers) + add_available_benchmarks_parser(subparsers) add_create_meta_parser(subparsers) args = parser.parse_args() diff --git a/mteb/evaluation/MTEB.py b/mteb/evaluation/MTEB.py index ab25169364..70f3e21ca8 100644 --- a/mteb/evaluation/MTEB.py +++ b/mteb/evaluation/MTEB.py @@ -168,6 +168,12 @@ def _display_tasks(self, task_list, name=None): console.print(f"{prefix}{name}{category}{multilingual}") console.print("\n") + def mteb_benchmarks(self): + """Get all benchmarks available in the MTEB.""" + for benchmark in self._tasks: + name = benchmark.name + self._display_tasks(benchmark.tasks, name=name) + @classmethod def mteb_tasks(cls): """Get all tasks available in the MTEB."""
diff --git a/tests/test_cli.py b/tests/test_cli.py index fdcd1b014a..1d0400e985 100644 --- a/tests/test_cli.py +++ b/tests/test_cli.py @@ -22,6 +22,15 @@ def test_available_tasks(): ), "Sample task Banking77Classification task not found in available tasks" +def test_available_benchmarks(): + command = f"{sys.executable} -m mteb available_benchmarks" + result = subprocess.run(command, shell=True, capture_output=True, text=True) + assert result.returncode == 0, "Command failed" + assert ( + "MTEB(eng)" in result.stdout + ), "Sample benchmark MTEB(eng) task not found in available bencmarks" + + run_task_fixures = [ ( "average_word_embeddings_komninos",
diff --git a/README.md b/README.md index e35ad3bdbc..a7eb03e4f2 100644 --- a/README.md +++ b/README.md @@ -378,6 +378,7 @@ df = results_to_dataframe(results) | Documentation | | | ------------------------------ | ---------------------- | | 📋 [Tasks] | Overview of available tasks | +| 📝 [Benchmarks] | Overview of available benchmarks | | 📈 [Leaderboard] | The interactive leaderboard of the benchmark | | 🤖 [Adding a model] | Information related to how to submit a model to the leaderboard | | 👩‍🔬 [Reproducible workflows] | Information related to how to reproduce and create reproducible workflows with MTEB | @@ -387,6 +388,7 @@ df = results_to_dataframe(results) | 🌐 [MMTEB] | An open-source effort to extend MTEB to cover a broad set of languages | [Tasks]: docs/tasks.md +[Benchmarks]: docs/benchmarks.md [Contributing]: CONTRIBUTING.md [Adding a model]: docs/adding_a_model.md [Adding a dataset]: docs/adding_a_dataset.md diff --git a/docs/benchmarks.md b/docs/benchmarks.md index 9eb471d187..a5abe50215 100644 --- a/docs/benchmarks.md +++ b/docs/benchmarks.md @@ -1,5 +1,5 @@ ## Available benchmarks -The following tables give you an overview of the benchmarks in MTEB. +The following table gives you an overview of the benchmarks in MTEB. <details>
[ { "components": [ { "doc": "", "lines": [ 155, 158 ], "name": "available_benchmarks", "signature": "def available_benchmarks(args: argparse.Namespace) -> None:", "type": "function" }, { "doc": "", "lines": [ 215, 221 ], "name": "add_available_benchmarks_parser", "signature": "def add_available_benchmarks_parser(subparsers) -> None:", "type": "function" } ], "file": "mteb/cli.py" }, { "components": [ { "doc": "Get all benchmarks available in the MTEB.", "lines": [ 171, 175 ], "name": "MTEB.mteb_benchmarks", "signature": "def mteb_benchmarks(self):", "type": "function" } ], "file": "mteb/evaluation/MTEB.py" } ]
[ "tests/test_cli.py::test_available_benchmarks" ]
[ "tests/test_cli.py::test_available_tasks", "tests/test_cli.py::test_run_task[average_word_embeddings_komninos-BornholmBitextMining-21eec43590414cb8e3a6f654857abed0483ae36e]", "tests/test_cli.py::test_run_task[intfloat/multilingual-e5-small-BornholmBitextMining-e4ce9877abf3edfe10b0d82785e83bdcb973e22e]", "tests/test_cli.py::test_create_meta", "tests/test_cli.py::test_create_meta_from_existing[existing_readme.md-model_card_gold_existing.md]", "tests/test_cli.py::test_create_meta_from_existing[model_card_without_frontmatter.md-model_card_gold_without_frontmatter.md]", "tests/test_cli.py::test_save_predictions" ]
This is a feature request which requires a new feature to add in the code repository. <<NEW FEATURE REQUEST>> <request> fix: Add listing all available benchmarks CLI option <!-- If you are submitting a dataset or a model for the model registry please use the corresponding checklists below otherwise feel free to remove them. --> <!-- add additional description, question etc. related to the new dataset --> Addresses #1250 point 1. - Add benchmarks under the "Docs" section of README - Add CLI option to list all available benchmarks - Headers will print the benchmark name - The tasks will be printed the same way as it is now - Add test case for the added CLI option ## Example This command should yield something like the following, starting with MTEB(eng): ``` mteb available_benchmarks ``` <details> <summary>Full printout</summary> ```bash >> mteb available_benchmarks ─────────────────────────────────────────────────────────────── MTEB(eng) ──────────────────────────────────────────────────────────────── Summarization - SummEval, p2p PairClassification - SprintDuplicateQuestions, s2s - TwitterSemEval2015, s2s - TwitterURLCorpus, s2s Classification - AmazonCounterfactualClassification, s2s, multilingual 2 / 4 Subsets - AmazonPolarityClassification, p2p - AmazonReviewsClassification, s2s, multilingual 1 / 6 Subsets - Banking77Classification, s2s - EmotionClassification, s2s - ImdbClassification, p2p - MTOPDomainClassification, s2s, multilingual 1 / 6 Subsets - MTOPIntentClassification, s2s, multilingual 1 / 6 Subsets - MassiveIntentClassification, s2s, multilingual 1 / 51 Subsets - MassiveScenarioClassification, s2s, multilingual 1 / 51 Subsets - ToxicConversationsClassification, s2s - TweetSentimentExtractionClassification, s2s Retrieval - ArguAna, s2p - CQADupstackAndroidRetrieval, s2p - CQADupstackEnglishRetrieval, s2p - CQADupstackGamingRetrieval, s2p - CQADupstackGisRetrieval, s2p - CQADupstackMathematicaRetrieval, s2p - CQADupstackPhysicsRetrieval, s2p - 
CQADupstackProgrammersRetrieval, s2p - CQADupstackStatsRetrieval, s2p - CQADupstackTexRetrieval, s2p - CQADupstackUnixRetrieval, s2p - CQADupstackWebmastersRetrieval, s2p - CQADupstackWordpressRetrieval, s2p - ClimateFEVER, s2p - DBPedia, s2p - FEVER, s2p - FiQA2018, s2p - HotpotQA, s2p - MSMARCO, s2p - NFCorpus, s2p - NQ, s2p - QuoraRetrieval, s2s - SCIDOCS, s2p - SciFact, s2p - TRECCOVID, s2p - Touche2020, s2p STS - BIOSSES, s2s - SICK-R, s2s - STS12, s2s - STS13, s2s - STS14, s2s - STS15, s2s - STS16, s2s - STS17, s2s, multilingual 8 / 11 Subsets - STS22, p2p, multilingual 5 / 18 Subsets - STSBenchmark, s2s Clustering - ArxivClusteringP2P, p2p - ArxivClusteringS2S, s2s - BiorxivClusteringP2P, p2p - BiorxivClusteringS2S, s2s - MedrxivClusteringP2P, p2p - MedrxivClusteringS2S, s2s - RedditClustering, s2s - RedditClusteringP2P, p2p - StackExchangeClustering, s2s - StackExchangeClusteringP2P, p2p - TwentyNewsgroupsClustering, s2s Reranking - AskUbuntuDupQuestions, s2s - MindSmallReranking, s2s - SciDocsRR, s2s - StackOverflowDupQuestions, s2s ─────────────────────────────────────────────────────────────── MTEB(rus) ──────────────────────────────────────────────────────────────── PairClassification - TERRa, s2s Classification - GeoreviewClassification, p2p - HeadlineClassification, s2s - InappropriatenessClassification, s2s - KinopoiskClassification, p2p - MassiveIntentClassification, s2s, multilingual 1 / 51 Subsets - MassiveScenarioClassification, s2s, multilingual 1 / 51 Subsets - RuReviewsClassification, p2p - RuSciBenchGRNTIClassification, p2p - RuSciBenchOECDClassification, p2p MultilabelClassification - CEDRClassification, s2s - SensitiveTopicsClassification, s2s Retrieval - MIRACLRetrieval, s2p, multilingual 1 / 18 Subsets - RiaNewsRetrieval, s2p - RuBQRetrieval, s2p STS - RUParaPhraserSTS, s2s - RuSTSBenchmarkSTS, s2s - STS22, p2p, multilingual 1 / 18 Subsets Clustering - GeoreviewClusteringP2P, p2p - RuSciBenchGRNTIClusteringP2P, p2p - 
RuSciBenchOECDClusteringP2P, p2p Reranking - MIRACLReranking, s2s, multilingual 1 / 18 Subsets - RuBQReranking, s2p ───────────────────────────────────────────────────── MTEB(Retrieval w/Instructions) ───────────────────────────────────────────────────── InstructionRetrieval - Robust04InstructionRetrieval, s2p - News21InstructionRetrieval, s2p - Core17InstructionRetrieval, s2p ─────────────────────────────────────────────────────────────── MTEB(law) ──────────────────────────────────────────────────────────────── Retrieval - AILACasedocs, p2p - AILAStatutes, p2p - LegalSummarization, s2p - GerDaLIRSmall, p2p - LeCaRDv2, p2p - LegalBenchConsumerContractsQA, s2p - LegalBenchCorporateLobbying, s2p - LegalQuAD, s2p ─────────────────────────────────────────────────────────── MINERSBitextMining ─────────────────────────────────────────────────────────── BitextMining - BUCC, s2s, multilingual 4 / 4 Subsets - LinceMTBitextMining, s2s, multilingual 1 / 1 Subsets - NollySentiBitextMining, s2s, multilingual 4 / 4 Subsets - NusaXBitextMining, s2s, multilingual 11 / 11 Subsets - NusaTranslationBitextMining, s2s, multilingual 11 / 11 Subsets - PhincBitextMining, s2s, multilingual 1 / 1 Subsets - Tatoeba, s2s, multilingual 112 / 112 Subsets ─────────────────────────────────────────────────────────── MTEB(Scandinavian) ─────────────────────────────────────────────────────────── Classification - AngryTweetsClassification, s2s - DanishPoliticalCommentsClassification, s2s - DalajClassification, s2s - DKHateClassification, s2s - LccSentimentClassification, s2s - MassiveIntentClassification, s2s, multilingual 3 / 51 Subsets - MassiveScenarioClassification, s2s, multilingual 3 / 51 Subsets - NordicLangClassification, s2s - NoRecClassification, s2s - NorwegianParliamentClassification, s2s - ScalaClassification, s2s, multilingual 4 / 4 Subsets - SwedishSentimentClassification, s2s - SweRecClassification, s2s BitextMining - BornholmBitextMining, s2s - NorwegianCourtsBitextMining, s2s 
Retrieval - DanFEVER, p2p - NorQuadRetrieval, p2p - SNLRetrieval, p2p - SwednRetrieval, p2p - SweFaqRetrieval, s2s - TV2Nordretrieval, p2p - TwitterHjerneRetrieval, p2p Clustering - SNLHierarchicalClusteringS2S, s2s - SNLHierarchicalClusteringP2P, p2p - SwednClusteringP2P, p2p - SwednClusteringS2S, s2s - VGHierarchicalClusteringS2S, p2p - VGHierarchicalClusteringP2P, p2p ────────────────────────────────────────────────────────────────── CoIR ────────────────────────────────────────────────────────────────── Retrieval - AppsRetrieval, p2p - CodeFeedbackMT, p2p - CodeFeedbackST, p2p - CodeSearchNetCCRetrieval, p2p, multilingual 6 / 6 Subsets - CodeTransOceanContest, p2p - CodeTransOceanDL, p2p - CosQA, p2p - COIRCodeSearchNetRetrieval, p2p, multilingual 6 / 6 Subsets - StackOverflowQA, p2p - SyntheticText2SQL, p2p ─────────────────────────────────────────────────────────────── MTEB(fra) ──────────────────────────────────────────────────────────────── Summarization - SummEvalFr, p2p PairClassification - OpusparcusPC, s2s, multilingual 1 / 6 Subsets - PawsXPairClassification, s2s, multilingual 1 / 7 Subsets Classification - AmazonReviewsClassification, s2s, multilingual 1 / 6 Subsets - MasakhaNEWSClassification, s2s, multilingual 1 / 16 Subsets - MassiveIntentClassification, s2s, multilingual 1 / 51 Subsets - MassiveScenarioClassification, s2s, multilingual 1 / 51 Subsets - MTOPDomainClassification, s2s, multilingual 1 / 6 Subsets - MTOPIntentClassification, s2s, multilingual 1 / 6 Subsets Retrieval - AlloprofRetrieval, s2p - BSARDRetrieval, s2p - MintakaRetrieval, s2p, multilingual 1 / 8 Subsets - SyntecRetrieval, s2p - XPQARetrieval, s2p, multilingual 3 / 36 Subsets STS - SICKFr, s2s - STS22, p2p, multilingual 3 / 18 Subsets - STSBenchmarkMultilingualSTS, s2s, multilingual 1 / 10 Subsets Clustering - AlloProfClusteringP2P, p2p - AlloProfClusteringS2S, s2s - HALClusteringS2S, s2s - MasakhaNEWSClusteringP2P, p2p, multilingual 1 / 16 Subsets - MasakhaNEWSClusteringS2S, 
s2s, multilingual 1 / 16 Subsets - MLSUMClusteringP2P, p2p, multilingual 1 / 4 Subsets - MLSUMClusteringS2S, s2s, multilingual 1 / 4 Subsets Reranking - AlloprofReranking, s2p - SyntecReranking, s2p ─────────────────────────────────────────────────────────────── MTEB(deu) ──────────────────────────────────────────────────────────────── PairClassification - FalseFriendsGermanEnglish, s2s - PawsXPairClassification, s2s, multilingual 1 / 7 Subsets Classification - AmazonCounterfactualClassification, s2s, multilingual 1 / 4 Subsets - AmazonReviewsClassification, s2s, multilingual 1 / 6 Subsets - MTOPDomainClassification, s2s, multilingual 1 / 6 Subsets - MTOPIntentClassification, s2s, multilingual 1 / 6 Subsets - MassiveIntentClassification, s2s, multilingual 1 / 51 Subsets - MassiveScenarioClassification, s2s, multilingual 1 / 51 Subsets Retrieval - GermanQuAD-Retrieval, s2p - GermanDPR, s2p - XMarket, s2p, multilingual 1 / 3 Subsets - GerDaLIR, s2p STS - GermanSTSBenchmark, s2s - STS22, p2p, multilingual 4 / 18 Subsets Clustering - BlurbsClusteringP2P, p2p - BlurbsClusteringS2S, s2s - TenKGnadClusteringP2P, p2p - TenKGnadClusteringS2S, s2s Reranking - MIRACLReranking, s2s, multilingual 1 / 18 Subsets ─────────────────────────────────────────────────────────────── MTEB(kor) ──────────────────────────────────────────────────────────────── Classification - KLUE-TC, s2s Retrieval - MIRACLRetrieval, s2p, multilingual 1 / 18 Subsets - Ko-StrategyQA, s2p STS - KLUE-STS, s2s - KorSTS, s2s Reranking - MIRACLReranking, s2s, multilingual 1 / 18 Subsets ─────────────────────────────────────────────────────────────── MTEB(pol) ──────────────────────────────────────────────────────────────── PairClassification - CDSC-E, s2s - PpcPC, s2s - PSC, s2s - SICK-E-PL, s2s Classification - AllegroReviews, s2s - CBD, s2s - MassiveIntentClassification, s2s, multilingual 1 / 51 Subsets - MassiveScenarioClassification, s2s, multilingual 1 / 51 Subsets - PolEmo2.0-IN, s2s - PolEmo2.0-OUT, s2s - 
PAC, p2p STS - CDSC-R, s2s - STS22, p2p, multilingual 4 / 18 Subsets - STSBenchmarkMultilingualSTS, s2s, multilingual 1 / 10 Subsets - SICK-R-PL, s2s Clustering - EightTagsClustering, s2s - PlscClusteringS2S, s2s - PlscClusteringP2P, s2s ─────────────────────────────────────────────────────────────── MTEB(code) ─────────────────────────────────────────────────────────────── Retrieval - AppsRetrieval, p2p - CodeEditSearchRetrieval, p2p, multilingual 13 / 13 Subsets - CodeFeedbackMT, p2p - CodeFeedbackST, p2p - CodeSearchNetCCRetrieval, p2p, multilingual 6 / 6 Subsets - CodeSearchNetRetrieval, p2p, multilingual 6 / 6 Subsets - CodeTransOceanContest, p2p - CodeTransOceanDL, p2p - CosQA, p2p - COIRCodeSearchNetRetrieval, p2p, multilingual 6 / 6 Subsets - StackOverflowQA, p2p - SyntheticText2SQL, p2p ─────────────────────────────────────────────────────────── MTEB(Multilingual) ─────────────────────────────────────────────────────────── PairClassification - CTKFactsNLI, s2s - SprintDuplicateQuestions, s2s - TwitterURLCorpus, s2s - ArmenianParaphrasePC, s2s - indonli, s2s - OpusparcusPC, s2s, multilingual 6 / 6 Subsets - PawsXPairClassification, s2s, multilingual 7 / 7 Subsets - RTE3, s2s, multilingual 4 / 4 Subsets - XNLI, s2s, multilingual 14 / 14 Subsets - PpcPC, s2s - TERRa, s2s Classification - BulgarianStoreReviewSentimentClassfication, s2s - CzechProductReviewSentimentClassification, s2s - GreekLegalCodeClassification, s2s - DBpediaClassification, s2s - FinancialPhrasebankClassification, s2s - PoemSentimentClassification, s2s - ToxicConversationsClassification, s2s - TweetTopicSingleClassification, s2s - EstonianValenceClassification, s2s - FilipinoShopeeReviewsClassification, s2s - GujaratiNewsClassification, s2s - SentimentAnalysisHindi, s2s - IndonesianIdClickbaitClassification, s2s - ItaCaseholdClassification, s2s - KorSarcasmClassification, s2s - KurdishSentimentClassification, s2s - MacedonianTweetSentimentClassification, s2s - AfriSentiClassification, s2s, 
multilingual 12 / 12 Subsets - AmazonCounterfactualClassification, s2s, multilingual 4 / 4 Subsets - CataloniaTweetClassification, s2s, multilingual 2 / 2 Subsets - CyrillicTurkicLangClassification, s2s - IndicLangClassification, s2s - MasakhaNEWSClassification, s2s, multilingual 16 / 16 Subsets - MassiveIntentClassification, s2s, multilingual 51 / 51 Subsets - MultiHateClassification, s2s, multilingual 11 / 11 Subsets - NordicLangClassification, s2s - NusaParagraphEmotionClassification, s2s, multilingual 10 / 10 Subsets - NusaX-senti, s2s, multilingual 12 / 12 Subsets - ScalaClassification, s2s, multilingual 4 / 4 Subsets - SwissJudgementClassification, s2s, multilingual 3 / 3 Subsets - NepaliNewsClassification, s2s - OdiaNewsClassification, s2s - PunjabiNewsClassification, s2s - PolEmo2.0-OUT, s2s - PAC, p2p - SinhalaNewsClassification, s2s - CSFDSKMovieReviewSentimentClassification, s2s - SiswatiNewsClassification, s2s - SlovakMovieReviewSentimentClassification, s2s - SwahiliNewsClassification, s2s - DalajClassification, s2s - TswanaNewsClassification, s2s - IsiZuluNewsClassification, s2s BitextMining - BornholmBitextMining, s2s - BibleNLPBitextMining, s2s, multilingual 1656 / 1656 Subsets - BUCC.v2, s2s, multilingual 4 / 4 Subsets - DiaBlaBitextMining, s2s, multilingual 2 / 2 Subsets - FloresBitextMining, s2s, multilingual 41412 / 41412 Subsets - IN22GenBitextMining, s2s, multilingual 506 / 506 Subsets - IndicGenBenchFloresBitextMining, s2s, multilingual 58 / 58 Subsets - NollySentiBitextMining, s2s, multilingual 4 / 4 Subsets - NorwegianCourtsBitextMining, s2s - NTREXBitextMining, s2s, multilingual 1916 / 1916 Subsets - NusaTranslationBitextMining, s2s, multilingual 11 / 11 Subsets - NusaXBitextMining, s2s, multilingual 11 / 11 Subsets - Tatoeba, s2s, multilingual 112 / 112 Subsets MultilabelClassification - KorHateSpeechMLClassification, s2s - MalteseNewsClassification, s2s - MultiEURLEXMultilabelClassification, p2p, multilingual 23 / 23 Subsets - 
BrazilianToxicTweetsClassification, s2s - CEDRClassification, s2s InstructionRetrieval - Core17InstructionRetrieval, s2p - News21InstructionRetrieval, s2p - Robust04InstructionRetrieval, s2p Retrieval - StackOverflowQA, p2p - TwitterHjerneRetrieval, p2p - AILAStatutes, p2p - ArguAna, s2p - HagridRetrieval, s2p - LegalBenchCorporateLobbying, s2p - LEMBPasskeyRetrieval, s2p - SCIDOCS, s2p - SpartQA, s2s - TempReasonL1, s2s - TRECCOVID, s2p - WinoGrande, s2s - BelebeleRetrieval, s2p, multilingual 376 / 376 Subsets - MLQARetrieval, s2p, multilingual 49 / 49 Subsets - StatcanDialogueDatasetRetrieval, s2p, multilingual 2 / 2 Subsets - WikipediaRetrievalMultilingual, s2p, multilingual 16 / 16 Subsets - CovidRetrieval, s2p STS - GermanSTSBenchmark, s2s - SICK-R, s2s - STS12, s2s - STS13, s2s - STS14, s2s - STS15, s2s - STSBenchmark, s2s - FaroeseSTS, s2s - FinParaSTS, s2s - JSICK, s2s - IndicCrosslingualSTS, s2s, multilingual 12 / 12 Subsets - SemRel24STS, s2s, multilingual 12 / 12 Subsets - STS17, s2s, multilingual 11 / 11 Subsets - STS22.v2, p2p, multilingual 18 / 18 Subsets - STSES, s2s - STSB, s2s Clustering - WikiCitiesClustering, p2p - MasakhaNEWSClusteringS2S, s2s, multilingual 16 / 16 Subsets - RomaniBibleClustering, p2p - ArXivHierarchicalClusteringP2P, p2p - ArXivHierarchicalClusteringS2S, p2p - BigPatentClustering.v2, p2p - BiorxivClusteringP2P.v2, p2p - MedrxivClusteringP2P.v2, p2p - StackExchangeClustering.v2, s2s - AlloProfClusteringS2S.v2, s2s - HALClusteringS2S.v2, s2s - SIB200ClusteringS2S, s2s, multilingual 197 / 197 Subsets - WikiClusteringP2P.v2, p2p, multilingual 14 / 14 Subsets - SNLHierarchicalClusteringP2P, p2p - PlscClusteringP2P.v2, s2s - SwednClusteringP2P, p2p - CLSClusteringP2P.v2, p2p Reranking - WebLINXCandidatesReranking, p2p - AlloprofReranking, s2p - VoyageMMarcoReranking, s2s - WikipediaRerankingMultilingual, s2p, multilingual 16 / 16 Subsets - RuBQReranking, s2p - T2Reranking, s2s ``` </details> 
![image](https://github.com/user-attachments/assets/3d05c14f-8f40-4239-83bc-02afd7374292) ## Checklist <!-- Please do not delete this --> - [x] Run tests locally to make sure nothing is broken using `make test`. - [x] Run the formatter to format the code using `make lint`. ---------- </request> There are several new functions or classes that need to be implemented, using the definitions below: <<NEW DEFINITIONS>> There are several new functions or classes that need to be implemented, using the definitions below: <definitions> [start of new definitions in mteb/cli.py] (definition of available_benchmarks:) def available_benchmarks(args: argparse.Namespace) -> None: (definition of add_available_benchmarks_parser:) def add_available_benchmarks_parser(subparsers) -> None: [end of new definitions in mteb/cli.py] [start of new definitions in mteb/evaluation/MTEB.py] (definition of MTEB.mteb_benchmarks:) def mteb_benchmarks(self): """Get all benchmarks available in the MTEB.""" [end of new definitions in mteb/evaluation/MTEB.py] </definitions> Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly. <<END>>
b580b95fc91a7e7e675d27c3ae9a9df64ddad169
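The mteb patch above wires the new `available_benchmarks` subcommand into the CLI through an `add_*_parser` helper that registers a subparser and binds its handler with `set_defaults(func=...)`; `main()` then dispatches via `args.func(args)`. A minimal self-contained sketch of that dispatch pattern follows — the benchmark names and handler body are placeholders for illustration, not mteb's real registry:

```python
# Sketch of the argparse subcommand-registration pattern used by the patch:
# one add_*_parser function per subcommand, each binding a handler through
# set_defaults(func=...); main() dispatches with args.func(args).
import argparse

# Placeholder registry standing in for mteb.get_benchmarks().
BENCHMARKS = {
    "MTEB(eng)": ["Banking77Classification", "SummEval"],
    "MTEB(rus)": ["TERRa", "RuBQRetrieval"],
}

def available_benchmarks(args):
    # Render each benchmark header followed by its tasks.
    lines = []
    for name, tasks in BENCHMARKS.items():
        lines.append(name)
        lines.extend(f"    - {t}" for t in tasks)
    return "\n".join(lines)

def add_available_benchmarks_parser(subparsers):
    parser = subparsers.add_parser(
        "available_benchmarks", help="List the available benchmarks")
    parser.set_defaults(func=available_benchmarks)

def main(argv):
    parser = argparse.ArgumentParser(prog="mteb-sketch")
    subparsers = parser.add_subparsers(dest="command", required=True)
    add_available_benchmarks_parser(subparsers)
    args = parser.parse_args(argv)
    return args.func(args)

print(main(["available_benchmarks"]))
```

This mirrors why the record's test only needs to invoke `mteb available_benchmarks` and grep the output for a benchmark name: the subcommand is just another registered handler.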
google-deepmind__optax-1063
1,063
google-deepmind/optax
null
ee63e4500fbd412d778a1ea143237007e45a5628
2024-09-17T21:50:23Z
diff --git a/docs/api/utilities.rst b/docs/api/utilities.rst index b199697f6..c792944fd 100644 --- a/docs/api/utilities.rst +++ b/docs/api/utilities.rst @@ -101,6 +101,7 @@ Tree tree_mul tree_ones_like tree_random_like + tree_split_key_like tree_scalar_mul tree_set tree_sub @@ -153,6 +154,10 @@ Tree ones like ~~~~~~~~~~~~~~ .. autofunction:: tree_ones_like +Tree with random keys +~~~~~~~~~~~~~~~~~~~~~~~ +.. autofunction:: tree_split_key_like + Tree with random values ~~~~~~~~~~~~~~~~~~~~~~~ .. autofunction:: tree_random_like diff --git a/optax/tree_utils/__init__.py b/optax/tree_utils/__init__.py index e89aef861..44e2e195d 100644 --- a/optax/tree_utils/__init__.py +++ b/optax/tree_utils/__init__.py @@ -17,6 +17,7 @@ # pylint: disable=g-importing-member from optax.tree_utils._casting import tree_cast from optax.tree_utils._random import tree_random_like +from optax.tree_utils._random import tree_split_key_like from optax.tree_utils._state_utils import NamedTupleKey from optax.tree_utils._state_utils import tree_get from optax.tree_utils._state_utils import tree_get_all_with_path diff --git a/optax/tree_utils/_random.py b/optax/tree_utils/_random.py index 33783b2b1..6b4fab307 100644 --- a/optax/tree_utils/_random.py +++ b/optax/tree_utils/_random.py @@ -21,7 +21,7 @@ from jax import tree_util as jtu -def _tree_rng_keys_split( +def tree_split_key_like( rng_key: chex.PRNGKey, target_tree: chex.ArrayTree ) -> chex.ArrayTree: """Split keys to match structure of target tree. @@ -67,7 +67,7 @@ def tree_random_like( .. versionadded:: 0.2.1 """ - keys_tree = _tree_rng_keys_split(rng_key, target_tree) + keys_tree = tree_split_key_like(rng_key, target_tree) return jtu.tree_map( lambda l, k: sampler(k, l.shape, dtype or l.dtype), target_tree,
diff --git a/optax/tree_utils/_random_test.py b/optax/tree_utils/_random_test.py index 25ea580aa..077ca678a 100644 --- a/optax/tree_utils/_random_test.py +++ b/optax/tree_utils/_random_test.py @@ -22,6 +22,7 @@ import jax.numpy as jnp import jax.random as jrd import jax.tree_util as jtu +import numpy as np from optax import tree_utils as otu # We consider samplers with varying input dtypes, we do not test all possible @@ -48,6 +49,19 @@ def get_variable(type_var: str): class RandomTest(chex.TestCase): + def test_tree_split_key_like(self): + rng_key = jrd.PRNGKey(0) + tree = {'a': jnp.zeros(2), 'b': {'c': [jnp.ones(3), jnp.zeros([4, 5])]}} + keys_tree = otu.tree_split_key_like(rng_key, tree) + + with self.subTest('Test structure matches'): + self.assertEqual(jtu.tree_structure(tree), jtu.tree_structure(keys_tree)) + + with self.subTest('Test random key split'): + fst = jnp.stack(jtu.tree_flatten(keys_tree)[0]) + snd = jrd.split(rng_key, jtu.tree_structure(tree).num_leaves) + np.testing.assert_array_equal(fst, snd) + @parameterized.product( _SAMPLER_DTYPES, type_var=['real_array', 'complex_array', 'pytree'],
diff --git a/docs/api/utilities.rst b/docs/api/utilities.rst index b199697f6..c792944fd 100644 --- a/docs/api/utilities.rst +++ b/docs/api/utilities.rst @@ -101,6 +101,7 @@ Tree tree_mul tree_ones_like tree_random_like + tree_split_key_like tree_scalar_mul tree_set tree_sub @@ -153,6 +154,10 @@ Tree ones like ~~~~~~~~~~~~~~ .. autofunction:: tree_ones_like +Tree with random keys +~~~~~~~~~~~~~~~~~~~~~~~ +.. autofunction:: tree_split_key_like + Tree with random values ~~~~~~~~~~~~~~~~~~~~~~~ .. autofunction:: tree_random_like
[ { "components": [ { "doc": "Split keys to match structure of target tree.\n\nArgs:\n rng_key: the key to split.\n target_tree: the tree whose structure to match.\n\nReturns:\n a tree of rng keys.", "lines": [ 24, 38 ], "name": "tree_split_key_like", "signature": "def tree_split_key_like( rng_key: chex.PRNGKey, target_tree: chex.ArrayTree ) -> chex.ArrayTree:", "type": "function" } ], "file": "optax/tree_utils/_random.py" } ]
[ "optax/tree_utils/_random_test.py::RandomTest::test_tree_split_key_like" ]
[ "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like0", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like1", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like10", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like11", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like12", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like13", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like14", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like2", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like3", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like4", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like5", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like6", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like7", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like8", "optax/tree_utils/_random_test.py::RandomTest::test_tree_random_like9" ]
This is a feature request which requires a new feature to add in the code repository. <<NEW FEATURE REQUEST>> <request> Add optax.tree_utils.tree_random_split. This exposes the formerly private function `_tree_rng_keys_split` in [`optax/tree_utils/_random.py`](https://github.com/google-deepmind/optax/blob/main/optax/tree_utils/_random.py) to the public API. I've found this to be a useful helper function for manipulation of random trees, and intend to use it for future PRs. ---------- Thanks for doing that, it's a good idea, happy to know it can be useful. (sorry for the incremental review, didn't mean to do it like that). </request> There are several new functions or classes that need to be implemented, using the definitions below: <<NEW DEFINITIONS>> There are several new functions or classes that need to be implemented, using the definitions below: <definitions> [start of new definitions in optax/tree_utils/_random.py] (definition of tree_split_key_like:) def tree_split_key_like( rng_key: chex.PRNGKey, target_tree: chex.ArrayTree ) -> chex.ArrayTree: """Split keys to match structure of target tree. Args: rng_key: the key to split. target_tree: the tree whose structure to match. Returns: a tree of rng keys.""" [end of new definitions in optax/tree_utils/_random.py] </definitions> Please note that in addition to the newly added components mentioned above, you also need to make other code changes to ensure that the new feature can be executed properly. <<END>>
1e08bccf195ac54e7d9d766eb5e69345bf0e3230
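The optax record above promotes the private `_tree_rng_keys_split` to the public `tree_split_key_like`: split one PRNG key into one subkey per leaf and return the subkeys in the target tree's structure. The following is a dependency-free sketch of that structural idea, with a plain integer and `hash` standing in for a `jax.random` key and `jrd.split` — assumed stand-ins for illustration, not optax's code:

```python
# Pure-Python sketch of tree_split_key_like's structure: count the leaves of
# the target tree, derive one subkey per leaf from a single seed, then rebuild
# a tree of subkeys with the same nesting. dicts/lists play the role of pytrees.

def tree_leaves(tree):
    # Flatten a nested dict/list tree into its leaves (depth-first).
    if isinstance(tree, dict):
        return [leaf for k in sorted(tree) for leaf in tree_leaves(tree[k])]
    if isinstance(tree, list):
        return [leaf for item in tree for leaf in tree_leaves(item)]
    return [tree]

def tree_unflatten_like(tree, leaves):
    # Rebuild the nesting of `tree`, consuming `leaves` in flattening order.
    it = iter(leaves)
    def rebuild(node):
        if isinstance(node, dict):
            return {k: rebuild(node[k]) for k in sorted(node)}
        if isinstance(node, list):
            return [rebuild(item) for item in node]
        return next(it)
    return rebuild(tree)

def tree_split_key_like(key, target_tree):
    n = len(tree_leaves(target_tree))
    # Stand-in for jax.random.split(key, n): n deterministic subkeys.
    subkeys = [hash((key, i)) for i in range(n)]
    return tree_unflatten_like(target_tree, subkeys)

tree = {"a": 0.0, "b": {"c": [1.0, 2.0]}}
keys = tree_split_key_like(42, tree)
print(keys)  # same nesting as `tree`, one subkey per leaf
```

In the real function the subkeys come from `jax.random.split` and the rebuild is `jax.tree_util.unflatten`-style, which is exactly what the record's test checks by comparing the flattened key tree against `jrd.split(rng_key, num_leaves)`.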