| repo (string) | instance_id (string) | base_commit (string) | fixed_commit (string) | patch (string) | test_patch (string) | problem_statement (string) | hints_text (string) | created_at (int64) | labels (list) | category (string) | edit_functions (list) | added_functions (list) | edit_functions_length (int64) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
py-pdf/pypdf | py-pdf__pypdf-2644 | 6226d6663eac5410416484da9348ab2c5fd71973 | null | diff --git a/pypdf/filters.py b/pypdf/filters.py
index d62cf7842..d573ae30e 100644
--- a/pypdf/filters.py
+++ b/pypdf/filters.py
@@ -80,14 +80,19 @@ def decompress(data: bytes) -> bytes:
try:
return zlib.decompress(data)
except zlib.error:
- d = zlib.decompressobj(zlib.MAX_WBITS | 32)
- ... | diff --git a/tests/bench.py b/tests/bench.py
index dcfc30a9b..ea5597f88 100644
--- a/tests/bench.py
+++ b/tests/bench.py
@@ -227,3 +227,15 @@ def test_image_new_property_performance(benchmark):
data = BytesIO(get_data_from_url(url, name=name))
benchmark(image_new_property, data)
+
+
+def image_extraction(da... | Performance Bug: Reading large compressed images takes huge time to process
Reading a compressed image takes (>10minutes), if the image is large-ish (>250kb)
## Environment
Which environment were you using when you encountered the problem?
```bash
$ python -m platform
Linux-5.15.146.1-microsoft-standard-WSL2... | 1,715,679,264,000 | [] | Performance Issue | [
"pypdf/filters.py:decompress"
] | [] | 1 | 371 | |
huggingface/trl | huggingface__trl-2450 | 9c5388b69e0842f76edc46a2ff9d0b51e1db4337 | null | diff --git a/trl/trainer/utils.py b/trl/trainer/utils.py
index d1cc3a0e9d..1122086ca9 100644
--- a/trl/trainer/utils.py
+++ b/trl/trainer/utils.py
@@ -274,7 +274,7 @@ def __call__(self, examples: list[dict[str, Any]]) -> dict[str, torch.Tensor]:
if "input_ids" not in example:
message = exa... | diff --git a/tests/test_utils.py b/tests/test_utils.py
index 4d26819058..87404070a8 100644
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -205,54 +205,74 @@ def setUp(self):
ignore_index=self.ignore_index,
)
- # See https://github.com/huggingface/trl/pull/2287#discussion_r1856594421
-... | DataCollatorForChatML unexpected generation prompt
### System Info
- Platform: macOS-15.1.1-arm64-arm-64bit
- Python version: 3.10.15
- PyTorch version: 2.4.1
- CUDA device(s): not available
- Transformers version: 4.45.2
- Accelerate version: 1.0.1
- Accelerate config: not found
- Datasets version: 3.0.1
- HF... | 1,733,578,241,000 | [
"🐛 bug"
] | Bug Report | [
"trl/trainer/utils.py:DataCollatorForChatML.__call__"
] | [] | 1 | 372 | |
huggingface/trl | huggingface__trl-2398 | 9368dccef68dcbaffe847cba3fc73705755dd0b4 | null | diff --git a/trl/models/modeling_value_head.py b/trl/models/modeling_value_head.py
index 0797794013..592879ae3e 100644
--- a/trl/models/modeling_value_head.py
+++ b/trl/models/modeling_value_head.py
@@ -69,9 +69,6 @@ class AutoModelForCausalLMWithValueHead(PreTrainedModelWrapper):
Class attributes:
- **tr... | diff --git a/tests/test_modeling_value_head.py b/tests/test_modeling_value_head.py
index ddc4eb850c..be4932e62f 100644
--- a/tests/test_modeling_value_head.py
+++ b/tests/test_modeling_value_head.py
@@ -265,14 +265,6 @@ def test_generate(self, model_name):
# Just check if the generation works
_ = mode... | Still not supporting for ChatGLM3 maybe
### System Info
trl 0.12.1
transformers 4.46.2
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
... | 1,732,633,882,000 | [] | Bug Report | [
"trl/models/modeling_value_head.py:AutoModelForCausalLMWithValueHead.__init__"
] | [] | 1 | 373 | |
rq/rq | rq__rq-2138 | bb7f34053730da924486f97baa0d34fee9b1918b | null | diff --git a/rq/job.py b/rq/job.py
index 918f7fdc1..7a964aef0 100644
--- a/rq/job.py
+++ b/rq/job.py
@@ -729,6 +729,10 @@ def set_id(self, value: str) -> None:
"""
if not isinstance(value, str):
raise TypeError('id must be a string, not {0}'.format(type(value)))
+
+ if ":" in value... | diff --git a/tests/test_job.py b/tests/test_job.py
index 36c356e50..9f93cfbf8 100644
--- a/tests/test_job.py
+++ b/tests/test_job.py
@@ -900,6 +900,13 @@ def test_create_job_with_id(self):
self.assertRaises(TypeError, queue.enqueue, fixtures.say_hello, job_id=1234)
+ def test_create_job_with_invalid_id(... | Dangling job occurs when job_id contains the character ":"
Hi,
After updating to v2.0.0, job_id cannot contain the character ":"; otherwise, the job is never picked up or started by a worker.
I'll investigate this over the next few days, but I suspect that [PR #1964](https://github.com/rq/rq/pull/1964) might rest... | Thanks, mind opening a PR when you're free? | 1,730,770,766,000 | [] | Bug Report | [
"rq/job.py:Job.set_id"
] | [] | 1 | 375 |
cython/cython | cython__cython-6508 | 033ae2eb614f9fcb526c6049a751706df561db88 | null | diff --git a/Cython/Compiler/FusedNode.py b/Cython/Compiler/FusedNode.py
index 1b0ceb72ca0..9cb5f4e9059 100644
--- a/Cython/Compiler/FusedNode.py
+++ b/Cython/Compiler/FusedNode.py
@@ -643,11 +643,25 @@ def _fused_signature_index(self, pyx_code):
Generate Cython code for constructing a persistent nested dictio... | diff --git a/tests/run/fused_def.pyx b/tests/run/fused_def.pyx
index 96fcb93ea2d..2da9ddb89cb 100644
--- a/tests/run/fused_def.pyx
+++ b/tests/run/fused_def.pyx
@@ -461,3 +461,53 @@ cdef class HasBound:
func = bind_me[float]
func_fused = bind_me
+
+
+
+ctypedef fused IntOrFloat1:
+ int
+ float
+
+ctyp... | [BUG] No matching signature found on free-threaded Python code generation
### Describe the bug
Whilst testing SciPy for issues against free-threaded CPython 3.13 https://github.com/scipy/scipy/pull/21496, we have observed several instances of errors on functions that fuse dtypes, for some reason, these errors are bein... | There's definitely some caching inside the fused dispatch function (so it's quicker to match on subsequent calls). On a quick look, I'm not convinced that caching is currently thread-safe. I'll look into it.
(There's also some code that imports numpy and gets the dtype. That's less likely to be an issue, but probab... | 1,732,362,796,000 | [
"defect",
"Code Generation",
"freethreading CPython"
] | Bug Report | [
"Cython/Compiler/FusedNode.py:FusedCFuncDefNode._fused_signature_index",
"Cython/Compiler/FusedNode.py:FusedCFuncDefNode.make_fused_cpdef"
] | [] | 2 | 376 |
tobymao/sqlglot | tobymao__sqlglot-4524 | 992f6e9fc867aa5ad60a255be593b8982a0fbcba | null | diff --git a/sqlglot/optimizer/qualify_columns.py b/sqlglot/optimizer/qualify_columns.py
index ceb9ceb44f..871bb365ba 100644
--- a/sqlglot/optimizer/qualify_columns.py
+++ b/sqlglot/optimizer/qualify_columns.py
@@ -898,10 +898,22 @@ def get_source_columns(self, name: str, only_visible: bool = False) -> t.Sequenc
... | diff --git a/tests/fixtures/optimizer/qualify_columns.sql b/tests/fixtures/optimizer/qualify_columns.sql
index eeaf8b3555..ecb6eee5ce 100644
--- a/tests/fixtures/optimizer/qualify_columns.sql
+++ b/tests/fixtures/optimizer/qualify_columns.sql
@@ -205,6 +205,30 @@ SELECT x.a + 1 AS i, missing_column AS missing_column FR... | `sqlglot.optimizer.qualify.qualify` fails when using `CONNECT BY` + `level` from a CTE
## Fully reproducible code snippet
In Snowflake, the `SELECT FROM <data_source> START WITH ... CONNECT BY` construct allows a `<level_expression>` by basically allowing to select a `level` column in addition to the existing columns:... | 1,734,360,638,000 | [] | Bug Report | [
"sqlglot/optimizer/qualify_columns.py:Resolver.get_source_columns"
] | [
"sqlglot/optimizer/qualify_columns.py:Resolver._get_source_pseudocolumns"
] | 1 | 377 | |
tobymao/sqlglot | tobymao__sqlglot-4523 | 98906d4520a0c582a0534384ee3d0c1449846ee6 | null | diff --git a/sqlglot/dialects/tsql.py b/sqlglot/dialects/tsql.py
index 856142edc4..1acc5ca8a7 100644
--- a/sqlglot/dialects/tsql.py
+++ b/sqlglot/dialects/tsql.py
@@ -232,7 +232,7 @@ def _builder(args: t.List) -> E:
if start_date and start_date.is_number:
# Numeric types are valid DATETIME values
... | diff --git a/tests/dialects/test_tsql.py b/tests/dialects/test_tsql.py
index 61365994bd..e8cd69648b 100644
--- a/tests/dialects/test_tsql.py
+++ b/tests/dialects/test_tsql.py
@@ -1579,6 +1579,11 @@ def test_date_diff(self):
},
)
+ self.validate_identity(
+ "SELECT DATEADD(DAY, ... | TSQL fails on parsing DATEDIFF function with literals
Hello,
I just discovered that this code generates an error while parsing DATEDIFF second argument:
```
import sqlglot
statement = """
SELECT DATEADD(DAY, DATEDIFF(DAY, -3, CREATION_TIME_NEW), '08:00:00')
FROM y
"""
ast = sqlglot.parse_one(statement, 'tsql'... | 1,734,354,762,000 | [] | Bug Report | [
"sqlglot/dialects/tsql.py:_build_date_delta"
] | [] | 1 | 378 | |
tobymao/sqlglot | tobymao__sqlglot-4519 | e15cd0be1c66e0e72d9815575fa9b210e66cf7c9 | null | diff --git a/sqlglot/parser.py b/sqlglot/parser.py
index fbbc8f14d4..48b59ecfaa 100644
--- a/sqlglot/parser.py
+++ b/sqlglot/parser.py
@@ -4620,14 +4620,14 @@ def _parse_interval(self, match_interval: bool = True) -> t.Optional[exp.Add | e
this = exp.Literal.string(this.to_py())
elif this and this... | diff --git a/tests/dialects/test_postgres.py b/tests/dialects/test_postgres.py
index 204151ed38..acdb2d4a83 100644
--- a/tests/dialects/test_postgres.py
+++ b/tests/dialects/test_postgres.py
@@ -71,6 +71,9 @@ def test_postgres(self):
self.validate_identity("EXEC AS myfunc @id = 123", check_command_warning=True... | interval parsing for postgresql
Hi! I have some problems with parsing postgresql statements that has more that one date units in intervals...
lib version 25.34.0
```python3
import sqlglot
sql = """
select *
from table
where
some_column >= (current_date + interval '1 day 1 hour')
and some_another_col... | Shall I check this?
One more example that has the expected result, but strange (for postgres) syntax
```sql
select *
from table
where
some_column >= current_date + (interval '1 day 1 hour')
and some_another_column is True;
```
@ankur334 Feel free to take a stab at it if that's what you mean, thank you!
... | 1,734,102,016,000 | [] | Bug Report | [
"sqlglot/parser.py:Parser._parse_interval"
] | [] | 1 | 379 |
tobymao/sqlglot | tobymao__sqlglot-4515 | 5d3ee4cac1c5c9e45cbf6263c32c87fda78f9854 | null | diff --git a/sqlglot/optimizer/qualify_columns.py b/sqlglot/optimizer/qualify_columns.py
index 4880a615c6..ca5684407c 100644
--- a/sqlglot/optimizer/qualify_columns.py
+++ b/sqlglot/optimizer/qualify_columns.py
@@ -75,7 +75,7 @@ def qualify_columns(
if not schema.empty and expand_alias_refs:
_expa... | diff --git a/tests/fixtures/optimizer/qualify_columns.sql b/tests/fixtures/optimizer/qualify_columns.sql
index 2640145bed..735c71a5e7 100644
--- a/tests/fixtures/optimizer/qualify_columns.sql
+++ b/tests/fixtures/optimizer/qualify_columns.sql
@@ -190,6 +190,10 @@ SELECT x._col_0 AS _col_0, x._col_1 AS _col_1 FROM (VALU... | `sqlglot.optimizer.qualify_columns.qualify_columns` fails on `table.*` inside `UNION ALL`
### Explanation
When using `qualify_columns` on a SQL query that contains a `UNION ALL` and one of the statement uses the `SELECT table.*` notation, `qualify_columns` raises an `sqlglot.errors.OptimizeError: Unknown table: p` e... | 1,734,017,130,000 | [] | Bug Report | [
"sqlglot/optimizer/qualify_columns.py:qualify_columns"
] | [] | 1 | 380 | |
tobymao/sqlglot | tobymao__sqlglot-4480 | 41c6d24c99e130b3c8e35e348a25a59e9e3d5553 | null | diff --git a/sqlglot/generator.py b/sqlglot/generator.py
index 842df5a753..9aee03c1ca 100644
--- a/sqlglot/generator.py
+++ b/sqlglot/generator.py
@@ -2399,18 +2399,21 @@ def ordered_sql(self, expression: exp.Ordered) -> str:
f"'{nulls_sort_change.strip()}' translation not supported in window funct... | diff --git a/tests/dialects/test_bigquery.py b/tests/dialects/test_bigquery.py
index 80a5dcc387..ec16dba243 100644
--- a/tests/dialects/test_bigquery.py
+++ b/tests/dialects/test_bigquery.py
@@ -2131,6 +2131,16 @@ def test_null_ordering(self):
},
)
+ self.validate_all(
+ ... | Wrong result for transpiling `OVER (ORDER BY .. )` from `trino` to `bigquery`
**Fully reproducible code snippet**
It can be reproduced by the following code:
```python
import sqlglot
sql = """
WITH t1 as (
select 1 f1, 2 f2 union all select 2 f1, 4 f2 union all select 3 f1, 6 f2
)
select sum(f1) over (orde... | 1,733,395,877,000 | [] | Bug Report | [
"sqlglot/generator.py:Generator.ordered_sql"
] | [] | 1 | 382 | |
tobymao/sqlglot | tobymao__sqlglot-4434 | 38e2e19ac3e20224dc07128994a47340aa56e635 | null | diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py
index 1d2b246e5d..350f7773b5 100644
--- a/sqlglot/dialects/snowflake.py
+++ b/sqlglot/dialects/snowflake.py
@@ -465,6 +465,12 @@ class Parser(parser.Parser):
PROPERTY_PARSERS = {
**parser.Parser.PROPERTY_PARSERS,
... | diff --git a/tests/dialects/test_clickhouse.py b/tests/dialects/test_clickhouse.py
index 19b3ce3934..65b12bbd58 100644
--- a/tests/dialects/test_clickhouse.py
+++ b/tests/dialects/test_clickhouse.py
@@ -974,11 +974,11 @@ def test_ddl(self):
amount Float64
)
PRIMARY KEY (id)
-SOURCE(CLICKHOUSE(
+SOURCE (CLICKHOUSE(... | Support Snowflake WITH TAG syntax
**Is your feature request related to a problem? Please describe.**
This library does not support snowflake's WITH TAG syntax for setting tags while creating an object.
For example,
```
sqlglot.parse_one("CREATE VIEW my_view WITH TAG (my_tag = 'tag')", dialect="snowflake")
'CREAT... | This is low priority for us right now, but we'll be happy to accept a well-crafted & tested PR. Shouldn't be too hard to support. | 1,732,138,295,000 | [] | Feature Request | [
"sqlglot/generator.py:Generator.dictproperty_sql",
"sqlglot/generator.py:Generator.dictsubproperty_sql",
"sqlglot/parser.py:Parser._parse_dict_property"
] | [
"sqlglot/parser.py:Parser._parse_key_value_list"
] | 3 | 385 |
tobymao/sqlglot | tobymao__sqlglot-4415 | 122ef5f41c4e29347026a81e6f6460ccf8e910ed | null | diff --git a/sqlglot/parser.py b/sqlglot/parser.py
index 5fa7d1ef13..6e7b21a6cc 100644
--- a/sqlglot/parser.py
+++ b/sqlglot/parser.py
@@ -5109,9 +5109,8 @@ def _parse_column_ops(self, this: t.Optional[exp.Expression]) -> t.Optional[exp.
else:
field = self._parse_field(any_token=True, anon... | diff --git a/tests/dialects/test_snowflake.py b/tests/dialects/test_snowflake.py
index 515a07c4f4..157947df46 100644
--- a/tests/dialects/test_snowflake.py
+++ b/tests/dialects/test_snowflake.py
@@ -2250,3 +2250,13 @@ def test_grant(self):
self.validate_identity(
"GRANT ALL PRIVILEGES ON FUNCTION ... | User defined function recognized as column (Snowflake dialect)
This query:
`select COL1,COL2 from some_table,TABLE(SOME_DB.SOME_SCHEMA.TABLE_FUNC(value1, value2) over (PARTITION BY value1))`
`p = sqlglot.parse_one(query, dialect=sqlglot.Dialects.SNOWFLAKE)`
creates this ast:
`Select(
expressions=[
Column(
... | 1,731,927,378,000 | [] | Bug Report | [
"sqlglot/parser.py:Parser._parse_column_ops"
] | [] | 1 | 387 | |
tobymao/sqlglot | tobymao__sqlglot-4390 | e7b67e0c280179188ce1bca650735978b758dca1 | null | diff --git a/sqlglot/parser.py b/sqlglot/parser.py
index a2f118fb5c..5fa7d1ef13 100644
--- a/sqlglot/parser.py
+++ b/sqlglot/parser.py
@@ -1738,9 +1738,13 @@ def _parse_drop(self, exists: bool = False) -> exp.Drop | exp.Command:
concurrently = self._match_text_seq("CONCURRENTLY")
if_exists = exists ... | diff --git a/tests/test_parser.py b/tests/test_parser.py
index ba1240c4f7..b60d719341 100644
--- a/tests/test_parser.py
+++ b/tests/test_parser.py
@@ -879,3 +879,8 @@ def test_odbc_date_literals(self):
expr = parse_one(sql)
self.assertIsInstance(expr, exp.Insert)
self.assertIsInst... | Simple query, wrong tables list
Very simple query, wrong tables list! it considers `activity_id` as a table!
```python
sql='ALTER TABLE ride DROP COLUMN activity_id'
list(sqlglot.parse_one(sql, read='mysql').find_all(sqlglot.exp.Table))
# list(sqlglot.parse_one(sql, read='mysql').find_all(sqlglot.exp.Table))
# ... | 1,731,576,547,000 | [] | Bug Report | [
"sqlglot/parser.py:Parser._parse_drop"
] | [] | 1 | 388 | |
tobymao/sqlglot | tobymao__sqlglot-4366 | 79c675a49fb44a6a7a97ea0de79822d8571724be | null | diff --git a/sqlglot/dialects/duckdb.py b/sqlglot/dialects/duckdb.py
index bf1abe2f1d..a183a883f5 100644
--- a/sqlglot/dialects/duckdb.py
+++ b/sqlglot/dialects/duckdb.py
@@ -156,18 +156,24 @@ def _struct_sql(self: DuckDB.Generator, expression: exp.Struct) -> str:
# BigQuery allows inline construction such as "S... | diff --git a/tests/dialects/test_duckdb.py b/tests/dialects/test_duckdb.py
index 5b3b2a4ff4..3d4fe9cc4a 100644
--- a/tests/dialects/test_duckdb.py
+++ b/tests/dialects/test_duckdb.py
@@ -1154,6 +1154,7 @@ def test_cast(self):
self.validate_identity("CAST(x AS BINARY)", "CAST(x AS BLOB)")
self.validate... | BUG(duckdb): Compiling STRUCTs throws away field name info, which is sometimes needed
**Fully reproducible code snippet**
```python
import sqlglot
print(sqlglot.__version__)
# 25.28.1.dev12
inp = """SELECT CAST({'i':1, 's':'foo'} AS STRUCT("s" TEXT, "i" INT));"""
e = sqlglot.parse_one(inp, "duckdb")
actual = e... | 1,731,317,324,000 | [] | Bug Report | [
"sqlglot/dialects/duckdb.py:_struct_sql"
] | [] | 1 | 390 | |
tobymao/sqlglot | tobymao__sqlglot-3901 | e0cd7e20298f84dc245676ecded6f174cf1c9c3e | null | diff --git a/sqlglot/parser.py b/sqlglot/parser.py
index c6b5198977..11a58346aa 100644
--- a/sqlglot/parser.py
+++ b/sqlglot/parser.py
@@ -3267,8 +3267,10 @@ def _parse_join(
kwargs["on"] = self._parse_assignment()
elif self._match(TokenType.USING):
kwargs["using"] = self._parse_using... | diff --git a/tests/test_parser.py b/tests/test_parser.py
index f06f1e11b5..ff82e08cc5 100644
--- a/tests/test_parser.py
+++ b/tests/test_parser.py
@@ -699,77 +699,19 @@ def test_pivot_columns(self):
def test_parse_nested(self):
now = time.time()
- query = parse_one(
- """
- ... | Parsing APPLY doubles execution time per APPLY statement
SQLGlot versions: tested with 25.9.0 and 25.8.1
Dialect: "tsql"
Description: When parsing SQL using an APPLY statement, each successive APPLY seems to double parsing time.
I was parsing a (really badly written) SQL query and noticed my code sort of just s... | Also tested this using a LEFT JOIN LATERAL and dialect set to "databricks" - same timing
Thanks for the detailed report, I can reproduce these times. We'll take a look. | 1,723,554,402,000 | [] | Performance Issue | [
"sqlglot/parser.py:Parser._parse_join"
] | [] | 1 | 391 |
tobymao/sqlglot | tobymao__sqlglot-3436 | 30f9d30d8ab3727a43b1e6f363f28631cbfa7f92 | null | diff --git a/sqlglot/parser.py b/sqlglot/parser.py
index 229af18aec..8f5050ee31 100644
--- a/sqlglot/parser.py
+++ b/sqlglot/parser.py
@@ -1678,6 +1678,7 @@ def _parse_sequence_properties(self) -> t.Optional[exp.SequenceProperties]:
index = self._index
while self._curr:
+ self._match(Toke... | diff --git a/tests/dialects/test_snowflake.py b/tests/dialects/test_snowflake.py
index d03412cace..c9150ff3ee 100644
--- a/tests/dialects/test_snowflake.py
+++ b/tests/dialects/test_snowflake.py
@@ -1223,6 +1223,14 @@ def test_ddl(self):
"CREATE OR REPLACE FUNCTION my_udtf(foo BOOLEAN) RETURNS TABLE(col1 A... | Allow comma-separated values for `CREATE SEQUENCE` properties (Snowflake)
Allow comma-separated values for `CREATE SEQUENCE` properties in Snowflake queries.
This query can be parsed successfully (results is an instance of `exp.Create`):
```
query = "CREATE SEQUENCE seq1 WITH START=1 INCREMENT=1 ORDER"
re... | 1,715,204,998,000 | [] | Feature Request | [
"sqlglot/parser.py:Parser._parse_sequence_properties"
] | [] | 1 | 392 | |
tobymao/sqlglot | tobymao__sqlglot-3417 | e1b6483d5e26d556f6a3dd82c6d35f475c189c2b | null | diff --git a/sqlglot/parser.py b/sqlglot/parser.py
index b0a25167ba..51022bbe6b 100644
--- a/sqlglot/parser.py
+++ b/sqlglot/parser.py
@@ -5974,7 +5974,7 @@ def _parse_set_item_assignment(
if kind in ("GLOBAL", "SESSION") and self._match_text_seq("TRANSACTION"):
return self._parse_set_transaction(... | diff --git a/tests/dialects/test_snowflake.py b/tests/dialects/test_snowflake.py
index dae835574b..97b83ab930 100644
--- a/tests/dialects/test_snowflake.py
+++ b/tests/dialects/test_snowflake.py
@@ -10,14 +10,6 @@ class TestSnowflake(Validator):
dialect = "snowflake"
def test_snowflake(self):
- self.... | SET TAG identifier containing dots is parsed as Command
Would love for an identifier containing dots to be parsed as a `SetItem` ie: like an identifier without dots.
Without dots, is parsed as SetItem:
```
❯ sqlglot.parse_one("ALTER TABLE table1 SET TAG foo='baz'", read="snowflake")
AlterTable(
this=Table(
... | 1,715,084,497,000 | [] | Feature Request | [
"sqlglot/parser.py:Parser._parse_set_item_assignment"
] | [] | 1 | 393 | |
tobymao/sqlglot | tobymao__sqlglot-3167 | df4ce17f24bbb16a64172e351f4e27ac74de668a | null | diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py
index 6022132354..0ffe4cf0a3 100644
--- a/sqlglot/dialects/snowflake.py
+++ b/sqlglot/dialects/snowflake.py
@@ -48,10 +48,12 @@ def _builder(args: t.List) -> exp.Func:
return exp.UnixToTime(this=value, scale=seq_get(args, 1)... | diff --git a/tests/dialects/test_snowflake.py b/tests/dialects/test_snowflake.py
index 19a1eb58a9..00e6169ac0 100644
--- a/tests/dialects/test_snowflake.py
+++ b/tests/dialects/test_snowflake.py
@@ -985,6 +985,17 @@ def test_timestamps(self):
"SELECT CAST('2019-02-28' AS DATE) + INTERVAL '1 day, 1 year'",
... | feat(snowflake): Adding support for DATE, TO_DATE, TRY_TO_DATE functions
Fixes #3152
Add support for the mentioned functions by:
- Converting all of them to `exp.TsOrDsToDate` expressions if the format is present _or_ to date casts otherwise
- Adding a new argument `safe` in `exp.TsOrDsToDate` to preserve the r... | the command above works great.
```
>>> import sqlglot
>>> sqlglot.transpile("SELECT try_to_date('01-01-1999', 'MM-DD-YYYY')", read="snowflake", write="duckdb")
SELECT CAST(STRPTIME('01-01-1999', '%m-%d-%Y') AS DATE)
```
but seems this command still not work, when the parameter is a column_name instead of a str... | 1,710,868,748,000 | [] | Feature Request | [
"sqlglot/dialects/snowflake.py:_build_datetime"
] | [] | 1 | 394 |
sphinx-doc/sphinx | sphinx-doc__sphinx-12206 | 6d6feb240fa670597229b7c42de74711cc42a680 | null | diff --git a/sphinx/builders/linkcheck.py b/sphinx/builders/linkcheck.py
index 9178458b140..7d75cac9885 100644
--- a/sphinx/builders/linkcheck.py
+++ b/sphinx/builders/linkcheck.py
@@ -13,7 +13,7 @@
from queue import PriorityQueue, Queue
from threading import Thread
from typing import TYPE_CHECKING, NamedTuple, cast... | diff --git a/tests/roots/test-linkcheck-anchors-ignore-for-url/index.rst b/tests/roots/test-linkcheck-anchors-ignore-for-url/index.rst
index df287b4c425..02969b63e31 100644
--- a/tests/roots/test-linkcheck-anchors-ignore-for-url/index.rst
+++ b/tests/roots/test-linkcheck-anchors-ignore-for-url/index.rst
@@ -1,5 +1,6 @@... | linkcheck performance: downloading page multiple times when checking anchors
### Problem
- If my sphinx documentation contains multiple links with anchors to a web page with multiple anchors, it will download the page multiple times, once per anchor to check
- This scales very badly. If I have many hundreds or thous... | cc @jayaddison | 1,711,393,478,000 | [
"builder:linkcheck"
] | Performance Issue | [
"sphinx/builders/linkcheck.py:HyperlinkAvailabilityCheckWorker._check_uri",
"sphinx/builders/linkcheck.py:contains_anchor"
] | [] | 2 | 395 |
prowler-cloud/prowler | prowler-cloud__prowler-6128 | 9c089756c3cc745f6cecb3aa1f091cf603834f10 | null | diff --git a/prowler/providers/aws/services/iam/iam_rotate_access_key_90_days/iam_rotate_access_key_90_days.py b/prowler/providers/aws/services/iam/iam_rotate_access_key_90_days/iam_rotate_access_key_90_days.py
index 3de61166c5a..225b92381df 100644
--- a/prowler/providers/aws/services/iam/iam_rotate_access_key_90_days/... | diff --git a/tests/providers/aws/services/iam/iam_rotate_access_key_90_days/iam_rotate_access_key_90_days_test.py b/tests/providers/aws/services/iam/iam_rotate_access_key_90_days/iam_rotate_access_key_90_days_test.py
index 57c697062f2..8205cd583c1 100644
--- a/tests/providers/aws/services/iam/iam_rotate_access_key_90_d... | `finding_uid` is not unique in `iam_rotate_access_key_90_days`
### Steps to Reproduce
python prowler.py aws -s iam
### Expected behavior
The program output CSV file with the same finding_uid like that
prowler-aws-iam_rotate_access_key_90_days-<AWS ACCOUNT_ID>-us-east-1-<IAM_USER>
This IAM_USER has 2 access ke... | Hello @banv, that's a great catch!
The Prowler Finding UID has the above format for historical reasons and can't be an UUID since it can't be changed because it'll introduce a breaking change. I think in this case the check needs to be adjusted to add a index for each access key, like `prowler-aws-iam_rotate_access_... | 1,733,864,257,000 | [
"provider/aws",
"was-backported",
"backport-to-v4.6",
"backport-to-v5.0"
] | Bug Report | [
"prowler/providers/aws/services/iam/iam_rotate_access_key_90_days/iam_rotate_access_key_90_days.py:iam_rotate_access_key_90_days.execute"
] | [] | 1 | 396 |
prowler-cloud/prowler | prowler-cloud__prowler-6119 | b984f0423a8ad80fffd5b9dc4853b034f4d5211e | null | diff --git a/prowler/providers/aws/services/wafv2/wafv2_service.py b/prowler/providers/aws/services/wafv2/wafv2_service.py
index 85feed76f62..c867691b771 100644
--- a/prowler/providers/aws/services/wafv2/wafv2_service.py
+++ b/prowler/providers/aws/services/wafv2/wafv2_service.py
@@ -150,6 +150,22 @@ def _get_web_acl(s... | diff --git a/tests/providers/aws/services/wafv2/wafv2_webacl_with_rules/wafv2_webacl_with_rules_test.py b/tests/providers/aws/services/wafv2/wafv2_webacl_with_rules/wafv2_webacl_with_rules_test.py
index 6d476d0f9be..19cde553a77 100644
--- a/tests/providers/aws/services/wafv2/wafv2_webacl_with_rules/wafv2_webacl_with_ru... | False positive on `wafv2_webacl_with_rules` when when ACL provisioned with AWS Firewall Manager
### Steps to Reproduce
1. Depoloy ACL with AWS Firewall Manager including any rule group
2. Run prowler against AWS account where ACL is provisioned with Firewall Manager
3. Prowler will add that ACL as failed because t... | Hello @ivan-morhun , we will review it as soon as possible and get back to you once we have an update.
Thanks for using Prowler 🚀 | 1,733,850,101,000 | [
"provider/aws",
"was-backported",
"backport-to-v4.6",
"backport-to-v5.0"
] | Bug Report | [
"prowler/providers/aws/services/wafv2/wafv2_service.py:WAFv2._get_web_acl"
] | [] | 1 | 397 |
prowler-cloud/prowler | prowler-cloud__prowler-6108 | 38a0d2d740e886f905a047791b22274fe741d60d | null | diff --git a/prowler/providers/aws/services/firehose/firehose_stream_encrypted_at_rest/firehose_stream_encrypted_at_rest.py b/prowler/providers/aws/services/firehose/firehose_stream_encrypted_at_rest/firehose_stream_encrypted_at_rest.py
index fe63558366a..0430b3d1847 100644
--- a/prowler/providers/aws/services/firehose... | diff --git a/tests/providers/aws/services/firehose/firehose_stream_encrypted_at_rest/firehose_stream_encrypted_at_rest_test.py b/tests/providers/aws/services/firehose/firehose_stream_encrypted_at_rest/firehose_stream_encrypted_at_rest_test.py
index d2a1fa40a9a..da00aa89006 100644
--- a/tests/providers/aws/services/fire... | False positive alert on firehose_stream_encrypted_at_rest
### Steps to Reproduce
Hi! There is a bug with the check `firehose_stream_encrypted_at_rest`
### Expected behavior
The check should not alert if the Firehose Server-side encryption (SSE) is enabled
### Actual Result with Screenshots or Logs
!... | Hello @serhii-ciq, we will review it as soon as possible and get back to you once we have an update.
Thanks for using Prowler 🚀 | 1,733,827,600,000 | [
"provider/aws",
"was-backported",
"backport-to-v4.6",
"backport-to-v5.0"
] | Bug Report | [
"prowler/providers/aws/services/firehose/firehose_stream_encrypted_at_rest/firehose_stream_encrypted_at_rest.py:firehose_stream_encrypted_at_rest.execute"
] | [] | 1 | 398 |
prowler-cloud/prowler | prowler-cloud__prowler-6004 | 32d8da213137fc6f050a824e185e9cd717f60465 | null | diff --git a/prowler/providers/azure/services/app/app_minimum_tls_version_12/app_minimum_tls_version_12.py b/prowler/providers/azure/services/app/app_minimum_tls_version_12/app_minimum_tls_version_12.py
index 32020e2ec2c..519335a016e 100644
--- a/prowler/providers/azure/services/app/app_minimum_tls_version_12/app_minim... | diff --git a/tests/providers/azure/services/app/app_minimum_tls_version_12/app_minimum_tls_version_12_test.py b/tests/providers/azure/services/app/app_minimum_tls_version_12/app_minimum_tls_version_12_test.py
index da0271cbc07..12a149015e3 100644
--- a/tests/providers/azure/services/app/app_minimum_tls_version_12/app_m... | Minimum TLS 1.2 fails for Azure Web App when TLS 1.3 is enabled
### Steps to Reproduce
1. prowler azure --subscription-ids xx-88ae-4fe8-901a-16e33871e7c7 xx-5c28-4e32-94df-591a5baedf69 --az-cli-auth
2. Azure Web App with TLS 1.3 enabled
### Expected behavior
in file prowler/providers/azure/services/app/app_minimum_... | 1,733,241,577,000 | [
"provider/azure",
"was-backported",
"backport-to-v4.6",
"backport-to-v5.0"
] | Bug Report | [
"prowler/providers/azure/services/app/app_minimum_tls_version_12/app_minimum_tls_version_12.py:app_minimum_tls_version_12.execute"
] | [] | 1 | 399 | |
prowler-cloud/prowler | prowler-cloud__prowler-5933 | b69a0d51373572e33b3c416f8e73dae80735aeaa | null | diff --git a/prowler/lib/check/checks_loader.py b/prowler/lib/check/checks_loader.py
index 8d5ac97ac42..ff54ccc8d18 100644
--- a/prowler/lib/check/checks_loader.py
+++ b/prowler/lib/check/checks_loader.py
@@ -111,7 +111,7 @@ def load_checks_to_execute(
):
checks_to_execute.add(check_name)
... | diff --git a/tests/lib/check/check_loader_test.py b/tests/lib/check/check_loader_test.py
index 72080fc560f..a122d76ff5f 100644
--- a/tests/lib/check/check_loader_test.py
+++ b/tests/lib/check/check_loader_test.py
@@ -14,11 +14,13 @@
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_SEVERITY = "medium"
S3_BUCKET_LEVEL_PUBLIC_ACCESS... | Prowler scan stuck on cloudtrail_threat_detection checks
### Steps to Reproduce
Command: prowler aws --log-level DEBUG.
Cloud provider: aws
Environment: SecurityAudit and ReadOnlyAccess levels to a single organization. Trying to run it using the security audit credentials.
Error: Program seems to hang up on cloud... | Hi @Micah-Champagne, the CloudTrail threat detection checks analyze the last 24 hours of events in CloudTrail to identify potential enumeration, LLM hijacking, or privilege escalation attacks. This is why it takes some time to retrieve and process the CloudTrail log events.
That said, we’re working on a fix to ensur... | 1,732,719,908,000 | [
"was-backported",
"backport-to-v4.6"
] | Performance Issue | [
"prowler/lib/check/checks_loader.py:load_checks_to_execute"
] | [] | 1 | 400 |
prowler-cloud/prowler | prowler-cloud__prowler-5930 | 03db9d3f74372f71da8dc14a6aca151727b7df55 | null | diff --git a/prowler/lib/check/models.py b/prowler/lib/check/models.py
index 446440b6aae..b060c58e86e 100644
--- a/prowler/lib/check/models.py
+++ b/prowler/lib/check/models.py
@@ -322,8 +322,9 @@ def list_by_service(bulk_checks_metadata: dict, service: str = None) -> set:
checks = set()
if service:... | diff --git a/tests/lib/check/models_test.py b/tests/lib/check/models_test.py
index 7cf470c7e6f..414de90288c 100644
--- a/tests/lib/check/models_test.py
+++ b/tests/lib/check/models_test.py
@@ -32,6 +32,35 @@
Compliance=[],
)
+mock_metadata_lambda = CheckMetadata(
+ Provider="aws",
+ CheckID="awslambda_fun... | Could not run with argurment "-s lambda" or "-s awslambda"
### Steps to Reproduce
export LOG_LEVEL=INFO && python prowler.py aws -s lambda
### Expected behavior
Prowler should scan for awslambda service
### Actual Result with Screenshots or Logs
2024-11-27 13:57:25,877 [File: __main__.py:280] [Module: ... | Hello @banv-dev, we will review this and get back to you once we have an update.
Thanks for using Prowler! | 1,732,714,084,000 | [
"was-backported",
"backport-to-v4.6"
] | Bug Report | [
"prowler/lib/check/models.py:CheckMetadata.list_by_service"
] | [] | 1 | 401 |
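The fix above makes the user-facing service name `lambda` select the internal `awslambda` service. A minimal, hypothetical sketch of such an alias lookup (illustrative names only, not prowler's actual `CheckMetadata.list_by_service` implementation):

```python
# Hypothetical alias table; the real fix lives in prowler's check metadata code.
SERVICE_ALIASES = {"lambda": "awslambda"}

def list_by_service(checks_by_service, service):
    # Normalize the alias before looking up checks, so "-s lambda"
    # and "-s awslambda" select the same set of checks.
    service = SERVICE_ALIASES.get(service, service)
    return sorted(checks_by_service.get(service, set()))

checks = {"awslambda": {"awslambda_function_url_public"}}
print(list_by_service(checks, "lambda"))     # ['awslambda_function_url_public']
print(list_by_service(checks, "awslambda"))  # ['awslambda_function_url_public']
```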
networkx/networkx | networkx__networkx-7729 | 9beaf7a0b59fe21775cd93862d9c7b28152a2d8c | null | diff --git a/networkx/classes/coreviews.py b/networkx/classes/coreviews.py
index a6e85213f6c..4769ffa71ab 100644
--- a/networkx/classes/coreviews.py
+++ b/networkx/classes/coreviews.py
@@ -397,7 +397,11 @@ def __iter__(self):
yield n
def __getitem__(self, nbr):
- if nbr in self._atlas and... | diff --git a/networkx/classes/tests/test_subgraphviews.py b/networkx/classes/tests/test_subgraphviews.py
index 73e0fdd2d52..7f70b29ce9d 100644
--- a/networkx/classes/tests/test_subgraphviews.py
+++ b/networkx/classes/tests/test_subgraphviews.py
@@ -360,3 +360,12 @@ def test_readonly(self):
pytest.raises(nx.Net... | `MultiGraph` views return inconsistent `has_edge` results
### Current Behavior
Filtered views of a `MultiGraph`, created with `edge_subgraph`, return inconsistent results from `has_edge`.
### Expected Behavior
Match the same results from a `Graph`: either `True` if the edge exists in the subgraph view, or `Fal... | Thanks for this Report! It is not a known/reported issue.
I can verify that this behavior is indeed a bug.
To get this strange behavior, both 'a' and 'c' must be incident to visible edges in the subgraph and there must be a hidden edge between 'a' and 'c'. That means `'c' in H['a']` passes because the nodes are i... | 1,732,187,410,000 | [
"type: Bug fix"
] | Bug Report | [
"networkx/classes/coreviews.py:FilterMultiInner.__getitem__"
] | [] | 1 | 402 |
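The maintainer comment above pinpoints the bug: membership in a filtered multigraph adjacency must also consult the edge filter, not just node visibility. A self-contained toy model of that behavior (illustrative names, not networkx's real `FilterMultiInner` class):

```python
class FilteredAdjacency:
    def __init__(self, atlas, node_ok, edge_ok):
        self._atlas = atlas        # nbr -> {key: edge_data}
        self._node_ok = node_ok    # node-visibility predicate
        self._edge_ok = edge_ok    # (nbr, key) edge-visibility predicate

    def __contains__(self, nbr):
        # The subtle part: a neighbor is visible only if at least one
        # parallel edge survives the edge filter. Checking node
        # visibility alone lets hidden edges "leak" through.
        return (
            nbr in self._atlas
            and self._node_ok(nbr)
            and any(self._edge_ok(nbr, k) for k in self._atlas[nbr])
        )

    def __getitem__(self, nbr):
        if nbr in self:
            return {k: d for k, d in self._atlas[nbr].items()
                    if self._edge_ok(nbr, k)}
        raise KeyError(nbr)

atlas = {"c": {0: {"w": 1}}}   # one parallel edge a-c with key 0
hidden = FilteredAdjacency(atlas, lambda n: True, lambda nbr, key: False)
visible = FilteredAdjacency(atlas, lambda n: True, lambda nbr, key: True)
print("c" in hidden)   # False: the hidden edge no longer leaks through
print(visible["c"])    # {0: {'w': 1}}
```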
networkx/networkx | networkx__networkx-7721 | 5c3e8beef128f532b536d2d4a9f7e309ed53416b | null | diff --git a/networkx/algorithms/approximation/traveling_salesman.py b/networkx/algorithms/approximation/traveling_salesman.py
index 2080c99aec7..f2c37d9cc40 100644
--- a/networkx/algorithms/approximation/traveling_salesman.py
+++ b/networkx/algorithms/approximation/traveling_salesman.py
@@ -334,7 +334,9 @@ def traveli... | diff --git a/networkx/algorithms/approximation/tests/test_traveling_salesman.py b/networkx/algorithms/approximation/tests/test_traveling_salesman.py
index 28791daac7d..bb309fc705d 100644
--- a/networkx/algorithms/approximation/tests/test_traveling_salesman.py
+++ b/networkx/algorithms/approximation/tests/test_traveling... | traveling_salesman_problem seems to ignore the weight parameter
### Current Behavior
`traveling_salesman_problem` returns long paths when `weight` is named something other than 'weight'.
### Expected Behavior
`traveling_salesman_problem` should use the `weight` param to allow the correct edge attribute to be s... | Thanks for the issue, there is indeed a bug here!
Specifically, in the graph pre-processing for `traveling_salesman_problem`, we rebuild the graph to restrict it to the nodes passed in via the `nodes` parameter, create a complete graph and re-weight the graph to ensure it satisfies the triangle inequality (which is ... | 1,731,686,778,000 | [
"type: Bug fix",
"Needs review"
] | Bug Report | [
"networkx/algorithms/approximation/traveling_salesman.py:traveling_salesman_problem"
] | [] | 1 | 403 |
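The root cause described above is a lookup keyed on a hardcoded `"weight"` attribute: every edge silently falls back to the default, so a custom name like `distance` is ignored and the tour looks unweighted. A tiny sketch of the pitfall and the fix (a hypothetical helper, not the networkx code itself):

```python
def edge_weight(attrs, weight="weight", default=1):
    # Using attrs.get("weight", default) unconditionally is the bug;
    # the attribute *name* must be threaded through from the caller.
    return attrs.get(weight, default)

attrs = {"distance": 7}
print(edge_weight(attrs))                     # 1  (attribute ignored -> long tours)
print(edge_weight(attrs, weight="distance"))  # 7  (named attribute honored)
```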
modin-project/modin | modin-project__modin-7189 | 6d64e08fc8e124b7a6ba2aecd4062dfe7fef90f1 | null | diff --git a/modin/config/envvars.py b/modin/config/envvars.py
index f1dc047adb9..60877dad883 100644
--- a/modin/config/envvars.py
+++ b/modin/config/envvars.py
@@ -313,6 +313,20 @@ def _get_default(cls) -> int:
return multiprocessing.cpu_count()
+ @classmethod
+ def get(cls) -> int:
+ """
+ ... | diff --git a/modin/tests/config/test_envvars.py b/modin/tests/config/test_envvars.py
index 5c82a7a9836..df1e5ef58e3 100644
--- a/modin/tests/config/test_envvars.py
+++ b/modin/tests/config/test_envvars.py
@@ -357,3 +357,15 @@ def test_context_manager_update_config(modify_config):
assert cfg.LazyExecution.get()... | Add extra check for `NPartitions` and `CpuCount` classes
| 1,713,282,426,000 | [] | Feature Request | [
"modin/config/envvars.py:LogMemoryInterval.get",
"modin/config/envvars.py:LogFileSize.get",
"modin/config/envvars.py:MinPartitionSize.get"
] | [
"modin/config/envvars.py:CpuCount.get",
"modin/config/envvars.py:NPartitions.get"
] | 3 | 405 | |
modin-project/modin | modin-project__modin-7059 | 4ab68c6c4b25f987c259754a5399da5df5b3608f | null | diff --git a/modin/pandas/dataframe.py b/modin/pandas/dataframe.py
index 0f0856652bc..35e9d58d54f 100644
--- a/modin/pandas/dataframe.py
+++ b/modin/pandas/dataframe.py
@@ -1048,10 +1048,17 @@ def insert(
or isinstance(value, (array, np.ndarray))
and len(value.shape) > 1
):
- ... | diff --git a/modin/pandas/test/dataframe/test_map_metadata.py b/modin/pandas/test/dataframe/test_map_metadata.py
index 40e1a7aded6..c63a00bb38e 100644
--- a/modin/pandas/test/dataframe/test_map_metadata.py
+++ b/modin/pandas/test/dataframe/test_map_metadata.py
@@ -1344,7 +1344,15 @@ def test_insert(data):
)
... | Update exception message for `insert` function
| 1,710,243,114,000 | [] | Feature Request | [
"modin/pandas/dataframe.py:DataFrame.insert"
] | [] | 1 | 406 | |
modin-project/modin | modin-project__modin-7052 | fe3a229df446172b98ed88e6993503d944695c81 | null | diff --git a/modin/pandas/base.py b/modin/pandas/base.py
index 6779c47dc9b..98d2e1d1520 100644
--- a/modin/pandas/base.py
+++ b/modin/pandas/base.py
@@ -1005,11 +1005,7 @@ def astype(self, dtype, copy=None, errors="raise"): # noqa: PR01, RT01, D200
# convert it to a dict before passing it to the query compile... | diff --git a/modin/pandas/test/dataframe/test_map_metadata.py b/modin/pandas/test/dataframe/test_map_metadata.py
index a47ab0cd850..40e1a7aded6 100644
--- a/modin/pandas/test/dataframe/test_map_metadata.py
+++ b/modin/pandas/test/dataframe/test_map_metadata.py
@@ -439,6 +439,9 @@ def test_astype():
if isin... | Update Exception message for `astype` function in the case of duplicated values
| 1,710,188,020,000 | [] | Feature Request | [
"modin/pandas/base.py:BasePandasDataset.astype"
] | [] | 1 | 407 | |
modin-project/modin | modin-project__modin-6995 | cd3d0c6b2c0023242365d87ba5a717f600dbb8bd | null | diff --git a/modin/pandas/series.py b/modin/pandas/series.py
index b079911bb82..b78e2c52641 100644
--- a/modin/pandas/series.py
+++ b/modin/pandas/series.py
@@ -1153,16 +1153,12 @@ def idxmax(self, axis=0, skipna=True, *args, **kwargs): # noqa: PR01, RT01, D20
"""
Return the row label of the maximum ... | diff --git a/modin/pandas/test/test_series.py b/modin/pandas/test/test_series.py
index 21eda400437..0cdf0cb09df 100644
--- a/modin/pandas/test/test_series.py
+++ b/modin/pandas/test/test_series.py
@@ -44,8 +44,6 @@
agg_func_values,
arg_keys,
assert_dtypes_equal,
- axis_keys,
- axis_values,
boo... | Update tests in `test_series.py`
| 1,709,553,987,000 | [] | Feature Request | [
"modin/pandas/series.py:Series.idxmax",
"modin/pandas/series.py:Series.idxmin"
] | [] | 2 | 408 | |
modin-project/modin | modin-project__modin-6980 | f601f8a9d602c70333548df3ef5e3c953df04f37 | null | diff --git a/modin/core/dataframe/pandas/dataframe/dataframe.py b/modin/core/dataframe/pandas/dataframe/dataframe.py
index b69bf25ad61..126539d5c49 100644
--- a/modin/core/dataframe/pandas/dataframe/dataframe.py
+++ b/modin/core/dataframe/pandas/dataframe/dataframe.py
@@ -196,7 +196,7 @@ def row_lengths(self):
@cl... | diff --git a/modin/test/storage_formats/pandas/test_internals.py b/modin/test/storage_formats/pandas/test_internals.py
index fe96a7c80bb..33d9ca1929e 100644
--- a/modin/test/storage_formats/pandas/test_internals.py
+++ b/modin/test/storage_formats/pandas/test_internals.py
@@ -1319,6 +1319,28 @@ def test_filter_empties_... | Unnecessary `._copartition()` on binary operations
<b>quote from [this comment](https://github.com/modin-project/modin/issues/6948#issuecomment-1969082652):</b>
This block with binary operations unnecessarily triggers lazy executions because of [`._copartition()`](https://github.com/modin-project/modin/blob/271d98b260... | 1,709,143,726,000 | [] | Performance Issue | [
"modin/core/dataframe/pandas/dataframe/dataframe.py:PandasDataframe._get_lengths",
"modin/core/dataframe/pandas/dataframe/dataframe.py:PandasDataframe._copartition",
"modin/core/dataframe/pandas/dataframe/dataframe.py:PandasDataframe.n_ary_op"
] | [] | 3 | 409 | |
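A minimal sketch of the optimization direction discussed above: only trigger co-partitioning (and the lazy computations it forces) when the two frames' partition lengths actually differ (an illustrative helper, not modin's real `_copartition`):

```python
def needs_copartition(left_lengths, right_lengths):
    # Unknown lengths (still-lazy partitioning) force the expensive path;
    # identical known splits can be reused without any repartitioning.
    if left_lengths is None or right_lengths is None:
        return True
    return left_lengths != right_lengths

assert needs_copartition([2, 2], [2, 2]) is False   # fast path: skip
assert needs_copartition([2, 2], [3, 1]) is True
assert needs_copartition(None, [2, 2]) is True      # unknown -> must compute
print("all checks passed")
```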
modin-project/modin | modin-project__modin-6951 | 4704751c4440574ce59b9948e88e6991e1d3183b | null | diff --git a/modin/pandas/groupby.py b/modin/pandas/groupby.py
index bfa425913f3..0103b5bce5d 100644
--- a/modin/pandas/groupby.py
+++ b/modin/pandas/groupby.py
@@ -22,6 +22,7 @@
import pandas.core.common as com
import pandas.core.groupby
from pandas._libs import lib
+from pandas.api.types import is_scalar
from pan... | diff --git a/modin/pandas/test/test_groupby.py b/modin/pandas/test/test_groupby.py
index bd27c3ce885..ff0e2bc5d61 100644
--- a/modin/pandas/test/test_groupby.py
+++ b/modin/pandas/test/test_groupby.py
@@ -3087,6 +3087,60 @@ def test_groupby_named_aggregation():
)
+def test_groupby_several_column_partitions():
... | Poor Performance on TPC-H Queries
In a benchmark project ([link](https://github.com/hesamshahrokhi/modin_tpch_benchmark)), I tried to compare the performance of Pandas and Modin (on Ray and Dask) on simple TPC-H queries (Q1 and Q6) written in Pandas. The results ([link](https://github.com/hesamshahrokhi/modin_tpch_benc... | Hi @hesamshahrokhi! Thanks for the question!
The method you use to initialize dask is not intended for Modin:
https://github.com/hesamshahrokhi/modin_tpch_benchmark/blob/e7cf88cf5d6152c32593193a94c837f5fde4f298/tpch_dask.py#L12C17-L12C32. The `processes` parameter should be `True`. Could I also ask why you initiali... | 1,708,450,910,000 | [] | Performance Issue | [
"modin/pandas/groupby.py:DataFrameGroupBy.aggregate"
] | [] | 1 | 410 |
modin-project/modin | modin-project__modin-6912 | 01c529cf06cfaf412b5725f41c81a5f914b44b95 | null | diff --git a/modin/core/dataframe/pandas/partitioning/partition_manager.py b/modin/core/dataframe/pandas/partitioning/partition_manager.py
index f7b3899550c..2ccbd1de47f 100644
--- a/modin/core/dataframe/pandas/partitioning/partition_manager.py
+++ b/modin/core/dataframe/pandas/partitioning/partition_manager.py
@@ -869... | diff --git a/modin/pandas/test/test_general.py b/modin/pandas/test/test_general.py
index 7a445657d2d..72f33d55d80 100644
--- a/modin/pandas/test/test_general.py
+++ b/modin/pandas/test/test_general.py
@@ -971,3 +971,15 @@ def make_frame(lib):
def test_get(key):
modin_df, pandas_df = create_test_dfs({"col0": [0, 1... | Remove unidist specific workaround in `.from_pandas()`
Since unidist 0.6.0 (modin-project/unidist#409) it now always copies the input data, so the workaround can be removed
https://github.com/modin-project/modin/blob/01c529cf06cfaf412b5725f41c81a5f914b44b95/modin/core/dataframe/pandas/partitioning/partition_manager.py#L... | 1,707,141,547,000 | [] | Feature Request | [
"modin/core/dataframe/pandas/partitioning/partition_manager.py:PandasDataframePartitionManager.split_pandas_df_into_partitions"
] | [] | 1 | 411 | |
ansible/ansible | ansible__ansible-84275 | 7501bbec201d121161e8c592749615e4f1e3eee1 | null | diff --git a/lib/ansible/modules/dnf5.py b/lib/ansible/modules/dnf5.py
index df4ee206748716..b157158514ffab 100644
--- a/lib/ansible/modules/dnf5.py
+++ b/lib/ansible/modules/dnf5.py
@@ -358,6 +358,15 @@
def is_installed(base, spec):
settings = libdnf5.base.ResolveSpecSettings()
+ # Disable checking whether ... | diff --git a/changelogs/fragments/84259-dnf5-latest-fix.yml b/changelogs/fragments/84259-dnf5-latest-fix.yml
new file mode 100644
index 00000000000000..40f6ddb740860f
--- /dev/null
+++ b/changelogs/fragments/84259-dnf5-latest-fix.yml
@@ -0,0 +1,2 @@
+bugfixes:
+ - "dnf5 - fix installing a package using ``state=latest`... | DNF5 state: latest fails when installing
### Summary
When using `ansible.builtin.dnf` with `state: latest` the previous behavior was to install and update all packages.
Fedora 41 uses DNF5 which does not install the package but instead fails.
A Bug report is filed on the DNF5 repo: https://github.com/rpm-software-ma... | Files identified in the description:
* [`lib/ansible/plugins/action/dnf.py`](https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/action/dnf.py)
* [`lib/ansible/modules/dnf.py`](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/dnf.py)
If these files are incorrect, please update the `comp... | 1,730,983,497,000 | [
"module",
"stale_ci",
"bug",
"has_issue"
] | Bug Report | [
"lib/ansible/modules/dnf5.py:is_installed"
] | [] | 1 | 412 |
sympy/sympy | sympy__sympy-27325 | fbe647fb34f90de01946fcaa1314207f11e5b26e | 2c7f4aaf8f8a9d51cec7d280c5cde419486f5e7d | diff --git a/sympy/core/basic.py b/sympy/core/basic.py
index 62ccd3ed5508..f9f04c728245 100644
--- a/sympy/core/basic.py
+++ b/sympy/core/basic.py
@@ -2075,6 +2075,11 @@ def rewrite(self, *args, deep=True, **hints):
pattern = args[:-1]
rule = args[-1]
+ # Special case: map `abs` to `Abs`
+ ... | diff --git a/sympy/core/tests/test_basic.py b/sympy/core/tests/test_basic.py
index 44faf93162f0..d452af7e2a8e 100644
--- a/sympy/core/tests/test_basic.py
+++ b/sympy/core/tests/test_basic.py
@@ -17,6 +17,9 @@
from sympy.integrals.integrals import Integral
from sympy.functions.elementary.exponential import exp
from s... | rewrite(abs) should call rewrite(Abs)
```py
>>> sign(x).rewrite(abs)
sign(x)
>>> sign(x).rewrite(Abs)
Piecewise((0, Eq(x, 0)), (x/Abs(x), True))
```
This is because `rewrite(x)` just dispatches to the method called `_eval_rewrite_as_x`, and the method here is called `_eval_rewrite_as_Abs` since `Abs` is the name of th... | 1,732,781,175,000 | [] | Feature Request | [
"sympy/core/basic.py:Basic.rewrite"
] | [] | 1 | 414 | |
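The issue above boils down to name-based dispatch: `rewrite(rule)` builds a method name from `rule.__name__`, and the builtin `abs` yields `"abs"` instead of `"Abs"`. A sketch of the special-casing (illustrative only, not sympy's full `Basic.rewrite`):

```python
def rewrite_method_name(rule):
    # Without the special case, abs.__name__ == "abs" would produce
    # "_eval_rewrite_as_abs", which no sympy class implements.
    if rule is abs:
        return "_eval_rewrite_as_Abs"
    return f"_eval_rewrite_as_{getattr(rule, '__name__', str(rule))}"

class Abs:  # stand-in for sympy's Abs class
    pass

print(rewrite_method_name(abs))  # _eval_rewrite_as_Abs
print(rewrite_method_name(Abs))  # _eval_rewrite_as_Abs
```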
sympy/sympy | sympy__sympy-27288 | efa1521b638e4f7f2b9a18e0280a0667a4bb16c8 | null | diff --git a/sympy/physics/wigner.py b/sympy/physics/wigner.py
index 3dcd2f8f4690..5874acbb61c8 100644
--- a/sympy/physics/wigner.py
+++ b/sympy/physics/wigner.py
@@ -571,6 +571,8 @@ def wigner_6j(j_1, j_2, j_3, j_4, j_5, j_6, prec=None):
algebra system [Rasch03]_.
"""
+ j_1, j_2, j_3, j_4, j_5, j_6 = ma... | diff --git a/sympy/physics/tests/test_clebsch_gordan.py b/sympy/physics/tests/test_clebsch_gordan.py
index 2085d4a13479..e4313e3e412d 100644
--- a/sympy/physics/tests/test_clebsch_gordan.py
+++ b/sympy/physics/tests/test_clebsch_gordan.py
@@ -67,6 +67,10 @@ def test_clebsch_gordan_numpy():
def test_wigner():
+ ... | wigner_6j and wigner_9j violating numpy behavioral subtyping
Dear Developers,
I get an error when I run the following code snippet using wigner_6j.
```
import numpy as np
from sympy.physics.wigner import wigner_6j
y = np.float64(5.0)
x = wigner_6j(5, 5, y, 5, 5, 5)
```
This produces the following error message:
`... | 1,732,095,421,000 | [] | Bug Report | [
"sympy/physics/wigner.py:wigner_6j",
"sympy/physics/wigner.py:wigner_9j"
] | [] | 2 | 415 | |
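The fix above coerces the angular-momentum arguments so numpy scalars behave like plain numbers. A hedged sketch of such coercion using `fractions.Fraction` (sympy actually converts to `Rational`; the helper name here is illustrative):

```python
from fractions import Fraction

def as_half_integer(j):
    # float() also accepts numpy scalar types such as np.float64,
    # which is exactly what the report above passes in.
    v = Fraction(float(j))
    if (2 * v).denominator != 1:
        raise ValueError(f"{j} is not a half-integer angular momentum")
    return v

print(as_half_integer(5.0))  # 5
print(as_half_integer(2.5))  # 5/2
```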
sympy/sympy | sympy__sympy-27287 | efa1521b638e4f7f2b9a18e0280a0667a4bb16c8 | null | diff --git a/sympy/core/sympify.py b/sympy/core/sympify.py
index 0024d0a70a68..716833f15aa7 100644
--- a/sympy/core/sympify.py
+++ b/sympy/core/sympify.py
@@ -100,9 +100,14 @@ def _convert_numpy_types(a, **sympify_args):
prec = np.finfo(a).nmant + 1
# E.g. double precision means prec=53 but nmant=52
... | diff --git a/sympy/core/tests/test_sympify.py b/sympy/core/tests/test_sympify.py
index fc87fa448351..40be30c25d58 100644
--- a/sympy/core/tests/test_sympify.py
+++ b/sympy/core/tests/test_sympify.py
@@ -883,3 +883,10 @@ def test_issue_21536():
assert u.is_Add and set(u.args) == {4*x, 2}
assert v.is_Add and se... | Converting from `numpy.float32(float('inf'))` throws
sympy==1.13.3
numpy==2.1.3
python 3.10.15
```python
import numpy as np
import sympy
sympy.Float(np.float32(float('inf')))
```
throws with:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../lib/python3.10/site-packages/sympy/core/numbe... | The code here should be improved:
https://github.com/sympy/sympy/blob/efa1521b638e4f7f2b9a18e0280a0667a4bb16c8/sympy/core/sympify.py#L88-L105
I'd like to work on this issue. | 1,732,076,859,000 | [] | Bug Report | [
"sympy/core/sympify.py:_convert_numpy_types"
] | [] | 1 | 416 |
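The linked `_convert_numpy_types` helper derives a precision from `np.finfo` and converts via mantissa arithmetic, which cannot represent infinities. A sketch of the fix direction, special-casing non-finite values before any precision-aware conversion (illustrative, not the actual sympy patch):

```python
import math

def convert_float(x):
    v = float(x)
    # Non-finite values have no mantissa/exponent decomposition, so they
    # must be routed around the precision-aware conversion entirely.
    if math.isinf(v):
        return "+inf" if v > 0 else "-inf"
    if math.isnan(v):
        return "nan"
    return v

print(convert_float(float("inf")))   # +inf
print(convert_float(float("-inf")))  # -inf
print(convert_float(1.5))            # 1.5
```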
sympy/sympy | sympy__sympy-26358 | 2ce089415a59b7659c4b30d395381e0a92797e74 | null | diff --git a/sympy/integrals/heurisch.py b/sympy/integrals/heurisch.py
index 344edf250a2e..2a1b61c27da3 100644
--- a/sympy/integrals/heurisch.py
+++ b/sympy/integrals/heurisch.py
@@ -1,7 +1,8 @@
from __future__ import annotations
-from itertools import permutations
+from collections import defaultdict
from functool... | diff --git a/sympy/integrals/tests/test_heurisch.py b/sympy/integrals/tests/test_heurisch.py
index 2b4ffa0684f0..e220132f582f 100644
--- a/sympy/integrals/tests/test_heurisch.py
+++ b/sympy/integrals/tests/test_heurisch.py
@@ -2,7 +2,7 @@
from sympy.core.add import Add
from sympy.core.function import (Derivative, Fun... | Potential hang in monomials during integration
In the example given below, the `itermonomials` function is called from the `heurisch` function of the integration process. The problem I encounter is that before calling `itermonomials`, the script subs some expression with polynomials. The expressions look something li... | > [a, b, c, d, e, f, g] = [100, sy.Rational('0.5'), 0.83, 50, sy.Rational('0.6'), 2, 120]
If there is a single floating point number `0.83`, then all numbers are considered inexact. That will be much slower than computing with exact numbers as the arithmetic laws are not valid, and therefore coefficients are treated... | 1,710,513,189,000 | [
"integrals",
"integrals.heurisch"
] | Performance Issue | [
"sympy/integrals/heurisch.py:heurisch"
] | [] | 1 | 417 |
huggingface/transformers | huggingface__transformers-34066 | a37a06a20b4a006963f15acf5f49afa5a0496f29 | null | diff --git a/src/transformers/pipelines/text_classification.py b/src/transformers/pipelines/text_classification.py
index 21ca70c2ac50aa..dadb29c386b41e 100644
--- a/src/transformers/pipelines/text_classification.py
+++ b/src/transformers/pipelines/text_classification.py
@@ -40,7 +40,8 @@ class ClassificationFunction(Ex... | diff --git a/tests/pipelines/test_pipelines_text_classification.py b/tests/pipelines/test_pipelines_text_classification.py
index 4956cb8aed132d..417b5f2c95c1e2 100644
--- a/tests/pipelines/test_pipelines_text_classification.py
+++ b/tests/pipelines/test_pipelines_text_classification.py
@@ -108,6 +108,12 @@ def test_sma... | Default behaviour in `TextClassificationPipeline` for regression problem type
### Feature request
The `AutoModelForSequenceClassification` class also supports problem_type=regression. I am not sure if it is popularly used but I believe that function_to_apply in `TextClassificationPipeline` should behave as if it is ... | cc @Rocketknight1 WDYT?
Hi @subhalingamd, when doing regression tasks, `num_labels` should usually be 0, right? If that is set, then the function to applied will be `NONE`.
Hi @Rocketknight1, that doesn't seem to be the case. The regression score is represented by LABEL_0 by default.
Indeed, `num_labels == 1` is used... | 1,728,568,286,000 | [] | Feature Request | [
"src/transformers/pipelines/text_classification.py:TextClassificationPipeline.__call__",
"src/transformers/pipelines/text_classification.py:TextClassificationPipeline.postprocess"
] | [] | 2 | 418 |
huggingface/transformers | huggingface__transformers-32652 | a22ff36e0e347d3d0095cccd931cbbd12b14e86a | null | diff --git a/src/transformers/models/splinter/tokenization_splinter.py b/src/transformers/models/splinter/tokenization_splinter.py
index 2859497ba882c2..ffa135556aa47d 100644
--- a/src/transformers/models/splinter/tokenization_splinter.py
+++ b/src/transformers/models/splinter/tokenization_splinter.py
@@ -137,6 +137,7 ... | diff --git a/tests/models/splinter/test_tokenization_splinter.py b/tests/models/splinter/test_tokenization_splinter.py
new file mode 100644
index 00000000000000..4c6d295e8a8281
--- /dev/null
+++ b/tests/models/splinter/test_tokenization_splinter.py
@@ -0,0 +1,174 @@
+# coding=utf-8
+# Copyright 2024 The HuggingFace Inc... | Add missing tokenizer test files [:building_construction: in progress]
# 🚀 Add missing tokenizer test files
Several tokenizers currently have no associated tests. I think that adding the test file for one of these tokenizers could be a very good way to make a first contribution to transformers.
## Tokenizers conce... | Hi, I would like to add tests for `Longformer` tokenizer
@SaulLu I would like to add tests for Flaubert
Hey I would like to contribute for `Electra`,Pointers please!
Thank you all for offering your help!
@Rajathbharadwaj ,sure! what do you need help with? Do you need more details on any of the steps listed in the m... | 1,723,547,763,000 | [
"Core: Tokenization"
] | Feature Request | [
"src/transformers/models/splinter/tokenization_splinter.py:SplinterTokenizer.__init__"
] | [] | 1 | 419 |
huggingface/transformers | huggingface__transformers-32240 | 7f552e28e0aca00ce60868c7620f7463eab60e14 | null | diff --git a/src/transformers/tokenization_utils.py b/src/transformers/tokenization_utils.py
index 1853d2de4560ea..f04eaae4525de9 100644
--- a/src/transformers/tokenization_utils.py
+++ b/src/transformers/tokenization_utils.py
@@ -480,6 +480,7 @@ def added_tokens_decoder(self, value: Dict[int, Union[AddedToken, str]]) ... | diff --git a/tests/tokenization/test_tokenization_utils.py b/tests/tokenization/test_tokenization_utils.py
index 7ff6b29629ea6d..f97ef6a630221d 100644
--- a/tests/tokenization/test_tokenization_utils.py
+++ b/tests/tokenization/test_tokenization_utils.py
@@ -284,3 +284,15 @@ def test_instantiation_from_tokenizers_json_... | DataCollatorForLanguageModeling is (unnecessary) slow
### System Info
- `transformers` version: 4.43.1
- Platform: Linux-3.10.0-1062.18.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.11.3
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.3
- Accelerate version: 0.32.1
- Accelerate config: n... | Was curious about this issue, so tried gathering some statistics on time spent in each line of [torch_mask_tokens](https://github.com/huggingface/transformers/blob/main/src/transformers/data/data_collator.py) (`transformers` commit ID `85a1269e19af022e04bc2aad82572cd5a9e8cdd9`) and this is what the statistics look like... | 1,721,984,657,000 | [] | Performance Issue | [
"src/transformers/tokenization_utils.py:PreTrainedTokenizer.added_tokens_decoder",
"src/transformers/tokenization_utils.py:PreTrainedTokenizer.__len__",
"src/transformers/tokenization_utils.py:PreTrainedTokenizer._add_tokens"
] | [
"src/transformers/tokenization_utils.py:PreTrainedTokenizer._update_total_vocab_size"
] | 3 | 420 |
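The profiling discussion above points at `__len__` rebuilding the added-tokens mapping on every call. A sketch of the caching strategy the fix introduces via `_update_total_vocab_size` (a simplified stand-in, not the transformers implementation):

```python
class Tokenizer:
    def __init__(self, base_vocab):
        self._vocab = dict(base_vocab)
        self._added = {}
        self._update_total_vocab_size()

    def _update_total_vocab_size(self):
        # Recomputed only when the vocab actually changes.
        self._total_vocab_size = len(self._vocab) + len(self._added)

    def add_tokens(self, tokens):
        start = len(self._vocab) + len(self._added)
        for i, tok in enumerate(tokens, start=start):
            self._added[tok] = i
        self._update_total_vocab_size()

    def __len__(self):
        # O(1): no rebuilding of the added-tokens mapping per call.
        return self._total_vocab_size

tok = Tokenizer({"a": 0, "b": 1})
tok.add_tokens(["<x>", "<y>"])
print(len(tok))  # 4
```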
huggingface/transformers | huggingface__transformers-31783 | af0e4b7b37b2d7eefe7531cf5201a5d6bae85525 | null | diff --git a/src/transformers/integrations/ggml.py b/src/transformers/integrations/ggml.py
index 71aa87afa94b5d..47f3f0cf8d57b4 100644
--- a/src/transformers/integrations/ggml.py
+++ b/src/transformers/integrations/ggml.py
@@ -36,6 +36,7 @@
# Listed here: https://github.com/ggerganov/ggml/blob/master/docs/gguf.md
GGM... | diff --git a/tests/quantization/ggml/test_ggml.py b/tests/quantization/ggml/test_ggml.py
index a5866094a1cc6f..e42900a1d51b44 100644
--- a/tests/quantization/ggml/test_ggml.py
+++ b/tests/quantization/ggml/test_ggml.py
@@ -33,6 +33,7 @@ class GgufIntegrationTests(unittest.TestCase):
mistral_model_id = "TheBloke/Mi... | Converting gguf fp16 & bf16 to hf is not supported.
### System Info
```
transformers==4.42.3
torch==2.3.0
numpy==1.26.4
gguf==0.6.0
```
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in ... | I found that the PPL issue is related to Llama3 or llama.cpp. It doesn't happen with TinyLlama. I'll create another issue to discuss if needed. | 1,720,054,488,000 | [
"Quantization"
] | Feature Request | [
"src/transformers/integrations/ggml.py:load_dequant_gguf_tensor"
] | [] | 1 | 421 |
huggingface/transformers | huggingface__transformers-31566 | c54a8ca48eb1b85785f7fdbefb5311f172d19726 | null | diff --git a/src/transformers/models/siglip/modeling_siglip.py b/src/transformers/models/siglip/modeling_siglip.py
index d605f49261ae6f..4c534bbce6ce8a 100644
--- a/src/transformers/models/siglip/modeling_siglip.py
+++ b/src/transformers/models/siglip/modeling_siglip.py
@@ -496,6 +496,13 @@ class SiglipPreTrainedModel(... | diff --git a/tests/models/siglip/test_modeling_siglip.py b/tests/models/siglip/test_modeling_siglip.py
index 12ac11251dc4c9..af5d0bf2bc3e83 100644
--- a/tests/models/siglip/test_modeling_siglip.py
+++ b/tests/models/siglip/test_modeling_siglip.py
@@ -443,6 +443,12 @@ class SiglipModelTest(ModelTesterMixin, PipelineTest... | SiglipVisionModel does not support `device_map="auto"`: no `_no_split_modules` attribute
### Feature request
Make `SiglipVisionModel` support `device_map="auto"` so the model can be mapped across multiple GPUs.
### Motivation
Currently, using MLLM with siglip, the whole model might need automap, since the vision encoder part is SiglipVisionModel, i... | Hi @lucasjinreal, thanks for opening a feature request!
Could you share a code snippet of how the model is being created with `auto_map` and the running environment (run `transformers-cli env` in the terminal and copy-paste the output)? SigLip should support `device_map="auto"`
I checked with the code in [model-doc... | 1,719,231,687,000 | [] | Feature Request | [
"src/transformers/models/siglip/modeling_siglip.py:SiglipModel.forward"
] | [] | 1 | 422 |
huggingface/transformers | huggingface__transformers-29688 | f4dc26d46687f5f4baf3fe64a1d87cafefbeec53 | null | diff --git a/src/transformers/models/whisper/generation_whisper.py b/src/transformers/models/whisper/generation_whisper.py
index b3865140f24ee4..c58b0d35e55618 100644
--- a/src/transformers/models/whisper/generation_whisper.py
+++ b/src/transformers/models/whisper/generation_whisper.py
@@ -262,7 +262,7 @@ def generate(... | diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
index 32b13bd5425f7e..fed1b9c0592522 100644
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -545,10 +545,19 @@ def test_generate_language(self):
... | Support mixed-language batches in `WhisperGenerationMixin`
### Feature request
It is currently not possible to mix multiple languages in a single batch when running [Whisper](https://huggingface.co/docs/transformers/en/model_doc/whisper). The `language` argument only accepts a single string (as opposed to a separate... | 1,710,584,247,000 | [] | Feature Request | [
"src/transformers/models/whisper/generation_whisper.py:WhisperGenerationMixin.generate",
"src/transformers/models/whisper/generation_whisper.py:WhisperGenerationMixin._set_language_and_task",
"src/transformers/models/whisper/generation_whisper.py:WhisperGenerationMixin._retrieve_init_tokens",
"src/transformer... | [] | 4 | 423 | |
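The core of the requested generalization is accepting either one language for the whole batch or one per sample. A minimal sketch of that argument expansion (a hypothetical helper, not the actual `WhisperGenerationMixin` code):

```python
def expand_languages(language, batch_size):
    # A scalar (or None) broadcasts to the whole batch; a sequence must
    # provide exactly one entry per sample.
    if language is None or isinstance(language, str):
        return [language] * batch_size
    if len(language) != batch_size:
        raise ValueError("need exactly one language per batch item")
    return list(language)

print(expand_languages("en", 3))          # ['en', 'en', 'en']
print(expand_languages(["en", "fr"], 2))  # ['en', 'fr']
```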
huggingface/transformers | huggingface__transformers-29511 | bc764f42639d245114eaa077b4712aac5643603b | null | diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
index 505c9cb45950cb..ca98c64a29a823 100644
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -3297,9 +3297,12 @@ def from_pretrained(
elif metadata.get("format") == "flax":
... | diff --git a/tests/test_modeling_utils.py b/tests/test_modeling_utils.py
index d0db5031e8b7a0..acd60fbbea3119 100755
--- a/tests/test_modeling_utils.py
+++ b/tests/test_modeling_utils.py
@@ -1256,6 +1256,26 @@ def test_modifying_model_config_causes_warning_saving_generation_config(self):
self.assertEqual(l... | Add support for models trained using MLX
### Feature request
It would be great if model weights trained and saved through MLX could be easily loaded using the Transformers library. This way, MLX users and Transformers users can more freely collaborate when making open source models.
### Motivation
Currently, Trans... | As additional motivation: this would enable fine-tuning locally with MLX -> save in safetensors -> load model in transformers which would be 🔥
fyi @pcuenca | 1,709,812,694,000 | [] | Feature Request | [
"src/transformers/modeling_utils.py:PreTrainedModel.from_pretrained"
] | [] | 1 | 424 |
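The loading change above hinges on the safetensors metadata `format` field. A hedged sketch of the dispatch, treating MLX-produced checkpoints like PyTorch ones (illustrative only; the real logic lives inside `PreTrainedModel.from_pretrained`):

```python
def resolve_format(metadata):
    fmt = (metadata or {}).get("format")
    if fmt in ("pt", "mlx"):
        # Assumption for this sketch: MLX saves plain safetensors arrays
        # that load along the same code path as PyTorch checkpoints.
        return "pt"
    if fmt == "flax":
        return "flax"
    raise ValueError(f"unsupported checkpoint format: {fmt!r}")

print(resolve_format({"format": "mlx"}))  # pt
print(resolve_format({"format": "pt"}))   # pt
```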
huggingface/transformers | huggingface__transformers-28940 | dd1c9052159ae824c8acef7c2552f9fad5ca020a | null | diff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py
index 758484107b76f2..079da4980851f1 100644
--- a/src/transformers/pipelines/base.py
+++ b/src/transformers/pipelines/base.py
@@ -861,7 +861,7 @@ def __init__(
raise ValueError(f"{device} unrecognized or not available.... | diff --git a/tests/pipelines/test_pipelines_common.py b/tests/pipelines/test_pipelines_common.py
index 5e3e15f39c10ea..13b97aff3216b5 100644
--- a/tests/pipelines/test_pipelines_common.py
+++ b/tests/pipelines/test_pipelines_common.py
@@ -199,6 +199,29 @@ def test_unbatch_attentions_hidden_states(self):
output... | Populate torch_dtype from a model to a pipeline
### Feature request
When constructing a pipeline object from a model and a tokenizer, the pipeline doesn't inherit the `torch_dtype` field from the underlying model.
```
model = AutoModelForCausalLM.from_pretrained("t5-small", torch_dtype = torch.bfloat16)
pipeline ... | cc @Rocketknight1 WDYT? Sounds good to me
This sounds like a safe assumption to me too, though obviously I'd like to confirm that with some tests! I'm in favour of the PR if you're happy to open it @B-Step62
@ArthurZucker @Rocketknight1 Great! I will open a PR soon, in the meantime could you assign the issue to me?
... | 1,707,480,313,000 | [] | Feature Request | [
"src/transformers/pipelines/base.py:Pipeline.__init__"
] | [
"src/transformers/pipelines/base.py:Pipeline.torch_dtype"
] | 1 | 425 |
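A sketch of the requested fallback: when the caller passes no `torch_dtype`, read it off the model instead of leaving it unset (plain-Python stand-ins, not the real transformers classes):

```python
class Model:
    def __init__(self, dtype):
        self.dtype = dtype

class Pipeline:
    def __init__(self, model, torch_dtype=None):
        if torch_dtype is None:
            # Inherit from the underlying model rather than reporting None.
            torch_dtype = getattr(model, "dtype", None)
        self.model, self.torch_dtype = model, torch_dtype

pipe = Pipeline(Model(dtype="bfloat16"))
print(pipe.torch_dtype)  # bfloat16
```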
iterative/dvc | iterative__dvc-10641 | 368c785410451288da4326d6c3701bfa1665ccae | null | diff --git a/dvc/repo/experiments/remove.py b/dvc/repo/experiments/remove.py
index cd8ca07e8b..1b29f30255 100644
--- a/dvc/repo/experiments/remove.py
+++ b/dvc/repo/experiments/remove.py
@@ -75,7 +75,7 @@ def remove( # noqa: C901, PLR0912
exp_ref_list.extend(exp_ref_dict.values())
elif all_commits:
... | diff --git a/tests/func/experiments/test_remove.py b/tests/func/experiments/test_remove.py
index 1864cc541d..8dd2b65498 100644
--- a/tests/func/experiments/test_remove.py
+++ b/tests/func/experiments/test_remove.py
@@ -43,6 +43,26 @@ def test_remove_all_queued_experiments(tmp_dir, scm, dvc, exp_stage):
assert scm.... | `dvc exp remove --queue -A`: possibly incomplete result returned
# Bug Report
<!--
## Issue name
Issue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug.
Example: `repro: doesn't detect inpu... | thanks @rmic ! where do we use the returned value atm? E.g. does it affect the way we print the results for the `dvc exp remove` command to the screen?
The place where this seems the most visible is in
https://github.com/iterative/dvc/blob/d38b2dbcb873b5112976c5ad40c5574b5d2a41f3/dvc/commands/experiments/remove.py#L3... | 1,733,002,163,000 | [] | Bug Report | [
"dvc/repo/experiments/remove.py:remove"
] | [] | 1 | 426 |
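The bug above is an incomplete result list in one branch of `remove`. A minimal sketch of the corrected accumulation, with queued and committed experiments both reported (a hypothetical function, not dvc's actual `remove`):

```python
def remove_experiments(queued, committed, all_commits=False):
    removed = []
    if all_commits:
        removed.extend(queued)     # previously dropped from the result
        removed.extend(committed)
    return removed

result = remove_experiments(["q1", "q2"], ["e1"], all_commits=True)
print(result)  # ['q1', 'q2', 'e1']
```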
matplotlib/matplotlib | matplotlib__matplotlib-29133 | 9caa0a648d73ac402dce3d5177497260a5ad1019 | 7025fb9b5d35de66a3abcd51f0e15b87950f34f2 | diff --git a/lib/matplotlib/axes/_axes.py b/lib/matplotlib/axes/_axes.py
index f6a4ebfdc7c6..2bc77f63b376 100644
--- a/lib/matplotlib/axes/_axes.py
+++ b/lib/matplotlib/axes/_axes.py
@@ -2321,6 +2321,56 @@ def _convert_dx(dx, x0, xconv, convert):
dx = convert(dx)
return dx
+ def _parse_bar_co... | diff --git a/lib/matplotlib/tests/test_axes.py b/lib/matplotlib/tests/test_axes.py
index e3a59a1751ab..ed775b913e9e 100644
--- a/lib/matplotlib/tests/test_axes.py
+++ b/lib/matplotlib/tests/test_axes.py
@@ -9452,3 +9452,28 @@ def test_wrong_use_colorizer():
for kwrd in kwrds:
with pytest.raises(ValueError... | [MNT]: More consistent color parameters for bar()
### Summary
From #29072. `bar()` supports
- `color` : color or list of color
- `edgecolor` : color or list of color
- `facecolor`: color
i.e.
- `facecolor` cannot take a sequence
- there are no plural aliases (e.g. `edgecolors`)
- likely (t.b.c.) the aliase... | Hi Matplotlib team! My partner and I would like to work on this issue as part of a project for our software engineering class. We're looking forward to contributing by improving the consistency of the bar() function’s color parameters. Any guidance or pointers on starting would be greatly appreciated!
Thanks very mu... | 1,731,472,764,000 | [
"API: consistency",
"Maintenance"
] | Feature Request | [
"lib/matplotlib/axes/_axes.py:Axes.bar"
] | [
"lib/matplotlib/axes/_axes.py:Axes._parse_bar_color_args"
] | 1 | 427 |
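The inconsistency flagged in the row above (`color` and `edgecolor` accept sequences while `facecolor` does not) comes down to broadcasting one color argument across N bars. A minimal sketch of that broadcasting; the name and semantics are illustrative, not matplotlib's actual `_parse_bar_color_args` API, and note that an RGB tuple like `(0.5, 0.5, 0.5)` is ambiguous with a 3-bar sequence, which real code must disambiguate:

```python
def broadcast_color(value, n_bars, default=None):
    """Expand a color-or-sequence-of-colors argument to one entry per bar."""
    if value is None:
        return [default] * n_bars
    # A plain string ("red", "#ff0000") is one color, not a sequence of them.
    if isinstance(value, str) or not hasattr(value, "__len__"):
        return [value] * n_bars
    if len(value) != n_bars:
        raise ValueError(f"Expected {n_bars} colors, got {len(value)}")
    return list(value)

print(broadcast_color("red", 3))            # ['red', 'red', 'red']
print(broadcast_color(["r", "g", "b"], 3))  # ['r', 'g', 'b']
```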
jax-ml/jax | jax-ml__jax-25787 | 39ce7916f1fcabb11edd33ad006e5c8dc0929656 | null | diff --git a/jax/_src/lax/linalg.py b/jax/_src/lax/linalg.py
index 74e9eaa01482..440ee424ffa1 100644
--- a/jax/_src/lax/linalg.py
+++ b/jax/_src/lax/linalg.py
@@ -2568,6 +2568,26 @@ def _tridiagonal_solve_cpu_lowering(ctx, dl, d, du, b, **kwargs):
b_out, b_aval, _nan_like_hlo(ctx, b_aval), b_aval)]
+def _tri... | diff --git a/tests/linalg_test.py b/tests/linalg_test.py
index b613ec714a62..9e872d192a12 100644
--- a/tests/linalg_test.py
+++ b/tests/linalg_test.py
@@ -2184,20 +2184,55 @@ def testSelect(self, dtype):
self.assertAllClose(
eigvals_all[first:(last + 1)], eigvals_index, atol=atol)
- @jtu.sample... | Differentiation rule for tridiagonal_solve
Hi,
I don't think it's a bug report, but rather a feature request. I recently tried to switch away from my own implementation of tridiagonal_solve using the Thomas algorithm to jax.lax.tridiagonal_solve, and I discovered that the tridiagonal_solve implementation in jax does not seem... | Take a look at [Lineax](https://github.com/patrick-kidger/lineax), which should support this :)
@segasai — Thanks for bringing this up! I agree that it makes sense for JAX to support this directly, and I don't expect it would be too complicated to add. If you're keen to make a PR, that would be awesome and I'll add som... | 1,736,369,824,000 | [
"pull ready"
] | Feature Request | [
"jax/_src/lax/linalg.py:_tridiagonal_solve_transpose_rule",
"jax/_src/lax/linalg.py:_tridiagonal_solve_jax"
] | [
"jax/_src/lax/linalg.py:_tridiagonal_product",
"jax/_src/lax/linalg.py:_tridiagonal_solve_jvp_rule",
"jax/_src/lax/linalg.py:_tridiagonal_solve_jax_impl"
] | 2 | 428 |
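For reference, the Thomas algorithm the reporter mentions replacing is the O(n) forward-elimination plus back-substitution solve behind `lax.tridiagonal_solve`. A NumPy sketch, assuming a well-conditioned system with nonzero pivots and following the `(dl, d, du, b)` convention where `dl[0]` and `du[-1]` are unused:

```python
import numpy as np

def thomas_solve(dl, d, du, b):
    """Solve a tridiagonal system by forward elimination + back substitution."""
    n = len(d)
    c = np.zeros(n)   # modified super-diagonal
    x = np.zeros(n)   # solution (holds the modified rhs during the forward pass)
    c[0] = du[0] / d[0]
    x[0] = b[0] / d[0]
    for i in range(1, n):
        m = d[i] - dl[i] * c[i - 1]
        if i < n - 1:
            c[i] = du[i] / m
        x[i] = (b[i] - dl[i] * x[i - 1]) / m
    for i in range(n - 2, -1, -1):
        x[i] -= c[i] * x[i + 1]
    return x

dl = np.array([0.0, 1.0, 1.0, 1.0])
d  = np.array([4.0, 4.0, 4.0, 4.0])
du = np.array([1.0, 1.0, 1.0, 0.0])
b  = np.array([1.0, 2.0, 3.0, 4.0])
A = np.diag(d) + np.diag(dl[1:], -1) + np.diag(du[:-1], 1)
print(np.allclose(thomas_solve(dl, d, du, b), np.linalg.solve(A, b)))  # True
```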
jax-ml/jax | jax-ml__jax-25511 | cd7109e6b55730c3ace264ed5eccf7b2ce84407e | null | diff --git a/jax/_src/lax/control_flow/loops.py b/jax/_src/lax/control_flow/loops.py
index b08819bcf545..5839ec3e50e4 100644
--- a/jax/_src/lax/control_flow/loops.py
+++ b/jax/_src/lax/control_flow/loops.py
@@ -347,6 +347,10 @@ def _get_states(attrs_tracked):
vals.extend(leaves)
return vals
+def _capitalize(s... | diff --git a/tests/lax_control_flow_test.py b/tests/lax_control_flow_test.py
index 1f04a788914d..f323b035db6f 100644
--- a/tests/lax_control_flow_test.py
+++ b/tests/lax_control_flow_test.py
@@ -1897,6 +1897,16 @@ def testScanBodyOutputError(self):
re.escape("scan body output must be a pair, got ShapedArray(fl... | jax.lax.scan transforms dict keys to lower case when reporting mismatch in pytree structures
### Description
When `jax.lax.scan` reports an error due to mismatch in the pytree input/output structres, it transforms dict keys to lowercase.
Small repro:
```python
def f(loop_i, x):
return {'T': jnp.array([0.5])}... | I tracked it down to this line: https://github.com/jax-ml/jax/blob/2b06f93c703d062bebb6154a0dc030f2467c67cc/jax/_src/lax/control_flow/loops.py#L383
This causes all characters after the first within the error message to be converted to lowercase, which results in the error output you're seeing. | 1,734,370,005,000 | [
"pull ready"
] | Bug Report | [
"jax/_src/lax/control_flow/loops.py:_check_carry_type"
] | [
"jax/_src/lax/control_flow/loops.py:_capitalize"
] | 1 | 429 |
jax-ml/jax | jax-ml__jax-25239 | 40122f7c03d6a39f5877204951df86998f1001d8 | null | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
index 5af8c6ddad19..3d99405428de 100644
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -11971,6 +11971,14 @@ def _int(aval):
def _index_to_gather(x_shape: Sequence[int], idx: Sequence[Any],
normalize_... | diff --git a/tests/lax_numpy_indexing_test.py b/tests/lax_numpy_indexing_test.py
index 392af2688c1d..ab625d10b4d8 100644
--- a/tests/lax_numpy_indexing_test.py
+++ b/tests/lax_numpy_indexing_test.py
@@ -399,6 +399,14 @@ def check_grads(f, args, order, atol=None, rtol=None, eps=None):
IndexSpec(shape=(3, 4), indexe... | Difference between numpy and jax.numpy in advanced indexing axes order
### Description
According to #15653, jax should match numpy with respect to advanced indexing.
There is a difference with empty ellipses.
```python
import numpy as np
a=np.ones((3,4,5))
print(a[:,(0,1),...,(0,1)].shape)
# (2, 3) # numpy... | Thanks for the report! I'll take a look.
The issue here is that we determine contiguous advanced indices here: https://github.com/jax-ml/jax/blob/c9a5902216b61bddb88415b9da6bbcf589ef12ea/jax/_src/numpy/lax_numpy.py#L12010-L12019
But we've already removed ellipses a few lines before: https://github.com/jax-ml/jax/blo... | 1,733,270,227,000 | [
"pull ready"
] | Bug Report | [
"jax/_src/numpy/lax_numpy.py:_index_to_gather"
] | [] | 1 | 430 |
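The NumPy rule at stake above: when the written index separates advanced indices with a slice, ellipsis, or newaxis, the broadcast dimensions move to the front of the result, and per the report this holds even when the ellipsis expands to zero axes. Reproducing the two cases from the issue:

```python
import numpy as np

a = np.ones((3, 4, 5))

# Advanced indices (0, 1) on axes 1 and 2, written with an ellipsis between
# them: NumPy treats them as separated, so the broadcast dimension (2,)
# comes first in the result.
print(a[:, (0, 1), ..., (0, 1)].shape)  # (2, 3)

# Without the ellipsis the advanced indices are contiguous, and the broadcast
# dimension stays in place.
print(a[:, (0, 1), (0, 1)].shape)       # (3, 2)
```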
jax-ml/jax | jax-ml__jax-24823 | 54e72d505413aa46e73157e1d14994c4917b46b9 | null | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
index 8b6a517a54b3..6e1e3ea14fb1 100644
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -5317,14 +5317,14 @@ def _sort_jvp(primals, tangents, *, dimension, is_stable, num_keys):
shape = primals[0].shape
iotas = []
for dim, size in enumerate(shape):
... | diff --git a/tests/shape_poly_test.py b/tests/shape_poly_test.py
index ead77e2b5053..eda4c4309960 100644
--- a/tests/shape_poly_test.py
+++ b/tests/shape_poly_test.py
@@ -3302,6 +3302,14 @@ def test_vmap_error(self):
lambda x: lax.slice_in_dim(x, 0, x.shape[0], stride=1 + x.shape[0] // 4, axis=0),
... | InconclusiveDimensionOperation: Symbolic dimension comparison 'b' < '2147483647' is inconclusive.
### Description
A simple code to reproduce:
```py
import jax
import jax.numpy as jnp
import jax.experimental.jax2tf as jax2tf
import tensorflow as tf
def f(a):
return jnp.sort(a, axis=-1)
my_model =... | Assigning @gnecula who is most familiar with shape polymorphism and TF model exporting.
In the immediate term, you can unblock by adding an explicit constraint 'b < '2147483647', as explained in the documentation link from the error message.
The issue is that JAX lowering for `jnp.sort` uses an `iota` of indices and... | 1,731,233,592,000 | [
"pull ready"
] | Bug Report | [
"jax/_src/lax/lax.py:_sort_jvp"
] | [] | 1 | 431 |
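Context for the fix above: as the hints describe, `_sort_jvp` differentiates `sort` by sorting an `iota` of positions alongside the primal values and reusing the resulting permutation to gather the tangents; my reading of the patch is that it adjusts how that iota is built so the symbolic size `b` no longer gets compared against the int64 bound. The gather-by-permutation trick, sketched in NumPy:

```python
import numpy as np

def sort_with_tangents(x, t):
    """Sort x, carrying its tangent t through the same permutation."""
    iota = np.arange(len(x))
    # Sort (x, iota) pairs: x is the primary key; the stable iota tie-break
    # records the permutation, which then gathers the tangents.
    perm = np.lexsort((iota, x))
    return x[perm], t[perm]

x = np.array([3.0, 1.0, 2.0])
t = np.array([30.0, 10.0, 20.0])
print(sort_with_tangents(x, t))
# (array([1., 2., 3.]), array([10., 20., 30.]))
```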
jax-ml/jax | jax-ml__jax-24814 | da89c9e38c00a3499d8f5ac381fb29de0ea0c597 | null | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
index d2e89833915d..b90004e19932 100644
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -3070,6 +3070,8 @@ def bincount(x: ArrayLike, weights: ArrayLike | None = None,
Array([2, 1, 0, 1, 0], dtype=int32)
"""
util.ch... | diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
index 61baa7c97df4..7c2728af415e 100644
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -4905,7 +4905,7 @@ def testAtLeastNdLiterals(self, dtype, op):
@jtu.sample_product(
shape=[(0,), (5,), (10,)],
- dtype=int_dtypes,
+ dtype... | bincount rejects bool
### Description
[numpy.bincount](https://numpy.org/doc/stable/reference/generated/numpy.bincount.html) accepts bool, but [jax.numpy.bincount](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.bincount.html) does not:
```sh
$ py -c "import numpy as np; print(np.bincount(np.ones(3, bo... | 1,731,112,200,000 | [
"pull ready"
] | Bug Report | [
"jax/_src/numpy/lax_numpy.py:bincount"
] | [] | 1 | 432 | |
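The NumPy behavior the report asks JAX to match: booleans are treated as 0/1 values, so they simply land in bins 0 and 1.

```python
import numpy as np

# Three True values count as three occurrences of the value 1,
# so bin 0 is empty and bin 1 holds 3.
print(np.bincount(np.ones(3, dtype=bool)))          # [0 3]
print(np.bincount(np.array([True, False, True])))   # [1 2]
```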
jax-ml/jax | jax-ml__jax-24492 | e82d5a973b53d2c0ba68309f94962a2b5fd838a7 | null | diff --git a/jax/_src/numpy/util.py b/jax/_src/numpy/util.py
index 27496ad99056..15cbc22dfa0d 100644
--- a/jax/_src/numpy/util.py
+++ b/jax/_src/numpy/util.py
@@ -13,11 +13,9 @@
# limitations under the License.
from __future__ import annotations
-from collections.abc import Callable, Sequence
+from collections.abc ... | diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
index f853c742c811..b37237cae28c 100644
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -51,7 +51,6 @@
from jax._src import dtypes
from jax._src import test_util as jtu
from jax._src.lax import lax as lax_internal
-from jax._src.numpy.util... | Tracking issue: inline docstrings
For many public APIs, JAX currently uses the `jax._src.numpy.util.implements` helper to define docstrings dynamically at runtime. This has several drawbacks:
1. The docstrings extracted from NumPy do not always precisely match the semantics of the associated JAX function. This leads... | Looking forward to it!
> they come from numpy, and for licensing reasons, we cannot replicate NumPy docstring verbiage within the JAX repository
IANAL, but isn't Numpy under a 3-clause BSD license, which is Apache-compatible? The Apache Software Foundation itself seems to allow 3-clause BSD code within Apache Sof... | 1,729,723,213,000 | [
"pull ready"
] | Feature Request | [
"jax/_src/numpy/util.py:_parse_numpydoc",
"jax/_src/numpy/util.py:_parse_parameters",
"jax/_src/numpy/util.py:implements"
] | [] | 3 | 433 |
jax-ml/jax | jax-ml__jax-23020 | 5cf89b3f61faec83ee0bbf27bfb6850c8da3de50 | null | diff --git a/jax/_src/numpy/setops.py b/jax/_src/numpy/setops.py
index db4237dbd069..8e0162660951 100644
--- a/jax/_src/numpy/setops.py
+++ b/jax/_src/numpy/setops.py
@@ -61,6 +61,17 @@ def _in1d(ar1: ArrayLike, ar2: ArrayLike, invert: bool) -> Array:
return (ar1_flat[:, None] == ar2_flat[None, :]).any(-1)
+de... | diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
index d5efe1e03f31..860b3358d2ef 100644
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -18,7 +18,7 @@
import collections
from collections.abc import Iterator
import copy
-from functools import partial
+from functools import partial, wraps... | shape hint arguments in lax numpy functions to enable compilation
Several functions in `jax.numpy` are not `jit`-compilable for shape-level arguments. Specifically, we can't decide their output shape from input shapes alone.
We've given [`jax.numpy.bincount`](https://github.com/google/jax/blob/db8f66d508d536e1312291... | For something like `unique`, what would the other elements of the result be filled with if not unique values? It seems like we probably would need to add another parameter for indicating the `fill_value`. | 1,723,492,758,000 | [
"pull ready"
] | Feature Request | [
"jax/_src/numpy/setops.py:setxor1d",
"jax/_src/numpy/setops.py:_intersect1d_size"
] | [
"jax/_src/numpy/setops.py:_concat_unique",
"jax/_src/numpy/setops.py:_setxor1d_size"
] | 2 | 434 |
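The shape-hint pattern the issue asks for (and which `jnp.unique` exposes via its `size` and `fill_value` parameters) fixes the output shape statically: compute the dynamic result, then pad or truncate it to `size`. A NumPy sketch of that contract; the function name is illustrative:

```python
import numpy as np

def unique_padded(x, size, fill_value):
    """Return exactly `size` elements: sorted unique values, padded/truncated."""
    u = np.unique(x)                          # dynamic length...
    out = np.full(size, fill_value, dtype=u.dtype)
    n = min(size, len(u))
    out[:n] = u[:n]                           # ...forced into a static shape
    return out

print(unique_padded(np.array([3, 1, 3, 2]), size=6, fill_value=-1))
# [ 1  2  3 -1 -1 -1]
```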
jax-ml/jax | jax-ml__jax-22897 | 8b9ceb598b368c1762246e3a81e38ea02c16e3ec | null | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
index 3af51e30585d..7ef27b1eea66 100644
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -8340,7 +8340,10 @@ def _expand_bool_indices(idx, shape):
i_shape = _shape(i)
start = len(out) + ellipsis_offset - newax... | diff --git a/tests/lax_numpy_indexing_test.py b/tests/lax_numpy_indexing_test.py
index cbb8e92ed603..bf2785f62d68 100644
--- a/tests/lax_numpy_indexing_test.py
+++ b/tests/lax_numpy_indexing_test.py
@@ -1030,6 +1030,23 @@ def testNontrivialBooleanIndexing(self):
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker)... | JAX indexing should support zero-dimensional boolean masks
NumPy supports this:
```python
>>> import numpy as np
>>> x = np.arange(3)
>>> m = np.array([], dtype=bool)
>>> x[m]
array([], dtype=int64)
>>> x = np.arange(12).reshape(3, 4)
>>> m = np.empty((3, 0), dtype=bool)
>>> x[m]
array([], dtype=int64)
`... | 1,722,963,089,000 | [
"pull ready"
] | Feature Request | [
"jax/_src/numpy/lax_numpy.py:_expand_bool_indices"
] | [] | 1 | 435 | |
jax-ml/jax | jax-ml__jax-22049 | e0efcaee840e8db7940723e65750ddf8c138b264 | null | diff --git a/jax/experimental/shard_map.py b/jax/experimental/shard_map.py
index 17965b7ceec2..23a80471279f 100644
--- a/jax/experimental/shard_map.py
+++ b/jax/experimental/shard_map.py
@@ -52,7 +52,7 @@
from jax._src.util import (HashableFunction, HashablePartial, unzip2, unzip3,
as_hasha... | diff --git a/tests/shard_map_test.py b/tests/shard_map_test.py
index 5dfeec6a20ae..e38801afb5b8 100644
--- a/tests/shard_map_test.py
+++ b/tests/shard_map_test.py
@@ -1815,6 +1815,101 @@ def g(x):
with self.assertRaisesRegex(ValueError, "spmd_axis_name cannot appear"):
jax.vmap(g, spmd_axis_name='i')(xs)
... | shard_map should support static_argnums
We've started using (a type-based derivative of) shard_map as a function decorator. Ideally we'd like to annotate our top-level training function with `@jax.jit @typed_shard_map` decorator on a single top-level function. However, `shard_map` doesn't support `static_argnums`. We'r... | Assigning @mattjj as this is similar to #17461
Hey @reinerp !
What if instead of `static_argnums`, we allowed passing `in_specs=(P(...), P(...), None, None)`? That'd be more like `vmap`'s `in_axes=None`. (Or do you prefer `static_argnums`? In that case, what do you pass for the corresponding `in_specs`? Or do we jus... | 1,719,095,011,000 | [
"pull ready"
] | Feature Request | [
"jax/experimental/shard_map.py:shard_map",
"jax/experimental/shard_map.py:_shard_map",
"jax/experimental/shard_map.py:_check_specs_vs_args",
"jax/experimental/shard_map.py:_iter_paths",
"jax/experimental/shard_map.py:_shard_map_staging"
] | [
"jax/experimental/shard_map.py:_expand_fail"
] | 5 | 436 |
jax-ml/jax | jax-ml__jax-20862 | 83aff78d1220f64d44349018b5852830d96bc269 | null | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
index 3767633deefc..f7686ec467d6 100644
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -1162,12 +1162,12 @@ def select(
raise ValueError(msg.format(len(condlist), len(choicelist)))
if len(condlist) == 0:
raise Va... | diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
index ddc599792e63..ee6c3601a1bb 100644
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -4532,6 +4532,7 @@ def testWhereScalarPromotion(self):
# maximal set of dtypes.
dtypes=itertools.combinations_with_replacement(all_dtypes, 3),
... | jnp.select should use jax.lax.select_n
### Description
jnp.select is a function that's like jnp.where, but selects between n different arrays. jnp.where is a wrapper around jax.lax.select, which makes it more flexible in terms of input shapes and dtypes.
jax.lax.select_n (added in #9482 ) is a generalization of jax... | 1,713,789,654,000 | [
"pull ready"
] | Performance Issue | [
"jax/_src/numpy/lax_numpy.py:select"
] | [] | 1 | 437 | |
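The semantics being reimplemented above: `select` is an n-way `where`, where the first condition that holds wins per element and a default applies when none does. Since `lax.select_n(index, *cases)` picks a case per element by index, the chain of pairwise selects can collapse into computing the index of the first true condition plus one `select_n`. The reference behavior in NumPy:

```python
import numpy as np

x = np.arange(6)
# First matching condition wins; elements matching neither get the default.
out = np.select([x < 3, x < 5], [x, x ** 2], default=-1)
print(out)  # [ 0  1  2  9 16 -1]
```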
jax-ml/jax | jax-ml__jax-19909 | 5f19f7712b485493ac141c44eea3b3eb1ffdfb59 | null | diff --git a/jax/_src/custom_derivatives.py b/jax/_src/custom_derivatives.py
index 2752604444dc..9a0a46e89b4b 100644
--- a/jax/_src/custom_derivatives.py
+++ b/jax/_src/custom_derivatives.py
@@ -734,12 +734,17 @@ def _flatten_bwd(in_tree, in_avals, out_trees, *args):
zero = object() # non-pytree sentinel to replace... | diff --git a/tests/api_test.py b/tests/api_test.py
index 81b04d5849a7..9ac72ea3bb2b 100644
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -9212,6 +9212,38 @@ def g(x):
g(1.) # doesn't crash
+ def test_nones_representing_zeros_in_subtrees_returned_by_bwd(self):
+ # https://github.com/google/jax/issues/... | Improve error message in custom_vjp when returning the wrong grad structure
```
import jax
from jax import numpy as jnp
def splat_core(data, warp, output_dims):
output = jnp.zeros((output_dims[0], output_dims[1], data.shape[-1]),
dtype=data.dtype)
output = output.at[:data.shape[0], :... | Hi @johnpjf
I executed the mentioned code with latest JAX version 0.4.23 on Google Colab. Now the error message gives a `ValueError` indicating that the `safe_zip() argument 2 is shorter than argument 1`.
```python
JaxStackTraceBeforeTransformation: ValueError: safe_zip() argument 2 is shorter than argument 1
... | 1,708,503,673,000 | [
"pull ready"
] | Feature Request | [
"jax/_src/custom_derivatives.py:_flatten_bwd"
] | [] | 1 | 438 |
jax-ml/jax | jax-ml__jax-19710 | b9824d7de3cb30f1df738cc42e486db3e9d915ff | null | diff --git a/jax/experimental/shard_map.py b/jax/experimental/shard_map.py
index a987d8d0faf4..40c9c216215e 100644
--- a/jax/experimental/shard_map.py
+++ b/jax/experimental/shard_map.py
@@ -923,7 +923,6 @@ def _standard_collective_check(prim, mesh, x_rep, *, axis_name, **params):
def _standard_collective_rewrite(pr... | diff --git a/tests/shard_map_test.py b/tests/shard_map_test.py
index d9776a3e6216..cec18fe47ae1 100644
--- a/tests/shard_map_test.py
+++ b/tests/shard_map_test.py
@@ -137,6 +137,28 @@ def fwd(a):
for i, a_shard in enumerate(np.split(a, 2, axis=0)):
self.assertAllClose(d.addressable_data(i), a_shard)
+ de... | [shmap] shard_map collectives don't support axis_index_groups
I am primarily interested in collective matrix multiplication algorithms where `all_to_all` and `all_gather` are useful for overlapping reduce scatter permutes and matrix multiplication.
Collectives like `psum` may be hard to support since the replication... | 1,707,372,249,000 | [
"pull ready"
] | Feature Request | [
"jax/experimental/shard_map.py:_standard_collective_rewrite"
] | [] | 1 | 439 | |
jax-ml/jax | jax-ml__jax-19601 | 9a098e922aff62a3b49bd673b9518d97ee599248 | null | diff --git a/jax/experimental/shard_map.py b/jax/experimental/shard_map.py
index dc45196f3eb4..a987d8d0faf4 100644
--- a/jax/experimental/shard_map.py
+++ b/jax/experimental/shard_map.py
@@ -923,13 +923,14 @@ def _standard_collective_check(prim, mesh, x_rep, *, axis_name, **params):
def _standard_collective_rewrite(... | diff --git a/tests/shard_map_test.py b/tests/shard_map_test.py
index 0c087fd1025d..d9776a3e6216 100644
--- a/tests/shard_map_test.py
+++ b/tests/shard_map_test.py
@@ -125,10 +125,17 @@ def test_all_gather(self):
@partial(shard_map, mesh=mesh,
in_specs=(P('z', ('x', 'y')),), out_specs=P('z', ('x', 'y'... | [shmap] shard_map doesn't support multiple axes
### Description
```
import functools
import jax
from jax import numpy as jnp
from jax import sharding
from jax.experimental import mesh_utils
from jax.experimental import shard_map
mesh = sharding.Mesh(
mesh_utils.create_device_mesh((1, 1), jax.devices(... | 1,706,721,286,000 | [
"pull ready"
] | Feature Request | [
"jax/experimental/shard_map.py:_standard_collective_rewrite"
] | [] | 1 | 440 | |
mlflow/mlflow | mlflow__mlflow-13888 | 6cf9a247892fd2fc987cba794fd176c4590ecddf | null | diff --git a/mlflow/tracking/fluent.py b/mlflow/tracking/fluent.py
index 14c98dff0e9b6..6ffa788c34534 100644
--- a/mlflow/tracking/fluent.py
+++ b/mlflow/tracking/fluent.py
@@ -219,8 +219,10 @@ def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
- status = RunStatus.FI... | diff --git a/tests/tracking/fluent/test_fluent.py b/tests/tracking/fluent/test_fluent.py
index 486f78665e84b..77e6ed61bfd73 100644
--- a/tests/tracking/fluent/test_fluent.py
+++ b/tests/tracking/fluent/test_fluent.py
@@ -1659,3 +1659,21 @@ def test_subprocess_inherit_registry_uri(tmp_path):
text=True,
)
... | [BUG] end_run() within nested run context affects parent run as well
### Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### Where did you encounter this bug?
Local machine
### ML... | no.
`with mlflow.start_run(nested=True):`
means that when the code exits the `with` block, the started run is ended.
if you use `mlflow.start_run(...)` without `with`, then it will not be ended automatically.
oh, I got your point,
in your case, it unexpectedly terminates the parent run at the step 3 in the lo... | 1,732,713,180,000 | [
"rn/bug-fix",
"area/tracking",
"v2.18.1"
] | Bug Report | [
"mlflow/tracking/fluent.py:ActiveRun.__exit__"
] | [] | 1 | 442 |
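The essence of the fix above: `ActiveRun.__exit__` must finish the run only if it is still the active one; otherwise a manual `end_run()` inside the nested `with` (step 3 in the report) makes the later `__exit__` call terminate the parent instead. The guard pattern, as a generic sketch rather than MLflow's actual code:

```python
class Run:
    def __init__(self, name, stack):
        self.name, self.stack, self.status = name, stack, "RUNNING"

    def __enter__(self):
        self.stack.append(self)
        return self

    def end(self, status="FINISHED"):
        self.status = status
        if self.stack and self.stack[-1] is self:
            self.stack.pop()

    def __exit__(self, *exc):
        # Guard: only finalize if this run is still active; a second end()
        # after a manual one would otherwise pop (and end) the parent run.
        if self.stack and self.stack[-1] is self:
            self.end()

stack = []
with Run("parent", stack) as parent:
    with Run("child", stack) as child:
        child.end()              # like calling mlflow.end_run() in the nested run
    print(parent.status)         # RUNNING -- the parent survives the child exit
print(parent.status)             # FINISHED
```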
mlflow/mlflow | mlflow__mlflow-13802 | 9fb2524d8ed646d8dc6b3e86bcb1e5d63513b189 | null | diff --git a/mlflow/openai/_openai_autolog.py b/mlflow/openai/_openai_autolog.py
index 294153c1bf38a..757bf5360765b 100644
--- a/mlflow/openai/_openai_autolog.py
+++ b/mlflow/openai/_openai_autolog.py
@@ -4,7 +4,7 @@
import os
from contextlib import contextmanager
from copy import deepcopy
-from typing import Iterat... | diff --git a/tests/openai/test_openai_autolog.py b/tests/openai/test_openai_autolog.py
index 0ad67fbafd1ce..b354d43ca002d 100644
--- a/tests/openai/test_openai_autolog.py
+++ b/tests/openai/test_openai_autolog.py
@@ -362,3 +362,41 @@ def test_autolog_use_active_run_id(client, log_models):
assert traces[1].info.req... | [BUG] MLflow Tracing Fails to Parse Response Objects When Using OpenAI via DSPy
### Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### Where did you encounter this bug?
Databricks... | Thanks for reporting this. It looks like dspy uses litellm, and litellm calls `openai_client.chat.completions.with_raw_response.create`:
```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-b4df211a-5889-49df-bdd9-aaf77bee200f/lib/python3.10/site-packages/dspy/utils/callback.py", line 202, in wrapper
return ... | 1,731,666,041,000 | [
"rn/bug-fix",
"v2.18.0"
] | Bug Report | [
"mlflow/openai/_openai_autolog.py:patched_call"
] | [
"mlflow/openai/_openai_autolog.py:_try_parse_raw_response"
] | 1 | 443 |
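The added `_try_parse_raw_response` targets the case the traceback shows: litellm calls `with_raw_response.create`, which returns a raw response wrapper exposing `.parse()` rather than the parsed completion object the autologger expects. A defensive unwrap sketch, duck-typed here for illustration; the real helper checks the OpenAI wrapper types explicitly:

```python
def try_parse_raw_response(response):
    """Unwrap raw-response wrappers that expose .parse(); pass others through."""
    parse = getattr(response, "parse", None)
    if callable(parse):
        try:
            return parse()
        except Exception:
            return response
    return response

class FakeRawResponse:            # stand-in for a raw/legacy response wrapper
    def parse(self):
        return {"choices": ["hello"]}

print(try_parse_raw_response(FakeRawResponse()))      # {'choices': ['hello']}
print(try_parse_raw_response({"already": "parsed"}))  # {'already': 'parsed'}
```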
mlflow/mlflow | mlflow__mlflow-10923 | 4179ca2fffbaab94f6f1f8d2fc55acd2534b557d | null | diff --git a/mlflow/store/artifact/local_artifact_repo.py b/mlflow/store/artifact/local_artifact_repo.py
index f7d78e90fd0ac..42a0dd98938cc 100644
--- a/mlflow/store/artifact/local_artifact_repo.py
+++ b/mlflow/store/artifact/local_artifact_repo.py
@@ -9,6 +9,7 @@
mkdir,
relative_path_to_artifact_path,
)
+fr... | diff --git a/tests/store/artifact/test_local_artifact_repo.py b/tests/store/artifact/test_local_artifact_repo.py
index ed0c7693848b8..0cea539777dfb 100644
--- a/tests/store/artifact/test_local_artifact_repo.py
+++ b/tests/store/artifact/test_local_artifact_repo.py
@@ -200,3 +200,8 @@ def test_delete_artifacts(local_art... | [BUG] Security Vulnerability
Please check it here https://huntr.com/bounties/e3d7a994-bfd6-4772-ac9b-9aee1aa16a5f/
| 1,706,510,204,000 | [
"rn/none"
] | Security Vulnerability | [
"mlflow/store/artifact/local_artifact_repo.py:LocalArtifactRepository.download_artifacts",
"mlflow/store/artifact/local_artifact_repo.py:LocalArtifactRepository._download_file"
] | [] | 2 | 444 | |
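The patch above wires a path-validation helper into `download_artifacts`/`_download_file`, the usual remedy for the path-traversal class reported on huntr: reject absolute paths and any `..` component before joining user input onto the artifact root. A generic sketch, not MLflow's actual `validate_path_is_safe`; real validators also worry about backslashes and drive letters:

```python
import posixpath
from pathlib import PurePosixPath

def is_safe_relative_path(path: str) -> bool:
    """True if `path` stays inside the root it will be joined to."""
    if not path or posixpath.isabs(path):
        return False
    # PurePosixPath does not normalize, so '..' components remain visible.
    return ".." not in PurePosixPath(path).parts

print(is_safe_relative_path("runs/model.pkl"))     # True
print(is_safe_relative_path("../../etc/passwd"))   # False
```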
sqlfluff/sqlfluff | sqlfluff__sqlfluff-6554 | 469c38f24169580b2e16fc4603b8fd09fa1ec4db | null | diff --git a/src/sqlfluff/rules/references/RF01.py b/src/sqlfluff/rules/references/RF01.py
index 3ef60bbd8ed..ef9fe2c4955 100644
--- a/src/sqlfluff/rules/references/RF01.py
+++ b/src/sqlfluff/rules/references/RF01.py
@@ -37,7 +37,7 @@ class Rule_RF01(BaseRule):
.. note::
- This rule is disabled by defaul... | diff --git a/test/fixtures/rules/std_rule_cases/RF01.yml b/test/fixtures/rules/std_rule_cases/RF01.yml
index 01253a0895b..10276c06530 100644
--- a/test/fixtures/rules/std_rule_cases/RF01.yml
+++ b/test/fixtures/rules/std_rule_cases/RF01.yml
@@ -36,7 +36,7 @@ test_pass_object_referenced_3:
SELECT * FROM db.sc.tbl2
... | DuckDB needs to be added to list of dialects supporting dot-based struct access
### Search before asking
- [X] I searched the [issues](https://github.com/sqlfluff/sqlfluff/issues) and found no similar issues.
### What Happened
The following query
```sql
SELECT ex.x.hi
FROM (SELECT { 'hi': 'there' } AS x... | 1,736,346,017,000 | [] | Feature Request | [
"src/sqlfluff/rules/references/RF01.py:Rule_RF01._dialect_supports_dot_access"
] | [] | 1 | 445 | |
sqlfluff/sqlfluff | sqlfluff__sqlfluff-6444 | e864e174ae53a0d481247cf00af0ecccd3f0f41e | null | diff --git a/src/sqlfluff/utils/analysis/select.py b/src/sqlfluff/utils/analysis/select.py
index 27b722acff1..3b9576c2aeb 100644
--- a/src/sqlfluff/utils/analysis/select.py
+++ b/src/sqlfluff/utils/analysis/select.py
@@ -212,7 +212,13 @@ def _get_pivot_table_columns(
def _get_lambda_argument_columns(
segment: Bas... | diff --git a/test/fixtures/rules/std_rule_cases/RF02.yml b/test/fixtures/rules/std_rule_cases/RF02.yml
index 78c8b9ae814..fdc0e55e7ba 100644
--- a/test/fixtures/rules/std_rule_cases/RF02.yml
+++ b/test/fixtures/rules/std_rule_cases/RF02.yml
@@ -106,6 +106,18 @@ test_allow_unqualified_references_in_sparksql_lambdas:
... | RF02 issue from (x, acc) in aggregate function
### Search before asking
- [X] I searched the [issues](https://github.com/sqlfluff/sqlfluff/issues) and found no similar issues.
### What Happened
This is more of a question, really. I'm not sure if the issue is with sqlfluff, or if the issue is between the keyboard an... | 1,731,216,328,000 | [] | Bug Report | [
"src/sqlfluff/utils/analysis/select.py:_get_lambda_argument_columns"
] | [] | 1 | 446 | |
sqlfluff/sqlfluff | sqlfluff__sqlfluff-6391 | 693381882a673c0ae7831f13cf909e42cf41bcb9 | null | diff --git a/src/sqlfluff/core/helpers/string.py b/src/sqlfluff/core/helpers/string.py
index 85d21b0bf56..c425e9cc612 100644
--- a/src/sqlfluff/core/helpers/string.py
+++ b/src/sqlfluff/core/helpers/string.py
@@ -38,31 +38,45 @@ def split_colon_separated_string(in_str: str) -> Tuple[Tuple[str, ...], str]:
>>> spli... | diff --git a/test/core/config/fluffconfig_test.py b/test/core/config/fluffconfig_test.py
index 35b0619b546..bcd67dda9d4 100644
--- a/test/core/config/fluffconfig_test.py
+++ b/test/core/config/fluffconfig_test.py
@@ -340,6 +340,14 @@ def test__process_inline_config():
cfg.process_inline_config("-- sqlfluff:jinja:m... | Improve inline directives for templater context
### Search before asking
- [X] I searched the [issues](https://github.com/sqlfluff/sqlfluff/issues) and found no similar issues.
### Description
I cannot alter the Jinja context in an inline fashion.
Good:
```
[sqlfluff:templater:jinja:context]
my_inline_dict={"k1": ... | 1,729,717,254,000 | [] | Feature Request | [
"src/sqlfluff/core/helpers/string.py:split_colon_separated_string"
] | [
"src/sqlfluff/core/helpers/string.py:should_split_on_colon"
] | 1 | 448 | |
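The fix above adds a `should_split_on_colon` predicate so that colons inside a brace-delimited value (the inline `my_inline_dict={"k1": ...}` case) no longer split the directive. The core idea, splitting only at colons that sit at brace/bracket depth zero, sketched here rather than sqlfluff's exact implementation:

```python
def split_colon_separated_string(in_str):
    """Split 'key:key:value' on colons, ignoring colons inside {...} or [...]."""
    parts, buf, depth = [], [], 0
    for ch in in_str:
        if ch in "{[":
            depth += 1
        elif ch in "}]":
            depth -= 1
        if ch == ":" and depth == 0:
            parts.append("".join(buf).strip())
            buf = []
        else:
            buf.append(ch)
    parts.append("".join(buf).strip())
    return tuple(parts[:-1]), parts[-1]

print(split_colon_separated_string('jinja:context:my_dict={"k1": "v1"}'))
# (('jinja', 'context'), 'my_dict={"k1": "v1"}')
```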
sqlfluff/sqlfluff | sqlfluff__sqlfluff-6312 | 00f865f03718da24b45f042311dda15fa5f85327 | null | diff --git a/src/sqlfluff/cli/commands.py b/src/sqlfluff/cli/commands.py
index 894c39b3c45..84dddbe3aa5 100644
--- a/src/sqlfluff/cli/commands.py
+++ b/src/sqlfluff/cli/commands.py
@@ -715,6 +715,11 @@ def lint(
github_result_native = []
for record in result.as_records():
filepath = recor... | diff --git a/test/cli/commands_test.py b/test/cli/commands_test.py
index 823ada02985..34f3739aee9 100644
--- a/test/cli/commands_test.py
+++ b/test/cli/commands_test.py
@@ -1652,7 +1652,7 @@ def test__cli__command_lint_serialize_multiple_files(serialize, write_file, tmp_
# SQLFluff produces trailing newline
... | Include filename in visible GHA output
### Search before asking
- [X] I searched the [issues](https://github.com/sqlfluff/sqlfluff/issues) and found no similar issues.
### Description
When using `github-annotation-native` format, the raw output contains the filename
```
::warning title=SQLFluff,file=migrations/ma... | 1,728,348,019,000 | [] | Feature Request | [
"src/sqlfluff/cli/commands.py:lint"
] | [] | 1 | 451 | |
sqlfluff/sqlfluff | sqlfluff__sqlfluff-5805 | d28db5732dc5fc28c4b60690bc3f44966f89e5c2 | null | diff --git a/src/sqlfluff/cli/commands.py b/src/sqlfluff/cli/commands.py
index ad60730e7e9..3addea2382d 100644
--- a/src/sqlfluff/cli/commands.py
+++ b/src/sqlfluff/cli/commands.py
@@ -313,6 +313,17 @@ def core_options(f: Callable) -> Callable:
" inline directives."
),
)(f)
+ f = click.opt... | diff --git a/test/cli/commands_test.py b/test/cli/commands_test.py
index ed3057d4359..c4af7082798 100644
--- a/test/cli/commands_test.py
+++ b/test/cli/commands_test.py
@@ -234,6 +234,103 @@ def test__cli__command_extra_config_fail():
)
+stdin_cli_input = (
+ "SELECT\n A.COL1,\n B.COL2\nFROM TABA AS A... | Add support for a `--stdin-filename` cli option
### Search before asking
- [X] I searched the [issues](https://github.com/sqlfluff/sqlfluff/issues) and found no similar issues.
### Description
Tools such as `black` and `ruff` support a `--stdin-filename` option for interpreting configuration options when passing fi... | 1,713,875,563,000 | [] | Feature Request | [
"src/sqlfluff/cli/commands.py:core_options",
"src/sqlfluff/cli/commands.py:lint",
"src/sqlfluff/cli/commands.py:fix",
"src/sqlfluff/cli/commands.py:cli_format",
"src/sqlfluff/cli/commands.py:parse"
] | [] | 5 | 453 | |
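The new flag above only matters when reading from stdin: it supplies a pretend filename so per-file configuration resolves as if the content lived at that path, and findings are reported under that name. A generic argparse sketch of the plumbing (sqlfluff itself uses click; names here are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(prog="lint")
parser.add_argument("paths", nargs="*", default=["-"])
parser.add_argument(
    "--stdin-filename",
    help="Filename to associate with stdin input, for config resolution.",
)

args = parser.parse_args(["--stdin-filename", "queries/q1.sql", "-"])
# When the path is '-', report findings under the supplied name instead.
display_name = args.stdin_filename if args.paths == ["-"] else args.paths[0]
print(display_name)  # queries/q1.sql
```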
ranaroussi/yfinance | ranaroussi__yfinance-2139 | d3ec9c63ff25342a838b8f4e6ae9ab2d7b79cfac | null | diff --git a/yfinance/scrapers/history.py b/yfinance/scrapers/history.py
index bda169ea..e4bdca2b 100644
--- a/yfinance/scrapers/history.py
+++ b/yfinance/scrapers/history.py
@@ -1426,7 +1426,7 @@ def _fix_bad_div_adjust(self, df, interval, currency):
typical_volatility = np.nan
else:
... | diff --git a/tests/data/KWS-L-1d-bad-div-fixed.csv b/tests/data/KWS-L-1d-bad-div-fixed.csv
index d3830ccd..2e701df5 100644
--- a/tests/data/KWS-L-1d-bad-div-fixed.csv
+++ b/tests/data/KWS-L-1d-bad-div-fixed.csv
@@ -1,666 +1,725 @@
Datetime,Open,High,Low,Close,Adj Close,Volume,Dividends,Stock Splits,Repaired?
-2022-01-... | Tests failing
Running `python -m unittest discover -s tests` from #1084 causes 5 failures and 1 error.
======================================================================
ERROR: test_resampling (test_price_repair.TestPriceRepairAssumptions.test_resampling)
-------------------------------------------------------... | I ran into this too.
I think the first `test_price_repair.TestPriceRepair.test_repair_bad_div_adjusts` and `test_price_repair.TestPriceRepair.test_repair_zeroes_daily` are related to https://github.com/ranaroussi/yfinance/commit/8daa47716766cca2820b9bf78bb9f4487d387c7c as when I checkout the branch before that, it ... | 1,732,052,256,000 | [] | Bug Report | [
"yfinance/scrapers/history.py:PriceHistory._fix_bad_div_adjust"
] | [] | 1 | 454 |
huggingface/diffusers | huggingface__diffusers-10189 | f35a38725b4d263330a591dc7bdb54b002b96675 | null | diff --git a/src/diffusers/pipelines/pipeline_utils.py b/src/diffusers/pipelines/pipeline_utils.py
index a504184ea2f2..c505c5a262a3 100644
--- a/src/diffusers/pipelines/pipeline_utils.py
+++ b/src/diffusers/pipelines/pipeline_utils.py
@@ -13,6 +13,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or ... | diff --git a/tests/pipelines/test_pipelines.py b/tests/pipelines/test_pipelines.py
index 43b01c40f5bb..423c82e0602e 100644
--- a/tests/pipelines/test_pipelines.py
+++ b/tests/pipelines/test_pipelines.py
@@ -1802,6 +1802,16 @@ def test_pipe_same_device_id_offload(self):
sd.maybe_free_model_hooks()
asse... | [pipelines] add better checking when a wrong model is passed when initializing a pipeline
With this code:
```py
from diffusers import DiffusionPipeline
from transformers import T5EncoderModel
import torch
repo_id = "black-forest-labs/FLUX.1-dev"
text_encoder_2 = T5EncoderModel.from_pretrained(repo_id, subf... | @sayakpaul
1. adding a `validate_pipeline()` method in pipeline loading class if not validated throw simple yet understandable errors like these not sure how good these are:
```
`ValueError: Mismatched model type for component 'text_encoder_2'.
Expected types: ['T5EncoderModel'], but got: 'T5Model'
```
Ensure the co... | 1,733,912,537,000 | [] | Feature Request | [
"src/diffusers/pipelines/pipeline_utils.py:DiffusionPipeline.from_pretrained"
] | [] | 1 | 455 |
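The hint in this row sketches a `validate_pipeline()` check with a readable error. A minimal, hypothetical version of that type check (the function name and message format are illustrative, not diffusers' actual API) could look like:

```python
def validate_component(name, component, expected_types):
    """Fail fast with a readable message when a pipeline component has the
    wrong model class (e.g. T5Model where T5EncoderModel is expected)."""
    if not isinstance(component, tuple(expected_types)):
        expected = [t.__name__ for t in expected_types]
        raise ValueError(
            f"Mismatched model type for component '{name}'. "
            f"Expected types: {expected}, but got: '{type(component).__name__}'"
        )
```

Calling it with a mismatched component raises early, instead of failing deep inside the forward pass.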
huggingface/diffusers | huggingface__diffusers-9659 | 5956b68a6927126daffc2c5a6d1a9a189defe288 | null | diff --git a/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3_img2img.py b/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3_img2img.py
index 96d53663b867..e7345ad96624 100644
--- a/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3_img2img.py
+++ b/src/dif... | diff --git a/tests/lora/test_lora_layers_sd3.py b/tests/lora/test_lora_layers_sd3.py
index 8f61c95c2fc8..78d4b786d21b 100644
--- a/tests/lora/test_lora_layers_sd3.py
+++ b/tests/lora/test_lora_layers_sd3.py
@@ -12,13 +12,29 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the Licens... | add load_lora_weights to pipeline_stable_diffusion_3_img2img.py
**Is your feature request related to a problem? Please describe.**
Yes, there is no effective way to load lora weights with SD3
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...].
I can't get LoRa weights to ... | Would you like to open a PR for this?
It should be as simple as deriving from the lora loader class and supporting the lora parameters in the pipeline I believe. Relevant bits:
https://github.com/huggingface/diffusers/blob/31058cdaef63ca660a1a045281d156239fba8192/src/diffusers/pipelines/stable_diffusion_3/pipelin... | 1,728,733,527,000 | [] | Feature Request | [
"src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3_img2img.py:StableDiffusion3Img2ImgPipeline.__call__"
] | [
"src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3_img2img.py:StableDiffusion3Img2ImgPipeline.joint_attention_kwargs"
] | 1 | 456 |
scrapy/scrapy | scrapy__scrapy-6542 | ab5cb7c7d9e268b501009d991d97ca19b6f7fe96 | null | diff --git a/scrapy/contracts/__init__.py b/scrapy/contracts/__init__.py
index 9071395e3d9..3b4f932a014 100644
--- a/scrapy/contracts/__init__.py
+++ b/scrapy/contracts/__init__.py
@@ -38,9 +38,7 @@ def add_pre_hook(self, request: Request, results: TestResult) -> Request:
assert cb is not None
... | diff --git a/tests/test_contracts.py b/tests/test_contracts.py
index d578b3af450..b0cb92d12d9 100644
--- a/tests/test_contracts.py
+++ b/tests/test_contracts.py
@@ -556,3 +556,61 @@ def test_inherited_contracts(self):
requests = self.conman.from_spider(spider, self.results)
self.assertTrue(requests)... | return in finally can swallow exception
### Description
There are two places in `scrapy/contracts/__init__.py` where a `finally:` body contains a `return` statement, which would swallow any in-flight exception.
This means that if a `BaseException` (such as `KeyboardInterrupt`) is raised from the body, or any exc... | > If the finally clause executes a [break](https://docs.python.org/3/reference/simple_stmts.html#break), [continue](https://docs.python.org/3/reference/simple_stmts.html#continue) or [return](https://docs.python.org/3/reference/simple_stmts.html#return) statement, exceptions are not re-raised.
TIL
Hey,
What about... | 1,731,554,370,000 | [] | Bug Report | [
"scrapy/contracts/__init__.py:Contract.add_pre_hook",
"scrapy/contracts/__init__.py:Contract.add_post_hook"
] | [] | 2 | 457 |
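The Python semantics described in the scrapy report above are easy to demonstrate: a `return` inside a `finally:` block suppresses even a `BaseException` such as `KeyboardInterrupt` raised in the `try:` body.

```python
def run_contract_hook():
    """Mimics the flagged pattern: try/except/finally with a `return`
    inside the finally block."""
    try:
        raise KeyboardInterrupt("in-flight")  # a BaseException
    except ValueError:
        pass  # KeyboardInterrupt is not caught here...
    finally:
        return "finally wins"  # ...yet the return still swallows it

assert run_contract_hook() == "finally wins"  # nothing escapes
```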
instructor-ai/instructor | instructor-ai__instructor-1254 | bc7a7b3265669ca8266722bb92a437a6da64b66c | null | diff --git a/instructor/retry.py b/instructor/retry.py
index 205c7843f..853a73a54 100644
--- a/instructor/retry.py
+++ b/instructor/retry.py
@@ -73,7 +73,7 @@ def initialize_usage(mode: Mode) -> CompletionUsage | Any:
"""
total_usage = CompletionUsage(completion_tokens=0, prompt_tokens=0, total_tokens=0,
... | prompt_tokens_details not being populated correctly in response model
- [ - ] This is actually a bug report.
- [ ] I am not getting good LLM Results
- [ - ] I have tried asking for help in the community on discord or discussions and have not received a response.
- [ - ] I have tried searching the documentation and h... | This is happening due to a typo in https://github.com/instructor-ai/instructor/blob/bc7a7b3265669ca8266722bb92a437a6da64b66c/instructor/retry.py#L76
Its currently:
```python
total_usage = CompletionUsage(completion_tokens=0, prompt_tokens=0, total_tokens=0,
completion_tokens_details = CompletionTokensDet... | 1,734,033,074,000 | null | Bug Report | [
"instructor/retry.py:initialize_usage"
] | [] | 1 | 458 | |
prowler-cloud/prowler | prowler-cloud__prowler-6157 | bd9673c9deaebbd52b6303dd991d9bbe6821d090 | null | diff --git a/prowler/providers/aws/services/rds/rds_instance_no_public_access/rds_instance_no_public_access.py b/prowler/providers/aws/services/rds/rds_instance_no_public_access/rds_instance_no_public_access.py
index ca0e1b3bebe..dffa850693c 100644
--- a/prowler/providers/aws/services/rds/rds_instance_no_public_access/... | rds_instance_no_public_access reports incorrect SG
### Steps to Reproduce
1. prowler aws -c rds_instance_no_public_access
2. AWS
3. N/A
4. See finding's `status_detail`
### Expected behavior
The status detail should correctly identify the Security Group(s) that contain the rule(s)
### Actual Result with Screensh... | Looks like in code the IF Statement will `break ` on the first instance of port=DB PORT (3306) and source=ANY and will not report on any other Security groups. Its either public or its not. I suspect that the check doesnt matter how many SGs allow access from ANY, its the fact the RDS instance is publicly available fla... | 1,733,992,670,000 | null | Bug Report | [
"prowler/providers/aws/services/rds/rds_instance_no_public_access/rds_instance_no_public_access.py:rds_instance_no_public_access.execute"
] | [] | 1 | 459 | |
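The hint above notes the check `break`s on the first security group whose rule matches. A hedged sketch (hypothetical data shapes, not Prowler's actual service models) of collecting every offending SG instead of stopping at the first:

```python
def public_security_groups(security_groups, db_port):
    """Return the ID of every security group exposing db_port to the
    world, rather than breaking out of the outer loop on the first hit."""
    offenders = []
    for sg in security_groups:
        for rule in sg["ingress_rules"]:
            if rule["port"] == db_port and rule["source"] == "0.0.0.0/0":
                offenders.append(sg["id"])
                break  # one offending rule is enough to flag this SG
    return offenders
```

The status detail can then name all offending groups, matching the expected behavior in the report.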
prowler-cloud/prowler | prowler-cloud__prowler-5961 | 6dea923866a3ab77b58e3fd3c8457f6bb67fb254 | null | diff --git a/prowler/providers/aws/services/rds/rds_service.py b/prowler/providers/aws/services/rds/rds_service.py
index b4dd5d66ce9..60d24b1cb1b 100644
--- a/prowler/providers/aws/services/rds/rds_service.py
+++ b/prowler/providers/aws/services/rds/rds_service.py
@@ -446,7 +446,7 @@ def _describe_db_event_subscription... | rds_service.py error
### Steps to Reproduce
1. Running prowler aws -log-level ERROR
2. AWS
3. Single Account
### Expected behavior
To execute without throwing the error
### Actual Result with Screenshots or Logs
2024-11-28 10:20:18,851 [File: rds_service.py:458] [Module: rds_service] ERROR: eu-west-2 -- KeyError... | 1,732,873,038,000 | null | Bug Report | [
"prowler/providers/aws/services/rds/rds_service.py:RDS._describe_db_event_subscriptions"
] | [] | 1 | 460 | ||
prowler-cloud/prowler | prowler-cloud__prowler-5856 | a83725fbedb15d6465bee076b8c8d740e677f8d1 | null | diff --git a/prowler/providers/aws/services/ec2/lib/instance.py b/prowler/providers/aws/services/ec2/lib/instance.py
index 150d37db52e..bef05ad7c50 100644
--- a/prowler/providers/aws/services/ec2/lib/instance.py
+++ b/prowler/providers/aws/services/ec2/lib/instance.py
@@ -1,3 +1,4 @@
+from prowler.lib.check.models impo... | Prowler CSV Output no longer outputs Failed Critical Findings
### Steps to Reproduce
Running prowler 4.5.3
Execute prowler aws against an AWS account.
CSV Outputs PASS critical findings but does output FAIL findings.
Prowler Dashboard does not display any critical findings
It was working with 4.3.5 but afte... | Hello! @garym-krrv Could you add the `--log-level ERROR` flag to your Prowler execution to see if we can find logs that indicate what the error might be?
Will do.. let me get back to you @pedrooot
Many Thanks | 1,732,201,006,000 | null | Bug Report | [
"prowler/providers/aws/services/ec2/lib/instance.py:get_instance_public_status"
] | [] | 1 | 461 | |
prowler-cloud/prowler | prowler-cloud__prowler-5814 | 572d5a1f2e2bd336c40721324a159f18fd413a58 | null | diff --git a/prowler/providers/kubernetes/services/core/core_seccomp_profile_docker_default/core_seccomp_profile_docker_default.py b/prowler/providers/kubernetes/services/core/core_seccomp_profile_docker_default/core_seccomp_profile_docker_default.py
index 71c4a7f28af..20a9c16b2fb 100644
--- a/prowler/providers/kuberne... | CIS 1.8 5.7.2 fails even though seccomp profile is set to runTimeDefault
### Steps to Reproduce
1. Create a Pod with securityContext.seccompProfile.type: RunTimeDefault
2. Run Prowler
3. Prowler will complain on "Pod <pod name> does not have docker/default seccomp profile enabled."
### Expected behavior
CIS 1.8 5.... | Hey! @misterdohl thanks for the ping. Requirement `5.7.2` is associated with the check `core_seccomp_profile_docker_default`. Based on the information you’ve provided, I can see that a modification will be needed to improve it. I’ll let you know as soon as the change is made! | 1,731,962,338,000 | null | Bug Report | [
"prowler/providers/kubernetes/services/core/core_seccomp_profile_docker_default/core_seccomp_profile_docker_default.py:core_seccomp_profile_docker_default.execute"
] | [] | 1 | 462 | |
prowler-cloud/prowler | prowler-cloud__prowler-5811 | 193b79c221275c59b9979bc2b47d53a0f04e32fc | null | diff --git a/prowler/providers/aws/services/wafv2/wafv2_service.py b/prowler/providers/aws/services/wafv2/wafv2_service.py
index 4971f1c4223..85feed76f62 100644
--- a/prowler/providers/aws/services/wafv2/wafv2_service.py
+++ b/prowler/providers/aws/services/wafv2/wafv2_service.py
@@ -99,7 +99,7 @@ def _get_logging_conf... | Invalid WAF arn
### Steps to Reproduce
Run prowler on Docker against an account with global WAF enabled.
### Expected behavior
To pass
### Actual Result with Screenshots or Logs
2024-11-18 10:09:43,933 [File: wafv2_service.py:122] [Module: wafv2_service] ERROR: us-east-1 -- WAFInvalidParameterException[105]: A... | Thank you very much for using Prowler and for reporting this issue.
We are reviewing it and will provide you with a solution as soon as possible! 🚀 | 1,731,943,339,000 | null | Bug Report | [
"prowler/providers/aws/services/wafv2/wafv2_service.py:WAFv2._list_resources_for_web_acl"
] | [] | 1 | 463 | |
prowler-cloud/prowler | prowler-cloud__prowler-5653 | 816b49fac5f9c9e011fcbb1039938ad01a4d92c8 | null | diff --git a/prowler/providers/common/provider.py b/prowler/providers/common/provider.py
index 3681cdcc262..4e86ad42bbf 100644
--- a/prowler/providers/common/provider.py
+++ b/prowler/providers/common/provider.py
@@ -161,54 +161,54 @@ def init_global_provider(arguments: Namespace) -> None:
if not isinstanc... | AWS MuteList broken
### Steps to Reproduce
**What command are you running:**
```bash
prowler aws -M json-ocsf -f us-west-2 -w mutelist.yaml --ignore-exit-code-3
```
**Cloud provider you are launching:** AWS
**Environment you have:**
* Single account
* Prowler open source
* Issues with:
* Prowler ins... | I was able to solve this by updating the arguments to the AWS provider when it is created:
I made this change in the `providers/common/provider.py` file
```diff
10a11
> from prowler.lib.mutelist.models import mutelist_schema
176,177c177,178
< arguments.config_file,
< ... | 1,730,933,449,000 | null | Bug Report | [
"prowler/providers/common/provider.py:Provider.init_global_provider"
] | [] | 1 | 464 | |
vllm-project/vllm | vllm-project__vllm-11138 | 85362f028c0324d8d00b0438f29c3d9f64737b9a | null | diff --git a/vllm/executor/ray_utils.py b/vllm/executor/ray_utils.py
index 4f28efd639084..426aa1b5c728f 100644
--- a/vllm/executor/ray_utils.py
+++ b/vllm/executor/ray_utils.py
@@ -277,10 +277,14 @@ def initialize_ray_cluster(
f"Total number of devices: {device_bundles}.")
else:
num_devic... | [Feature]: Allow Ray placement group more time to wait for resources to be ready
### 🚀 The feature, motivation and pitch
```
ray start --head --dashboard-host 0.0.0.0 --disable-usage-stats --block --num-gpus 1
python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2.5-1.5B-Instruct --trust-remote-... | 1,734,011,795,000 | null | Feature Request | [
"vllm/executor/ray_utils.py:initialize_ray_cluster"
] | [] | 1 | 465 | ||
vllm-project/vllm | vllm-project__vllm-11073 | 75f89dc44c6e44cc28bae59d5b40a588735b507b | null | diff --git a/vllm/entrypoints/openai/serving_completion.py b/vllm/entrypoints/openai/serving_completion.py
index fc1c4908d6650..03bcf26ae7e91 100644
--- a/vllm/entrypoints/openai/serving_completion.py
+++ b/vllm/entrypoints/openai/serving_completion.py
@@ -392,6 +392,12 @@ def request_output_to_completion_response(
... | [Bug]: Internal Server Error when echo'ing logprobs with sampling
### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
Your output of `python collect_env.py` here
```
</details>
### Model Input Dumps
_No response_
### 🐛 Describe the bug
Sendi... | @DarkLight1337 could you please assign me to this one? I'll investigate. | 1,733,866,457,000 | null | Bug Report | [
"vllm/entrypoints/openai/serving_completion.py:OpenAIServingCompletion.request_output_to_completion_response"
] | [] | 1 | 466 | |
vllm-project/vllm | vllm-project__vllm-10903 | 82eb5ea8f3bd3aabbe5c2fd43e37d263768603c5 | null | diff --git a/vllm/v1/worker/gpu_model_runner.py b/vllm/v1/worker/gpu_model_runner.py
index 4692762493f00..e8d964a722f60 100644
--- a/vllm/v1/worker/gpu_model_runner.py
+++ b/vllm/v1/worker/gpu_model_runner.py
@@ -260,7 +260,8 @@ def _prepare_inputs(self, scheduler_output: "SchedulerOutput"):
# E.g., [0, 1, 0, ... | [Bug, V1]: LlaVa outputs wrong results in batch inference with V1 code(V0 code is correct)
### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
Warning: Your installation of OpenCV appears to be broken: module 'cv2.dnn' has no attribute 'DictValue'.Please follow ... | cc @ywang96 @WoosukKwon
@PYNing Thanks for reporting this bug! I can confirm this is reproducible, and only happens ~~when `max_model_len` is smaller than `max_num_batched_tokens`.~~ when the `max_model_len` is not divisible by `block_size` (default block_size is 16 in V1).
Code to reproduce:
``` python
import os... | 1,733,348,156,000 | null | Bug Report | [
"vllm/v1/worker/gpu_model_runner.py:GPUModelRunner._prepare_inputs"
] | [] | 1 | 467 | |
vllm-project/vllm | vllm-project__vllm-10809 | c11f172187b6f44710e1f011ca8bff923ce49a7f | null | diff --git a/vllm/engine/metrics.py b/vllm/engine/metrics.py
index 5bfd6a9f4b386..4869557ba9b44 100644
--- a/vllm/engine/metrics.py
+++ b/vllm/engine/metrics.py
@@ -473,13 +473,13 @@ def log(self, stats: Stats) -> None:
)
if (stats.cpu_prefix_cache_hit_rate >= 0
or stats.g... | [Core] Avoid metrics log noise when idle
1a789ae7 [Core] Avoid metrics log noise when idle
commit 1a789ae76695bca801b55c9d4531e5438a3378c4
Author: Russell Bryant <rbryant@redhat.com>
Date: Thu Sep 26 20:56:41 2024 +0000
[Core] Avoid metrics log noise when idle
Prior to this change, there was always a ... | 👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run `fastcheck` CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your `fastch... | 1,733,070,837,000 | null | Performance Issue | [
"vllm/engine/metrics.py:LoggingStatLogger.log"
] | [] | 1 | 468 | |
vllm-project/vllm | vllm-project__vllm-10778 | c82b432d4a40fd6376a35fd38cb5fc37e9c53798 | null | diff --git a/vllm/model_executor/models/idefics3.py b/vllm/model_executor/models/idefics3.py
index 014e27bc869d4..e5d2edbd81eb1 100644
--- a/vllm/model_executor/models/idefics3.py
+++ b/vllm/model_executor/models/idefics3.py
@@ -267,54 +267,56 @@ def input_processor_for_idefics3(ctx: InputContext,
n_images_in_text... | [Bug]: idefics3 doesn't stream
### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Arch Linux (x8... | Please re-run this with `--disable-frontend-multiprocessing` to get a clearer stack trace.
Sure, here you go:
```
ERROR: Exception in ASGI application
+ Exception Group Traceback (most recent call last):
| File "/home/jeff/.virtualenvs/vllm312/lib/python3.12/site-packages/starlette/middleware/base.py", l... | 1,732,881,965,000 | null | Bug Report | [
"vllm/model_executor/models/idefics3.py:input_processor_for_idefics3"
] | [] | 1 | 469 | |
vllm-project/vllm | vllm-project__vllm-10620 | 571841b7fcc67f8b1d171522f6249ed4224033e1 | null | diff --git a/vllm/plugins/__init__.py b/vllm/plugins/__init__.py
index d5056b18fe968..bd4764c5cc79c 100644
--- a/vllm/plugins/__init__.py
+++ b/vllm/plugins/__init__.py
@@ -3,6 +3,8 @@
from contextlib import contextmanager
from typing import TYPE_CHECKING, Optional
+import torch
+
import vllm.envs as envs
if TY... | [Usage]: torch.compile still generates multiple subprocesses
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.... | 1,732,516,290,000 | null | Performance Issue | [
"vllm/plugins/__init__.py:load_general_plugins"
] | [] | 1 | 470 | ||
vllm-project/vllm | vllm-project__vllm-10536 | 8a93a598d9ac265882e55432e7aef55c8bff23f4 | null | diff --git a/vllm/model_executor/models/qwen2_audio.py b/vllm/model_executor/models/qwen2_audio.py
index a4965f34b1ca8..0c2374c3c3fc9 100644
--- a/vllm/model_executor/models/qwen2_audio.py
+++ b/vllm/model_executor/models/qwen2_audio.py
@@ -212,7 +212,7 @@ def input_processor_for_qwen2_audio(
return token_inputs... | [Bug]: Error when calling vLLM with audio input using Qwen/Qwen2-Audio-7B-Instruct model
### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
Your output of `python collect_env.py` here
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is... | > I downloaded the [v0.6.4.post1](https://github.com/vllm-project/vllm/releases/tag/v0.6.4.post1)
>
> I’m encountering an error when trying to use **_vLLM serve_** with the Qwen/Qwen2-Audio-7B-Instruct model to process audio input. Run the following curl command:
>
> ```shell
> curl https://huxtwsgqgqkueq-5000.p... | 1,732,195,966,000 | null | Bug Report | [
"vllm/model_executor/models/qwen2_audio.py:input_processor_for_qwen2_audio"
] | [] | 1 | 471 | |
vllm-project/vllm | vllm-project__vllm-10398 | 272e31c0bd8640c15e85211c74fc9b428ad86902 | null | diff --git a/vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py b/vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py
index faa6f653b835c..18816cd665b3e 100644
--- a/vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py
+++ b/vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py
@@ -12,8 +12,6 ... | [Bug]: Hermes tool parser output error stream arguments in some cases.
### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
WARNING 11-16 23:18:45 _custom_ops.py:20] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/home/oem/... | 1,731,783,021,000 | null | Bug Report | [
"vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py:Hermes2ProToolParser.extract_tool_calls_streaming"
] | [] | 1 | 472 | ||
vllm-project/vllm | vllm-project__vllm-10347 | 2ec88272881a49d40d91ae0cd858b19d22996c70 | null | diff --git a/vllm/model_executor/models/falcon.py b/vllm/model_executor/models/falcon.py
index dcfcb6694feb5..b3dbf063ac298 100644
--- a/vllm/model_executor/models/falcon.py
+++ b/vllm/model_executor/models/falcon.py
@@ -250,6 +250,9 @@ def __init__(
self.mlp = FalconMLP(config, quant_config)
self.con... | [Bug]: Falcon fails if `trust_remote_code=True`
### Your current environment
v0.4.3
### 🐛 Describe the bug
```python
from vllm import LLM
model = LLM("tiiuae/falcon-7b", trust_remote_code=True)
```
```bash
--- Logging error ---
Traceback (most recent call last):
File "/home/rshaw/.pyenv/versions/3.10.14/... | Confirmed the same issue on my end. Actually it doesn't matter whether trust_remote_code is True or False (I have tried both), what matters is that this parameter is set. Once I remove this then all worked well to me.
This issue has been automatically marked as stale because it has not had any activity within 90 days. ... | 1,731,640,258,000 | null | Bug Report | [
"vllm/model_executor/models/falcon.py:FalconDecoderLayer.__init__"
] | [] | 1 | 473 | |
vllm-project/vllm | vllm-project__vllm-10076 | db7db4aab9fd23e818d89ca9037099d30c071a5a | null | diff --git a/vllm/entrypoints/openai/serving_engine.py b/vllm/entrypoints/openai/serving_engine.py
index e7aeac8f8c018..e31dc2ced61fb 100644
--- a/vllm/entrypoints/openai/serving_engine.py
+++ b/vllm/entrypoints/openai/serving_engine.py
@@ -443,29 +443,28 @@ async def _preprocess_chat(
tokenizer,
... | [Bug]: When apply continue_final_message, encounter: got multiple values for keyword argument
### Your current environment
vLLM Version: 0.6.3.post2.dev256+g4be3a451
<details>
<summary>The output of `python collect_env.py`</summary>
```text
Collecting environment information... ... | You should pass `continue_final_message` directly (like how you did it with `add_generation_prompt`) instead of using separate `chat_template_kwargs`.
Previously, it could only be called in `chat_template_kwargs`, there is already a lot of code to support the “continuation generating” in this way, can vLLM support both... | 1,730,890,982,000 | null | Bug Report | [
"vllm/entrypoints/openai/serving_engine.py:OpenAIServing._preprocess_chat"
] | [] | 1 | 474 | |
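The "got multiple values for keyword argument" failure in this row is the standard Python error when the same parameter arrives both explicitly and via an unpacked kwargs dict. A minimal reproduction (the function name is a hypothetical stand-in for the template renderer in the report):

```python
def apply_chat_template(messages, add_generation_prompt=False, **kwargs):
    """Hypothetical renderer accepting extra template kwargs."""
    return add_generation_prompt

# The flag passed explicitly AND again inside an unpacked kwargs dict
# (e.g. user-supplied chat_template_kwargs) collides at call time:
try:
    apply_chat_template([], add_generation_prompt=True,
                        **{"add_generation_prompt": False})
except TypeError as exc:
    print(exc)  # ... got multiple values for keyword argument ...
```

Supporting both the explicit parameter and the kwargs route therefore requires de-duplicating the key before forwarding.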
vllm-project/vllm | vllm-project__vllm-10012 | 8f0a9ca890a125f2b0fef49ba042ecf5b37830a8 | null | diff --git a/vllm/entrypoints/openai/api_server.py b/vllm/entrypoints/openai/api_server.py
index bef36ffdbfcd3..917b347ff1161 100644
--- a/vllm/entrypoints/openai/api_server.py
+++ b/vllm/entrypoints/openai/api_server.py
@@ -569,7 +569,8 @@ async def run_server(args, **uvicorn_kwargs) -> None:
# This avoids race c... | [Bug]: "Address already in use" for 1 minute after crash (since 0.6.2)
### 🐛 Describe the bug
Since version 0.6.2 (happens also in 0.6.3.post1), after the server dies (due to an exception/crash or hitting ctrl-c), for about a minute, it fails to start again with:
```
Traceback (most recent call last):
File "/u... | I encountered the same issue, but it seems to only occur in the Docker container. I haven’t experienced this issue on the bare server.
@yansh97 👍
In the description above I reproduce it also by running locally the pip package in WSL (and it also happens in Kubernetes, not in WSL)
Ya ever since some version, maybe 0.... | 1,730,765,881,000 | null | Bug Report | [
"vllm/entrypoints/openai/api_server.py:run_server"
] | [] | 1 | 475 | |
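A common mitigation for the rebind failure described above (not necessarily vLLM's exact fix, which reworked when the socket is created) is `SO_REUSEADDR`, which lets a new listener bind while the crashed process's connection lingers in `TIME_WAIT` for about a minute:

```python
import socket

def make_listener(host: str, port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Allow immediate rebind of the address even while a previous
    # socket on the same port sits in the TIME_WAIT state.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))
    sock.listen()
    return sock
```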
vllm-project/vllm | vllm-project__vllm-9974 | 91c9ebbb1bfc39e98aa2bd444b9569e5f2f92c9e | null | diff --git a/vllm/platforms/openvino.py b/vllm/platforms/openvino.py
index 35dbe22abf7ff..31fe3f1fcbfe4 100644
--- a/vllm/platforms/openvino.py
+++ b/vllm/platforms/openvino.py
@@ -1,10 +1,12 @@
import torch
import vllm.envs as envs
-from vllm.utils import print_warning_once
+from vllm.logger import init_logger
... | [Bug]: from vllm.platforms import current_platform infinite loop error with OpenVino Build.
### Your current environment
Unrelated
### Model Input Dumps
Unrelated
### 🐛 Describe the bug
In OpenVino Build, from vllm.platforms import current_platform for OpenVino will...
* reference openvino.py in vllm.platforms... | cc @wangshuai09 | 1,730,686,235,000 | null | Bug Report | [
"vllm/platforms/openvino.py:OpenVinoPlatform.is_pin_memory_available"
] | [] | 1 | 476 | |
UKPLab/sentence-transformers | UKPLab__sentence-transformers-3126 | 58d68ac7c6153112b4abe925ba7be1a9dad640be | null | diff --git a/sentence_transformers/cross_encoder/CrossEncoder.py b/sentence_transformers/cross_encoder/CrossEncoder.py
index c4345017a..3d7ee0abb 100644
--- a/sentence_transformers/cross_encoder/CrossEncoder.py
+++ b/sentence_transformers/cross_encoder/CrossEncoder.py
@@ -527,6 +527,11 @@ def rank(
'sc... | CrossEncoder .rank condition error in CrossEncoder.py
I get the following error when I use the .rank method:
```
File /usr/local/lib/python3.12/dist-packages/sentence_transformers/cross_encoder/CrossEncoder.py:551, in CrossEncoder.rank(self, query, documents, top_k, return_documents, batch_size, show_progress_bar, num_... | @saeeddhqan can you also share the structure of your query and docs?
```python
cross_encoder.rank('docx', ['doc1', 'doc2', 'doc3'])
```
@saeeddhqan it works for me
```
from sentence_transformers.cross_encoder import CrossEncoder
cross_encoder = CrossEncoder("cross-encoder/stsb-distilroberta-base")
cross_encod... | 1,733,832,701,000 | null | Bug Report | [
"sentence_transformers/cross_encoder/CrossEncoder.py:CrossEncoder.rank"
] | [] | 1 | 477 | |
UKPLab/sentence-transformers | UKPLab__sentence-transformers-3122 | 679ab5d38e4cf9cd73d4dcf1cda25ba2ef1ad837 | null | diff --git a/sentence_transformers/evaluation/NanoBEIREvaluator.py b/sentence_transformers/evaluation/NanoBEIREvaluator.py
index ec29ad094..82907e3ed 100644
--- a/sentence_transformers/evaluation/NanoBEIREvaluator.py
+++ b/sentence_transformers/evaluation/NanoBEIREvaluator.py
@@ -420,6 +420,8 @@ def _load_dataset(self,... | NanoBeirEvaluator takes in empty dataset
I noticed that NanoBeir takes in an empty dataset and returns no error during instantiation. But when the model gets passed, the error kinda seems confusing.
So -
1. is it fine to allow the evaluator to take in an empty dataset.
2. Should we change the error message saying "i... | Hello!
Well spotted, I think we should indeed add an error stating that `dataset_names` cannot be an empty list. It can be `None`, which will default to a full list, but it shouldn't be empty.
- Tom Aarsen
cool, ill be away for a few days im going back to school for my graduation :)
ill be back and raise a PR ... | 1,733,667,234,000 | null | Bug Report | [
"sentence_transformers/evaluation/NanoBEIREvaluator.py:NanoBEIREvaluator._validate_dataset_names"
] | [] | 1 | 478 | |
UKPLab/sentence-transformers | UKPLab__sentence-transformers-3110 | ba8cb2ef1d7a5ad6b9169eddee469e900456532c | null | diff --git a/sentence_transformers/SentenceTransformer.py b/sentence_transformers/SentenceTransformer.py
index 830564a57..ba02f8bfa 100644
--- a/sentence_transformers/SentenceTransformer.py
+++ b/sentence_transformers/SentenceTransformer.py
@@ -1559,8 +1559,8 @@ def _load_module_class_from_ref(
rev... | Breaking change for module loading in PyLate
Hello,
Since version 3.1, modules are loaded with [get_class_from_dynamic_module](https://github.com/UKPLab/sentence-transformers/commit/6257cb00ef87ddc8f445f0e436377ea4c48879b2#diff-b85567d4fdaffe34a3ccd8fe6cd1fcb15a986ebd34af373c71f1f5cf5efff021R1466) when using ...
"sentence_transformers/SentenceTransformer.py:SentenceTransformer._load_module_class_from_ref"
] | [] | 1 | 479 | ||
UKPLab/sentence-transformers | UKPLab__sentence-transformers-3096 | a542b0a6249dfbff65663780514b4418b96bd8d1 | null | diff --git a/sentence_transformers/evaluation/SentenceEvaluator.py b/sentence_transformers/evaluation/SentenceEvaluator.py
index 4708ea6e2..da5876208 100644
--- a/sentence_transformers/evaluation/SentenceEvaluator.py
+++ b/sentence_transformers/evaluation/SentenceEvaluator.py
@@ -57,7 +57,7 @@ def __call__(
def pr... | NanoBeirEvaluator returns a mix of floats and numpy.float64 in the results.
I was writing test cases and noticed that the results were returned as a mix of floats and np floats. Should we aim to convert these to floats as well?
```
{'NanoQuoraRetrieval_cosine_accuracy@1': 0.1,
'NanoQuoraRetrieval_cosine_accuracy@... | Hello!
Yes, it'd be ideal to normalize this to floats. I think this might mean that the same happens in InformationRetrievalEvaluator.
- Tom Aarsen
cool, ill run across the other ones too and check if the same happens and raise a PR at one go | 1,732,781,278,000 | null | Bug Report | [
"sentence_transformers/evaluation/SentenceEvaluator.py:SentenceEvaluator.prefix_name_to_metrics"
] | [] | 1 | 480 | |
UKPLab/sentence-transformers | UKPLab__sentence-transformers-3076 | 348190d46b0c010c7a4693f198f0ddf70c6ceb35 | null | diff --git a/sentence_transformers/evaluation/BinaryClassificationEvaluator.py b/sentence_transformers/evaluation/BinaryClassificationEvaluator.py
index e0e00ebc9..4a701279d 100644
--- a/sentence_transformers/evaluation/BinaryClassificationEvaluator.py
+++ b/sentence_transformers/evaluation/BinaryClassificationEvaluato... | BinaryClassificationEvaluator returns np.float32() and np.float64()
I came across the problem where
```python
output_scores[similarity_fn_name] = {
"accuracy": acc,
"accuracy_threshold": acc_threshold,
"f1": f1,
"f1_threshold": f1_threshold,
"precision": precision,
"recall": recall,
"ap": ap,
... | Hello!
Well spotted, this is suboptimal. I'd rather have all of these converted to floats as expected. I'm tackling some other PRs right now, but I'll pick this up in a few days if someone else hasn't beat me to it by then.
Thanks for reporting!
- Tom Aarsen
ill be happy to work on this :) | 1,732,120,474,000 | null | Bug Report | [
"sentence_transformers/evaluation/BinaryClassificationEvaluator.py:BinaryClassificationEvaluator.compute_metrices"
] | [] | 1 | 481 | |
UKPLab/sentence-transformers | UKPLab__sentence-transformers-3073 | 043445051d60dc2dcf62d7d5afbc4da27a5a8dbd | null | diff --git a/sentence_transformers/sampler.py b/sentence_transformers/sampler.py
index 7efc19bf1..f3021b42f 100644
--- a/sentence_transformers/sampler.py
+++ b/sentence_transformers/sampler.py
@@ -184,7 +184,10 @@ def __iter__(self) -> Iterator[list[int]]:
if self.generator and self.seed:
self.gen... | Same order of training samples with `NoDuplicatesBatchSampler`
It seems that `NoDuplicatesBatchSampler` produces the same set of batches in the same order regardless of the epoch index.
Indeed, in [this piece of code](https://github.com/UKPLab/sentence-transformers/blob/e156f38b1007e7860218c27223e1d577f3a021fe/senten... | Hello!
`set_epoch` is used via the `transformers` `Trainer` which calls `set_epoch` of the `accelerate`-wrapped DataLoader, which should propagates it down into the sampler. However, it indeed seems that `accelerate` only propagates it into the `sampler`, not the batch sampler: https://github.com/huggingface/acceler... | 1,732,032,717,000 | null | Bug Report | [
"sentence_transformers/sampler.py:NoDuplicatesBatchSampler.__iter__"
] | [] | 1 | 482 | |
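The usual remedy for the epoch-invariant shuffling described above (the pattern used by PyTorch's `DistributedSampler`, and what the linked fix seeds with `self.seed + self.epoch`) is to mix the epoch into the generator seed, so each epoch shuffles differently but reproducibly. A minimal stdlib sketch:

```python
import random

class EpochAwareSampler:
    """Shuffle indices differently on each epoch, reproducibly."""

    def __init__(self, num_samples: int, seed: int = 42):
        self.num_samples = num_samples
        self.seed = seed
        self.epoch = 0

    def set_epoch(self, epoch: int) -> None:
        # Called by the trainer/dataloader wrapper before each epoch.
        self.epoch = epoch

    def __iter__(self):
        rng = random.Random(self.seed + self.epoch)  # epoch-dependent seed
        indices = list(range(self.num_samples))
        rng.shuffle(indices)
        return iter(indices)
```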
Chainlit/chainlit | Chainlit__chainlit-1534 | dde96bcbf61c73022bb025d65a88e567b2eaf2aa | null | diff --git a/backend/chainlit/data/__init__.py b/backend/chainlit/data/__init__.py
index dfc359c24b..513bc5975d 100644
--- a/backend/chainlit/data/__init__.py
+++ b/backend/chainlit/data/__init__.py
@@ -7,26 +7,38 @@
)
_data_layer: Optional[BaseDataLayer] = None
+_data_layer_initialized = False
def get_data_la... | Prevent attempting to setup data layer on every call toget_data_layer() when no data layer's defined.
**Is your feature request related to a problem? Please describe.**
Currently, the `get_data_layer()` function attempts to set up the data layer on every call, even when no data layer is defined. This can lead to unn... | @dosu Would you help me fill out the full issue report based on the title and the contents of `backend/data/__init__.py`?
<!-- Answer -->
To fill out the full issue report based on the title and the contents of `backend/data/__init__.py`, you can use the following template:
---
**Is your feature request related to a ... | 1,732,024,059,000 | null | Performance Issue | [
"backend/chainlit/data/__init__.py:get_data_layer"
] | [] | 1 | 483 |